Commits

rbeezer committed be31a79

Line breaks, Chapter EE

Files changed (3)

src/section-EE.xml

 
 <p>Put it all together and
 <alignmath>
-<![CDATA[\zerovector&=a_0\vect{x}+a_1A\vect{x}+a_2A^2\vect{x}+a_3A^3\vect{x}+\cdots+a_nA^n\vect{x}\\]]>
-<![CDATA[&=a_0\vect{x}+a_1A\vect{x}+a_2A^2\vect{x}+a_3A^3\vect{x}+\cdots+a_mA^m\vect{x}&&\text{$a_i=0$ for $i>m$}\\]]>
-<![CDATA[&=\left(a_0I_n+a_1A+a_2A^2+a_3A^3+\cdots+a_mA^m\right)\vect{x}&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
+<![CDATA[\zerovector&=a_0\vect{x}+a_1A\vect{x}+a_2A^2\vect{x}+\cdots+a_nA^n\vect{x}\\]]>
+<![CDATA[&=a_0\vect{x}+a_1A\vect{x}+a_2A^2\vect{x}+\cdots+a_mA^m\vect{x}&&\text{$a_i=0$ for $i>m$}\\]]>
+<![CDATA[&=\left(a_0I_n+a_1A+a_2A^2+\cdots+a_mA^m\right)\vect{x}&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
 <![CDATA[&=p(A)\vect{x}&&\text{Definition of $p(x)$}\\]]>
-<![CDATA[&=(A-b_mI_n)(A-b_{m-1}I_n)\cdots(A-b_3I_n)(A-b_2I_n)(A-b_1I_n)\vect{x}]]>
+<![CDATA[&=(A-b_mI_n)(A-b_{m-1}I_n)\cdots(A-b_2I_n)(A-b_1I_n)\vect{x}]]>
 </alignmath>
 </p>
 
 <p>Let $k$ be the smallest integer such that
 <equation>
-(A-b_kI_n)(A-b_{k-1}I_n)\cdots(A-b_3I_n)(A-b_2I_n)(A-b_1I_n)\vect{x}=\zerovector.
+(A-b_kI_n)(A-b_{k-1}I_n)\cdots(A-b_2I_n)(A-b_1I_n)\vect{x}=\zerovector.
 </equation>
 </p>
 
 <p>From the preceding equation, we know that $k\leq m$.  Define the vector $\vect{z}$ by
 <equation>
-\vect{z}=(A-b_{k-1}I_n)\cdots(A-b_3I_n)(A-b_2I_n)(A-b_1I_n)\vect{x}
+\vect{z}=(A-b_{k-1}I_n)\cdots(A-b_2I_n)(A-b_1I_n)\vect{x}
 </equation>
 </p>
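The argument in this hunk (a polynomial in $A$ annihilating $\vect{x}$, then factored into linear terms) can be sketched numerically. This is a hedged illustration using a hypothetical $3\times 3$ matrix and starting vector, not anything from the source files:

```python
import numpy as np

# Hypothetical 3x3 matrix and starting vector x.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
x = np.array([1.0, 1.0, 1.0])
n = A.shape[0]

# The n+1 vectors x, Ax, A^2 x, ..., A^n x live in C^n, so they must be
# linearly dependent.
K = np.column_stack([np.linalg.matrix_power(A, j) @ x for j in range(n + 1)])

# A null vector of K supplies scalars a_0, ..., a_n with
# a_0 x + a_1 Ax + ... + a_n A^n x = 0, i.e. p(A)x = 0.
a = np.linalg.svd(K)[2][-1]      # right singular vector for the zero singular value
assert np.allclose(K @ a, 0)

# The roots b_i of p(z) are the scalars in the factorization
# (A - b_m I)...(A - b_1 I)x = 0; here they land on the eigenvalues 2 and 3.
roots = np.roots(a[::-1])        # np.roots expects highest-degree coefficient first
```

The first factor $(A-b_kI_n)$ that sends a nonzero vector $\vect{z}$ to $\zerovector$ then exhibits $b_k$ as an eigenvalue, which is the role of the smallest integer $k$ above.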
 
 </notation>
 </definition>
 
-<p>Since every eigenvalue must have at least one eigenvector, the associated eigenspace cannot be trivial, and so $\geomult{A}{\lambda}\geq 1$.</p>
+<p>Every eigenvalue must have at least one eigenvector, so the associated eigenspace cannot be trivial, and thus $\geomult{A}{\lambda}\geq 1$.</p>
 
 <example acro="EMMS4" index="eigenvalue!multiplicities">
 <title>Eigenvalue multiplicities, matrix of size 4</title>
 
 <p>Computing eigenvectors,
 <alignmath>
-<![CDATA[\lambda&=2&E-2I_5&=]]>
+<![CDATA[\lambda&=2\\]]>
+<![CDATA[E-2I_5&=]]>
 \begin{bmatrix}
 <![CDATA[27 & 14 & 2 & 6 & -9\\]]>
 <![CDATA[-47 & -24 & -1 & -11 & 13\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{E}{2}&=\nsp{E-2I_5}]]>
+<![CDATA[\eigenspace{E}{2}&=\nsp{E-2I_5}]]>
 =\spn{\set{\colvector{-1\\\frac{3}{2}\\0\\1\\0},\,\colvector{0\\\frac{1}{2}\\1\\0\\1}}}
 =\spn{\set{\colvector{-2\\3\\0\\2\\0},\,\colvector{0\\1\\2\\0\\2}}}\\
-<![CDATA[\lambda&=-1&E+1I_5&=]]>
+<![CDATA[\lambda&=-1\\]]>
+<![CDATA[E+1I_5&=]]>
 \begin{bmatrix}
 <![CDATA[30 & 14 & 2 & 6 & -9\\]]>
 <![CDATA[-47 & -21 & -1 & -11 & 13\\]]>
 <![CDATA[0 & 0 & 0 & 0 & \leading{1}\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{E}{-1}&=\nsp{E+1I_5}=\spn{\set{\colvector{-2\\4\\-1\\1\\0}}}\\]]>
+<![CDATA[\eigenspace{E}{-1}&=\nsp{E+1I_5}=\spn{\set{\colvector{-2\\4\\-1\\1\\0}}}\\]]>
 </alignmath>
 </p>
 
 </alignmath>
 So the eigenvalues are $\lambda=2,\,-1,\,2+i,\,2-i$ with algebraic multiplicities $\algmult{F}{2}=1$, $\algmult{F}{-1}=1$, $\algmult{F}{2+i}=2$ and $\algmult{F}{2-i}=2$.</p>
 
-<p>Computing eigenvectors,
+<p>We compute eigenvectors, noting that the last two basis vectors are each a scalar multiple of what <acroref type="theorem" acro="BNS" /> will provide,
 <alignmath>
-<![CDATA[\lambda&=2\\]]>
-<![CDATA[F-2I_6&=]]>
+<![CDATA[\lambda&=2\quad\quad F-2I_6=\\]]>
+<![CDATA[&]]>
 \begin{bmatrix}
 <![CDATA[-61 & -34 & 41 & 12 & 25 & 30\\]]>
 <![CDATA[1 & 5 & -46 & -36 & -11 & -29\\]]>
 <![CDATA[0 & 0 & 0 & 0 & \leading{1} & \frac{4}{5}\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[\eigenspace{F}{2}&=\nsp{F-2I_6}]]>
+<![CDATA[&\eigenspace{F}{2}=\nsp{F-2I_6}]]>
 =\spn{\set{\colvector{-\frac{1}{5}\\0\\-\frac{3}{5}\\\frac{1}{5}\\-\frac{4}{5}\\1}}}
 =\spn{\set{\colvector{-1\\0\\-3\\1\\-4\\5}}}\\
 </alignmath>
 <alignmath>
-<![CDATA[\lambda&=-1\\]]>
-<![CDATA[F+1I_6&=]]>
+<![CDATA[\lambda&=-1\quad\quad F+1I_6=\\]]>
+<![CDATA[&]]>
 \begin{bmatrix}
 <![CDATA[-58 & -34 & 41 & 12 & 25 & 30\\]]>
 <![CDATA[1 & 8 & -46 & -36 & -11 & -29\\]]>
 <![CDATA[0 & 0 & 0 & 0 & \leading{1} & -\frac{1}{2}\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[\eigenspace{F}{-1}&=\nsp{F+I_6}]]>
+<![CDATA[&\eigenspace{F}{-1}=\nsp{F+I_6}]]>
 =\spn{\set{\colvector{-\frac{1}{2}\\\frac{3}{2}\\-\frac{1}{2}\\0\\\frac{1}{2}\\1}}}
 =\spn{\set{\colvector{-1\\3\\-1\\0\\1\\2}}}\\
 </alignmath>
 <alignmath>
 <![CDATA[\lambda&=2+i\\]]>
-<![CDATA[F-(2+i)I_6&=]]>
+<![CDATA[&F-(2+i)I_6=]]>
 \begin{bmatrix}
 <![CDATA[-61-i & -34 & 41 & 12 & 25 & 30\\]]>
 <![CDATA[1 & 5-i & -46 & -36 & -11 & -29\\]]>
 <![CDATA[0 & 0 & 0 & 0 & \leading{1} & 1\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[\eigenspace{F}{2+i}&=\nsp{F-(2+i)I_6}]]>
-=\spn{\set{\colvector{-\frac{1}{5}(7+i)\\\frac{1}{5}(9+2i)\\-1\\1\\-1\\1}}}
+<![CDATA[&\eigenspace{F}{2+i}=\nsp{F-(2+i)I_6}]]>
 =\spn{\set{\colvector{-7-i\\9+2i\\-5\\5\\-5\\5}}}\\
 </alignmath>
 <alignmath>
 <![CDATA[\lambda&=2-i\\]]>
-<![CDATA[F-(2-i)I_6&=]]>
+<![CDATA[&F-(2-i)I_6=]]>
 \begin{bmatrix}
 <![CDATA[-61+i & -34 & 41 & 12 & 25 & 30\\]]>
 <![CDATA[1 & 5+i & -46 & -36 & -11 & -29\\]]>
 <![CDATA[-91 & -48 & 32 & -5 & 30+i & 26\\]]>
 <![CDATA[209 & 107 & -55 & 28 & -69 & -52+i]]>
 \end{bmatrix}\\
-<![CDATA[&]]>
-\rref
+<![CDATA[&\rref]]>
 \begin{bmatrix}
 <![CDATA[\leading{1} & 0 & 0 & 0 & 0 & \frac{1}{5}(7-i)\\]]>
 <![CDATA[0 & \leading{1} & 0 & 0 & 0 & \frac{1}{5}(-9+2i)\\]]>
 <![CDATA[0 & 0 & 0 & 0 & \leading{1} & 1\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[\eigenspace{F}{2-i}&=\nsp{F-(2-i)I_6}]]>
-=\spn{\set{\colvector{\frac{1}{5}(-7+i)\\\frac{1}{5}(9-2i)\\-1\\1\\-1\\1}}}
+<![CDATA[&\eigenspace{F}{2-i}=\nsp{F-(2-i)I_6}]]>
 =\spn{\set{\colvector{-7+i\\9-2i\\-5\\5\\-5\\5}}}\\
 </alignmath>
 </p>
 
-<p>So the eigenspace dimensions yield geometric multiplicities $\geomult{F}{2}=1$, $\geomult{F}{-1}=1$, $\geomult{F}{2+i}=1$ and $\geomult{F}{2-i}=1$.  This example demonstrates some of the possibilities for the appearance of complex eigenvalues, even when all the entries of the matrix are real.  Notice how all the numbers in the analysis of $\lambda=2-i$ are conjugates of the corresponding number in the analysis of $\lambda=2+i$.  This is the content of the upcoming <acroref type="theorem" acro="ERMCP" />.</p>
+<p>Eigenspace dimensions yield geometric multiplicities of $\geomult{F}{2}=1$, $\geomult{F}{-1}=1$, $\geomult{F}{2+i}=1$ and $\geomult{F}{2-i}=1$.  This example demonstrates some of the possibilities for the appearance of complex eigenvalues, even when all the entries of the matrix are real.  Notice how all the numbers in the analysis of $\lambda=2-i$ are conjugates of the corresponding number in the analysis of $\lambda=2+i$.  This is the content of the upcoming <acroref type="theorem" acro="ERMCP" />.</p>
 
 </example>
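The conjugate pairing previewed at the end of this example (the upcoming Theorem ERMCP) is easy to check numerically. A hedged sketch with a hypothetical real $2\times 2$ matrix, not the $F$ of the example:

```python
import numpy as np

# Hypothetical real 2x2 matrix with eigenvalues 2 + i and 2 - i.
F = np.array([[2.0, -1.0],
              [1.0,  2.0]])
vals, vecs = np.linalg.eig(F)

# The spectrum of a real matrix is closed under complex conjugation...
assert np.allclose(np.sort_complex(vals), np.sort_complex(np.conj(vals)))

# ...and conjugating an eigenvector for 2+i yields an eigenvector for 2-i,
# mirroring how every number in the lambda = 2-i analysis is the conjugate
# of the corresponding number in the lambda = 2+i analysis.
idx = np.argmin(np.abs(vals - (2 + 1j)))
v = vecs[:, idx]
assert np.allclose(F @ np.conj(v), (2 - 1j) * np.conj(v))
```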
 
 
 <p>Computing eigenvectors,
 <alignmath>
-<![CDATA[\lambda&=2&H-2I_5&=]]>
+<![CDATA[\lambda&=2\\&H-2I_5=]]>
 \begin{bmatrix}
 <![CDATA[13 & 18 & -8 & 6 & -5\\]]>
 <![CDATA[5 & 1 & 1 & -1 & -3\\]]>
 <![CDATA[0 & 0 & 0 & \leading{1} & 1\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{H}{2}&=\nsp{H-2I_5}]]>
+<![CDATA[&\eigenspace{H}{2}=\nsp{H-2I_5}]]>
 =\spn{\set{\colvector{1\\-1\\-2\\-1\\1}}}
 </alignmath>
 <alignmath>
-<![CDATA[\lambda&=1&H-1I_5&=]]>
+<![CDATA[\lambda&=1\\&H-1I_5=]]>
 \begin{bmatrix}
 <![CDATA[14 & 18 & -8 & 6 & -5\\]]>
 <![CDATA[5 & 2 & 1 & -1 & -3\\]]>
 <![CDATA[0 & 0 & 0 & \leading{1} & 1\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{H}{1}&=\nsp{H-1I_5}]]>
+<![CDATA[&\eigenspace{H}{1}=\nsp{H-1I_5}]]>
 =\spn{\set{\colvector{\frac{1}{2}\\0\\-\frac{1}{2}\\-1\\1}}}
 =\spn{\set{\colvector{1\\0\\-1\\-2\\2}}}
 </alignmath>
 <alignmath>
-<![CDATA[\lambda&=0&H-0I_5&=]]>
+<![CDATA[\lambda&=0\\&H-0I_5=]]>
 \begin{bmatrix}
 <![CDATA[15 & 18 & -8 & 6 & -5\\]]>
 <![CDATA[5 & 3 & 1 & -1 & -3\\]]>
 <![CDATA[0 & 0 & 0 & \leading{1} & 0\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{H}{0}&=\nsp{H-0I_5}]]>
+<![CDATA[&\eigenspace{H}{0}=\nsp{H-0I_5}]]>
 =\spn{\set{\colvector{-1\\2\\2\\0\\1}}}
 </alignmath>
 <alignmath>
-<![CDATA[\lambda&=-1&H+1I_5&=]]>
+<![CDATA[\lambda&=-1\\&H+1I_5=]]>
 \begin{bmatrix}
 <![CDATA[16 & 18 & -8 & 6 & -5\\]]>
 <![CDATA[5 & 4 & 1 & -1 & -3\\]]>
 <![CDATA[0 & 0 & 0 & \leading{1} & 1/2\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{H}{-1}&=\nsp{H+1I_5}]]>
+<![CDATA[&\eigenspace{H}{-1}=\nsp{H+1I_5}]]>
 =\spn{\set{\colvector{\frac{1}{2}\\0\\0\\-\frac{1}{2}\\1}}}
 =\spn{\set{\colvector{1\\0\\0\\-1\\2}}}
 </alignmath>
 <alignmath>
-<![CDATA[\lambda&=-3&H+3I_5&=]]>
+<![CDATA[\lambda&=-3\\&H+3I_5=]]>
 \begin{bmatrix}
 <![CDATA[18 & 18 & -8 & 6 & -5\\]]>
 <![CDATA[5 & 6 & 1 & -1 & -3\\]]>
 <![CDATA[0 & 0 & 0 & \leading{1} & 2\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 0]]>
 \end{bmatrix}\\
-<![CDATA[&&\eigenspace{H}{-3}&=\nsp{H+3I_5}]]>
+<![CDATA[&\eigenspace{H}{-3}=\nsp{H+3I_5}]]>
 =\spn{\set{\colvector{1\\-\frac{1}{2}\\-1\\-2\\1}}}
 =\spn{\set{\colvector{-2\\1\\2\\4\\-2}}}
 </alignmath>
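Each eigenspace computation in this hunk follows one pattern: $\eigenspace{H}{\lambda}=\nsp{H-\lambda I_5}$. A hedged numerical sketch of that pattern, using a hypothetical $2\times 2$ stand-in since the full $5\times 5$ matrix $H$ is not reproduced in this excerpt:

```python
import numpy as np

# Hypothetical 2x2 stand-in for H; lam is one of its eigenvalues.
H = np.array([[2.0, 1.0],
              [0.0, 1.0]])
lam = 1.0

# The eigenspace for lam is the null space of H - lam*I; an SVD exposes it.
M = H - lam * np.eye(2)
U, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-10]            # rows with ~zero singular value span nsp(M)

for v in basis:                  # every null space basis vector is an eigenvector
    assert np.allclose(H @ v, lam * v)
```

The number of basis vectors recovered is the geometric multiplicity, and any nonzero scalar multiple of a basis vector (such as the integer-entry versions above) spans the same eigenspace.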

src/section-PEE.xml

 <p>We will prove this result by contradiction (<acroref type="technique" acro="CD" />).  Suppose to the contrary that $S$ is a linearly dependent set.  Define $S_i=\set{\vectorlist{x}{i}}$ and let
 $k$ be an integer such that $S_{k-1}=\set{\vectorlist{x}{k-1}}$ is linearly independent and $S_k=\set{\vectorlist{x}{k}}$ is linearly dependent.  Does such an integer $k$ even exist?  First, since eigenvectors are nonzero, the set $\set{\vect{x}_1}$ is linearly independent.  Since we are assuming that $S=S_p$ is linearly dependent, there must be an integer $k$, $2\leq k\leq p$, where the sets $S_i$ transition from linear independence to linear dependence (and stay that way).  In other words, $\vect{x}_k$ is the vector with the smallest index that is a linear combination of just vectors with smaller indices.</p>
 
-<p>Since $\set{\vectorlist{x}{k}}$ is linearly dependent there are scalars, $\scalarlist{a}{k}$, some non-zero (<acroref type="definition" acro="LI" />), so that
+<p>Since $\set{\vectorlist{x}{k}}$ is a linearly dependent set, there must be scalars, $\scalarlist{a}{k}$, not all zero (<acroref type="definition" acro="LI" />), so that
 <equation>
 \zerovector=\lincombo{a}{x}{k}
 </equation>
 Then
 <alignmath>
 <![CDATA[\zerovector&=\left(A-\lambda_kI_n\right)\zerovector]]>
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="MMZM" />}\\
 <![CDATA[&=\left(A-\lambda_kI_n\right)\left(\lincombo{a}{x}{k}\right)]]>
 <![CDATA[&&]]>\text{<acroref type="definition" acro="RLD" />}\\
 <![CDATA[&=\left(A-\lambda_kI_n\right)a_1\vect{x}_1+]]>
-\left(A-\lambda_kI_n\right)a_2\vect{x}_2+
 \cdots+
 \left(A-\lambda_kI_n\right)a_k\vect{x}_k
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
 <![CDATA[&=a_1\left(A-\lambda_kI_n\right)\vect{x}_1+]]>
-a_2\left(A-\lambda_kI_n\right)\vect{x}_2+
 \cdots+
 a_k\left(A-\lambda_kI_n\right)\vect{x}_k
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="MMSMM" />}\\
 <![CDATA[&=a_1\left(A\vect{x}_1-\lambda_kI_n\vect{x}_1\right)+]]>
-a_2\left(A\vect{x}_2-\lambda_kI_n\vect{x}_2\right)+
 \cdots+
 a_k\left(A\vect{x}_k-\lambda_kI_n\vect{x}_k\right)
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
 <![CDATA[&=a_1\left(A\vect{x}_1-\lambda_k\vect{x}_1\right)+]]>
-a_2\left(A\vect{x}_2-\lambda_k\vect{x}_2\right)+
 \cdots+
 a_k\left(A\vect{x}_k-\lambda_k\vect{x}_k\right)
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="MMIM" />}\\
 <![CDATA[&=a_1\left(\lambda_1\vect{x}_1-\lambda_k\vect{x}_1\right)+]]>
-a_2\left(\lambda_2\vect{x}_2-\lambda_k\vect{x}_2\right)+
 \cdots+
 a_k\left(\lambda_k\vect{x}_k-\lambda_k\vect{x}_k\right)
 <![CDATA[&&]]>\text{<acroref type="definition" acro="EEM" />}\\
 <![CDATA[&=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+]]>
-a_2\left(\lambda_2-\lambda_k\right)\vect{x}_2+
 \cdots+
 a_k\left(\lambda_k-\lambda_k\right)\vect{x}_k
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
 <![CDATA[&=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+]]>
-a_2\left(\lambda_2-\lambda_k\right)\vect{x}_2+
 \cdots+
+a_{k-1}\left(\lambda_{k-1}-\lambda_k\right)\vect{x}_{k-1}+
 a_k\left(0\right)\vect{x}_k
 <![CDATA[&&]]>\text{<acroref type="property" acro="AICN" />}\\
 <![CDATA[&=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+]]>
-a_2\left(\lambda_2-\lambda_k\right)\vect{x}_2+
 \cdots+
 a_{k-1}\left(\lambda_{k-1}-\lambda_k\right)\vect{x}_{k-1}+
 \zerovector
 <![CDATA[&&]]>\text{<acroref type="theorem" acro="ZSSM" />}\\
 <![CDATA[&=a_1\left(\lambda_1-\lambda_k\right)\vect{x}_1+]]>
-a_2\left(\lambda_2-\lambda_k\right)\vect{x}_2+
 \cdots+
 a_{k-1}\left(\lambda_{k-1}-\lambda_k\right)\vect{x}_{k-1}
 <![CDATA[&&]]>\text{<acroref type="property" acro="Z" />}
 </alignmath>
 </p>
 
-<p>This is a relation of linear dependence on the linearly independent set $\set{\vectorlist{x}{k-1}}$, so the scalars must all be zero.  That is, $a_i\left(\lambda_i-\lambda_k\right)=0$ for $1\leq i\leq k-1$.  However, we have the hypothesis that the eigenvalues are distinct, so $\lambda_i\neq\lambda_k$ for $1\leq i\leq k-1$.  Thus $a_i=0$ for $1\leq i\leq k-1$.</p>
+<p>This equation is a relation of linear dependence on the linearly independent set $\set{\vectorlist{x}{k-1}}$, so the scalars must all be zero.  That is, $a_i\left(\lambda_i-\lambda_k\right)=0$ for $1\leq i\leq k-1$.  However, we have the hypothesis that the eigenvalues are distinct, so $\lambda_i\neq\lambda_k$ for $1\leq i\leq k-1$.  Thus $a_i=0$ for $1\leq i\leq k-1$.</p>
 
 <p>This reduces the original relation of linear dependence on $\set{\vectorlist{x}{k}}$ to the simpler equation $a_k\vect{x}_k=\zerovector$.  By <acroref type="theorem" acro="SMEZV" /> we conclude that $a_k=0$ or $\vect{x}_k=\zerovector$.  Eigenvectors are never the zero vector (<acroref type="definition" acro="EEM" />), so $a_k=0$.  So all of the scalars $a_i$, $1\leq i\leq k$ are zero, contradicting their introduction as the scalars creating a nontrivial relation of linear dependence on the set $\set{\vectorlist{x}{k}}$.  With a contradiction in hand, we conclude that $S$ must be linearly independent.</p>
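The conclusion of this proof, that eigenvectors belonging to distinct eigenvalues form a linearly independent set, can be observed numerically. A hedged sketch with a hypothetical $3\times 3$ matrix:

```python
import numpy as np

# Hypothetical 3x3 matrix with three distinct eigenvalues 1, 3, 5.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
vals, vecs = np.linalg.eig(A)

assert len(set(np.round(vals, 8))) == 3   # eigenvalues are distinct
# Stacking one eigenvector per eigenvalue as columns gives a full-rank
# (hence nonsingular) matrix: the eigenvectors are linearly independent.
assert np.linalg.matrix_rank(vecs) == 3
```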
 
 <theorem acro="NEM" index="eigenvalues!number">
 <title>Number of Eigenvalues of a Matrix</title>
 <statement>
-<p>Suppose that $A$ is a square matrix of size $n$ with distinct eigenvalues $\scalarlist{\lambda}{k}$.  Then
+<p>Suppose that $\scalarlist{\lambda}{k}$ are the distinct eigenvalues of a square matrix $A$ of size $n$.  Then
 <equation>
 \sum_{i=1}^{k}\algmult{A}{\lambda_i}=n
 </equation>
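Theorem NEM says the algebraic multiplicities account for all $n$ roots of the degree-$n$ characteristic polynomial. A hedged numerical illustration with a hypothetical $4\times 4$ matrix:

```python
import numpy as np

# Hypothetical 4x4 matrix: eigenvalue 2 with algebraic multiplicity 2,
# eigenvalue 3 with algebraic multiplicity 2, and 2 + 2 = 4 = n.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])
n = A.shape[0]

coeffs = np.poly(A)          # characteristic polynomial has degree n
assert len(coeffs) == n + 1

vals = np.linalg.eigvals(A)  # eigenvalues, repeated per algebraic multiplicity
assert len(vals) == n        # so the multiplicities sum to n

_, mult = np.unique(np.round(vals.real, 6), return_counts=True)
assert mult.sum() == n
```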

src/section-SD.xml

 
 <p>Check that $S$ is nonsingular and then compute
 <alignmath>
-<![CDATA[A&=\similar{B}{S}\\]]>
+<![CDATA[&A=\similar{B}{S}\\]]>
 <![CDATA[&=]]>
 \begin{bmatrix}
 <![CDATA[10 & 1 & 0 & 2 & -5 \\]]>
 
 <p>Then consider,
 <alignmath>
-<![CDATA[[A\vect{y}_1|A\vect{y}_2|A\vect{y}_3|\ldots|A\vect{y}_n]&=A\matrixcolumns{y}{n}]]>
+<![CDATA[[A\vect{y}_1|A\vect{y}_2|A\vect{y}_3&|\ldots|A\vect{y}_n]\\]]>
+<![CDATA[&=A\matrixcolumns{y}{n}]]>
 <![CDATA[&&]]>\text{<acroref type="definition" acro="MM" />}\\
 <![CDATA[&=AT\\]]>
 <![CDATA[&=I_nAT]]>
 
 <p>We next show that $S$ is a linearly independent set.  We begin with a relation of linear dependence on $S$, using doubly subscripted scalars and eigenvectors,
 <alignmath>
-<![CDATA[\zerovector&=]]>
-\left(a_{11}\vect{x}_{11}+a_{12}\vect{x}_{12}+\cdots+a_{1\geomult{A}{\lambda_1}}\vect{x}_{1\geomult{A}{\lambda_1}}\right)+
-\left(a_{21}\vect{x}_{21}+a_{22}\vect{x}_{22}+\cdots+a_{2\geomult{A}{\lambda_2}}\vect{x}_{2\geomult{A}{\lambda_2}}\right)\\
-<![CDATA[&\quad\quad+\cdots+]]>
-\left(a_{k1}\vect{x}_{k1}+a_{k2}\vect{x}_{k2}+\cdots+a_{k\geomult{A}{\lambda_k}}\vect{x}_{k\geomult{A}{\lambda_k}}\right)
+\zerovector=
+<![CDATA[&\left(a_{11}\vect{x}_{11}+a_{12}\vect{x}_{12}+\cdots+a_{1\geomult{A}{\lambda_1}}\vect{x}_{1\geomult{A}{\lambda_1}}\right)+\\]]>
+<![CDATA[&\left(a_{21}\vect{x}_{21}+a_{22}\vect{x}_{22}+\cdots+a_{2\geomult{A}{\lambda_2}}\vect{x}_{2\geomult{A}{\lambda_2}}\right)+\\]]>
+<![CDATA[&\left(a_{31}\vect{x}_{31}+a_{32}\vect{x}_{32}+\cdots+a_{3\geomult{A}{\lambda_3}}\vect{x}_{3\geomult{A}{\lambda_3}}\right)+\\]]>
+<![CDATA[&\quad\quad\vdots\\]]>
+<![CDATA[&\left(a_{k1}\vect{x}_{k1}+a_{k2}\vect{x}_{k2}+\cdots+a_{k\geomult{A}{\lambda_k}}\vect{x}_{k\geomult{A}{\lambda_k}}\right)]]>
 </alignmath>
 </p>
 
 </equation>
 and so is a $5\times 5$ matrix with 5 distinct eigenvalues.</p>
 
-<p>By <acroref type="theorem" acro="DED" /> we know $H$ must be diagonalizable.  But just for practice, we exhibit the diagonalization itself.  The matrix $S$ contains eigenvectors of $H$ as columns, one from each eigenspace, guaranteeing linear independent columns and thus the nonsingularity of $S$.  The diagonal matrix has the eigenvalues of $H$ in the same order that their respective eigenvectors appear as the columns of $S$.  Notice that we are using the versions of the eigenvectors from <acroref type="example" acro="DEMS5" /> that have integer entries.
+<p>By <acroref type="theorem" acro="DED" /> we know $H$ must be diagonalizable.  But just for practice, we exhibit a diagonalization.  The matrix $S$ contains eigenvectors of $H$ as columns, one from each eigenspace, guaranteeing linearly independent columns and thus the nonsingularity of $S$.  Notice that we are using the versions of the eigenvectors from <acroref type="example" acro="DEMS5" /> that have integer entries.  The diagonal matrix has the eigenvalues of $H$ in the same order that their respective eigenvectors appear as the columns of $S$.  With these matrices, verify computationally that $\similar{H}{S}=D$.
 <alignmath>
-<![CDATA[&\similar{H}{S}\\]]>
-<![CDATA[&=]]>
-\inverse{
+<![CDATA[S&=]]>
 \begin{bmatrix}
 <![CDATA[2 & 1 & -1 & 1 & 1\\]]>
 <![CDATA[-1 & 0 & 2 & 0 & -1\\]]>
 <![CDATA[-4 & -1 & 0 & -2 & -1\\]]>
 <![CDATA[2 & 2 & 1 & 2 & 1]]>
 \end{bmatrix}
-}
-\begin{bmatrix}
-<![CDATA[15 & 18 & -8 & 6 & -5\\]]>
-<![CDATA[5 & 3 & 1 & -1 & -3\\]]>
-<![CDATA[0 & -4 & 5 & -4 & -2\\]]>
-<![CDATA[-43 & -46 & 17 & -14 & 15\\]]>
-<![CDATA[26 & 30 & -12 & 8 & -10]]>
-\end{bmatrix}
-\begin{bmatrix}
-<![CDATA[2 & 1 & -1 & 1 & 1\\]]>
-<![CDATA[-1 & 0 & 2 & 0 & -1\\]]>
-<![CDATA[-2 & 0 & 2 & -1 & -2\\]]>
-<![CDATA[-4 & -1 & 0 & -2 & -1\\]]>
-<![CDATA[2 & 2 & 1 & 2 & 1]]>
-\end{bmatrix}\\
-<![CDATA[&=]]>
-\begin{bmatrix}
-<![CDATA[-3 & -3 & 1 & -1 & 1\\]]>
-<![CDATA[-1 & -2 & 1 & 0 & 1\\]]>
-<![CDATA[-5 & -4 & 1 & -1 & 2\\]]>
-<![CDATA[10 & 10 & -3 & 2 & -4\\]]>
-<![CDATA[-7 & -6 & 1 & -1 & 3]]>
-\end{bmatrix}
-\begin{bmatrix}
-<![CDATA[15 & 18 & -8 & 6 & -5\\]]>
-<![CDATA[5 & 3 & 1 & -1 & -3\\]]>
-<![CDATA[0 & -4 & 5 & -4 & -2\\]]>
-<![CDATA[-43 & -46 & 17 & -14 & 15\\]]>
-<![CDATA[26 & 30 & -12 & 8 & -10]]>
-\end{bmatrix}
-\begin{bmatrix}
-<![CDATA[2 & 1 & -1 & 1 & 1\\]]>
-<![CDATA[-1 & 0 & 2 & 0 & -1\\]]>
-<![CDATA[-2 & 0 & 2 & -1 & -2\\]]>
-<![CDATA[-4 & -1 & 0 & -2 & -1\\]]>
-<![CDATA[2 & 2 & 1 & 2 & 1]]>
-\end{bmatrix}\\
-<![CDATA[&=]]>
+<![CDATA[&D&=]]>
 \begin{bmatrix}
 <![CDATA[-3 & 0 & 0 & 0 & 0\\]]>
 <![CDATA[0 & -1 & 0 & 0 & 0\\]]>
 <![CDATA[0 & 0 & 0 & 0 & 2]]>
 \end{bmatrix}
 </alignmath>
+Note that there are many different ways to diagonalize $H$.  We could replace eigenvectors by nonzero scalar multiples, or we could rearrange the order of the eigenvectors as the columns of $S$ (which would subsequently reorder the eigenvalues along the diagonal of $D$).
 </p>
 
 </example>
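The verification the rewritten example asks for ($\similar{H}{S}=D$) follows a standard pattern. Since the full $5\times 5$ matrices are not reproduced in this excerpt, here is a hedged sketch on a hypothetical $2\times 2$ matrix, including the column-scaling and column-swapping variations the example mentions:

```python
import numpy as np

# Hypothetical 2x2 matrix with eigenvalues 5 and 2.
H = np.array([[4.0, 1.0],
              [2.0, 3.0]])
S = np.array([[1.0,  1.0],
              [1.0, -2.0]])      # eigenvectors for 5 and 2, as columns
D = np.diag([5.0, 2.0])          # matching eigenvalues, in the same order

assert np.allclose(np.linalg.inv(S) @ H @ S, D)

# Rescaling a column of S, or swapping columns of S together with the
# corresponding diagonal entries of D, gives another valid diagonalization.
S2 = S[:, ::-1] * np.array([3.0, 1.0])   # swap columns, scale the first by 3
D2 = np.diag([2.0, 5.0])
assert np.allclose(np.linalg.inv(S2) @ H @ S2, D2)
```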
 </equation>
 we find
 <alignmath>
-D=\similar{A}{S}
+<![CDATA[D&=\similar{A}{S}\\]]>
 <![CDATA[&=]]>
 \begin{bmatrix}
 <![CDATA[ -6 & 1 & -3 & -6 \\]]>