# Commits

committed 5e80f99

Line breaks, Chapters VS, D

# src/section-B.xml

 <theorem acro="SUVB" index="unit vectors!basis">
 <title>Standard Unit Vectors are a Basis</title>
 <statement>
-<p>The set of standard unit vectors for $\complex{m}$ (<acroref type="definition" acro="SUV" />), $B=\set{\vectorlist{e}{m}}=\setparts{\vect{e}_i}{1\leq i\leq m}$ is a basis for the vector space $\complex{m}$.</p>
-
+<p>The set of standard unit vectors for $\complex{m}$ (<acroref type="definition" acro="SUV" />),
+$B=\setparts{\vect{e}_i}{1\leq i\leq m}$
+is a basis for the vector space $\complex{m}$.</p>
 </statement>

 <proof>
 </equation>
 is a spanning set for $W=\setparts{p(x)}{p\in P_4,\ p(2)=0}$.  We will now show that $S$ is also linearly independent in $W$.  Begin with a relation of linear dependence,
 <alignmath>
-0+0x+0x^2+0x^3+0x^4
-<![CDATA[&=\alpha_1\left(x-2\right)+\alpha_2\left(x^2-4x+4\right)\\]]>
-<![CDATA[&\quad +\alpha_3\left(x^3-6x^2+12x-8\right)+\alpha_4\left(x^4-8x^3+24x^2-32x+16\right)\\]]>
+<![CDATA[0+0x&+0x^2+0x^3+0x^4\\]]>
+<![CDATA[&=\alpha_1\left(x-2\right)+\alpha_2\left(x^2-4x+4\right)+\alpha_3\left(x^3-6x^2+12x-8\right)\\]]>
+<![CDATA[&\quad\quad +\alpha_4\left(x^4-8x^3+24x^2-32x+16\right)\\]]>
 <![CDATA[&=\alpha_4x^4+]]>
 \left(\alpha_3-8\alpha_4\right)x^3+
 \left(\alpha_2-6\alpha_3+24\alpha_4\right)x^2\\
-<![CDATA[&\quad +]]>
+<![CDATA[&\quad\quad +]]>
 \left(\alpha_1-4\alpha_2+12\alpha_3-32\alpha_4\right)x+
 \left(-2\alpha_1+4\alpha_2-8\alpha_3+16\alpha_4\right)
 </alignmath>
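
The relation of linear dependence above can be checked mechanically. A minimal sketch (not part of the book's source) using sympy, confirming that only the trivial relation on $S=\set{(x-2)^k\mid 1\leq k\leq 4}$ exists:

```python
# Sketch (my own check): the only relation of linear dependence on
# S = {(x-2)^k : k = 1..4} is the trivial one.
from sympy import symbols, expand, Poly, linsolve

x, a1, a2, a3, a4 = symbols("x a1 a2 a3 a4")
relation = expand(a1*(x - 2) + a2*(x - 2)**2
                  + a3*(x - 2)**3 + a4*(x - 2)**4)

# Setting every coefficient of x^0..x^4 to zero gives a homogeneous
# linear system in the alphas; linsolve returns its solution set.
coeffs = Poly(relation, x).all_coeffs()
sol = linsolve(coeffs, [a1, a2, a3, a4])
print(sol)  # only the trivial solution -> S is linearly independent
```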

 <p>Now, to illustrate <acroref type="theorem" acro="COB" />, choose any vector from $\complex{4}$, say $\vect{w}=\colvector{2\\-3\\1\\4}$, and compute
 <alignmath>
-<![CDATA[\innerproduct{\vect{w}}{\vect{v}_1}=\frac{-5i}{\sqrt{6}},&&]]>
-<![CDATA[\innerproduct{\vect{w}}{\vect{v}_2}=\frac{-19+30i}{\sqrt{174}},&&]]>
-<![CDATA[\innerproduct{\vect{w}}{\vect{v}_3}=\frac{120-211i}{\sqrt{3451}},&&]]>
-\innerproduct{\vect{w}}{\vect{v}_4}=\frac{6+12i}{\sqrt{119}}
+<![CDATA[\innerproduct{\vect{w}}{\vect{v}_1}&=\frac{-5i}{\sqrt{6}}&]]>
+<![CDATA[\innerproduct{\vect{w}}{\vect{v}_2}&=\frac{-19+30i}{\sqrt{174}}\\]]>
+<![CDATA[\innerproduct{\vect{w}}{\vect{v}_3}&=\frac{120-211i}{\sqrt{3451}}&]]>
+<![CDATA[\innerproduct{\vect{w}}{\vect{v}_4}&=\frac{6+12i}{\sqrt{119}}]]>
 </alignmath>
 </p>
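
The idea behind Theorem COB can be sanity-checked numerically. The example's basis $\vect{v}_1,\ldots,\vect{v}_4$ is not reproduced here; this sketch (my own, with a stand-in orthonormal basis from a QR factorization) shows the inner products recovering the coordinates of $\vect{w}$:

```python
# Sketch: reconstruct w from its inner products with an orthonormal
# basis of C^4 (stand-in basis; the example's v_1..v_4 are not shown).
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(M)              # columns of Q: an orthonormal basis

w = np.array([2, -3, 1, 4], dtype=complex)
coords = [np.vdot(Q[:, i], w) for i in range(4)]   # vdot conjugates arg 1
rebuilt = sum(c * Q[:, i] for i, c in enumerate(coords))
print(np.allclose(rebuilt, w))      # w equals the sum of its projections
```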

 </exercise>

 <exercise type="C" number="13" rough="Linear dependence in a set of two vectors">
-<problem contributor="chrisblack">Find a basis for the subspace $Q$ of $P_2$, defined by $Q = \setparts{p(x) = a + bx + cx^2}{p(0) = 0}$.
+<problem contributor="chrisblack">Find a basis for the subspace $Q$ of $P_2$, $Q = \setparts{p(x) = a + bx + cx^2}{p(0) = 0}$.
 </problem>
 <solution contributor="chrisblack">If $p(0) = 0$, then $a + b(0) + c(0^2) = 0$, so $a = 0$.
 Thus, we can write $Q = \setparts{p(x) = bx + cx^2}{b, c\in\complexes}$.
 </exercise>

 <exercise type="C" number="14" rough="Linear dependence in a set of two vectors">
-<problem contributor="chrisblack">Find a basis for the subspace $R$ of $P_2$ defined by $R = \setparts{p(x) = a + bx + cx^2}{p'(0) = 0}$, where $p'$ denotes the derivative.
+<problem contributor="chrisblack">Find a basis for the subspace $R$ of $P_2$, $R = \setparts{p(x) = a + bx + cx^2}{p'(0) = 0}$, where $p'$ denotes the derivative.
 </problem>
 <solution contributor="chrisblack">The derivative of $p(x) = a + bx + cx^2$ is $p^\prime(x) = b + 2cx$.
 Thus, if $p \in R$, then $p^\prime(0) = b + 2c(0) = 0$,

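A hedged check for Exercises C13 and C14 (my own sketch, not the book's solutions): with $p(x) = a + bx + cx^2$, membership in $Q$ forces $a = 0$ and membership in $R$ forces $b = 0$, which suggests the bases $\set{x, x^2}$ and $\set{1, x^2}$ respectively.

```python
# Sketch: the conditions p(0) = 0 and p'(0) = 0 each kill one coefficient.
from sympy import symbols, diff

x, a, b, c = symbols("x a b c")
p = a + b*x + c*x**2

print(p.subs(x, 0))           # a: so p(0) = 0  iff  a = 0
print(diff(p, x).subs(x, 0))  # b: so p'(0) = 0 iff  b = 0
```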
# src/section-D.xml

 <proof>
 <p>We want to prove that any set of $t+1$ or more vectors from $V$ is linearly dependent.  So we will begin with a totally arbitrary set of vectors from $V$, $R=\set{\vectorlist{u}{m}}$, where $m>t$.  We will now construct a nontrivial relation of linear dependence on $R$.</p>

-<p>Each vector $\vectorlist{u}{m}$ can be written as a linear combination of $\vectorlist{v}{t}$ since $S$ is a spanning set of $V$.  This means there exist scalars  $a_{ij}$, $1\leq i\leq t$, $1\leq j\leq m$, so that
+<p>Each vector $\vectorlist{u}{m}$ can be written as a linear combination of the vectors $\vectorlist{v}{t}$ since $S$ is a spanning set of $V$.  This means there exist scalars  $a_{ij}$, $1\leq i\leq t$, $1\leq j\leq m$, so that
 <alignmath>
 <![CDATA[\vect{u}_1&=a_{11}\vect{v}_1+a_{21}\vect{v}_2+a_{31}\vect{v}_3+\cdots+a_{t1}\vect{v}_t\\]]>
 <![CDATA[\vect{u}_2&=a_{12}\vect{v}_1+a_{22}\vect{v}_2+a_{32}\vect{v}_3+\cdots+a_{t2}\vect{v}_t\\]]>
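
A numerical illustration of the proof's mechanism (my own numbers, not from the text): $m = 3$ vectors built from a spanning set of $t = 2$ vectors are forced to be dependent, because the $t\times m$ coefficient matrix $[a_{ij}]$ has a nontrivial null space.

```python
# Sketch: a null vector of the coefficient matrix A builds a nontrivial
# relation of linear dependence on the vectors u_j.
import numpy as np

V = np.array([[1.0, 1.0],
              [0.0, 2.0]])            # columns v_1, v_2 span R^2 (t = 2)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # a_ij: u_j = a_1j v_1 + a_2j v_2
U = V @ A                             # columns u_1, u_2, u_3 (m = 3 > t)

_, _, Vt = np.linalg.svd(A)
xn = Vt[-1]                           # null vector of A (t < m guarantees one)
print(np.allclose(A @ xn, 0), np.allclose(U @ xn, 0))
```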
 <exercisesubsection>

 <exercise type="C" number="20" rough="rank and nullity for archetype matrices">
-<problem contributor="robertbeezer">The archetypes listed below are matrices, or systems of equations with coefficient matrices.  For each, compute the nullity and rank of the matrix.  This information is listed for each archetype (along with the number of columns in the matrix, so as to illustrate <acroref type="theorem" acro="RPNC" />), and notice how it could have been computed immediately after the determination of the sets $D$ and $F$ associated with the reduced row-echelon form of the matrix.<br /><br />
+<problem contributor="robertbeezer">The archetypes listed below are matrices, or systems of equations with coefficient matrices.  For each, compute the nullity and rank of the matrix.  This information is listed for each archetype (along with the number of columns in the matrix, so as to illustrate <acroref type="theorem" acro="RPNC" />), and notice how it could have been computed immediately after the determination of the sets $D$ and $F$ associated with the reduced row-echelon form of the matrix.<br />
 <acroref type="archetype" acro="A" />,
 <acroref type="archetype" acro="B" />,
 <acroref type="archetype" acro="C" />,

# src/section-DM.xml

 <theorem acro="DER" index="determinant!expansion, rows">
 <title>Determinant Expansion about Rows</title>
 <statement>
-<p>Suppose that $A$ is a square matrix of size $n$.  Then
+<p>Suppose that $A$ is a square matrix of size $n$.  Then for $1\leq i\leq n$,
 <alignmath>
 <![CDATA[\detname{A}&=]]>
 (-1)^{i+1}\matrixentry{A}{i1}\detname{\submatrix{A}{i}{1}}+
 <![CDATA[&\quad+(-1)^{i+3}\matrixentry{A}{i3}\detname{\submatrix{A}{i}{3}}+]]>
 \cdots+
 (-1)^{i+n}\matrixentry{A}{in}\detname{\submatrix{A}{i}{n}}
-<![CDATA[&&]]>
-1\leq i\leq n
 </alignmath>
 which is known as <define>expansion</define> about row $i$.</p>
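
A quick numerical check of the statement (sample matrix of my choosing, not from the text): expansion about any row $i$ reproduces $\detname{A}$.

```python
# Sketch: cofactor expansion about each row matches numpy's determinant.
import numpy as np

A = np.array([[2.0, 0, 2, 3],
              [1, 3, -1, 1],
              [-1, 1, -1, 2],
              [3, 5, 4, 0]])

def minor(A, i, j):
    """The submatrix A(i|j): delete row i and column j (0-indexed)."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

det = np.linalg.det(A)
for i in range(4):
    row_exp = sum((-1) ** (i + j) * A[i, j] * np.linalg.det(minor(A, i, j))
                  for j in range(4))
    print(np.isclose(row_exp, det))   # True for every row i
```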


 <p>Now,
 <alignmath>
-\detname{A}
-<![CDATA[&=]]>
+<![CDATA[&\detname{A}\\]]>
+<![CDATA[&\quad=]]>
 \sum_{j=1}^{n}(-1)^{1+j}\matrixentry{A}{1j}\detname{\submatrix{A}{1}{j}}
 <![CDATA[&&]]>\text{<acroref type="definition" acro="DM" />}\\
-<![CDATA[&=]]>
+<![CDATA[&\quad=]]>
 \sum_{j=1}^{n}(-1)^{1+j}\matrixentry{A}{1j}
 \sum_{\substack{1\leq\ell\leq n\\\ell\neq j}}
 (-1)^{i-1+\ell-\epsilon_{\ell j}}\matrixentry{A}{i\ell}\detname{\submatrix{A}{1,i}{j,\ell}}
-<![CDATA[&&\text{Induction Hypothesis}\\]]>
-<![CDATA[&=]]>
+<![CDATA[&&\text{Induction}\\]]>
+<![CDATA[&\quad=]]>
 \sum_{j=1}^{n}\sum_{\substack{1\leq\ell\leq n\\\ell\neq j}}
 (-1)^{j+i+\ell-\epsilon_{\ell j}}
 \matrixentry{A}{1j}\matrixentry{A}{i\ell}\detname{\submatrix{A}{1,i}{j,\ell}}
 <![CDATA[&&]]>\text{<acroref type="property" acro="DCN" />}\\
-<![CDATA[&=]]>
+<![CDATA[&\quad=]]>
 \sum_{\ell=1}^{n}\sum_{\substack{1\leq j\leq n\\j\neq\ell}}
 (-1)^{j+i+\ell-\epsilon_{\ell j}}
 \matrixentry{A}{1j}\matrixentry{A}{i\ell}\detname{\submatrix{A}{1,i}{j,\ell}}
 <![CDATA[&&]]>\text{<acroref type="property" acro="CACN" />}\\
-<![CDATA[&=]]>
+<![CDATA[&\quad=]]>
 \sum_{\ell=1}^{n}(-1)^{i+\ell}\matrixentry{A}{i\ell}
 \sum_{\substack{1\leq j\leq n\\j\neq\ell}}
 (-1)^{j-\epsilon_{\ell j}}
 \matrixentry{A}{1j}\detname{\submatrix{A}{1,i}{j,\ell}}
 <![CDATA[&&]]>\text{<acroref type="property" acro="DCN" />}\\
-<![CDATA[&=]]>
+<![CDATA[&\quad=]]>
 \sum_{\ell=1}^{n}(-1)^{i+\ell}\matrixentry{A}{i\ell}
 \sum_{\substack{1\leq j\leq n\\j\neq\ell}}
 (-1)^{\epsilon_{\ell j}+j}
 \matrixentry{A}{1j}\detname{\submatrix{A}{i,1}{\ell,j}}
 <![CDATA[&&\text{$2\epsilon_{\ell j}$ is even}\\]]>
-<![CDATA[&=]]>
+<![CDATA[&\quad=]]>
 \sum_{\ell=1}^{n}(-1)^{i+\ell}\matrixentry{A}{i\ell}\detname{\submatrix{A}{i}{\ell}}
 <![CDATA[&&]]>\text{<acroref type="definition" acro="DM" />}
 </alignmath>
 <theorem acro="DEC" index="determinant!expansion, columns">
 <title>Determinant Expansion about Columns</title>
 <statement>
-<p>Suppose that $A$ is a square matrix of size $n$.  Then
+<p>Suppose that $A$ is a square matrix of size $n$.  Then for $1\leq j\leq n$,
 <alignmath>
 <![CDATA[\detname{A}&=]]>
 (-1)^{1+j}\matrixentry{A}{1j}\detname{\submatrix{A}{1}{j}}+
 <![CDATA[&\quad+(-1)^{3+j}\matrixentry{A}{3j}\detname{\submatrix{A}{3}{j}}+]]>
 \cdots+
 (-1)^{n+j}\matrixentry{A}{nj}\detname{\submatrix{A}{n}{j}}
-<![CDATA[&&]]>
-1\leq j\leq n
 </alignmath>
 which is known as <define>expansion</define> about column $j$.</p>


# src/section-LISS.xml

 <example acro="LIP4" index="linearly independent!polynomials">
 <title>Linear independence in $P_4$</title>

-<p>In the vector space of polynomials with degree 4 or less, $P_4$ (<acroref type="example" acro="VSP" />) consider the set
-<equation>
-S=\set{
+<p>In the vector space of polynomials with degree 4 or less, $P_4$ (<acroref type="example" acro="VSP" />), consider the set $S$ below
+<alignmath>
+\set{
 2x^4+3x^3+2x^2-x+10,\,
 -x^4-2x^3+x^2+5x-8,\,
 2x^4+x^3+10x^2+17x-2
-}.
-</equation>
+}
+</alignmath>
 </p>

 <p>Is this set of vectors linearly independent or dependent?  Consider that

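The question can also be answered computationally (my own sketch): the rows below are the coefficient vectors of the three polynomials in $S$, and a nontrivial null vector of the transpose is a relation of linear dependence.

```python
# Sketch: nullspace of the transposed coefficient matrix detects a
# nontrivial relation among the three polynomials of S.
from sympy import Matrix

S = Matrix([[2, 3, 2, -1, 10],     # 2x^4 + 3x^3 + 2x^2 - x + 10
            [-1, -2, 1, 5, -8],    # -x^4 - 2x^3 + x^2 + 5x - 8
            [2, 1, 10, 17, -2]])   # 2x^4 + x^3 + 10x^2 + 17x - 2
relations = S.T.nullspace()
print(len(relations))   # a one-dimensional space of relations
```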
 <p>Using our definitions of vector addition and scalar multiplication in $P_4$ (<acroref type="example" acro="VSP" />), we arrive at,
 <alignmath>
-<![CDATA[0x^4+0x^3+0x^2+0x+0&=]]>
-\left(3\alpha_1-3\alpha_2+4\alpha_3+2\alpha_4\right)x^4+
-\left(-2\alpha_1+\alpha_2+5\alpha_3-7\alpha_4\right)x^3\\
-<![CDATA[&\quad +]]>
-\left(4\alpha_1+              -2\alpha_3+4\alpha_4\right)x^2+
-\left(6\alpha_1+4\alpha_2+3\alpha_3+2\alpha_4\right)x\\
-<![CDATA[&\quad +]]>
-\left(-\alpha_1+2\alpha_2+\alpha_3+\alpha_4\right).
+<![CDATA[&0x^4+0x^3+0x^2+0x+0=\\]]>
+<![CDATA[&\quad\left(3\alpha_1-3\alpha_2+4\alpha_3+2\alpha_4\right)x^4 + \left(-2\alpha_1+\alpha_2+5\alpha_3-7\alpha_4\right)x^3 +\ \\]]>
+<![CDATA[&\quad\left(4\alpha_1-2\alpha_3+4\alpha_4\right)x^2+\left(6\alpha_1+4\alpha_2+3\alpha_3+2\alpha_4\right)x + \left(-\alpha_1+2\alpha_2+\alpha_3+\alpha_4\right)]]>
 </alignmath>
 </p>

 </alignmath>
 and then massage it to a point where we can apply the definition of equality in $C$.  Recall the definitions of vector addition and scalar multiplication in $C$ are not what you would expect.
 <alignmath>
-(-1,\,-1)
+<![CDATA[(&-1,\,-1)\\]]>
 <![CDATA[&=\zerovector]]>
 <![CDATA[&&]]>\text{<acroref type="example" acro="CVS" />}\\
 <![CDATA[&=a_1(1,\,0) + a_2(6,\,3)]]>
 </p>

 <p>Any solution to this system of equations will provide the linear combination we need to determine if $r\in\spn{S}$, but we need to be convinced there is a solution for any values of $a,\,b,\,c,\,d,\,e$ that qualify $r$ to be a member of $W$.  So the question is:  is this system of equations consistent?  We will form the augmented matrix, and row-reduce. (We probably need to do this by hand, since the matrix is symbolic <mdash /> reversing the order of the first four rows is the best way to start).  We obtain a matrix in reduced row-echelon form
-<equation>
-\begin{bmatrix}
+<alignmath>
+<![CDATA[&\begin{bmatrix}]]>
 <![CDATA[\leading{1}&0&0&0&32a+12b+4c+d\\]]>
 <![CDATA[0&\leading{1}&0&0&24a+6b+c\\]]>
 <![CDATA[0&0&\leading{1}&0&8a+b\\]]>
 <![CDATA[0&0&0&\leading{1}&a\\]]>
 <![CDATA[0&0&0&0&16a+8b+4c+2d+e]]>
-\end{bmatrix}
-=
+\end{bmatrix}\\
+<![CDATA[=&]]>
 \begin{bmatrix}
 <![CDATA[\leading{1}&0&0&0&32a+12b+4c+d\\]]>
 <![CDATA[0&\leading{1}&0&0&24a+6b+c\\]]>
 <![CDATA[0&0&0&\leading{1}&a\\]]>
 <![CDATA[0&0&0&0&0]]>
 \end{bmatrix}
-</equation>
+</alignmath>
 </p>

 <p>For your results to match our first matrix, you may find it necessary to multiply the final row of your row-reduced matrix by the appropriate scalar, and/or add multiples of this row to some of the other rows.  To obtain the second version of the matrix, the last entry of the last column has been simplified to zero according to the one condition we were able to impose on an arbitrary polynomial from $W$.    So with no leading 1's in the last column, <acroref type="theorem" acro="RCLS" /> tells us this system is consistent.  Therefore, <em>any</em> polynomial from $W$ can be written as a linear combination of the polynomials in $S$, so $W\subseteq\spn{S}$. Therefore,  $W=\spn{S}$ and $S$ is a spanning set for $W$ by <acroref type="definition" acro="SSVS" />.</p>
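
A symbolic spot-check of the conclusion (my own sketch): the entries in the last column of the row-reduced matrix are exactly the coefficients that express an arbitrary $p\in W$ as a combination of the powers of $(x-2)$.

```python
# Sketch: with e determined by the condition p(2) = 0, the coefficients
# read off the row-reduced matrix rebuild p from the powers of (x - 2).
from sympy import symbols, expand

x, a, b, c, d = symbols("x a b c d")
e = -(16*a + 8*b + 4*c + 2*d)          # the one condition: p(2) = 0
p = a*x**4 + b*x**3 + c*x**2 + d*x + e

combo = ((32*a + 12*b + 4*c + d) * (x - 2)
         + (24*a + 6*b + c) * (x - 2)**2
         + (8*a + b) * (x - 2)**3
         + a * (x - 2)**4)
print(expand(p - combo))               # 0: any p in W lies in span(S)
```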
 </equation>
 </p>

-<p>We will act as if this equation is true and try to determine just what $a_1$ and $a_2$ would be (as functions of $x$ and $y$).
+<p>We will act as if this equation is true and try to determine just what $a_1$ and $a_2$ would be (as functions of $x$ and $y$).  Recall that our vector space operations are unconventional and are defined in <acroref type="example" acro="CVS" />.
 <alignmath>
 <![CDATA[(x,\,y)&=a_1(1,\,0) + a_2(6,\,3)\\]]>
-<![CDATA[&= (1a_1+a_1-1,\,0a_1+a_1-1) + (6a_2+a_2-1,\,3a_2+a_2-1)]]>
-<![CDATA[&&\text{Scalar mult in $C$}\\]]>
+<![CDATA[&= (1a_1+a_1-1,\,0a_1+a_1-1) + (6a_2+a_2-1,\,3a_2+a_2-1)\\]]>
 <![CDATA[&= (2a_1-1,\,a_1-1) + (7a_2-1,\,4a_2-1)\\]]>
-<![CDATA[&= (2a_1-1+7a_2-1+1,\,a_1-1+4a_2-1+1)]]>
-<![CDATA[&&\text{Addition in $C$}\\]]>
+<![CDATA[&= (2a_1-1+7a_2-1+1,\,a_1-1+4a_2-1+1)\\]]>
 <![CDATA[&= (2a_1+7a_2-1,\,a_1+4a_2-1)]]>
 </alignmath>
 </p>

 <p>We could chase through the above implications backwards and take the existence of these solutions as sufficient evidence for $R$ being a spanning set for $C$.  Instead, let us view the above as simply scratchwork and now get serious with a simple direct proof that $R$ is a spanning set.  Ready?  Suppose $(x,\,y)$ is any vector from $C$, then compute the following linear combination using the definitions of the operations in $C$,
 <alignmath>
-<![CDATA[(4x-7y-3)(1,\,0)&+(-x+2y+1)(6,\,3)\\]]>
+<![CDATA[(4x&-7y-3)(1,\,0)+(-x+2y+1)(6,\,3)\\]]>
 <![CDATA[&=\left(1(4x-7y-3)+(4x-7y-3)-1,\,0(4x-7y-3)+(4x-7y-3)-1\right)+\\]]>
 <![CDATA[&\quad\left(6(-x+2y+1)+(-x+2y+1)-1,\,3(-x+2y+1)+(-x+2y+1)-1\right)\\]]>
 <![CDATA[&=(8x-14y-7,\,4x-7y-4)+(-7x+14y+6,\,-4x+8y+3)\\]]>
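
The computation above can be replayed numerically (my own check), using the operations of Example CVS: addition $(x_1,y_1)+(x_2,y_2) = (x_1+x_2+1,\,y_1+y_2+1)$ and scalar multiplication $a(x,y) = (ax+a-1,\,ay+a-1)$.

```python
# Sketch: the claimed combination (4x-7y-3)(1,0) + (-x+2y+1)(6,3)
# lands back on (x, y) under the unconventional operations of C.
def cvs_add(u, v):
    return (u[0] + v[0] + 1, u[1] + v[1] + 1)

def cvs_smult(a, v):
    return (a * v[0] + a - 1, a * v[1] + a - 1)

for (x, y) in [(0, 0), (2, -3), (5, 7)]:
    w = cvs_add(cvs_smult(4*x - 7*y - 3, (1, 0)),
                cvs_smult(-x + 2*y + 1, (6, 3)))
    print(w == (x, y))   # True each time: R spans C
```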
 <proof>
 <p>That $\vect{w}$ can be written as a linear combination of the vectors in $B$ follows from the spanning property of the set (<acroref type="definition" acro="SSVS" />).  This is good, but not the meat of this theorem.  We now know that for any choice of the vector $\vect{w}$ there exist <em>some</em> scalars that will create $\vect{w}$ as a linear combination of the basis vectors.  The real question is:  Is there <em>more</em> than one way to write $\vect{w}$ as a linear combination of $\{\vectorlist{v}{m}\}$?  Are the scalars $a_1,\,a_2,\,a_3,\,\ldots,\,a_m$ unique?  (<acroref type="technique" acro="U" />)</p>

-<p>Assume there are two ways to express $\vect{w}$ as a linear combination of $\{\vectorlist{v}{m}\}$.  In other words there exist scalars $a_1,\,a_2,\,a_3,\,\ldots,\,a_m$ and $b_1,\,b_2,\,b_3,\,\ldots,\,b_m$ so that
+<p>Assume there are two different linear combinations of $\{\vectorlist{v}{m}\}$ that equal the vector $\vect{w}$.  In other words there exist scalars $a_1,\,a_2,\,a_3,\,\ldots,\,a_m$ and $b_1,\,b_2,\,b_3,\,\ldots,\,b_m$ so that
 <alignmath>
 <![CDATA[\vect{w}&=\lincombo{a}{v}{m}\\]]>
 <![CDATA[\vect{w}&=\lincombo{b}{v}{m}.]]>

# src/section-PDM.xml


 <p>We will perform a sequence of row operations on this matrix, shooting for an upper triangular matrix, whose determinant will be simply the product of its diagonal entries.  For each row operation, we will track the effect on the determinant via <acroref type="theorem" acro="DRCS" />, <acroref type="theorem" acro="DRCM" />, <acroref type="theorem" acro="DRCMA" />.
 <alignmath>
+<![CDATA[A&=]]>
+\begin{bmatrix}
+<![CDATA[2 & 0 & 2 & 3 \\]]>
+<![CDATA[1 & 3 &-1 & 1 \\]]>
+<![CDATA[-1& 1 &-1 & 2 \\]]>
+<![CDATA[3 & 5 & 4 & 0]]>
+\end{bmatrix}
+<![CDATA[&&\detname{A}&&]]>\\
 \xrightarrow{\rowopswap{1}{2}}
 <![CDATA[A_1&=]]>
 \begin{bmatrix}
 <![CDATA[3 & 5 & 4 & 0]]>
 \end{bmatrix}
 <![CDATA[&]]>
-<![CDATA[\detname{A}&=-\detname{A_1}&&]]>\text{<acroref type="theorem" acro="DRCS" />}\\
+<![CDATA[&=-\detname{A_1}&&]]>\text{<acroref type="theorem" acro="DRCS" />}\\
 \xrightarrow{\rowopadd{-2}{1}{2}}
 <![CDATA[A_2&=]]>
 \begin{bmatrix}
 </proof>
 </theorem>
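
A numerical companion to the bookkeeping above (my own check): a row swap negates the determinant, as Theorem DRCS asserts.

```python
# Sketch: swapping two rows of A flips the sign of the determinant.
import numpy as np

A = np.array([[2.0, 0, 2, 3],
              [1, 3, -1, 1],
              [-1, 1, -1, 2],
              [3, 5, 4, 0]])
A1 = A[[1, 0, 2, 3]]                       # swap rows 1 and 2
print(np.isclose(np.linalg.det(A), -np.linalg.det(A1)))  # True
```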

-<p>It is amazing that matrix multiplication and the determinant interact this way.  Might it also be true that $\detname{A+B}=\detname{A}+\detname{B}$?  (See <acroref type="exercise" acro="PDM.M30" />.)</p>
+<p>It is amazing that matrix multiplication and the determinant interact this way.  Might it also be true that $\detname{A+B}=\detname{A}+\detname{B}$?  (<acroref type="exercise" acro="PDM.M30" />)</p>
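
A quick experiment (mine, with arbitrary random matrices) contrasting the two identities: determinants are multiplicative, but almost never additive.

```python
# Sketch: det(AB) = det(A)det(B) holds; det(A+B) = det(A)+det(B) fails
# for generic matrices.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
print(np.isclose(np.linalg.det(A + B),
                 np.linalg.det(A) + np.linalg.det(B)))   # False, typically
```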

 <sageadvice acro="NME7" index="nonsingular matrices!round 7">
 <title>Nonsingular Matrices, Round 7</title>

# src/section-VS.xml


 <p>Set: $P_n$, the set of all polynomials of degree $n$ or less in the variable $x$ with coefficients from $\complex{\null}$.</p>
 <p>Equality:
-<equation>
-a_0+a_1x+a_2x^2+\cdots+a_nx^n=b_0+b_1x+b_2x^2+\cdots+b_nx^n
-\text{ if and only if }a_i=b_i\text{ for }0\leq i\leq n
-</equation></p>
+<alignmath>
+<![CDATA[a_0+a_1x+a_2x^2+\cdots+a_nx^n&=b_0+b_1x+b_2x^2+\cdots+b_nx^n\\]]>
+<![CDATA[\text{ if and only if }a_i&=b_i\text{ for }0\leq i\leq n]]>
+</alignmath></p>
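
A concrete restatement (my own sketch): equality in $P_n$ is coefficient-wise, which sympy's `Poly` comparison makes explicit.

```python
# Sketch: two polynomials are equal exactly when a_i = b_i for every i.
from sympy import symbols, Poly

x = symbols("x")
p = Poly(1 + 2*x + 3*x**2, x)
q = Poly(3*x**2 + 2*x + 1, x)    # same coefficients, written in reverse
print(p == q)                     # True
```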
 <p>Vector Addition:
 <alignmath>
 (a_0+a_1x+a_2x^2+\cdots+a_nx^n)+(b_0+b_1x+b_2x^2+\cdots+b_nx^n)=\\