Commits

rbeezer  committed 4e7024a

Minor edits, Sections MINM, CRS, FS

  • Parent commits ea959d3


Files changed (5)

 ~~~~~~~~~~~~~~~~
 Edit: Minor edits in Sections WILA, SSLE, RREF, TSS, HSE, NM
 Edit: Minor edits in Section VO, LC, SS, LI, LDS, O
-Edit: Minor edits in Section MO, MM, MISLE
+Edit: Minor edits in Sections MO, MM, MISLE, MINM, CRS, FS
 Edit: Extended slightly the conclusion of Theorem HSC
 Edit: Sage CNIP, removed obsolete discussion (Michael DuBois)
 Edit: Stronger finish to proof of Theorem TT (Anna Dovzhik)
 Edit: Section VS, added a semi-colon (Gavin Tranter)
 Edit: Theorem HMIP, more Theorem EMMVP motivation (Anna Dovzhik)
 Change: Theorem ISRN demoted to Exercise TSS.T11
-Change: Prefer "pivot columns" over "leading 1", Chapter SLE
+Change: Prefer "pivot columns" over "leading 1", Chapters SLE, V, M
 Change: Diagram DTSLS, use Theorem CSRN rather than FVCS (Chris Beaulaurier)
 Typo: C^n in statement of Theorem SLSLC
 Typo: Section LI, "scenees" (Jenna Fontaine)

File src/section-CRS.xml

 <definition acro="CSM" index="matrix!column space">
 <title>Column Space of a Matrix</title>
 <indexlocation index="column space!matrix" />
-<p>Suppose that $A$ is an $m\times n$ matrix with columns $\set{\vectorlist{A}{n}}$.  Then the <define>column space</define> of $A$, written $\csp{A}$, is the subset of $\complex{m}$ containing all linear combinations of the columns of $A$,
+<p>Suppose that $A$ is an $m\times n$ matrix with columns $\vectorlist{A}{n}$.  Then the <define>column space</define> of $A$, written $\csp{A}$, is the subset of $\complex{m}$ containing all linear combinations of the columns of $A$,
 <equation>
 \csp{A}=\spn{\set{\vectorlist{A}{n}}}
 </equation></p>
 
 <p>Some authors refer to the column space of a matrix as the <define>range</define>, but we will reserve this term for use with linear transformations (<acroref type="definition" acro="RLT" />).</p>
 
-<p>Upon encountering any new set, the first question we ask is what objects are in the set, and which objects are not?  Here's an example of one way to answer this question, and it will motivate a theorem that will then answer the question precisely.</p>
+<p>Upon encountering any new set, the first question we ask is what objects are in the set, and which objects are not?  Here is an example of one way to answer this question, and it will motivate a theorem that will then answer the question precisely.</p>
 
 <example acro="CSMCS" index="column space!consistent systems">
 <title>Column space of a matrix and consistent systems</title>
 
 <p><acroref type="archetype" acro="D" /> and <acroref type="archetype" acro="E" /> are linear systems of equations, with an identical $3\times 4$ coefficient matrix, which we call $A$ here.  However, <acroref type="archetype" acro="D" /> is consistent, while <acroref type="archetype" acro="E" /> is not.  We can explain this difference by employing the column space of the matrix $A$.</p>
 
-<p>The column vector of constants, $\vect{b}$, in <acroref type="archetype" acro="D" /> is
-<equation>
-\vect{b}=\colvector{8\\-12\\4}
-</equation>
-One solution to $\linearsystem{A}{\vect{b}}$, as listed, is
-<equation>
-\vect{x}=\colvector{7\\8\\1\\3}
-</equation>
+<p>The column vector of constants, $\vect{b}$, in <acroref type="archetype" acro="D" /> is given below, and one solution listed for $\linearsystem{A}{\vect{b}}$ is $\vect{x}$,
+
+<alignmath>
+\vect{b}&amp;=\colvector{8\\-12\\4}
+&amp;
+\vect{x}&amp;=\colvector{7\\8\\1\\3}
+</alignmath>
 </p>
 
 <p>By <acroref type="theorem" acro="SLSLC" />, we can summarize this solution as a linear combination of the columns of $A$ that equals $\vect{b}$,
 
 <p>Notice how this formulation of the column space looks very much like the definition of the null space of a matrix (<acroref type="definition" acro="NSM" />), but for a rectangular matrix the column vectors of $\csp{A}$ and $\nsp{A}$ have different sizes, so the sets are very different.</p>
 
-<p>Given a vector $\vect{b}$ and a matrix $A$ it is now very mechanical to test if $\vect{b}\in\csp{A}$.  Form the linear system $\linearsystem{A}{\vect{b}}$, row-reduce the augmented matrix, $\augmented{A}{\vect{b}}$, and test for consistency with <acroref type="theorem" acro="RCLS" />.  Here's an example of this procedure.</p>
+<p>Given a vector $\vect{b}$ and a matrix $A$ it is now very mechanical to test if $\vect{b}\in\csp{A}$.  Form the linear system $\linearsystem{A}{\vect{b}}$, row-reduce the augmented matrix, $\augmented{A}{\vect{b}}$, and test for consistency with <acroref type="theorem" acro="RCLS" />.  Here is an example of this procedure.</p>
 
 <example acro="MCSM" index="column space!testing membership">
 <title>Membership in the column space of a matrix</title>
 </equation>
 </p>
 
-<p>Without a leading 1 in the final column, <acroref type="theorem" acro="RCLS" /> tells us the system is consistent and therefore by <acroref type="theorem" acro="CSCS" />, $\vect{v}\in\csp{A}$.</p>
+<p>Since the final column is not a pivot column, <acroref type="theorem" acro="RCLS" /> tells us the system is consistent and therefore by <acroref type="theorem" acro="CSCS" />, $\vect{v}\in\csp{A}$.</p>
 
 <p>If we wished to demonstrate explicitly that $\vect{v}$ is a linear combination of the columns of $A$, we can find a solution (any solution) of $\linearsystem{A}{\vect{v}}$ and use <acroref type="theorem" acro="SLSLC" /> to construct the desired linear combination.  For example, set the free variables to $x_3=2$ and $x_4=1$.  Then a solution has $x_2=1$ and $x_1=6$.  Then by <acroref type="theorem" acro="SLSLC" />,
 <equation>
 </equation>
 </p>
 
-<p>With a leading 1 in the final column, <acroref type="theorem" acro="RCLS" /> tells us the system is inconsistent and therefore by <acroref type="theorem" acro="CSCS" />, $\vect{w}\not\in\csp{A}$.</p>
+<p>Since the final column is a pivot column, <acroref type="theorem" acro="RCLS" /> tells us the system is inconsistent and therefore by <acroref type="theorem" acro="CSCS" />, $\vect{w}\not\in\csp{A}$.</p>
 
 </example>
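As an editorial cross-check (not part of the source file), the membership test just illustrated is easy to sketch in Sage.  The matrix and vectors below are made up for illustration, not the data from the example above; column_space() and solve_right() are standard Sage matrix methods.

<sage>
<input># Illustrative matrix and vectors only
A = matrix(QQ, [[1, 2, 0],
                [0, 1, 1],
                [1, 3, 1]])
v = vector(QQ, [3, 1, 4])     # a linear combination of the columns of A
w = vector(QQ, [1, 0, 0])     # not a linear combination of the columns of A
# Theorem CSCS: b lies in the column space exactly when LS(A, b) is consistent
v in A.column_space(), w in A.column_space()   # (True, False)
# an explicit linear combination comes from any solution of LS(A, v)
A.solve_right(v)
</input>
</sage>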
 
 <theorem acro="BCS" index="column space!basis">
 <title>Basis of the Column Space</title>
 <statement>
-<p>Suppose that $A$ is an $m\times n$ matrix with columns $\vectorlist{A}{n}$, and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows.  Let $D=\{d_1,\,d_2,\,d_3,\,\ldots,\,d_r\}$ be the set of column indices where $B$ has leading 1's.  Let
+<p>Suppose that $A$ is an $m\times n$ matrix with columns $\vectorlist{A}{n}$, and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows.  Let $D=\{d_1,\,d_2,\,d_3,\,\ldots,\,d_r\}$ be the set of indices for the pivot columns of $B$.  Let
 $T=\set{\vect{A}_{d_1},\,\vect{A}_{d_2},\,\vect{A}_{d_3},\,\ldots,\,\vect{A}_{d_r}}$.  Then
 <ol><li> $T$ is a linearly independent set.
 </li><li> $\csp{A}=\spn{T}$.
 </proof>
 </theorem>
 
-<p>This is a nice result since it gives us a handful of vectors that describe the entire column space (through the span), and we believe this set is as small as possible because we cannot create any more relations of linear dependence to trim it down further.  Furthermore, we defined the column space (<acroref type="definition" acro="CSM" />) as all linear combinations of the columns of the matrix, and the elements of the set $T$ are still columns of the matrix (we won't be so lucky in the next two constructions of the column space).</p>
+<p>This is a nice result since it gives us a handful of vectors that describe the entire column space (through the span), and we believe this set is as small as possible because we cannot create any more relations of linear dependence to trim it down further.  Furthermore, we defined the column space (<acroref type="definition" acro="CSM" />) as all linear combinations of the columns of the matrix, and the elements of the set $T$ are still columns of the matrix (we will not be so lucky in the next two constructions of the column space).</p>
 
-<p>Procedurally this theorem is extremely easy to apply.  Row-reduce the original matrix, identify $r$ columns with leading 1's in this reduced matrix, and grab the corresponding columns of the original matrix.  But it is still important to study the proof of <acroref type="theorem" acro="BS" /> and its motivation in <acroref type="example" acro="COV" /> which lie at the root of this theorem.  We'll trot through an example all the same.</p>
+<p>Procedurally this theorem is extremely easy to apply.  Row-reduce the original matrix, identify the $r$ pivot columns of the reduced matrix, and grab the columns of the original matrix with the same indices as the pivot columns.  But it is still important to study the proof of <acroref type="theorem" acro="BS" /> and its motivation in <acroref type="example" acro="COV" /> which lie at the root of this theorem.  We will trot through an example all the same.</p>
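Ahead of the worked example below, here is a brief editorial Sage sketch of the same procedure.  The matrix is invented for illustration; pivots(), column() and column_space() are standard Sage methods, and note that Sage indexes columns from 0 while the text counts from 1.

<sage>
<input># Theorem BCS as a procedure: find the pivot column indices,
# then grab the columns of the original matrix at those indices
A = matrix(QQ, [[1, 2, 0],
                [0, 1, 1],
                [1, 3, 1]])
D = A.pivots()                       # pivot column indices, 0-based
T = [A.column(i) for i in D]
span(T, QQ) == A.column_space()      # True
</input>
</sage>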
 
 <example acro="CSOCD" index="column space!original columns,  Archetype D">
 <title>Column space, original columns, Archetype D</title>
 <indexlocation index="Archetype D!column space, original columns" />
 <p>Let's determine a compact expression for the entire column space of the coefficient matrix of the system of equations that is <acroref type="archetype" acro="D" />.  Notice that in <acroref type="example" acro="CSMCS" /> we were only determining if individual vectors were in the column space or not, now we are describing the entire column space.</p>
 
-<p>To start with the application of <acroref type="theorem" acro="BCS" />, call the coefficient matrix $A$
-<equation>
-A=<archetypepart acro="D" part="purematrix" /></equation>
-and row-reduce it to reduced row-echelon form,
-<equation>
-B=<archetypepart acro="D" part="matrixreduced" /></equation>
+<p>To start with the application of <acroref type="theorem" acro="BCS" />, call the coefficient matrix $A$ and row-reduce it to reduced row-echelon form $B$,
+<alignmath>
+A&amp;=<archetypepart acro="D" part="purematrix" />
+&amp;
+B&amp;=<archetypepart acro="D" part="matrixreduced" /></alignmath>
 </p>
 
-<p>There are leading 1's in columns 1 and 2, so $D=\{1,\,2\}$.  To construct a set that spans $\csp{A}$, just grab the columns of $A$ indicated by the set $D$, so
+<p>Since columns 1 and 2 are pivot columns, $D=\{1,\,2\}$.  To construct a set that spans $\csp{A}$, just grab the columns of $A$ with indices in $D$, so
 <equation>
 \csp{A}=\spn{\set{\colvector{2\\-3\\1},\,\colvector{1\\4\\1}}}.
 </equation>
 <title>Column space of Archetype A</title>
 
 <indexlocation index="Archetype A!column space" />
-<p>The coefficient matrix in <acroref type="archetype" acro="A" /> is
-<equation>
-A=<archetypepart acro="A" part="purematrix" /></equation>
-which row-reduces to
-<equation>
-<archetypepart acro="A" part="matrixreduced" /></equation></p>
+<p>The coefficient matrix in <acroref type="archetype" acro="A" /> is $A$, which row-reduces to $B$,
 
-<p>Columns 1 and 2 have leading 1's, so by <acroref type="theorem" acro="BCS" /> we can write
+<alignmath>A&amp;=<archetypepart acro="A" part="purematrix" />
+&amp;
+B&amp;=<archetypepart acro="A" part="matrixreduced" /></alignmath></p>
+
+<p>Columns 1 and 2 are pivot columns, so by <acroref type="theorem" acro="BCS" /> we can write
 <equation>
 \csp{A}=\spn{\set{\vect{A}_1,\,\vect{A}_2}}=\spn{\set{\colvector{1\\2\\1},\,\colvector{-1\\1\\1}}}.
 </equation>
 </equation>
 </p>
 
-<p>Since there are leading 1's in columns with indices $D=\set{1,\,2,\,3}$, the column space of $\transpose{I}$ can be spanned by just the first three columns of $\transpose{I}$,
+<p>Since the pivot columns have indices $D=\set{1,\,2,\,3}$, the column space of $\transpose{I}$ can be spanned by just the first three columns of $\transpose{I}$,
 <equation>
 \rsp{I}=\csp{\transpose{I}}=
 \spn{\set{
 
 </example>
 
-<p><acroref type="theorem" acro="REMRS" /> is at its best when one of the row-equivalent matrices is in reduced row-echelon form.  The vectors that correspond to the zero rows can be ignored. (Who needs the zero vector when building a span? See <acroref type="exercise" acro="LI.T10" />.)  The echelon pattern insures that the nonzero rows yield vectors that are linearly independent.  Here's the theorem.</p>
+<p><acroref type="theorem" acro="REMRS" /> is at its best when one of the row-equivalent matrices is in reduced row-echelon form.  The vectors that are zero rows can be ignored. (Who needs the zero vector when building a span? See <acroref type="exercise" acro="LI.T10" />.)  The echelon pattern insures that the nonzero rows yield vectors that are linearly independent.  Here is the theorem.</p>
 
 <theorem acro="BRS" index="row space!basis">
 <title>Basis for the Row Space</title>
 </statement>
 
 <proof>
-<p>From <acroref type="theorem" acro="REMRS" /> we know that $\rsp{A}=\rsp{B}$.  If $B$ has any zero rows, these correspond to columns of $\transpose{B}$ that are the zero vector.  We can safely toss out the zero vector in the span construction, since it can be recreated from the nonzero vectors by a linear combination where all the scalars are zero.  So $\rsp{A}=\spn{S}$.</p>
+<p>From <acroref type="theorem" acro="REMRS" /> we know that $\rsp{A}=\rsp{B}$.  If $B$ has any zero rows, these are columns of $\transpose{B}$ that are the zero vector.  We can safely toss out the zero vector in the span construction, since it can be recreated from the nonzero vectors by a linear combination where all the scalars are zero.  So $\rsp{A}=\spn{S}$.</p>
 
-<p>Suppose $B$ has $r$ nonzero rows and let $D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}$ denote the column indices of $B$ that have a leading one in them.  Denote the $r$ column vectors of $\transpose{B}$, the vectors in $S$, as $\vectorlist{B}{r}$.  To show that $S$ is linearly independent, start with a relation of linear dependence
+<p>Suppose $B$ has $r$ nonzero rows and let $D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}$ denote the  indices of the pivot columns of $B$.  Denote the $r$ column vectors of $\transpose{B}$, the vectors in $S$, as $\vectorlist{B}{r}$.  To show that $S$ is linearly independent, start with a relation of linear dependence
 <equation>
 \lincombo{\alpha}{B}{r}=\zerovector
 </equation>
 </p>
 
-<p>Now consider this vector equality in location $d_i$.  Since $B$ is in reduced row-echelon form, the entries of column $d_i$ of $B$ are all zero, except for a (leading) 1 in row $i$.  Thus, in $\transpose{B}$, row $d_i$ is all zeros, excepting a 1 in column $i$.   So, for $1\leq i\leq r$,
+<p>Now consider this vector equality in location $d_i$.  Since $B$ is in reduced row-echelon form, the entries of column $d_i$ of $B$ are all zero, except for a leading 1 in row $i$.  Thus, in $\transpose{B}$, row $d_i$ is all zeros, excepting a 1 in column $i$.   So, for $1\leq i\leq r$,
 <alignmath>
 0
 <![CDATA[&=\vectorentry{\zerovector}{d_i}&&]]><acroref type="definition" acro="ZCV" />\\
 </equation>
 </p>
 
-<p>These three vectors provide a much-improved description of $X$.  There are fewer vectors, and the pattern of zeros and ones in the first three entries makes it easier to determine membership in $X$.  And all we had to do was row-reduce the right matrix and toss out a zero row.  Next to row operations themselves, <em>this is probably the most powerful computational technique at your disposal</em> as it quickly provides a much improved description of a span, any span.</p>
+<p>These three vectors provide a much-improved description of $X$.  There are fewer vectors, and the pattern of zeros and ones in the first three entries makes it easier to determine membership in $X$.  </p>
 
 </example>
 
+<p>Notice that in <acroref type="example" acro="IAS" /> all we had to do was row-reduce the right matrix and toss out a zero row.  Next to row operations themselves, <acroref type="theorem" acro="BRS" /> <em>is probably the most powerful computational technique at your disposal</em> as it quickly provides a much improved description of a span, <em>any span</em> (row space, column space, <ellipsis />).</p>
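As an editorial aside, the technique is short enough to sketch in Sage; the four row vectors below are invented for illustration, and rref(), rows() and row_space() are standard Sage methods.

<sage>
<input># Improve a span: put the spanning vectors into rows, row-reduce,
# and keep the nonzero rows (Theorem BRS)
A = matrix(QQ, [[1, 1, 2],
                [2, 3, 5],
                [1, 2, 3],
                [3, 4, 7]])
B = A.rref()
S = [r for r in B.rows() if not r.is_zero()]
S                                    # two vectors, with leading 1's
span(S, QQ) == A.row_space()         # True
</input>
</sage>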
+
 <p><acroref type="theorem" acro="BRS" /> and the techniques of <acroref type="example" acro="IAS" /> will provide yet another description of the column space of a matrix.  First we state a triviality as a theorem, so we can reference it later.</p>
 
 <theorem acro="CSRST" index="column space!as row space">
 </proof>
 </theorem>
 
-<p>So to find another expression for the column space of a matrix, build its transpose, row-reduce it, toss out the zero rows, and convert the nonzero rows to column vectors to yield an improved set for the span construction.  We'll do <acroref type="archetype" acro="I" />, then you do <acroref type="archetype" acro="J" />.</p>
+<p>So to find another expression for the column space of a matrix, build its transpose, row-reduce it, toss out the zero rows, and convert the nonzero rows to column vectors to yield an improved set for the span construction.  We will do <acroref type="archetype" acro="I" />, then you do <acroref type="archetype" acro="J" />.</p>
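Before the Archetype I computation below, here is the same recipe as a small editorial Sage sketch (transpose, row-reduce, discard zero rows); the matrix is illustrative only, and transpose(), rref() and column_space() are standard Sage methods.

<sage>
<input># Column space via the row space of the transpose
# (Theorem CSRST together with Theorem BRS)
A = matrix(QQ, [[1, 2, 0],
                [0, 1, 1],
                [1, 3, 1]])
B = A.transpose().rref()
S = [r for r in B.rows() if not r.is_zero()]
span(S, QQ) == A.column_space()      # True
</input>
</sage>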
 
 <example acro="CSROI" index="column space!row operations, Archetype I">
 <title>Column space from row operations, Archetype I</title>
 </output>
 </sage>
 
-We begin with the same four vectors in <acroref type="example" acro="IAS" /> and create their span, the vector space <code>X</code>.  The matrix <code>A</code> has these four vectors as rows and <code>B</code> is the reduced row-echelon form of <code>A</code>.  Then the row spaces of <code>A</code> and <code>B</code> are equal to the vector space <code>X</code> (and each other).  The way Sage describes this vector space is with a matrix whose rows <em>are the non-zero rows of the reduced row-echelon form of the matrix</em> <code>A</code>.  This is <acroref type="theorem" acro="BRS" /> in action where we go with Sage's penchant for rows and ignore the text's penchant for columns.<br /><br />
+We begin with the same four vectors in <acroref type="example" acro="IAS" /> and create their span, the vector space <code>X</code>.  The matrix <code>A</code> has these four vectors as rows and <code>B</code> is the reduced row-echelon form of <code>A</code>.  Then the row spaces of <code>A</code> and <code>B</code> are equal to the vector space <code>X</code> (and each other).  The way Sage describes this vector space is with a matrix whose rows <em>are the nonzero rows of the reduced row-echelon form of the matrix</em> <code>A</code>.  This is <acroref type="theorem" acro="BRS" /> in action where we go with Sage's penchant for rows and ignore the text's penchant for columns.<br /><br />
 We can illustrate a few other results about row spaces with Sage.  Discussion follows.
 <sage>
 <input>A = matrix(QQ,[[1,  1, 0, -1,  1,  1, 0],
 <exercisesubsection>
 
 <exercise type="C" number="20" rough="Repeated calculations of C(A) and R(A);  lead to rank">
-<problem contributor="chrisblack">For parts (1), (2) and (3), find a set of linearly independent vectors $X$ so that $\csp{A} = \spn{X}$, and a set of linearly independent vectors $Y$ so that $\rsp{A} = \spn{Y}$.
-<ol>
-<li>$A = <![CDATA[\begin{bmatrix} 1 & 2 & 3 & 1 \\ 0 & 1 & 1 & 2\\ 1 & -1 & 2 & 3 \\ 1 & 1 & 2 & -1 \end{bmatrix}]]>$</li>
-<li>$A = <![CDATA[\begin{bmatrix}  1 & 2 & 1 & 1 & 1 \\ 3 & 2 & -1 & 4 & 5\\ 0 & 1 & 1 & 1 & 2 \end{bmatrix}]]>$</li>
-<li>$A = <![CDATA[\begin{bmatrix}  2 & 1 & 0 \\ 3 & 0 & 3\\ 1 & 2 & -3 \\ 1 & 1 & -1 \\ 1 & 1 & -1\end{bmatrix}]]>$</li>
-<li>From your results in parts (1) - (3), can you formulate a conjecture about the sets $X$ and $Y$?</li>
-</ol>
+<problem contributor="chrisblack">For each matrix below, find a set of linearly independent vectors $X$ so that $\spn{X}$ equals the column space of the matrix, and a set of linearly independent vectors $Y$ so that $\spn{Y}$ equals the row space of the matrix.
+<alignmath>A&amp;=<![CDATA[\begin{bmatrix} 1 & 2 & 3 & 1 \\ 0 & 1 & 1 & 2\\ 1 & -1 & 2 & 3 \\ 1 & 1 & 2 & -1 \end{bmatrix}]]>
+&amp;
+B&amp;=<![CDATA[\begin{bmatrix}  1 & 2 & 1 & 1 & 1 \\ 3 & 2 & -1 & 4 & 5\\ 0 & 1 & 1 & 1 & 2 \end{bmatrix}]]>
+&amp;
+C&amp;=<![CDATA[\begin{bmatrix}  2 & 1 & 0 \\ 3 & 0 & 3\\ 1 & 2 & -3 \\ 1 & 1 & -1 \\ 1 & 1 & -1\end{bmatrix}]]></alignmath>
+From your results for these three matrices, can you formulate a conjecture about the sets $X$ and $Y$?
 </problem>
 </exercise>
 
 </equation>
 <em>was</em> in the column space of $A$.   Attempt to write $\vect{c}$ and $\vect{b}$ as linear combinations of the two vectors in the span construction for the column space in <acroref type="example" acro="CSOCD" /> and record your observations.
 </problem>
-<solution contributor="robertbeezer">In each case, begin with a vector equation where one side contains a linear combination of the two vectors from the span construction that gives the column space of $A$ with unknowns for scalars, and then use <acroref type="theorem" acro="SLSLC" /> to set up a system of equations.  For $\vect{c}$, the corresponding system has no solution, as we would expect.<br /><br />
+<solution contributor="robertbeezer">In each case, begin with a vector equation where one side contains a linear combination of the two vectors from the span construction that gives the column space of $A$ with unknowns for scalars, and then use <acroref type="theorem" acro="SLSLC" /> to set up a system of equations.  For $\vect{c}$, the resulting system has no solution, as we would expect.<br /><br />
 For $\vect{b}$ there is a solution, as we would expect.  What is interesting is that the solution is unique.  This is a consequence of the linear independence of the set of two vectors in the span construction.  If we wrote $\vect{b}$ as a linear combination of all four columns of $A$, then there would be infinitely many ways to do this.
 </solution>
 </exercise>
 \end{bmatrix}
 </equation>
 </problem>
-<solution contributor="robertbeezer"><acroref type="theorem" acro="BCS" /> is the right tool for this problem.  Row-reduce this matrix, identify the pivot columns and then grab the corresponding columns of $A$ for the set $T$.  The matrix $A$ row-reduces to
+<solution contributor="robertbeezer"><acroref type="theorem" acro="BCS" /> is the right tool for this problem.  Row-reduce this matrix, identify the pivot columns and then grab the columns of $A$ with the same indices for the set $T$.  The matrix $A$ row-reduces to
 <equation>
 \begin{bmatrix}
 <![CDATA[ \leading{1} & 0 & 3 & 0 & 0 \\]]>
 \end{bmatrix}
 </equation>
 </problem>
-<solution contributor="robertbeezer"><acroref type="theorem" acro="BRS" /> is the most direct route to a set with these properties.  Row-reduce, toss zero rows, keep the others.  You could also transpose the matrix, then look for the column space by row-reducing the transpose and applying <acroref type="theorem" acro="BCS" />.  We'll do the former,
+<solution contributor="robertbeezer"><acroref type="theorem" acro="BRS" /> is the most direct route to a set with these properties.  Row-reduce, toss zero rows, keep the others.  You could also transpose the matrix, then look for the column space by row-reducing the transpose and applying <acroref type="theorem" acro="BCS" />.  We will do the former,
 <equation>
 B\rref
 \begin{bmatrix}
 <![CDATA[ 0 & 0 & 0 & \leading{1}]]>
 \end{bmatrix}
 </alignmath>
-and with a leading 1 in the final column <acroref type="theorem" acro="RCLS" /> tells us the linear system is inconsistent and so $\vect{y}\not\in\rsp{A}$.
+and since the final column is a pivot column, <acroref type="theorem" acro="RCLS" /> tells us the linear system is inconsistent and so $\vect{y}\not\in\rsp{A}$.
 </solution>
 </exercise>
 
 <![CDATA[0 & 0 & \leading{1} & 1]]>
 \end{bmatrix}
 </equation>
-If we augment $E$ with any vector of constants, and row-reduce the augmented matrix, we will never find a leading 1 in the final column, so by <acroref type="theorem" acro="RCLS" /> the system will always be consistent.<br /><br />
+If we augment $E$ with any vector of constants, and row-reduce the augmented matrix, the final column will never be a pivot column, so by <acroref type="theorem" acro="RCLS" /> the system will always be consistent.<br /><br />
 Said another way, the column space of $E$ is all of $\complex{3}$, $\csp{E}=\complex{3}$.  So by <acroref type="theorem" acro="CSCS" /> any vector of constants will create a consistent system (and none will create an inconsistent system).
 </solution>
 </exercise>
 This is <em>not</em> a legitimate procedure, and therefore is <em>not</em> a theorem.  Construct an example to show that the procedure will not in general create the column space of $A$.
 </problem>
 <solution contributor="robertbeezer">Begin with a matrix $A$ (of any size) that does not have any zero rows, but which when row-reduced to $B$ yields at least one row of zeros.  Such a matrix should be easy to construct (or find, like say from <acroref type="archetype" acro="A" />).<br /><br />
-$\csp{A}$ will contain some vectors whose final slot (entry $m$) is non-zero, however, every column vector from the matrix $B$ will have a zero in slot $m$ and so every vector in $\csp{B}$ will also contain a zero in the final slot.  This means that $\csp{A}\neq\csp{B}$, since we have vectors in $\csp{A}$ that cannot be elements of $\csp{B}$.
+$\csp{A}$ will contain some vectors whose final slot (entry $m$) is nonzero, however, every column vector from the matrix $B$ will have a zero in slot $m$ and so every vector in $\csp{B}$ will also contain a zero in the final slot.  This means that $\csp{A}\neq\csp{B}$, since we have vectors in $\csp{A}$ that cannot be elements of $\csp{B}$.
 </solution>
 </exercise>
 

File src/section-FS.xml

 <subsection acro="CCS">
 <title>Computing Column Spaces</title>
 
-<p>We have three ways to build the column space of a matrix.  First, we can use just the definition, <acroref type="definition" acro="CSM" />, and express the column space as a span of the columns of the matrix.  A second approach gives us the column space as the span of <em>some</em> of the columns of the matrix, but this set is linearly independent (<acroref type="theorem" acro="BCS" />).  Finally, we can transpose the matrix, row-reduce the transpose, kick out zero rows, and transpose the remaining rows back into column vectors.  <acroref type="theorem" acro="CSRST" /> and <acroref type="theorem" acro="BRS" /> tell us that the resulting vectors are linearly independent and their span is the column space of the original matrix.</p>
+<p>We have three ways to build the column space of a matrix.  First, we can use just the definition, <acroref type="definition" acro="CSM" />, and express the column space as a span of the columns of the matrix.  A second approach gives us the column space as the span of <em>some</em> of the columns of the matrix, and additionally, this set is linearly independent (<acroref type="theorem" acro="BCS" />).  Finally, we can transpose the matrix, row-reduce the transpose, kick out zero rows, and write the remaining rows as column vectors.  <acroref type="theorem" acro="CSRST" /> and <acroref type="theorem" acro="BRS" /> tell us that the resulting vectors are linearly independent and their span is the column space of the original matrix.</p>
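An editorial Sage sketch of the three descriptions just listed, all applied to one small made-up matrix; columns(), pivots(), column(), row_space() and column_space() are standard Sage methods.

<sage>
<input>A = matrix(QQ, [[1, 2, 0],
                [0, 1, 1],
                [1, 3, 1]])
V1 = span(A.columns(), QQ)                        # Definition CSM
V2 = span([A.column(i) for i in A.pivots()], QQ)  # Theorem BCS
V3 = A.transpose().row_space()                    # Theorems CSRST, BRS
V1 == V2 == V3                                    # True
</input>
</sage>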
 
 <p>We will now demonstrate a fourth method by way of a rather complicated example.  Study this example carefully, but realize that its main purpose is to motivate a theorem that simplifies much of the apparent complexity.  So other than an instructive exercise or two, the procedure we are about to describe will not be a usual approach to computing a column space.</p>
 
 </equation>
 </p>
 
-<p>To identify solutions we will bring this matrix to reduced row-echelon form.  Despite the presence of variables in the last column, there is nothing to stop us from doing this, except our numerical routines on calculators can't be used, and even some of the symbolic algebra routines do some unexpected maneuvers with this computation.  So do it by hand.  Yes, it is a bit of work.  But worth it.  We'll still be here when you get back.  Notice along the way that the row operations are <em>exactly</em> the same ones you would do if you were just row-reducing the coefficient matrix alone, say in connection with a homogeneous system of equations.  The column with the $b_i$ acts as a sort of bookkeeping device.  There are many different possibilities for the result, depending on what order you choose to perform the row operations, but shortly we'll all be on the same page.  Here's one possibility (you can find this same result by doing additional row operations with the fifth and sixth rows to remove any occurrences of $b_1$ and $b_2$ from the first four rows of your result):
+<p>To identify solutions we will bring this matrix to reduced row-echelon form.  Despite the presence of variables in the last column, there is nothing to stop us from doing this, except our numerical routines on calculators cannot be used, and even some of the symbolic algebra routines do some unexpected maneuvers with this computation.  So do it by hand.  Yes, it is a bit of work.  But worth it.  We will still be here when you get back.  Notice along the way that the row operations are <em>exactly</em> the same ones you would do if you were just row-reducing the coefficient matrix alone, say in connection with a homogeneous system of equations.  The column with the $b_i$ acts as a sort of bookkeeping device.  There are many different possibilities for the result, depending on what order you choose to perform the row operations, but shortly we will all be on the same page.  If you want to match our work right now, use row 5 to remove any occurrence of $b_1$ from the other entries of the last column, and use row 6 to remove any occurrence of $b_2$ from the last column.  Then we have:
 <alignmath>
 \begin{bmatrix}
 <![CDATA[ \leading{1} & 0 & 0 & 0 & 2 & b_3 - b_4 + 2 b_5 - b_6\\]]>
 </alignmath>
 </p>
 
-<p>Our goal is to identify those vectors $\vect{b}$ which make $\linearsystem{A}{\vect{b}}$ consistent.  By <acroref type="theorem" acro="RCLS" /> we know that the consistent systems are precisely those without a leading 1 in the last column.  Are the expressions in the last column of rows 5 and 6 equal to zero, or are they leading 1's?  The answer is: maybe.  It depends on $\vect{b}$.  With a nonzero value for either of these expressions, we would scale the row and produce a leading 1.   So we get a consistent system, and $\vect{b}$ is in the column space, if and only if these two expressions are both simultaneously zero.  In other words, members of the column space of $A$ are exactly those vectors $\vect{b}$ that satisfy
+<p>Our goal is to identify those vectors $\vect{b}$ which make $\linearsystem{A}{\vect{b}}$ consistent.  By <acroref type="theorem" acro="RCLS" /> we know that the consistent systems are precisely those for which the last column is not a pivot column.  Are the expressions in the last column of rows 5 and 6 equal to zero, or are they leading 1's?  The answer is: maybe.  It depends on $\vect{b}$.  With a nonzero value for either of these expressions, we would scale the row and produce a leading 1.   So we get a consistent system, and $\vect{b}$ is in the column space, if and only if these two expressions are both simultaneously zero.  In other words, members of the column space of $A$ are exactly those vectors $\vect{b}$ that satisfy
 <alignmath>
 <![CDATA[b_1 + 3 b_3 - b_4 + 3 b_5 + b_6 & = 0\\]]>
 <![CDATA[b_2 - 2 b_3 + b_4 + b_5 - b_6 & = 0]]>
 
 <p>Whew!  As a postscript to this central example, you may wish to convince yourself that the four vectors above really are elements of the column space.  Do they create consistent systems with $A$ as coefficient matrix?  Can you recognize the constant vector in your description of these solution sets?</p>
 
-<p>OK, that was so much fun, let's do it again.  But simpler this time.  And we'll all get the same results all the way through.  Doing row operations by hand with variables can be a bit error prone, so let's see if we can improve the process some.  Rather than row-reduce a column vector $\vect{b}$ full of variables, let's write $\vect{b}=I_6\vect{b}$ and we will row-reduce the matrix $I_6$ and when we finish row-reducing, <em>then</em> we will compute the matrix-vector product.  You should first convince yourself that we can operate like this (this is the subject of a future homework exercise).
+<p>OK, that was so much fun, let's do it again.  But simpler this time.  And we will all get the same results all the way through.  Doing row operations by hand with variables can be a bit error prone, so let's see if we can improve the process some.  Rather than row-reduce a column vector $\vect{b}$ full of variables, let's write $\vect{b}=I_6\vect{b}$ and we will row-reduce the matrix $I_6$ and when we finish row-reducing, <em>then</em> we will compute the matrix-vector product.  You should first convince yourself that we can operate like this (this is the subject of a future homework exercise).
 <!-- %TODO (see <acroref type="exercise" acro="XX-commutingops-todo" /> on commuting operations). -->
 Rather than augmenting $A$ with $\vect{b}$, we will instead augment it with $I_6$ (does this feel familiar?),
 <equation>
 </equation>
 </p>
 
-<p>We want to row-reduce the left-hand side of this matrix, but we will apply the same row operations to the right-hand side as well.  And once we get the left-hand side in reduced row-echelon form, we will continue on to put leading 1's in the final two rows, as well as clearing out the columns containing those two additional leading 1's.  It is these additional row operations that will ensure that we all get to the same place, since the reduced row-echelon form is unique (<acroref type="theorem" acro="RREFU" />),
+<p>We want to row-reduce the left-hand side of this matrix, but we will apply the same row operations to the right-hand side as well.  And once we get the left-hand side in reduced row-echelon form, we will continue on to put leading 1's in the final two rows, and clear out the columns containing these two additional leading 1's, so that they become pivot columns.  It is these additional row operations that will ensure that we all get to the same place, since the reduced row-echelon form is unique (<acroref type="theorem" acro="RREFU" />),
 <equation>
 N=
 \begin{bmatrix}
 <title>Extended Echelon Form</title>
 <p>Suppose $A$ is an $m\times n$ matrix.  Extend $A$ on its right side with the addition of an $m\times m$ identity matrix to form an $m\times (n + m)$ matrix M.  Use row operations to bring $M$ to reduced row-echelon form and call the result $N$.  $N$ is the <define>extended reduced row-echelon form</define> of $A$, and we will standardize on names for five submatrices ($B$, $C$, $J$, $K$, $L$) of $N$.</p>
 
-<p>Let $B$ denote the $m\times n$ matrix formed from the first $n$ columns of $N$ and let $J$ denote the $m\times m$ matrix formed from the last $m$ columns of $N$.  Suppose that $B$ has $r$ nonzero rows.  Further partition $N$ by letting $C$ denote the $r\times n$ matrix formed from all of the non-zero rows of $B$.  Let $K$ be the $r\times m$ matrix formed from the first $r$ rows of $J$, while $L$ will be the $(m-r)\times m$ matrix formed from the bottom $m-r$ rows of $J$.  Pictorially,
+<p>Let $B$ denote the $m\times n$ matrix formed from the first $n$ columns of $N$ and let $J$ denote the $m\times m$ matrix formed from the last $m$ columns of $N$.  Suppose that $B$ has $r$ nonzero rows.  Further partition $N$ by letting $C$ denote the $r\times n$ matrix formed from all of the nonzero rows of $B$.  Let $K$ be the $r\times m$ matrix formed from the first $r$ rows of $J$, while $L$ will be the $(m-r)\times m$ matrix formed from the bottom $m-r$ rows of $J$.  Pictorially,
 <equation>
 M=[A\vert I_m]
 \rref
 N=[B\vert J]
 =
-<![CDATA[\left[\begin{array}{cc}C&K\\0&L\end{array}\right]]]>
+<![CDATA[\left[\begin{array}{c|c}C&K\\\hline0&L\end{array}\right]]]>
 </equation>
 </p>
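As a purely editorial aside (the Sage discussion later in this section treats this properly), Sage can produce the extended echelon form and its subdivisions directly; the matrix below is just a placeholder, and extended_echelon_form(), subdivision() and right_kernel() are standard Sage methods.

<sage>
<input>A = matrix(QQ, [[1, 2, 0],
                [0, 1, 1],
                [1, 3, 1]])
N = A.extended_echelon_form(subdivide=True)
C = N.subdivision(0, 0)    # nonzero rows of B, first n columns
L = N.subdivision(1, 1)    # last m - r rows of J
A.column_space() == L.right_kernel()   # True, previewing Theorem FS
</input>
</sage>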
 
 <proof>
 <p>$J$ is the result of applying a sequence of row operations to $I_m$, and therefore $J$ and $I_m$ are row-equivalent.  $\homosystem{I_m}$ has only the zero solution, since $I_m$ is nonsingular (<acroref type="theorem" acro="NMRRI" />).  Thus, $\homosystem{J}$ also has only the zero solution (<acroref type="theorem" acro="REMES" />, <acroref type="definition" acro="ESYS" />) and $J$ is therefore nonsingular (<acroref type="definition" acro="NSM" />).</p>
 
-<p>To prove the second part of this conclusion, first convince yourself that row operations and the matrix-vector product are commutative operations.  By this we mean the following.
+<p>To prove the second part of this conclusion, first convince yourself that row operations and the matrix-vector product are associative operations.  By this we mean the following.
 Suppose that $F$ is an $m\times n$ matrix that is row-equivalent to the matrix $G$.  Apply to the column vector $F\vect{w}$ the same sequence of row operations that converts $F$ to $G$.  Then the result is $G\vect{w}$.  So we can do row operations on the matrix, then do a matrix-vector product, <em>or</em> do a matrix-vector product and then do row operations on a column vector, and the result will be the same either way.  Since matrix multiplication is defined by a collection of matrix-vector products (<acroref type="definition" acro="MM" />), the matrix product $FH$ will become $GH$ if we apply the same sequence of row operations to $FH$ that convert $F$ to $G$.  (This argument can be made more rigorous using elementary matrices from the upcoming <acroref type="subsection" acro="DM.EM" /> and the associative property of matrix multiplication established in <acroref type="theorem" acro="MMA" />.)  Now apply these observations to $A$.</p>
 
 <p>Write $AI_n=I_mA$ and apply the row operations that convert $M$ to $N$.  $A$ is converted to $B$, while $I_m$ is converted to $J$, so we have $BI_n=JA$.  Simplifying the left side gives the desired conclusion.</p>
 
 <p>The forward direction of the first equivalence is accomplished by multiplying both sides of the matrix equality by $J$, while the backward direction is accomplished by multiplying by the inverse of $J$ (which we know exists by <acroref type="theorem" acro="NI" /> since $J$ is nonsingular).  The second equivalence is obtained simply by the substitutions given by $JA=B$.</p>
 
-<p>The first $r$ rows of $N$ are in reduced row-echelon form, since any contiguous collection of rows taken from a matrix in reduced row-echelon form will form a matrix that is again in reduced row-echelon form.   Since the matrix $C$ is formed by removing the last $n$ entries of each these rows, the remainder is still in reduced row-echelon form.  By its construction, $C$ has no zero rows. $C$ has $r$ rows and each contains a leading 1, so there are $r$ pivot columns in $C$.</p>
+<p>The first $r$ rows of $N$ are in reduced row-echelon form, since any contiguous collection of rows taken from a matrix in reduced row-echelon form will form a matrix that is again in reduced row-echelon form (<acroref type="exercise" acro="RREF.T12" />).   Since the matrix $C$ is formed by removing the last $m$ entries of each of these rows, the remainder is still in reduced row-echelon form.  By its construction, $C$ has no zero rows. $C$ has $r$ rows and each contains a leading 1, so there are $r$ pivot columns in $C$.</p>
 
 <p>The final $m-r$ rows of $N$ are in reduced row-echelon form, since any contiguous collection of rows taken from a matrix in reduced row-echelon form will form a matrix that is again in reduced row-echelon form.  Since the matrix $L$ is formed by removing the first $n$ entries of each of these rows, and these entries are all zero (they form the zero rows of $B$), the remainder is still in reduced row-echelon form.  $L$ is the final $m-r$ rows of the nonsingular matrix $J$, so none of these rows can be totally zero, or $J$ would not row-reduce to the identity matrix.  $L$ has $m-r$ rows and each contains a leading 1, so there are $m-r$ pivot columns in $L$.</p>
 
 
 <p>The situation when $r=m$ deserves comment, since now the matrix $L$ has no rows.  What is $\csp{A}$ when we try to apply <acroref type="theorem" acro="FS" /> and encounter $\nsp{L}$?  One interpretation of this situation is that $L$ is the coefficient matrix of a homogeneous system that has no equations.  How hard is it to find a solution vector to this system?  Some thought will convince you that <em>any</em> proposed vector will qualify as a solution, since it makes <em>all</em> of the equations true.  So every possible vector is in the null space of $L$ and therefore $\csp{A}=\nsp{L}=\complex{m}$.  OK, perhaps this sounds like some twisted argument from <i>Alice in Wonderland</i>.  Let us try another argument that might solidly convince you of this logic.</p>
 
-<p>If $r=m$, when we row-reduce the augmented matrix of $\linearsystem{A}{\vect{b}}$ the result will have no zero rows, and all the leading 1's will occur in first $n$ columns, so by <acroref type="theorem" acro="RCLS" /> the system will be consistent.  By <acroref type="theorem" acro="CSCS" />, $\vect{b}\in\csp{A}$.  Since $\vect{b}$ was arbitrary, every possible vector is in the column space of $A$, so we again have $\csp{A}=\complex{m}$.  The situation when a matrix has $r=m$ is known by the term <define>full rank</define>, and in the case of a square matrix coincides with nonsingularity (see <acroref type="exercise" acro="FS.M50" />).</p>
+<p>If $r=m$, when we row-reduce the augmented matrix of $\linearsystem{A}{\vect{b}}$ the result will have no zero rows, and every pivot column will occur among the first $n$ columns, leaving none for the final column, so by <acroref type="theorem" acro="RCLS" /> the system will be consistent.  By <acroref type="theorem" acro="CSCS" />, $\vect{b}\in\csp{A}$.  Since $\vect{b}$ was arbitrary, every possible vector is in the column space of $A$, so we again have $\csp{A}=\complex{m}$.  The situation when a matrix has $r=m$ is known by the term <define>full rank</define>, and in the case of a square matrix coincides with nonsingularity (see <acroref type="exercise" acro="FS.M50" />).</p>
 
 <p>The properties of the matrix $L$ described by this theorem can be explained informally as follows.  A column vector $\vect{y}\in\complex{m}$ is in the column space of $A$ if the linear system $\linearsystem{A}{\vect{y}}$ is consistent (<acroref type="theorem" acro="CSCS" />).  By <acroref type="theorem" acro="RCLS" />, the reduced row-echelon form of the augmented matrix $\augmented{A}{\vect{y}}$ of a consistent system will have zeros in the bottom $m-r$ locations of the last column.  By <acroref type="theorem" acro="PEEF" /> this final column is the vector $J\vect{y}$ and so should then have zeros in the final $m-r$ locations.  But since $L$ comprises the final $m-r$ rows of $J$, this condition is expressed by saying $\vect{y}\in\nsp{L}$.</p>
 
 
 <sageadvice acro="EEF" index="extended echelon form">
 <title>Extended Echelon Form</title>
-Sage will compute the extended echelon form, an improvement over what we did <q>by hand</q> in <acroref type="sage" acro="MISLE" />.  And the output can be requested as a <q>subdivided</q> matrix so that we can easily work with the pieces $C$ and $L$.  Pieces of subdivided matrices can be extracted with indices entirely similar to how we index the actual entries of a matrix.  We'll redo <acroref type="example" acro="FS2" /> as an illustration of <acroref type="theorem" acro="FS" /> and <acroref type="theorem" acro="PEEF" />.
+Sage will compute the extended echelon form, an improvement over what we did <q>by hand</q> in <acroref type="sage" acro="MISLE" />.  And the output can be requested as a <q>subdivided</q> matrix so that we can easily work with the pieces $C$ and $L$.  Pieces of subdivided matrices can be extracted with indices entirely similar to how we index the actual entries of a matrix.  We will redo <acroref type="example" acro="FS2" /> as an illustration of <acroref type="theorem" acro="FS" /> and <acroref type="theorem" acro="PEEF" />.
 <sage>
 <input>A = matrix(QQ,[[ 10,  0,  3,   8,   7],
                [-16, -1, -4, -10, -13],
 </sageadvice>
 <p><acroref type="example" acro="COV" /> and <acroref type="example" acro="CSROI" /> each describes the column space of the coefficient matrix from <acroref type="archetype" acro="I" /> as the span of a set of $r=3$ linearly independent vectors.  It is no accident that these two different sets both have the same size.  If we (you?) were to calculate the column space of this matrix using the null space of the matrix $L$ from <acroref type="theorem" acro="FS" /> then we would again find a set of 3 linearly independent vectors that span the range.  More on this later.</p>
 
-<p>So we have three different methods to obtain a description of the column space of a matrix as the span of a linearly independent set.  <acroref type="theorem" acro="BCS" /> is sometimes useful since the vectors it specifies are equal to actual columns of the matrix. <acroref type="theorem" acro="BRS" /> and <acroref type="theorem" acro="CSRST" /> combine to create vectors with lots of zeros, and strategically placed 1's near the top of the vector.   <acroref type="theorem" acro="FS" /> and the matrix $L$ from the extended echelon form gives us a third method, which tends to create vectors with lots of zeros, and strategically placed 1's near the bottom of the vector.   If we don't care about linear independence we can also appeal to <acroref type="definition" acro="CSM" /> and simply express the column space as the span of all the columns of the matrix, giving us a fourth description.</p>
+<p>So we have three different methods to obtain a description of the column space of a matrix as the span of a linearly independent set.  <acroref type="theorem" acro="BCS" /> is sometimes useful since the vectors it specifies are equal to actual columns of the matrix. <acroref type="theorem" acro="BRS" /> and <acroref type="theorem" acro="CSRST" /> combine to create vectors with lots of zeros, and strategically placed 1's near the top of the vector.   <acroref type="theorem" acro="FS" /> and the matrix $L$ from the extended echelon form give us a third method, which tends to create vectors with lots of zeros, and strategically placed 1's near the bottom of the vector.   If we do not care about linear independence we can also appeal to <acroref type="definition" acro="CSM" /> and simply express the column space as the span of all the columns of the matrix, giving us a fourth description.</p>
 
 <p>With <acroref type="theorem" acro="CSRST" /> and <acroref type="definition" acro="RSM" />, we can compute column spaces with theorems about row spaces, and we can compute row spaces with theorems about column spaces, but in each case we must transpose the matrix first.  At this point you may be overwhelmed by all the possibilities for computing column and row spaces.  <acroref type="diagram" acro="CSRST" /> is meant to help.  For both the column space and row space, it suggests four techniques.  One is to appeal to the definition, another yields a span of a linearly independent set, and a third uses <acroref type="theorem" acro="FS" />.  A fourth suggests transposing the matrix and the dashed line implies that then the companion set of techniques can be applied.  This can lead to a bit of silliness, since if you were to follow the dashed lines <em>twice</em> you would transpose the matrix twice, and by <acroref type="theorem" acro="TT" /> would accomplish nothing productive.
 <diagram acro="CSRST">
 \colvector{1\\1\\-4}
 }
 </equation>
-<acroref type="theorem" acro="BCS" /> suggests row-reducing the matrix and using the columns of $B$ that correspond to the pivot columns.
+<acroref type="theorem" acro="BCS" /> suggests row-reducing the matrix and using the columns of $B$ with the same indices as the pivot columns.
 <equation>
 B\rref
 \begin{bmatrix}
 <![CDATA[0 & 0 & 0 & 0]]>
 \end{bmatrix}
 </equation>
-By <acroref type="theorem" acro="BCS" /> we can choose the columns of $A$ that correspond to dependent variables ($D=\set{1,2}$) as the elements of $S$ and obtain the desired properties.  So
+By <acroref type="theorem" acro="BCS" /> we can choose the columns of $A$ that have the same indices as the pivot columns ($D=\set{1,2}$) as the elements of $S$ and obtain the desired properties.  So
 <equation>
 S=\set{\colvector{2\\-5\\1},\,\colvector{-1\\3\\1}}
 </equation>

File src/section-MINM.xml

 <!-- % -->
 <!-- %%%%%%%%%% -->
 <introduction>
-<p>We saw in <acroref type="theorem" acro="CINM" /> that if a square matrix $A$ is nonsingular, then there is a matrix $B$ so that $AB=I_n$.  In other words, $B$ is halfway to being an inverse of $A$.  We will see in this section that $B$ automatically fulfills the second condition ($BA=I_n$).  <acroref type="example" acro="MWIAA" /> showed us that the coefficient matrix from <acroref type="archetype" acro="A" /> had no inverse.  Not coincidentally, this coefficient matrix is singular.  We'll make all these connections precise now.  Not many examples or definitions in this section, just theorems.</p>
+<p>We saw in <acroref type="theorem" acro="CINM" /> that if a square matrix $A$ is nonsingular, then there is a matrix $B$ so that $AB=I_n$.  In other words, $B$ is halfway to being an inverse of $A$.  We will see in this section that $B$ automatically fulfills the second condition ($BA=I_n$).  <acroref type="example" acro="MWIAA" /> showed us that the coefficient matrix from <acroref type="archetype" acro="A" /> had no inverse.  Not coincidentally, this coefficient matrix is singular.  We will make all these connections precise now.  Not many examples or definitions in this section, just theorems.</p>
 
 </introduction>
 
 </statement>
 
 <proof>
-<p><implyforward /> We'll do this portion of the proof in two parts, each as a proof by contradiction (<acroref type="technique" acro="CD" />).  Assume that $AB$ is nonsingular. Establishing that $B$ is nonsingular is the easier part, so we will do it first, but in reality, we will <em>need</em> to know that $B$ is nonsingular when we prove that $A$ is nonsingular.</p>
+<p><implyforward /> We will do this portion of the proof in two parts, each as a proof by contradiction (<acroref type="technique" acro="CD" />).  Assume that $AB$ is nonsingular. Establishing that $B$ is nonsingular is the easier part, so we will do it first, but in reality, we will <em>need</em> to know that $B$ is nonsingular when we prove that $A$ is nonsingular.</p>
 
 <p>You can also think of this proof as being a study of four possible conclusions in the table below.  One of the four rows <em>must</em> happen (the list is exhaustive).  In the proof we learn that the first three rows lead to contradictions, and so are impossible.  That leaves the fourth row as a certainty, which is our desired conclusion.
 
 <p>So <acroref type="theorem" acro="OSIS" /> tells us that if $A$ is nonsingular, then the matrix $B$ guaranteed by <acroref type="theorem" acro="CINM" /> will be both a <q>right-inverse</q> and a <q>left-inverse</q> for $A$, so $A$ is invertible and $\inverse{A}=B$.</p>
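A small editorial Sage sketch of this observation, using an arbitrary nonsingular matrix chosen for illustration: the matrix $B$ produced by row-reducing $[A\mid I]$ as in Theorem CINM checks out as a left inverse as well; augment(), rref() and matrix slicing are standard Sage operations.

<sage>
<input>A = matrix(QQ, [[1, 2],
                [3, 7]])          # nonsingular, chosen for illustration
n = A.nrows()
M = A.augment(identity_matrix(QQ, n)).rref()
B = M[:, n:]                      # the last n columns, as in Theorem CINM
A*B == identity_matrix(QQ, n) and B*A == identity_matrix(QQ, n)   # True
</input>
</sage>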
 
 <p>So if you have a nonsingular matrix, $A$, you can use the procedure described in <acroref type="theorem" acro="CINM" /> to find an inverse for $A$.  If $A$ is singular, then the procedure in <acroref type="theorem" acro="CINM" /> will fail as the first $n$ columns of $M$ will not row-reduce to the identity matrix.  However, we can say a bit more.  When $A$ is singular, then $A$ does not have an inverse (which is very different from saying that the procedure in <acroref type="theorem" acro="CINM" /> fails to find an inverse).
-This may feel like we are splitting hairs, but it's important that we do not make unfounded assumptions.  These observations motivate the next theorem.</p>
+This may feel like we are splitting hairs, but it is important that we do not make unfounded assumptions.  These observations motivate the next theorem.</p>
 
 <theorem acro="NI" index="matrix inverse!nonsingular matrix">
 <title>Nonsingularity is Invertibility</title>
 </statement>
 
 <proof>
-<p>By <acroref type="theorem" acro="NMUS" /> we know already that $\linearsystem{A}{\vect{b}}$ has a unique solution for every choice of $\vect{b}$.  We need to show that the expression stated is indeed a solution (<em>the</em> solution).  That's easy, just <q>plug it in</q> to the corresponding vector equation representation (<acroref type="theorem" acro="SLEMM" />),
+<p>By <acroref type="theorem" acro="NMUS" /> we know already that $\linearsystem{A}{\vect{b}}$ has a unique solution for every choice of $\vect{b}$.  We need to show that the expression stated is indeed a solution (<em>the</em> solution).  That's easy, just <q>plug it in</q> to the vector equation representation of the system (<acroref type="theorem" acro="SLEMM" />),
 <alignmath>
 A\left(\inverse{A}\vect{b}\right)
 <![CDATA[&=\left(A\inverse{A}\right)\vect{b}&&]]><acroref type="theorem" acro="MMA" />\\
 
 </definition>
 
-<p>This condition may seem rather far-fetched at first glance.  Would there be <em>any</em> matrix that behaved this way?  Well, yes, here's one.</p>
+<p>This condition may seem rather far-fetched at first glance.  Would there be <em>any</em> matrix that behaved this way?  Well, yes, here is one.</p>
 
 <example acro="UM3" index="unitary!size 3">
 <title>Unitary matrix of size 3</title>
 <!--   Above example from Mathematica input: -->
 <!--   GramSchmidt[{{1 + I, 1 - I, I}, {I, 1 + I, -I}, {I, -1 + I, 1}}, -->
 <!--                        InnerProduct -> (Conjugate[#1].#2 &)] // Simplify -->
-<p>Unitary matrices do not have to look quite so gruesome.  Here's a larger one that is a bit more pleasing.</p>
+<p>Unitary matrices do not have to look quite so gruesome.  Here is a larger one that is a bit more pleasing.</p>
 
 <example acro="UPM" index="unitary!permutation matrix">
 <title>Unitary permutation matrix</title>
 
 
 </sageadvice>
-<p>A final reminder:  the terms <q>dot product,</q> <q>symmetric matrix</q> and <q>orthogonal matrix</q> used in reference to vectors or matrices with real number entries correspond to the terms <q>inner product,</q> <q>Hermitian matrix</q> and <q>unitary matrix</q> when we generalize to include complex number entries, so keep that in mind as you read elsewhere.</p>
+<p>A final reminder:  the terms <q>dot product,</q> <q>symmetric matrix</q> and <q>orthogonal matrix</q> used in reference to vectors or matrices with real number entries are special cases of the terms <q>inner product,</q> <q>Hermitian matrix</q> and <q>unitary matrix</q> that we use for vectors or matrices with complex number entries, so keep that in mind as you read elsewhere.</p>
 
 </subsection>
 

File src/section-MISLE.xml

 </equation>
 which allows us to recognize the inconsistency by <acroref type="theorem" acro="RCLS" />.</p>
 
-<p>So the assumption of $A$'s inverse leads to a logical inconsistency (the system ca not be both consistent and inconsistent), so our assumption is false.  $A$ is not invertible.</p>
+<p>So the assumption of $A$'s inverse leads to a logical inconsistency (the system cannot be both consistent and inconsistent), so our assumption is false.  $A$ is not invertible.</p>
 
 <p>It is possible this example is less than satisfying.  Just where did that particular choice of the vector $\vect{b}$ come from anyway?  Stay tuned for an application of the future <acroref type="theorem" acro="CSCS" /> in <acroref type="example" acro="CSAA" />.</p>