Typo: Theorem TTMI, "has an inverse" (Jenna Fontaine)
Typo: Sage CSCS, "coeffiicent" to "coefficient" (Jenna Fontaine)
Typo: Example CSTW, "columns of a matrix" (Jenna Fontaine)
Typo: Sage CSOC, "linearly idependent" to "linearly independent" (Jenna Fontaine)
We see that <code>A</code> has four pivot columns, numbered <code>0,1,2,4</code>. The matrix <code>B</code> is just a convenience to hold the pivot columns of <code>A</code>. However, the column spaces of <code>A</code> and <code>B</code> should be equal, as Sage verifies. Also, <code>B</code> will row-reduce to the same 0-1 pivot columns that appear in the reduced row-echelon form of the full matrix <code>A</code>. So it is no accident that the reduced row-echelon form of <code>B</code> is a full identity matrix, followed by sufficiently many zero rows to give the matrix the correct size.<br /><br />
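A short Sage session along these lines illustrates the point; the matrix here is a small hypothetical stand-in (its second column is twice its first), not the matrix from the text:

```
A = matrix(QQ, [[1, 2, 1,  0],
                [2, 4, 0,  2],
                [1, 2, 2, -1]])
A.pivots()                               # pivot column indices, here (0, 2)
B = A.matrix_from_columns(list(A.pivots()))
A.column_space() == B.column_space()     # the two column spaces agree
B.rref()                                 # identity matrix atop zero rows
```

For this stand-in matrix, <code>B.rref()</code> is a 2-by-2 identity matrix followed by a single zero row, matching the pattern described above.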
The vector space method <code>.span_of_basis()</code> is new to us. It creates a span of a set of vectors, as before, but we are now responsible for supplying a linearly independent set of vectors, which we have done. We know this because <acroref type="theorem" acro="BCS" /> guarantees the set we provided is linearly independent (and spans the column space), while Sage would have given us an error if we had provided a linearly dependent set. In return, Sage will carry this linearly independent spanning set along with the vector space, something Sage calls a <q>user basis.</q><br /><br />
Notice how <code>cs</code> has two linearly independent spanning sets now. Our set of <q>original columns</q> is obtained via the standard vector space method <code>.basis()</code> and we can obtain a linearly independent spanning set that looks more familiar with the vector space method <code>.echelonized_basis()</code>. For a vector space created with a simple <code>.span()</code> construction these two commands would yield identical results <mdash /> it is only when we supply a linearly independent spanning set with the <code>.span_of_basis()</code> method that a <q>user basis</q> becomes relevant.
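A Sage sketch of the distinction might look like the following, again using a hypothetical matrix rather than the text's example:

```
A = matrix(QQ, [[1, 2, 1,  0],
                [2, 4, 0,  2],
                [1, 2, 2, -1]])
cols = [A.column(i) for i in A.pivots()]
cs = (QQ^3).span_of_basis(cols)
cs.basis()              # the user basis: the original pivot columns
cs.echelonized_basis()  # a standardized basis, as .span() would report
(QQ^3).span(cols).basis()  # plain .span() has no separate user basis
```

Here <code>cs.basis()</code> echoes back exactly the pivot columns we supplied, while <code>cs.echelonized_basis()</code> reports the standardized basis that a plain <code>.span()</code> construction would use for both methods.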
Finally, we check that <code>cs</code> is indeed the column space of <code>A</code> (we knew it would be) and then we provide a one-line, totally general construction of the column space using original columns.
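The one-line, totally general construction mentioned above might look like the following hypothetical session (the matrix is again a stand-in):

```
A = matrix(QQ, [[1, 2, 1,  0],
                [2, 4, 0,  2],
                [1, 2, 2, -1]])
cs = (QQ^A.nrows()).span_of_basis([A.column(i) for i in A.pivots()])
cs == A.column_space()
```

The final comparison should report <code>True</code>, since <acroref type="theorem" acro="BCS" /> tells us the pivot columns span the column space.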
This is an opportunity to make an interesting observation, which could be used to substantiate several theorems. When we take the original columns that we recognize as pivot columns, and use them alone to form a matrix, this new matrix <em>will always</em> row-reduce to an identity matrix followed by zero rows. This is essentially a consequence of the definition of reduced row-echelon form. Evaluate the compute cell below repeatedly. The number of columns could in theory change, though this is unlikely since the columns of a random matrix are rarely linearly dependent. In any event, the form of the result will always be an identity matrix followed by some zero rows.
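The compute cell referenced above is not reproduced here; a Sage cell along the following lines would demonstrate the claim, with each evaluation drawing a fresh random matrix:

```
A = random_matrix(QQ, 5, 3)
B = A.matrix_from_columns(list(A.pivots()))
B.rref()
```

With high probability all three columns of <code>A</code> are pivot columns, so <code>B.rref()</code> is usually a 3-by-3 identity matrix atop two zero rows; in the rare dependent case <code>B</code> is simply narrower, but the identity-plus-zero-rows pattern persists.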