Commits

Minor edits, Section LC

• Parent commits 7dc62ce

File changes.txt

v3.xx 2013//
~~~~~~~~~~~~~~~~
Edit: Minor edits in Sections WILA, SSLE, RREF, TSS, HSE, NM
-Edit: Minor edits in Section VO
+Edit: Minor edits in Section VO, LC
Edit: Extended slightly the conclusion of Theorem HSC
Change: Theorem ISRN demoted to Exercise TSS.T11
Change: Prefer "pivot columns" over "leading 1", Chapter SLE
+Typo: C^n in statement of Theorem SLSLC

v3.10 2013/08/20
~~~~~~~~~~~~~~~~

File src/section-LC.xml

<!-- % -->
<!-- %%%%%%%%%% -->
<introduction>
-<p>In <acroref type="section" acro="VO" /> we defined vector addition and scalar multiplication.  These two operations combine nicely to give us a construction known as a linear combination, a construct that we will work with throughout this course.</p>
-
+<p>In <acroref type="section" acro="VO" /> we defined vector addition and scalar multiplication.  These two operations combine nicely to give us a construction known as a <define>linear combination</define>, a construct that we will work with throughout this course.</p>
</introduction>

<subsection acro="LC">
</alignmath>
</p>

-<p>Notice how we could keep our set of vectors fixed, and use different sets of scalars to construct different vectors.  You might build a few new linear combinations of $\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4$ right now.  We'll be right here when you get back.  What vectors were you able to create?  Do you think you could create the vector
+<p>Notice how we could keep our set of vectors fixed, and use different sets of scalars to construct different vectors.  You might build a few new linear combinations of $\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4$ right now.  We will be right here when you get back.  What vectors were you able to create?  Do you think you could create the vector $\vect{w}$ with a <q>suitable</q> choice of four scalars?
<!-- % lin combo w/ coefficients 2, 3, 1 -2 -->
-<equation>
-\vect{w}=\colvector{13\\15\\5\\-17\\2\\25}
-</equation>
-with a <q>suitable</q> choice of four scalars?  Do you think you could create <em>any</em> possible vector from $\complex{6}$ by choosing the proper scalars?  These last two questions are very fundamental, and time spent considering them <em>now</em> will prove beneficial later.</p>
+<equation>\vect{w}=\colvector{13\\15\\5\\-17\\2\\25}</equation>
+Do you think you could create <em>any</em> possible vector from $\complex{6}$ by choosing the proper scalars?  These last two questions are very fundamental, and time spent considering them <em>now</em> will prove beneficial later.</p>

</example>
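The construction in this hunk is just entry-by-entry arithmetic, so it is easy to experiment with by machine. A minimal sketch of forming a linear combination, using small placeholder vectors (these are illustrative, not the $\vect{u}_i$ of the example in the section):

```python
# Form the linear combination a1*u1 + a2*u2 + a3*u3 + a4*u4 entry-by-entry.
# The vectors below are placeholders, not the u_i of the section's example.

def linear_combination(scalars, vectors):
    """Return the sum of scalar*vector over the paired lists."""
    n = len(vectors[0])
    result = [0] * n
    for a, v in zip(scalars, vectors):
        for i in range(n):
            result[i] += a * v[i]
    return result

u1 = [1, 0, 2]
u2 = [0, 1, -1]
u3 = [3, 1, 0]
u4 = [1, 1, 1]

# Same scalars as the comment in the source (2, 3, 1, -2), different vectors.
w = linear_combination([2, 3, 1, -2], [u1, u2, u3, u4])
print(w)  # → [3, 2, -1]
```

Varying the four scalars while keeping the vectors fixed reproduces the experiment the paragraph invites the reader to try.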

\colvector{-33\\24\\5}
</equation></p>

-<p>Now we can rewrite each of these $n=3$ vectors as a scalar multiple of a fixed vector, where the scalar is one of the unknown variables, converting the left-hand side into a linear combination
+<p>Now we can rewrite each of these vectors as a scalar multiple of a fixed vector, where the scalar is one of the unknown variables, converting the left-hand side into a linear combination
<equation>
x_1\colvector{-7\\5\\1}+
x_2\colvector{-6\\5\\0}+
\colvector{-33\\24\\5}
</equation></p>

-<p>Furthermore, these are the only three scalars that will accomplish this equality, since they come from a unique solution.</p>
+<p>Furthermore, these are the <em>only</em> three scalars that will accomplish this equality, since they come from a unique solution.</p>

<p>Notice how the three vectors in this example are the columns of the coefficient matrix of the system of equations.  This is our first hint of the important interplay between the vectors that form the columns of a matrix, and the matrix itself.</p>

\colvector{1\\8\\5}
</equation></p>

-<p>Rewrite each of these $n=3$ vectors as a scalar multiple of a fixed vector, where the scalar is one of the unknown variables, converting the left-hand side into a linear combination
+<p>Rewrite each of these vectors as a scalar multiple of a fixed vector, where the scalar is one of the unknown variables, converting the left-hand side into a linear combination
<equation>
x_1\colvector{1\\2\\1}+
x_2\colvector{-1\\1\\1}+
</p>
</example>

-<p>There's  a lot going on in the last two examples.  Come back to them in a while and make some connections with the intervening material.
+<p>There is  a lot going on in the last two examples.  Come back to them in a while and make some connections with the intervening material.
For now, we will summarize and explain some of this behavior with a theorem.</p>

<theorem acro="SLSLC" index="linear combinations!solutions to linear systems">
<title>Solutions to Linear Systems are Linear Combinations</title>
<statement>
-<p>Denote the columns of the $m\times n$ matrix $A$ as the vectors $\vectorlist{A}{n}$.  Then
-$\vect{x}\in\complexes{n}$ is a solution to the linear system of equations $\linearsystem{A}{\vect{b}}$ if and only if $\vect{b}$ equals the linear combination of the columns of $A$ formed with the entries of $\vect{x}$,
+<p>Denote the columns of the $m\times n$ matrix $A$ as the vectors $\vectorlist{A}{n}$.  Then $\vect{x}\in\complex{n}$ is a solution to the linear system of equations $\linearsystem{A}{\vect{b}}$ if and only if $\vect{b}$ equals the linear combination of the columns of $A$ formed with the entries of $\vect{x}$,
<equation>
\vectorentry{\vect{x}}{1}\vect{A}_1+
\vectorentry{\vect{x}}{2}\vect{A}_2+
\vectorentry{\vect{x}}{n}\vect{A}_n
}{i}<![CDATA[&&]]><acroref type="definition" acro="CVA" />
</alignmath>
-Since the components of $\vect{b}$ and the linear combination of the columns of $A$ agree for all $1\leq i\leq m$, <acroref type="definition" acro="CVE" /> tells us that the vectors are equal.</p>
+So the entries of the vector $\vect{b}$, and the entries of the vector that is the linear combination of the columns of $A$, agree for all $1\leq i\leq m$.  By <acroref type="definition" acro="CVE" /> we see that the two vectors are equal, as desired.</p>

</proof>
</theorem>
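The content of Theorem SLSLC is easy to check numerically: a solution vector's entries, used as scalars on the columns of the coefficient matrix, reproduce the vector of constants. A sketch on a made-up nonsingular system (this $A$ and $b$ are illustrative, not drawn from the text):

```python
import numpy as np

# Theorem SLSLC: x solves LS(A, b) exactly when b equals the linear
# combination of the columns of A formed with the entries of x.
# A and b below are illustrative, not from the text.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
b = np.array([5.0, 4.0, 7.0])
x = np.linalg.solve(A, b)          # a solution to LS(A, b)

# Rebuild b column-by-column: entry j of x scales column j of A.
combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(np.allclose(combo, b))       # the combination recovers b
```

This is also a first concrete look at the matrix-vector product as a linear combination of columns, the "interplay" the surrounding paragraph mentions.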
<subsection acro="VFSS">
<title>Vector Form of Solution Sets</title>

-<p>We have written solutions to systems of equations as column vectors.  For example <acroref type="archetype" acro="B" /> has the solution  $x_1 = -3,\,x_2 = 5,\,x_3 = 2$ which we now write as
+<p>We have written solutions to systems of equations as column vectors.  For example <acroref type="archetype" acro="B" /> has the solution  $x_1 = -3,\,x_2 = 5,\,x_3 = 2$ which we write as
<equation>
\vect{x}=\colvector{x_1\\x_2\\x_3}=\colvector{-3\\5\\2}
</equation></p>

-<p>Now, we will use column vectors and linear combinations to express <em>all</em> of the solutions to a linear system of equations in a compact and understandable way.  First, here's two examples that will motivate our next theorem.  This is a valuable technique, almost the equal of row-reducing a matrix, so be sure you get comfortable with it over the course of this section.</p>
+<p>Now, we will use column vectors and linear combinations to express <em>all</em> of the solutions to a linear system of equations in a compact and understandable way.  First, here are two examples that will motivate our next theorem.  This is a valuable technique, almost the equal of row-reducing a matrix, so be sure you get comfortable with it over the course of this section.</p>

<example acro="VFSAD" index="vector form of solutions!Archetype D">
<title>Vector form of solutions for Archetype D</title>
<p><acroref type="archetype" acro="D" /> is a linear system of 3 equations in 4 variables.  Row-reducing the augmented matrix yields
<equation>
<archetypepart acro="D" part="augmentedreduced" /></equation>
-and we see $r=2$ nonzero rows. Also, $D=\set{1,\,2}$ so the dependent variables are then $x_1$ and $x_2$.  $F=\set{3,\,4,\,5}$ so the two free variables are $x_3$ and $x_4$.  We will express a generic solution for the system by two slightly different methods, though both arrive at the same conclusion.</p>
+and we see $r=2$ pivot columns. Also, $D=\set{1,\,2}$ so the dependent variables are then $x_1$ and $x_2$.  $F=\set{3,\,4,\,5}$ so the two free variables are $x_3$ and $x_4$.  We will express a generic solution for the system by two slightly different methods, though both arrive at the same conclusion.</p>

<p>First, we will decompose (<acroref type="technique" acro="DC" />) a solution vector.  Rearranging each equation represented in the row-reduced form of the augmented matrix by solving for the dependent variable in each row yields the vector equality,
<alignmath>
</alignmath>
</p>

-<p>You'll find the second solution listed in the write-up for <acroref type="archetype" acro="D" />, and you might check the first solution by substituting it back into the original equations.</p>
+<p>You will find the second solution listed in the write-up for <acroref type="archetype" acro="D" />, and you might check the first solution by substituting it back into the original equations.</p>

-<p>While this form is useful for quickly creating solutions, it's even better because it tells us <em>exactly</em> what every solution looks like.  We know the solution set is infinite, which is pretty big, but now we can say that a solution is some multiple of $\colvector{-3\\-1\\1\\0}$ plus a multiple of $\colvector{2\\3\\0\\1}$ plus the fixed vector $\colvector{4\\0\\0\\0}$.  Period.  So it only takes us <em>three</em> vectors to describe the entire infinite solution set, provided we also agree on how to combine the three vectors into a linear combination.</p>
+<p>While this form is useful for quickly creating solutions, it is even better because it tells us <em>exactly</em> what every solution looks like.  We know the solution set is infinite, which is pretty big, but now we can say that a solution is some multiple of $\colvector{-3\\-1\\1\\0}$ plus a multiple of $\colvector{2\\3\\0\\1}$ plus the fixed vector $\colvector{4\\0\\0\\0}$.  Period.  So it only takes us <em>three</em> vectors to describe the entire infinite solution set, provided we also agree on how to combine the three vectors into a linear combination.</p>

</example>
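The three-vector description in Example VFSAD can be exercised directly: every choice of the free variables $x_3$, $x_4$ yields a solution from the same fixed vectors. A short sketch using exactly the three vectors named in the example (the claim that each output solves Archetype D relies on the example; only the vector arithmetic is checked here):

```python
import numpy as np

# Per Example VFSAD, every solution has the form c + x3*u1 + x4*u2,
# built from the three vectors identified in the example.
c  = np.array([4, 0, 0, 0])
u1 = np.array([-3, -1, 1, 0])   # scaled by the free variable x3
u2 = np.array([2, 3, 0, 1])     # scaled by the free variable x4

def solution(x3, x4):
    """Generate the solution vector for a choice of the free variables."""
    return c + x3 * u1 + x4 * u2

print(solution(0, 0))   # the fixed vector itself: [4 0 0 0]
print(solution(1, -1))  # another solution: [-1 -4 1 -1]
```

Three vectors, plus an agreement on how to combine them, really do describe the whole infinite solution set.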

-<p>This is such an important and fundamental technique, we'll do another example.</p>
+<p>This is such an important and fundamental technique, we will do another example.</p>

<example acro="VFS" index="vector form of solutions">
<title>Vector form of solutions</title>
<![CDATA[ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
-and we see $r=4$ nonzero rows. Also, $D=\set{1,\,2,\,5,\,6}$ so the dependent variables are then $x_1,\,x_2,\,x_5,$ and $x_6$.  $F=\set{3,\,4,\,7,\,8}$ so the $n-r=3$ free variables are $x_3,\,x_4$ and $x_7$.  We will express a generic solution for the system by two different methods: both a decomposition and a construction.</p>
+and we see $r=4$ pivot columns. Also, $D=\set{1,\,2,\,5,\,6}$ so the dependent variables are then $x_1,\,x_2,\,x_5,$ and $x_6$.  $F=\set{3,\,4,\,7,\,8}$ so the $n-r=3$ free variables are $x_3,\,x_4$ and $x_7$.  We will express a generic solution for the system by two different methods: both a decomposition and a construction.</p>

<p>First, we will decompose (<acroref type="technique" acro="DC" />) a solution vector.  Rearranging each equation represented in the row-reduced form of the augmented matrix by solving for the dependent variable in each row yields the vector equality,
<alignmath>

<p>Did you think a few weeks ago that you could so quickly and easily list <em>all</em> the solutions to a linear system of 5 equations in 7 variables?</p>

-<p>We'll now formalize the last two (important) examples as a theorem.</p>
+<p>We will now formalize the last two (important) examples as a theorem.  The statement of this theorem is a bit scary, and the proof is scarier.  For now, be sure to convince yourself, by working through the examples and exercises, that the statement just describes the procedure of the two immediately previous examples.</p>

<theorem acro="VFSLS" index="vector form of solutions">
<title>Vector Form of Solutions to Linear Systems</title>
<statement>
<p>Suppose that $\augmented{A}{\vect{b}}$ is the augmented matrix for a consistent linear system $\linearsystem{A}{\vect{b}}$ of $m$ equations in $n$ variables.
-Let $B$ be a row-equivalent $m\times (n+1)$ matrix in reduced row-echelon form. Suppose that $B$ has $r$ nonzero rows,  columns without leading 1's with indices $F=\set{f_1,\,f_2,\,f_3,\,\ldots,\,f_{n-r},\,n+1}$, and columns with leading 1's (pivot columns) having indices $D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}$.  Define vectors $\vect{c}$, $\vect{u}_j$, $1\leq j\leq n-r$ of size $n$ by
+Let $B$ be a row-equivalent $m\times (n+1)$ matrix in reduced row-echelon form. Suppose that $B$ has $r$ pivot columns, with indices $D=\set{d_1,\,d_2,\,d_3,\,\ldots,\,d_r}$, while the $n-r$ non-pivot columns have indices in $F=\set{f_1,\,f_2,\,f_3,\,\ldots,\,f_{n-r},\,n+1}$.  Define vectors $\vect{c}$, $\vect{u}_j$, $1\leq j\leq n-r$ of size $n$ by
<alignmath>
<![CDATA[\vectorentry{\vect{c}}{i}&=]]>
\begin{cases}
<p><acroref type="archetype" acro="I" /> is a linear system of $m=4$ equations in $n=7$ variables.  Row-reducing the augmented matrix yields
<equation>
<archetypepart acro="I" part="augmentedreduced" /></equation>
-and we see $r=3$ nonzero rows.  The columns with leading 1's are $D=\{1,\,3,\,4\}$ so the $r$ dependent variables are $x_1,\,x_3,\,x_4$.  The columns without leading 1's are $F=\{2,\,5,\,6,\,7,\,8\}$, so the $n-r=4$ free variables are $x_2,\,x_5,\,x_6,\,x_7$.</p>
+and we see $r=3$ pivot columns, with indices $D=\{1,\,3,\,4\}$.  So the $r=3$ dependent variables are $x_1,\,x_3,\,x_4$.  The non-pivot columns have indices in $F=\{2,\,5,\,6,\,7,\,8\}$, so the $n-r=4$ free variables are $x_2,\,x_5,\,x_6,\,x_7$.</p>

<p>Step 1.  Write the vector of variables ($\vect{x}$) as a fixed vector ($\vect{c}$), plus a linear combination of $n-r=4$ vectors ($\vect{u}_1,\,\vect{u}_2,\,\vect{u}_3,\,\vect{u}_4$), using the free variables as the scalars.
<equation>
</equation>
</p>

-<p>Step 2.  For each free variable, use 0's and 1's to ensure equality for the corresponding entry of the vectors.  Take note of the pattern of 0's and 1's at this stage, because this is the best look you'll have at it.  We'll state an important theorem in the next section and the proof will essentially rely on this observation.
+<p>Step 2.  For each free variable, use 0's and 1's to ensure equality for the corresponding entry of the vectors.  Take note of the pattern of 0's and 1's at this stage, because this is the best look you will have at it.  We will state an important theorem in the next section and the proof will essentially rely on this observation.
<equation>
\vect{x}=\colvector{x_1\\x_2\\x_3\\x_4\\x_5\\x_6\\x_7}=
\colvector{\ \\0\\\ \\\ \\0\\0\\0}+

<p>Even better, we have a description of the infinite solution set, based on just 5 vectors, which we combine in linear combinations to produce solutions.</p>

-<p>Whenever we discuss <acroref type="archetype" acro="I" /> you know that's your cue to go work through <acroref type="archetype" acro="J" /> by yourself.  Remember to take note of the 0/1 pattern at the conclusion of Step 2.  Have fun <mdash /> we won't go anywhere while you're away.</p>
+<p>Whenever we discuss <acroref type="archetype" acro="I" /> you know that is your cue to go work through <acroref type="archetype" acro="J" /> by yourself.  Remember to take note of the 0/1 pattern at the conclusion of Step 2.  Have fun <mdash /> we will not go anywhere while you are away.</p>

</example>
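The two-step procedure of the last few examples, and the definitions of $\vect{c}$ and $\vect{u}_j$ in Theorem VFSLS, can be turned into a short routine. A sketch assuming a consistent system, run on a hypothetical reduced row-echelon matrix (not one of the archetypes):

```python
import numpy as np

# Build the fixed vector c and the vectors u_j of Theorem VFSLS from a
# matrix B in reduced row-echelon form.  B is a hypothetical RREF of an
# augmented matrix (3 equations, 4 variables); consistency is assumed.
B = np.array([[1.0, 2.0, 0.0, 3.0, 5.0],
              [0.0, 0.0, 1.0, -1.0, 2.0],
              [0.0, 0.0, 0.0, 0.0, 0.0]])
n = B.shape[1] - 1                            # number of variables

rows = [k for k in range(B.shape[0]) if np.any(B[k] != 0)]
D = [int(np.argmax(B[k] != 0)) for k in rows]  # pivot column indices
F = [j for j in range(n) if j not in D]        # free (non-pivot) indices

c = np.zeros(n)
for k, d in enumerate(D):
    c[d] = B[k, n]            # dependent entries come from the constants

U = []
for f in F:
    u = np.zeros(n)
    u[f] = 1.0                # this free variable set to 1, the rest to 0
    for k, d in enumerate(D):
        u[d] = -B[k, f]       # negated entries of B in the free column
    U.append(u)

print(c, U)   # every solution is c plus a linear combination of the u_j
```

Comparing the output against the hand computations in the examples above is a good way to internalize the 0/1 pattern the text keeps pointing out.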

-<p>This technique is so important, that we'll do one more example.  However, an important distinction will be that this system is homogeneous.</p>
+<p>This technique is so important, that we will do one more example.  However, an important distinction will be that this system is homogeneous.</p>

<example acro="VFSAL" index="vector form of solutions!Archetype L">
<title>Vector form of solutions for Archetype L</title>
L=<archetypepart acro="L" part="matrixdefn" /></equation>
</p>

-<p>We'll interpret it here as the coefficient matrix of a homogeneous system and reference this matrix as $L$.  So we are solving the homogeneous system $\linearsystem{L}{\zerovector}$ having $m=5$ equations in $n=5$ variables.  If we built the augmented matrix, we would add a sixth column to $L$ containing all zeros.  As we did row operations, this sixth column would remain all zeros.  So instead we will row-reduce the coefficient matrix, and mentally remember the missing sixth column of zeros.  This row-reduced matrix is
+<p>We will employ this matrix here as the coefficient matrix of a homogeneous system and reference this matrix as $L$.  So we are solving the homogeneous system $\linearsystem{L}{\zerovector}$ having $m=5$ equations in $n=5$ variables.  If we built the augmented matrix, we would add a sixth column to $L$ containing all zeros.  As we did row operations, this sixth column would remain all zeros.  So instead we will row-reduce the coefficient matrix, and mentally remember the missing sixth column of zeros.  This row-reduced matrix is
<equation>
<archetypepart acro="L" part="matrixreduced" /></equation>
-and we see $r=3$ nonzero rows.  The columns with leading 1's are $D=\{1,\,2,\,3\}$ so the $r$ dependent variables are $x_1,\,x_2,\,x_3$.  The columns without leading 1's are $F=\{4,\,5\}$, so the $n-r=2$ free variables are $x_4,\,x_5$.  Notice that if we had included the all-zero vector of constants to form the augmented matrix for the system, then the index 6 would have appeared in the set $F$, and subsequently would have been ignored when listing the free variables.</p>
+and we see $r=3$ pivot columns, with indices $D=\{1,\,2,\,3\}$.  So the $r=3$ dependent variables are $x_1,\,x_2,\,x_3$.  The non-pivot columns have indices $F=\{4,\,5\}$, so the $n-r=2$ free variables are $x_4,\,x_5$. Notice that if we had included the all-zero vector of constants to form the augmented matrix for the system, then the index 6 would have appeared in the set $F$, and subsequently would have been ignored when listing the free variables.  So nothing is lost by not creating an augmented matrix (in the case of a homogeneous system).  And maybe it is an improvement, since now <em>every</em> index in $F$ can be used to reference a variable of the linear system.</p>

<p>Step 1.  Write the vector of variables ($\vect{x}$) as a fixed vector ($\vect{c}$), plus a linear combination of $n-r=2$ vectors ($\vect{u}_1,\,\vect{u}_2$), using the free variables as the scalars.
<equation>
x_5\colvector{\ \\\ \\\ \\0\\1}
</equation></p>

-<p>Step 3.  For each dependent variable, use the augmented matrix to formulate an equation expressing the dependent variable as a constant plus multiples of the free variables.  Don't forget about the <q>missing</q> sixth column being full of zeros.  Convert this equation into entries of the vectors that ensure equality for each dependent variable, one at a time.
+<p>Step 3.  For each dependent variable, use the augmented matrix to formulate an equation expressing the dependent variable as a constant plus multiples of the free variables.  Do not forget about the <q>missing</q> sixth column being full of zeros.  Convert this equation into entries of the vectors that ensure equality for each dependent variable, one at a time.
<alignmath>
<![CDATA[x_1&=0-1x_4+2x_5&&\Rightarrow&&]]>
\vect{x}=\colvector{x_1\\x_2\\x_3\\x_4\\x_5}=
</proof>
</theorem>

-<p>After proving <acroref type="theorem" acro="NMUS" /> we commented (insufficiently) on the negation of one half of the theorem.  Nonsingular coefficient matrices lead to unique solutions for every choice of the vector of constants.  What does this say about singular matrices?  A singular matrix $A$ has a nontrivial null space (<acroref type="theorem" acro="NMTNS" />).  For a given vector of constants, $\vect{b}$, the system $\linearsystem{A}{\vect{b}}$ could be inconsistent, meaning there are no solutions.  But if there is at least one solution ($\vect{w}$), then <acroref type="theorem" acro="PSPHS" /> tells us there will be infinitely many solutions because of the role of the infinite null space for a singular matrix.  So a system of equations with a singular coefficient matrix <em>never</em> has a unique solution.  Either there are no solutions, or infinitely many solutions, depending on the choice of the vector of constants ($\vect{b}$).</p>
+<p>After proving <acroref type="theorem" acro="NMUS" /> we commented (insufficiently) on the negation of one half of the theorem.  Nonsingular coefficient matrices lead to unique solutions for every choice of the vector of constants.  What does this say about singular matrices?  A singular matrix $A$ has a nontrivial null space (<acroref type="theorem" acro="NMTNS" />).  For a given vector of constants, $\vect{b}$, the system $\linearsystem{A}{\vect{b}}$ could be inconsistent, meaning there are no solutions.  But if there is at least one solution ($\vect{w}$), then <acroref type="theorem" acro="PSPHS" /> tells us there will be infinitely many solutions because of the role of the infinite null space for a singular matrix.  So a system of equations with a singular coefficient matrix <em>never</em> has a unique solution.  Notice that this is the contrapositive of the statement in <acroref type="exercise" acro="NM.T31" />.  With a singular coefficient matrix, either there are no solutions, or infinitely many solutions, depending on the choice of the vector of constants ($\vect{b}$).</p>
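The "no solutions or infinitely many" dichotomy for singular coefficient matrices can be seen numerically by comparing ranks, a rephrasing of Theorem RCLS. A sketch on a small singular matrix of my own choosing (not from the text):

```python
import numpy as np

# A singular coefficient matrix never yields a unique solution: depending
# on b, the system is either inconsistent or has infinitely many solutions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # singular: the second row is twice the first

def solution_count(A, b):
    """Classify LS(A, b) by rank comparison (a rephrasing of Theorem RCLS)."""
    ra = np.linalg.matrix_rank(A)
    raug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if ra < raug:
        return "none"          # b is not a combination of the columns of A
    return "unique" if ra == A.shape[1] else "infinite"

print(solution_count(A, np.array([1.0, 3.0])))  # an inconsistent choice of b
print(solution_count(A, np.array([1.0, 2.0])))  # a consistent choice of b
```

No choice of $b$ makes this classifier return "unique" for a singular $A$, which is exactly the point of the paragraph.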

<example acro="PSHS" index="particular solutions">
<title>Particular solutions, homogeneous solutions, Archetype D</title>
</equation>
is obviously a solution of the homogeneous system since it is written as a linear combination of the vectors describing the null space of the coefficient matrix (or as a check, you could just evaluate the equations in the homogeneous system with $\vect{z}_2$).</p>

-<p>Here's another view of this theorem, in the context of this example.  Grab two new solutions of the original system of equations, say
+<p>Here is another view of this theorem, in the context of this example.  Grab two new solutions of the original system of equations, say
<alignmath>
<![CDATA[\vect{y}_4=\colvector{11\\0\\-3\\-1}&&]]>
\vect{y}_5=\colvector{-4\\2\\4\\2}
<![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
-The system is consistent (no leading one in column 6, <acroref type="theorem" acro="RCLS" />). $x_2$ and $x_4$ are the free variables.  Now apply <acroref type="theorem" acro="VFSLS" /> directly, or follow the three-step process of <acroref type="example" acro="VFS" />, <acroref type="example" acro="VFSAD" />, <acroref type="example" acro="VFSAI" />, or <acroref type="example" acro="VFSAL" /> to obtain
+The system is consistent (no pivot column in column 6, <acroref type="theorem" acro="RCLS" />). $x_2$ and $x_4$ are the free variables.  Now apply <acroref type="theorem" acro="VFSLS" /> directly, or follow the three-step process of <acroref type="example" acro="VFS" />, <acroref type="example" acro="VFSAD" />, <acroref type="example" acro="VFSAI" />, or <acroref type="example" acro="VFSAL" /> to obtain
<equation>
\colvector{x_1\\x_2\\x_3\\x_4\\x_5}
=
<![CDATA[ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
-The system is consistent (no leading one in column 10, <acroref type="theorem" acro="RCLS" />).   $F=\set{3,\,4,\,6,\,9,\,10}$, so the free variables are $x_3,\,x_4,\,x_6$ and $x_9$.  Now apply <acroref type="theorem" acro="VFSLS" /> directly, or follow the three-step process of <acroref type="example" acro="VFS" />, <acroref type="example" acro="VFSAD" />, <acroref type="example" acro="VFSAI" />, or <acroref type="example" acro="VFSAL" /> to obtain the solution set
+The system is consistent (no pivot column in column 10, <acroref type="theorem" acro="RCLS" />).   $F=\set{3,\,4,\,6,\,9,\,10}$, so the free variables are $x_3,\,x_4,\,x_6$ and $x_9$.  Now apply <acroref type="theorem" acro="VFSLS" /> directly, or follow the three-step process of <acroref type="example" acro="VFS" />, <acroref type="example" acro="VFSAD" />, <acroref type="example" acro="VFSAI" />, or <acroref type="example" acro="VFSAL" /> to obtain the solution set
<equation>
S=\setparts{
\colvector{ 6\\ -1\\ 0\\ 0\\ 3\\ 0\\ 0\\ -2\\0}+