Commits

rbeezer committed ea959d3

Minor edits, Sections MO, MM, MISLE

Comments (0)

Files changed (4)

 ~~~~~~~~~~~~~~~~
 Edit: Minor edits in Sections WILA, SSLE, RREF, TSS, HSE, NM
 Edit: Minor edits in Section VO, LC, SS, LI, LDS, O
+Edit: Minor edits in Sections MO, MM, MISLE
 Edit: Extended slightly the conclusion of Theorem HSC
 Edit: Sage CNIP, removed obsolete discussion (Michael DuBois)
 Edit: Stronger finish to proof of Theorem TT (Anna Dovzhik)

src/section-MISLE.xml

 \colvector{-3\\5\\2}
 </equation></p>
 
-<p>So with the help and assistance of $B$ we have been able to determine a solution to the system represented by $A\vect{x}=\vect{b}$ through judicious use of matrix multiplication.  We know by <acroref type="theorem" acro="NMUS" /> that since the coefficient matrix in this example is nonsingular, there would be a unique solution, no matter what the choice of $\vect{b}$.  The derivation above amplifies this result, since we were <em>forced</em> to conclude that $\vect{x}=B\vect{b}$ and the solution couldn't be anything else.  You should notice that this argument would hold for any particular choice of $\vect{b}$.</p>
+<p>So with the help and assistance of $B$ we have been able to determine a solution to the system represented by $A\vect{x}=\vect{b}$ through judicious use of matrix multiplication.  We know by <acroref type="theorem" acro="NMUS" /> that since the coefficient matrix in this example is nonsingular, there would be a unique solution, no matter what the choice of $\vect{b}$.  The derivation above amplifies this result, since we were <em>forced</em> to conclude that $\vect{x}=B\vect{b}$ and the solution could not be anything else.  You should notice that this argument would hold for any particular choice of $\vect{b}$.</p>
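The computational pattern here is easy to try for yourself.  A minimal Sage sketch of the idea, with small stand-in matrices rather than the ones from this example: once we have a matrix $B$ with $AB=I$, the product $B\vect{b}$ is the solution, and it can be nothing else.

    A = matrix(QQ, [[1, 2], [3, 5]])   # a stand-in nonsingular matrix
    B = A.inverse()                    # plays the role of B above
    b = vector(QQ, [-3, 5])            # any choice of b would do
    x = B*b                            # the forced solution x = B*b
    A*x == b                           # True: x really does solve the system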
 
 </example>
 
 </equation>
 which allows us to recognize the inconsistency by <acroref type="theorem" acro="RCLS" />.</p>
 
-<p>So the assumption of $A$'s inverse leads to a logical inconsistency (the system can't be both consistent and inconsistent), so our assumption is false.  $A$ is not invertible.</p>
+<p>So the assumption of $A$'s inverse leads to a logical inconsistency (the system cannot be both consistent and inconsistent), so our assumption is false.  $A$ is not invertible.</p>
 
-<p>It's possible this example is less than satisfying.  Just where did that particular choice of the vector $\vect{b}$ come from anyway?  Stay tuned for an application of the future <acroref type="theorem" acro="CSCS" /> in <acroref type="example" acro="CSAA" />.</p>
+<p>It is possible this example is less than satisfying.  Just where did that particular choice of the vector $\vect{b}$ come from anyway?  Stay tuned for an application of the future <acroref type="theorem" acro="CSCS" /> in <acroref type="example" acro="CSAA" />.</p>
 
 </example>
 
 </alignmath>
 </p>
 
-<p>Since the matrix $B$ is what we are trying to compute, we can view each column, $\vect{B}_i$, as a column vector of unknowns.  Then we have five systems of equations to solve, each with 5 equations in 5 variables.  Notice that all 5 of these systems have the same coefficient matrix.  We'll now solve each system in turn,
+<p>Since the matrix $B$ is what we are trying to compute, we can view each column, $\vect{B}_i$, as a column vector of unknowns.  Then we have five systems of equations to solve, each with 5 equations in 5 variables.  Notice that all 5 of these systems have the same coefficient matrix.  We will now solve each system in turn,
 <!--  Keep silly blank line to not confuse translators -->
 <!--  and make a non-null first grouping -->
 <alignmath>
 
 </example>
 
-<p>Notice how the five systems of equations in the preceding example were all solved by <em>exactly</em> the same sequence of row operations.  Wouldn't it be nice to avoid this obvious duplication of effort?  Our main theorem for this section follows, and it mimics this previous example, while also avoiding all the overhead.</p>
+<p>Notice how the five systems of equations in the preceding example were all solved by <em>exactly</em> the same sequence of row operations.  Would it not be nice to avoid this obvious duplication of effort?  Our main theorem for this section follows, and it mimics this previous example, while also avoiding all the overhead.</p>
 
 <theorem acro="CINM" index="matrix inverse!computation">
 <title>Computing the Inverse of a Nonsingular Matrix</title>
 </proof>
 </theorem>
 
-<p>We have to be just a bit careful here about both what this theorem says and what it doesn't say.  If $A$ is a nonsingular matrix, then we are guaranteed a matrix $B$ such that $AB=I_n$, and the proof gives us a process for constructing $B$.   However, the definition of the inverse of a matrix (<acroref type="definition" acro="MI" />) requires that $BA=I_n$ also.  So at this juncture we must compute the matrix product in the <q>opposite</q> order before we claim $B$ as the inverse of $A$.  However, we'll soon see that this is <em>always</em> the case, in <acroref type="theorem" acro="OSIS" />, so the title of this theorem is not inaccurate.</p>
+<p>We have to be just a bit careful here about both what this theorem says and what it does not say.  If $A$ is a nonsingular matrix, then we are guaranteed a matrix $B$ such that $AB=I_n$, and the proof gives us a process for constructing $B$.   However, the definition of the inverse of a matrix (<acroref type="definition" acro="MI" />) requires that $BA=I_n$ also.  So at this juncture we must compute the matrix product in the <q>opposite</q> order before we claim $B$ as the inverse of $A$.  However, we will soon see that this is <em>always</em> the case, in <acroref type="theorem" acro="OSIS" />, so the title of this theorem is not inaccurate.</p>
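To make the procedure and the caveat concrete, here is a hedged Sage sketch with a stand-in nonsingular matrix: row-reduce $A$ augmented with $I_n$, read the candidate inverse off the right half, and then check the product in both orders.

    n = 3
    A = matrix(QQ, [[1, 2, 1], [0, 1, 1], [1, 1, 1]])  # a stand-in nonsingular matrix
    M = A.augment(identity_matrix(QQ, n)).rref()       # row-reduce [A | I_n]
    B = M[:, n:]                                       # right half: the candidate inverse
    A*B == identity_matrix(n) and B*A == identity_matrix(n)   # True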
 
 <p>What if $A$ is singular?  At this point we only know that <acroref type="theorem" acro="CINM" /> cannot be applied.  The question of $A$'s inverse is still open.  (But see <acroref type="theorem" acro="NI" /> in the next section.)</p>
 
-<p>We'll finish by computing the inverse for the coefficient matrix of <acroref type="archetype" acro="B" />, the one we just pulled from a hat in <acroref type="example" acro="SABMI" />.  There are more examples in the Archetypes (<miscref type="archetype" text="Archetypes" />) to practice with, though notice that it is silly to ask for the inverse of a rectangular matrix (the sizes aren't right) and not every square matrix has an inverse (remember <acroref type="example" acro="MWIAA" />?).</p>
+<p>We will finish by computing the inverse for the coefficient matrix of <acroref type="archetype" acro="B" />, the one we just pulled from a hat in <acroref type="example" acro="SABMI" />.  There are more examples in the Archetypes (<miscref type="archetype" text="Archetypes" />) to practice with, though notice that it is silly to ask for the inverse of a rectangular matrix (the sizes are not right) and not every square matrix has an inverse (remember <acroref type="example" acro="MWIAA" />?).</p>
 
 <example acro="CMIAB" index="Archetype B!inverse">
 <title>Computing a matrix inverse, Archetype B</title>

src/section-MM.xml

 <![CDATA[3x_1+x_2+x_4-3x_5&=0\\]]>
 <![CDATA[-2x_1+7x_2-5x_3+2x_4+2x_5&=-3]]>
 </alignmath>
-has coefficient matrix
-<equation>
-A=
+has coefficient matrix and vector of constants
+<alignmath>
+A&amp;=
 \begin{bmatrix}
 <![CDATA[2 & 4 & -3 & 5 & 1\\]]>
 <![CDATA[3 & 1 & 0 & 1 & -3\\]]>
 <![CDATA[-2 & 7 & -5 & 2 & 2]]>
 \end{bmatrix}
-</equation>
-and vector of constants
-<equation>
-\vect{b}=\colvector{9\\0\\-3}
-</equation>
+&amp;\vect{b}&amp;=\colvector{9\\0\\-3}
+</alignmath>
 and so will be described compactly by the vector equation $A\vect{x}=\vect{b}$.
 </p>
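If you want to experiment, the system above is easy to enter in Sage; solve_right is Sage's built-in solver for $A\vect{x}=\vect{b}$ and returns one particular solution.

    A = matrix(QQ, [[ 2, 4, -3, 5,  1],
                    [ 3, 1,  0, 1, -3],
                    [-2, 7, -5, 2,  2]])
    b = vector(QQ, [9, 0, -3])
    x = A.solve_right(b)   # one particular solution to A*x == b
    A*x == b               # True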
 
 
 <p>If a spreadsheet were used to make these computations, a row of weights would be entered somewhere near the table of data and the formulas in the spreadsheet would effect a matrix-vector product.  This example is meant to illustrate how <q>linear</q> computations (addition, multiplication) can be organized as a matrix-vector product.</p>
 
-<p>Another example would be the matrix of numerical scores on examinations and exercises for students in a class.  The rows would correspond to students and the columns to exams and assignments.  The instructor could then assign weights to the different exams and assignments, and via a matrix-vector product, compute a single score for each student.</p>
+<p>Another example would be the matrix of numerical scores on examinations and exercises for students in a class.  The rows would be indexed by students and the columns would be indexed by exams and assignments.  The instructor could then assign weights to the different exams and assignments, and via a matrix-vector product, compute a single score for each student.</p>
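As a hedged sketch of this grade-book idea (all the numbers here are invented), the weighted scores are exactly a matrix-vector product.

    # 3 students (rows) by 4 graded items (columns); data invented
    G = matrix(QQ, [[90, 85, 100, 70],
                    [60, 75,  80, 95],
                    [88, 92,  71, 83]])
    w = vector(QQ, [30/100, 30/100, 10/100, 30/100])  # weights, summing to 1
    G*w   # one weighted score per student, one entry per row of G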
 
 </example>
 
 </statement>
 
 <proof>
-<p>We are assuming $A\vect{x}=B\vect{x}$ for all $\vect{x}\in\complex{n}$, so we can employ this equality for <em>any</em> choice of the vector $\vect{x}$.  However, we'll limit our use of this equality to the standard unit vectors, $\vect{e}_j$, $1\leq j\leq n$ (<acroref type="definition" acro="SUV" />).  For all $1\leq j\leq n$, $1\leq i\leq m$,
+<p>We are assuming $A\vect{x}=B\vect{x}$ for all $\vect{x}\in\complex{n}$, so we can employ this equality for <em>any</em> choice of the vector $\vect{x}$.  However, we will limit our use of this equality to the standard unit vectors, $\vect{e}_j$, $1\leq j\leq n$ (<acroref type="definition" acro="SUV" />).  For all $1\leq j\leq n$, $1\leq i\leq m$,
 <alignmath>
 <![CDATA[&\matrixentry{A}{ij}\\]]>
 <![CDATA[&=]]>
 
 </example>
 
-<p>Is this the definition of matrix multiplication you expected?  Perhaps our previous operations for matrices caused you to think that we might multiply two matrices of the <em>same</em> size, <em>entry-by-entry</em>?  Notice that our current definition uses matrices of different sizes (though the number of columns in the first must equal the number of rows in the second), and the result is of a third size.  Notice too in the previous example that we cannot even consider the product $BA$, since the sizes of the two matrices in this order aren't right.</p>
+<p>Is this the definition of matrix multiplication you expected?  Perhaps our previous operations for matrices caused you to think that we might multiply two matrices of the <em>same</em> size, <em>entry-by-entry</em>?  Notice that our current definition uses matrices of different sizes (though the number of columns in the first must equal the number of rows in the second), and the result is of a third size.  Notice too in the previous example that we cannot even consider the product $BA$, since the sizes of the two matrices in this order are not right.</p>
 
-<p>But it gets weirder than that.  Many of your old ideas about <q>multiplication</q> won't apply to matrix multiplication, but some still will.  So make no assumptions, and don't do anything until you have a theorem that says you can.  Even if the sizes are right, matrix multiplication is not commutative <mdash /> order matters.</p>
+<p>But it gets weirder than that.  Many of your old ideas about <q>multiplication</q> will not apply to matrix multiplication, but some still will.  So make no assumptions, and do not do anything until you have a theorem that says you can.  Even if the sizes are right, matrix multiplication is not commutative <mdash /> order matters.</p>
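A one-line Sage check makes the point concrete before the worked example below (the two matrices are arbitrary stand-ins).

    A = matrix(QQ, [[1, 2], [3, 4]])
    B = matrix(QQ, [[0, 1], [1, 0]])
    A*B == B*A   # False: order matters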
 
 <example acro="MMNC" index="matrix multiplication!noncommutative">
 <title>Matrix multiplication is not commutative</title>
 <subsection acro="MMEE">
 <title>Matrix Multiplication, Entry-by-Entry</title>
 
-<p>While certain <q>natural</q> properties of multiplication don't hold, many more do.  In the next subsection, we'll state and prove the relevant theorems.  But first, we need a theorem that provides an alternate means of multiplying two matrices.  In many texts, this would be given as the <em>definition</em> of matrix multiplication.  We prefer to turn it around and have the following formula as a consequence of our definition.  It will prove useful for proofs of matrix equality, where we need to examine products of matrices, entry-by-entry.</p>
+<p>While certain <q>natural</q> properties of multiplication do not hold, many more do.  In the next subsection, we will state and prove the relevant theorems.  But first, we need a theorem that provides an alternate means of multiplying two matrices.  In many texts, this would be given as the <em>definition</em> of matrix multiplication.  We prefer to turn it around and have the following formula as a consequence of our definition.  It will prove useful for proofs of matrix equality, where we need to examine products of matrices, entry-by-entry.</p>
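As a hedged preview of the formula the theorem states, summing the products $\matrixentry{A}{ik}\matrixentry{B}{kj}$ over $k$ reproduces an entry of $AB$; in Sage (with its 0-based indexing, and stand-in matrices):

    A = matrix(QQ, [[1, 2, 3], [4, 5, 6]])     # 2 x 3
    B = matrix(QQ, [[1, 0], [2, 1], [0, 3]])   # 3 x 2
    i, j = 1, 0
    sum(A[i, k]*B[k, j] for k in range(A.ncols())) == (A*B)[i, j]   # True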
 
 <theorem acro="EMP" index="matrix multiplication!entry-by-entry">
 <title>Entries of Matrix Products</title>
 <![CDATA[=&(0)(2)+(-4)(3)+(1)(2)+(2)(-1)+(3)(3)=-3]]>
 </alignmath></p>
 
-<p>Notice how there are 5 terms in the sum, since 5 is the common dimension of the two matrices (column count for $A$, row count for $B$).  In the conclusion of <acroref type="theorem" acro="EMP" />, it would be the index $k$ that would run from 1 to 5 in this computation.   Here's a bit more practice.</p>
+<p>Notice how there are 5 terms in the sum, since 5 is the common dimension of the two matrices (column count for $A$, row count for $B$).  In the conclusion of <acroref type="theorem" acro="EMP" />, it would be the index $k$ that would run from 1 to 5 in this computation.   Here is a bit more practice.</p>
 
 <p>The entry of third row, first column:
 <alignmath>
 conjugation (<acroref type="theorem" acro="MMCC" />),
 and
 the transpose (<acroref type="definition" acro="TM" />).
-Whew!  Here we go.  These are great proofs to practice with, so try to concoct the proofs before reading them, they'll get progressively more complicated as we go.</p>
+Whew!  Here we go.  These are great proofs to practice with, so try to concoct the proofs before reading them; they will get progressively more complicated as we go.</p>
 
 <theorem acro="MMZM" index="matrix multiplication!zero matrix">
 <title>Matrix Multiplication and the Zero Matrix</title>
 </statement>
 
 <proof>
-<p>We'll prove (1) and leave (2) to you.  Entry-by-entry, for $1\leq i\leq m$, $1\leq j\leq p$,
+<p>We will prove (1) and leave (2) to you.  Entry-by-entry, for $1\leq i\leq m$, $1\leq j\leq p$,
 <alignmath>
 \matrixentry{A\zeromatrix_{n\times p}}{ij}
 <![CDATA[&=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{\zeromatrix_{n\times p}}{kj}]]>
 </statement>
 
 <proof>
-<p>Again, we'll prove (1) and leave (2) to you.  Entry-by-entry,    For $1\leq i\leq m$, $1\leq j\leq n$,
+<p>Again, we will prove (1) and leave (2) to you.  Entry-by-entry, for $1\leq i\leq m$, $1\leq j\leq n$,
 <alignmath>
 <![CDATA[\matrixentry{AI_n}{ij}=&]]>
 \sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{I_n}{kj}
 </statement>
 
 <proof>
-<p>We'll do (1), you do (2).  Entry-by-entry, for $1\leq i\leq m$, $1\leq j\leq p$,
+<p>We will do (1), you do (2).  Entry-by-entry, for $1\leq i\leq m$, $1\leq j\leq p$,
 <alignmath>
 \matrixentry{A(B+C)}{ij}
 <![CDATA[&=]]>
 </statement>
 
 <proof>
-<p>These are equalities of matrices.  We'll do the first one, the second is similar and will be good practice for you.    For $1\leq i\leq m$, $1\leq j\leq p$,
+<p>These are equalities of matrices.  We will do the first one; the second is similar and will be good practice for you.  For $1\leq i\leq m$, $1\leq j\leq p$,
 <alignmath>
 \matrixentry{\alpha(AB)}{ij}
 <![CDATA[&=\alpha\matrixentry{AB}{ij}&&]]><acroref type="definition" acro="MSM" />\\
 </statement>
 
 <proof>
-<p>A matrix equality, so we'll go entry-by-entry, no surprise there.    For $1\leq i\leq m$, $1\leq j\leq s$,
+<p>A matrix equality, so we will go entry-by-entry, no surprise there.    For $1\leq i\leq m$, $1\leq j\leq s$,
 <alignmath>
 \matrixentry{A(BD)}{ij}
 <![CDATA[&=\sum_{k=1}^{n}\matrixentry{A}{ik}\matrixentry{BD}{kj}]]>
 </proof>
 </theorem>
 
-<p>Another theorem in this style, and it's a good one.  If you've been practicing with the previous proofs you should be able to do this one yourself.</p>
+<p>Another theorem in this style, and it is a good one.  If you have been practicing with the previous proofs you should be able to do this one yourself.</p>
 
 <theorem acro="MMT" index="matrix multiplication!transposes">
 <title>Matrix Multiplication and Transposes</title>
 </statement>
 
 <proof>
-<p>This theorem may be surprising but if we check the sizes of the matrices involved, then maybe it will not seem so far-fetched.  First, $AB$ has size $m\times p$, so its transpose has size $p\times m$.  The product of $\transpose{B}$ with $\transpose{A}$ is a $p\times n$ matrix times an $n\times m$ matrix, also resulting in a $p\times m$ matrix.  So at least our objects are compatible for equality (and would not be, in general, if we didn't reverse the order of the matrix multiplication).</p>
+<p>This theorem may be surprising but if we check the sizes of the matrices involved, then maybe it will not seem so far-fetched.  First, $AB$ has size $m\times p$, so its transpose has size $p\times m$.  The product of $\transpose{B}$ with $\transpose{A}$ is a $p\times n$ matrix times an $n\times m$ matrix, also resulting in a $p\times m$ matrix.  So at least our objects are compatible for equality (and would not be, in general, if we did not reverse the order of the matrix multiplication).</p>
 
 <p>Here we go again, entry-by-entry.  For $1\leq i\leq m$, $1\leq j\leq p$,
 <alignmath>
 
 <p>This theorem seems odd at first glance, since we have to switch the order of $A$ and $B$.  But if we simply consider the sizes of the matrices involved, we can see that the switch is necessary for this reason alone.  That the individual entries of the products then come along to be equal is a bonus.</p>
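A quick Sage check with rectangular stand-in matrices illustrates both the equality and the size bookkeeping.

    A = matrix(QQ, [[1, 2, 3], [4, 5, 6]])    # 2 x 3
    B = matrix(QQ, [[1, 0], [2, 1], [0, 3]])  # 3 x 2
    (A*B).transpose() == B.transpose()*A.transpose()   # True: a 2 x 2 equality
    # A.transpose()*B.transpose() is 3 x 3, the wrong size entirely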
 
-<p>As the adjoint of a matrix is a composition of a conjugate and a transpose, its interaction with matrix multiplication is similar to that of a transpose.  Here's the last of our long list of basic properties of matrix multiplication.</p>
+<p>As the adjoint of a matrix is a composition of a conjugate and a transpose, its interaction with matrix multiplication is similar to that of a transpose.  Here is the last of our long list of basic properties of matrix multiplication.</p>
 
 <theorem acro="MMAD" index="matrix multiplication!adjoints">
 <title>Matrix Multiplication and Adjoints</title>
 
 <p>Notice how none of these proofs above relied on writing out huge general matrices with lots of ellipses (<q><ellipsis /></q>) and trying to formulate the equalities a whole matrix at a time.  This messy business is a <q>proof technique</q> to be avoided at all costs.  Notice too how the proof of <acroref type="theorem" acro="MMAD" /> does not use an entry-by-entry approach, but simply builds on previous results about matrix multiplication's interaction with conjugation and transposes.</p>
 
-<p>These theorems, along with <acroref type="theorem" acro="VSPM" /> and the other results in <acroref type="section" acro="MO" />, give you the <q>rules</q> for how matrices interact with the various operations we have defined on matrices (addition, scalar multiplication, matrix multiplication, conjugation, transposes and adjoints).  Use them and use them often.  But don't try to do anything with a matrix that you don't have a rule for.  Together, we would informally call all these operations, and the attendant theorems, <q>the algebra of matrices.</q>  Notice, too, that every column vector is just a $n\times 1$ matrix, so these theorems apply to column vectors also.  Finally, these results, taken as a whole, may make us feel that the definition of matrix multiplication is not so unnatural.</p>
+<p>These theorems, along with <acroref type="theorem" acro="VSPM" /> and the other results in <acroref type="section" acro="MO" />, give you the <q>rules</q> for how matrices interact with the various operations we have defined on matrices (addition, scalar multiplication, matrix multiplication, conjugation, transposes and adjoints).  Use them and use them often.  But do not try to do anything with a matrix that you do not have a rule for.  Together, we would informally call all these operations, and the attendant theorems, <q>the algebra of matrices.</q>  Notice, too, that every column vector is just an $n\times 1$ matrix, so these theorems apply to column vectors also.  Finally, these results, taken as a whole, may make us feel that the definition of matrix multiplication is not so unnatural.</p>
 
 <sageadvice acro="PMM" index="matrix multiplication, properties">
 <title>Properties of Matrix Multiplication</title>
 </proof>
 </theorem>
 
-<p>So, informally, Hermitian matrices are those that can be tossed around from one side of an inner product to the other with reckless abandon.  We'll see later what this buys us.</p>
+<p>So, informally, Hermitian matrices are those that can be tossed around from one side of an inner product to the other with reckless abandon.  We will see later what this buys us.</p>
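A hedged Sage sketch of this <q>tossing</q>, with a small stand-in Hermitian matrix, exact complex arithmetic, and the convention that the first argument of the inner product is conjugated:

    K.<I> = QuadraticField(-1)            # exact arithmetic with I^2 == -1
    A = matrix(K, [[2, 1+I], [1-I, 3]])   # equals its conjugate-transpose: Hermitian
    x = vector(K, [1, I])
    y = vector(K, [2-I, 4])
    ip = lambda u, v: sum(u[k].conjugate()*v[k] for k in range(len(u)))
    ip(A*x, y) == ip(x, A*y)              # True, since A is Hermitian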
 
 </subsection>
 
 <exercise type="T" number="23" rough="Theorem MMSMM, part (2)">
 <problem contributor="robertbeezer">Prove the second part of <acroref type="theorem" acro="MMSMM" />.
 </problem>
-<solution contributor="robertbeezer">We'll run the proof entry-by-entry.
+<solution contributor="robertbeezer">We will run the proof entry-by-entry.
 <alignmath>
 <![CDATA[\matrixentry{\alpha(AB)}{ij}=&]]>
 <![CDATA[\alpha\matrixentry{AB}{ij}&&]]><acroref type="definition" acro="MSM" />\\

src/section-MO.xml

 </notation>
 </definition>
 
-<p>So matrix addition takes two matrices of the same size and combines them (in a natural way!) to create a new matrix of the same size.  Perhaps this is the <q>obvious</q> thing to do, but it doesn't relieve us from the obligation to state it carefully.</p>
+<p>So matrix addition takes two matrices of the same size and combines them (in a natural way!) to create a new matrix of the same size.  Perhaps this is the <q>obvious</q> thing to do, but it does not relieve us from the obligation to state it carefully.</p>
 
 <example acro="MA" index="matrix addition">
 <title>Addition of two matrices in $M_{23}$</title>
 </statement>
 
 <proof>
-<p>While some of these properties seem very obvious, they all require proof.  However, the proofs are not very interesting, and border on tedious. We'll prove one version of distributivity very carefully, and you can test your proof-building skills on some of the others.  We'll give our new notation for matrix entries a workout here.  Compare the style of the proofs here with those given for vectors in <acroref type="theorem" acro="VSPCV" /> <mdash /> while the objects here are more complicated, our notation makes the proofs cleaner.</p>
+<p>While some of these properties seem very obvious, they all require proof.  However, the proofs are not very interesting, and border on tedious. We will prove one version of distributivity very carefully, and you can test your proof-building skills on some of the others.  We will give our new notation for matrix entries a workout here.  Compare the style of the proofs here with those given for vectors in <acroref type="theorem" acro="VSPCV" /> <mdash /> while the objects here are more complicated, our notation makes the proofs cleaner.</p>
 
 <p>To prove <acroref type="property" acro="DSAM" />,  $(\alpha+\beta)A=\alpha A+\beta A$, we need to establish the equality of two matrices (see <acroref type="technique" acro="GS" />).  <acroref type="definition" acro="ME" /> says we need to establish the equality of their entries, one-by-one.  How do we do this, when we do not even know how many entries the two matrices might have?  This is where the notation for matrix entries, given in <acroref type="definition" acro="M" />, comes into play.  Ready?  Here we go.</p>
 
 </alignmath>
 </p>
 
-<p>There are several things to notice here.  (1)  Each equals sign is an equality of numbers.  (2) The two ends of the equation, being true for any $i$ and $j$, allow us to conclude the equality of the matrices by <acroref type="definition" acro="ME" />.  (3)  There are several plus signs, and several instances of juxtaposition.  Identify each one, and state exactly what operation is being represented by each.</p>
+<p>There are several things to notice here.  (1)  Each equals sign is an equality of scalars (numbers).  (2) The two ends of the equation, being true for any $i$ and $j$, allow us to conclude the equality of the matrices by <acroref type="definition" acro="ME" />.  (3)  There are several plus signs, and several instances of juxtaposition.  Identify each one, and state exactly what operation is being represented by each.</p>
 
 </proof>
 </theorem>
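The distributivity just proved is also easy to spot-check numerically; a tiny Sage sketch with invented values:

    alpha, beta = 2, 5
    A = matrix(QQ, [[1, -1], [0, 3]])
    (alpha + beta)*A == alpha*A + beta*A   # True, as Property DSAM asserts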
 
 </example>
 
-<p>You might have noticed that <acroref type="definition" acro="SYM" /> did not specify the size of the matrix $A$, as has been our custom.  That's because it wasn't necessary.  An alternative would have been to state the definition just for square matrices, but this is the substance of the next proof.</p>
+<p>You might have noticed that <acroref type="definition" acro="SYM" /> did not specify the size of the matrix $A$, as has been our custom.  That is because it was not necessary.  An alternative would have been to state the definition just for square matrices, but this is the substance of the next proof.</p>
 
 <p>Before reading the next proof, we want to offer you some advice about how to become more proficient at constructing proofs.  Perhaps you can apply this advice to the next theorem.  Have a peek at <acroref type="technique" acro="P" /> now.</p>
 
 </statement>
 
 <proof>
-<p>We start by specifying $A$'s size, without assuming it is square, since we are trying to <em>prove</em> that, so we can't also assume it.  Suppose $A$ is an $m\times n$ matrix.  Because $A$ is symmetric, we know by <acroref type="definition" acro="SM" /> that $A=\transpose{A}$.  So, in particular, <acroref type="definition" acro="ME" /> requires that $A$ and $\transpose{A}$ must have the same size.  The size of $\transpose{A}$ is $n\times m$.  Because $A$ has $m$ rows and $\transpose{A}$ has $n$ rows, we conclude that $m=n$, and hence $A$ must be square by <acroref type="definition" acro="SQM" />.</p>
+<p>We start by specifying $A$'s size, without assuming it is square, since we are trying to <em>prove</em> that, so we cannot also assume it.  Suppose $A$ is an $m\times n$ matrix.  Because $A$ is symmetric, we know by <acroref type="definition" acro="SM" /> that $A=\transpose{A}$.  So, in particular, <acroref type="definition" acro="ME" /> requires that $A$ and $\transpose{A}$ must have the same size.  The size of $\transpose{A}$ is $n\times m$.  Because $A$ has $m$ rows and $\transpose{A}$ has $n$ rows, we conclude that $m=n$, and hence $A$ must be square by <acroref type="definition" acro="SQM" />.</p>
 
 </proof>
 </theorem>
 <exercise type="M" number="20" rough="Definitions">
 <problem contributor="robertbeezer">Suppose $S=\set{B_1,\,B_2,\,B_3,\,\ldots,\,B_p}$ is a set of matrices from $M_{mn}$.  Formulate appropriate definitions for the following terms and give an example of the use of each.
 <ol><li> A linear combination of elements of $S$.
-</li><li> A relation of linear dependence on $S$, both trivial and non-trivial.
+</li><li> A relation of linear dependence on $S$, both trivial and nontrivial.
 </li><li> $S$ is a linearly independent set.
 </li><li> $\spn{S}$.
 </li></ol>