# fcla / src / section-NM.xml

Nonsingular Matrices

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (, , ) that these matrices are especially important.

Nonsingular Matrices

Our theorems will now establish connections between systems of equations (homogeneous or otherwise), augmented matrices representing those systems, coefficient matrices, constant vectors, the reduced row-echelon form of matrices (augmented and coefficient) and solution sets. Be very careful in your reading, writing and speaking about systems of equations, matrices and sets of vectors. A system of equations is not a matrix, a matrix is not a solution set, and a solution set is not a system of equations. Now would be a great time to review the discussion about speaking and writing mathematics in .

Square Matrix

A matrix with $m$ rows and $n$ columns is square if $m=n$. In this case, we say the matrix has size $n$. To emphasize the situation when a matrix is not square, we will call it rectangular.

We can now present one of the central definitions of linear algebra.

Nonsingular Matrix

Suppose $A$ is a square matrix. Suppose further that the solution set to the homogeneous linear system of equations $\linearsystem{A}{\zerovector}$ is $\set{\zerovector}$, in other words, the system has only the trivial solution. Then we say that $A$ is a nonsingular matrix. Otherwise we say $A$ is a singular matrix.

We can investigate whether any square matrix is nonsingular or not, no matter if the matrix is derived somehow from a system of equations or if it is simply a matrix. The definition says that to perform this investigation we must construct a very specific system of equations (homogeneous, with the matrix as the coefficient matrix) and look at its solution set. We will have theorems in this section that connect nonsingular matrices with systems of equations, creating more opportunities for confusion. Convince yourself now of two observations, (1) we can decide nonsingularity for any square matrix, and (2) the determination of nonsingularity involves the solution set for a certain homogeneous system of equations.
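These two observations can be carried out mechanically. Below is a minimal plain-Python sketch (no Sage required; the helper names rref and is_nonsingular are our own) that decides nonsingularity straight from the definition: row-reduce the matrix and count pivots, since zero free variables means the homogeneous system has only the trivial solution. The two test matrices are the coefficient matrices of Archetype A and Archetype B, which reappear in the Sage cells later in this section.

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form of M, computed exactly over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]          # swap the pivot row up
        M[r] = [x / M[r][c] for x in M[r]]       # scale to get a leading 1
        for i in range(rows):                    # clear the rest of the column
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def is_nonsingular(A):
    """The definition, computationally: the homogeneous system LS(A, 0) has
    only the trivial solution exactly when there are n - r = 0 free
    variables, i.e. when every one of the n columns is a pivot column."""
    n = len(A)
    r = sum(1 for row in rref(A) if any(x != 0 for x in row))
    return r == n

# Coefficient matrices of Archetype A and Archetype B
A = [[1, -1, 2], [2, 1, 1], [1, 1, 0]]
B = [[-7, -6, -12], [5, 5, 7], [1, 0, 4]]
print(is_nonsingular(A))  # False: A is singular
print(is_nonsingular(B))  # True: B is nonsingular
```

Exact rational arithmetic (Fraction) avoids the floating-point pitfalls that would otherwise make "is this entry zero?" unreliable.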

Notice that it makes no sense to call a system of equations nonsingular (the term does not apply to a system of equations), nor does it make any sense to call a $5\times 7$ matrix singular (the matrix is not square).

A singular matrix, Archetype A

shows that the coefficient matrix derived from , specifically the $3\times 3$ matrix, A= \begin{bmatrix} 1&-1&2\\ 2&1&1\\ 1&1&0 \end{bmatrix} is a singular matrix since there are nontrivial solutions to the homogeneous system $\homosystem{A}$.

A nonsingular matrix, Archetype B

shows that the coefficient matrix derived from , specifically the $3\times 3$ matrix, B= \begin{bmatrix} -7&-6&-12\\ 5&5&7\\ 1&0&4 \end{bmatrix} is a nonsingular matrix since the homogeneous system, $\homosystem{B}$, has only the trivial solution.

Notice that we will not discuss as being a singular or nonsingular coefficient matrix since the matrix is not square.

Nonsingular Matrix

Being nonsingular is an important matrix property, and in such cases Sage contains commands that quickly and easily determine if the mathematical object does, or does not, have the property. The names of these types of methods universally begin with .is_, and these might be referred to as predicates or queries. In the Sage notebook, define a simple matrix A, and then in a cell type A.is_, followed by pressing the Tab key rather than evaluating the cell. You will get a list of numerous properties that you can investigate for the matrix A. (This will not work as advertised with the Sage cell server.) The convention is to name these properties in a positive way, but for nonsingular matrices the relevant command is .is_singular(), so we negate its result when we want to test for nonsingularity. We will redo and . Note the use of not in the last compute cell.

```
A = matrix(QQ, [[1, -1, 2],
                [2,  1, 1],
                [1,  1, 0]])
A.is_singular()
```

    True

```
B = matrix(QQ, [[-7, -6, -12],
                [ 5,  5,   7],
                [ 1,  0,   4]])
B.is_singular()
```

    False

```
not(B.is_singular())
```

    True

The next theorem combines with our main computational technique (row reducing a matrix) to make it easy to recognize a nonsingular matrix. But first a definition.

Identity Matrix

The $m\times m$ identity matrix, $I_m$, is defined by \matrixentry{I_m}{ij}= \begin{cases} 1 & \text{if } i=j\\ 0 & \text{if } i\neq j \end{cases} \quad\quad 1\leq i,\,j\leq m

Identity Matrix $I_m$
An identity matrix

The $4\times 4$ identity matrix is I_4= \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix}.

Notice that an identity matrix is square, and in reduced row-echelon form. So in particular, if we were to arrive at the identity matrix while bringing a matrix to reduced row-echelon form, then it would have all of the diagonal entries circled as leading 1's.
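To make this observation concrete, here is a short plain-Python sketch (the helper names identity and is_rref are our own illustrative constructions) that builds $I_4$ and checks the defining conditions of reduced row-echelon form directly.

```python
from fractions import Fraction

def identity(n):
    """The n x n identity matrix I_n: entry 1 when i = j, 0 otherwise."""
    return [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
            for i in range(n)]

def is_rref(M):
    """Check the defining conditions of reduced row-echelon form: any zero
    rows sit at the bottom, each leading entry of a nonzero row is a 1,
    leading 1s move strictly rightward, and each pivot column is zero
    everywhere except at its leading 1."""
    last_lead = -1
    seen_zero_row = False
    for k, row in enumerate(M):
        lead = next((j for j, x in enumerate(row) if x != 0), None)
        if lead is None:
            seen_zero_row = True
            continue
        if seen_zero_row:                 # nonzero row below a zero row
            return False
        if row[lead] != 1 or lead <= last_lead:
            return False
        if any(M[i][lead] != 0 for i in range(len(M)) if i != k):
            return False                  # pivot column not otherwise zero
        last_lead = lead
    return True

print(is_rref(identity(4)))  # True: I_4 is already in reduced row-echelon form
```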

Identity Matrix

It is straightforward to create an identity matrix in Sage. Just specify the number system and the number of rows (which will equal the number of columns, so you do not specify that since it would be redundant). The number system can be left out, but the result will have entries from the integers (ZZ), which in this course is unlikely to be what you really want.

```
id5 = identity_matrix(QQ, 5)
id5
```

    [1 0 0 0 0]
    [0 1 0 0 0]
    [0 0 1 0 0]
    [0 0 0 1 0]
    [0 0 0 0 1]

```
id4 = identity_matrix(4)
id4.base_ring()
```

    Integer Ring

Notice that we do not use the now-familiar dot notation to create an identity matrix. (What would we use the dot notation on, anyway?) For this reason we call the identity_matrix() function a constructor, since it builds something from scratch, in this case a very particular type of matrix. We mentioned above that an identity matrix is in reduced row-echelon form. What happens if we try to row-reduce a matrix that is already in reduced row-echelon form? By the uniqueness of the result, there should be no change. The following code illustrates this. Notice that = is used to assign an object to a name, while == is used to test equality of two objects. I frequently make the mistake of forgetting the second equal sign when I mean to test equality.

```
id50 = identity_matrix(QQ, 50)
id50 == id50.rref()
```

    True

Nonsingular Matrices Row Reduce to the Identity Matrix

Suppose that $A$ is a square matrix and $B$ is a row-equivalent matrix in reduced row-echelon form. Then $A$ is nonsingular if and only if $B$ is the identity matrix.

($\Leftarrow$) Suppose $B$ is the identity matrix. When the augmented matrix $\augmented{A}{\zerovector}$ is row-reduced, the result is $\augmented{B}{\zerovector}=\augmented{I_n}{\zerovector}$. The number of nonzero rows is equal to the number of variables in the linear system of equations $\linearsystem{A}{\zerovector}$, so $n=r$ and gives $n-r=0$ free variables. Thus, the homogeneous system $\homosystem{A}$ has just one solution, which must be the trivial solution. This is exactly the definition of a nonsingular matrix ().

($\Rightarrow$) If $A$ is nonsingular, then the homogeneous system $\linearsystem{A}{\zerovector}$ has a unique solution, and has no free variables in the description of the solution set. The homogeneous system is consistent () so applies and tells us there are $n-r$ free variables. Thus, $n-r=0$, and so $n=r$. So $B$ has $n$ pivot columns among its total of $n$ columns. This is enough to force $B$ to be the $n\times n$ identity matrix $I_n$ (see ).

Notice that since this theorem is an equivalence it will always allow us to determine if a matrix is either nonsingular or singular. Here are two examples of this, continuing our study of Archetype A and Archetype B.

Singular matrix, row-reduced

The coefficient matrix for is A= \begin{bmatrix} 1&-1&2\\ 2&1&1\\ 1&1&0 \end{bmatrix} which when row-reduced becomes the row-equivalent matrix B= \begin{bmatrix} 1&0&1\\ 0&1&-1\\ 0&0&0 \end{bmatrix}.

Since this matrix is not the $3\times 3$ identity matrix, tells us that $A$ is a singular matrix.

Nonsingular matrix, row-reduced

The coefficient matrix for is A= \begin{bmatrix} -7&-6&-12\\ 5&5&7\\ 1&0&4 \end{bmatrix} which when row-reduced becomes the row-equivalent matrix B= \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}.

Since this matrix is the $3\times 3$ identity matrix, tells us that $A$ is a nonsingular matrix.

Null Space of a Nonsingular Matrix

Nonsingular matrices and their null spaces are intimately related, as the next two examples illustrate.

Null space of a singular matrix

Given the coefficient matrix from , A= \begin{bmatrix} 1&-1&2\\ 2&1&1\\ 1&1&0 \end{bmatrix} the null space is the set of solutions to the homogeneous system of equations $\homosystem{A}$, which has a solution set and null space constructed in as \nsp{A}=\setparts{\colvector{-x_3\\x_3\\x_3}}{x_3\in\complexes}

Null space of a nonsingular matrix

Given the coefficient matrix from , A= \begin{bmatrix} -7&-6&-12\\ 5&5&7\\ 1&0&4 \end{bmatrix} the solution set to the homogeneous system $\homosystem{A}$ is constructed in and contains only the trivial solution, so the null space has only a single element, \nsp{A}=\set{\colvector{0\\0\\0}}

These two examples illustrate the next theorem, which is another equivalence.

Nonsingular Matrices have Trivial Null Spaces

Suppose that $A$ is a square matrix. Then $A$ is nonsingular if and only if the null space of $A$, $\nsp{A}$, contains only the zero vector, $\nsp{A}=\set{\zerovector}$.

The null space of a square matrix, $A$, is equal to the set of solutions to the homogeneous system, $\homosystem{A}$. A matrix is nonsingular if and only if the set of solutions to the homogeneous system, $\linearsystem{A}{\zerovector}$, has only a trivial solution. These two observations may be chained together to construct the two proofs necessary for each half of this theorem.
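The pivot-based construction of null space vectors can be sketched in plain Python (no Sage; the helper names are our own): one solution vector per free variable, so the null space is trivial exactly when there are no free variables.

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form of M, computed exactly over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def null_space_basis(A):
    """One solution vector per free variable: set that free variable to 1,
    the other free variables to 0, and read the dependent variables off
    the nonzero rows of the reduced row-echelon form."""
    n = len(A[0])
    R = rref(A)
    leads = [next((j for j, x in enumerate(row) if x != 0), None) for row in R]
    pivots = set(j for j in leads if j is not None)
    basis = []
    for f in (j for j in range(n) if j not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, lead in enumerate(leads):
            if lead is not None:
                v[lead] = -R[i][f]
        basis.append(v)
    return basis

A = [[1, -1, 2], [2, 1, 1], [1, 1, 0]]      # Archetype A, singular
B = [[-7, -6, -12], [5, 5, 7], [1, 0, 4]]   # Archetype B, nonsingular
print(null_space_basis(A))  # one vector, (-1, 1, 1): N(A) is nontrivial
print(null_space_basis(B))  # []: N(B) = {0}, so B is nonsingular
```

Notice how the single null space vector for Archetype A matches the description of $\nsp{A}$ above with $x_3=1$.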

The next theorem pulls a lot of big ideas together. tells us that we can learn much about solutions to a system of linear equations with a square coefficient matrix by just examining a similar homogeneous system.

Nonsingular Matrices and Unique Solutions

Suppose that $A$ is a square matrix. $A$ is a nonsingular matrix if and only if the system $\linearsystem{A}{\vect{b}}$ has a unique solution for every choice of the constant vector $\vect{b}$.

($\Leftarrow$) The hypothesis for this half of the proof is that the system $\linearsystem{A}{\vect{b}}$ has a unique solution for every choice of the constant vector $\vect{b}$. We will make a very specific choice for $\vect{b}$: $\vect{b}=\zerovector$. Then we know that the system $\linearsystem{A}{\zerovector}$ has a unique solution. But this is precisely the definition of what it means for $A$ to be nonsingular (). That almost seems too easy! Notice that we have not used the full power of our hypothesis, but there is nothing that says we must use a hypothesis to its fullest.

($\Rightarrow$) We assume that $A$ is nonsingular of size $n\times n$, so we know there is a sequence of row operations that will convert $A$ into the identity matrix $I_n$ (). Form the augmented matrix $A^\prime=\augmented{A}{\vect{b}}$ and apply this same sequence of row operations to $A^\prime$. The result will be the matrix $B^\prime=\augmented{I_n}{\vect{c}}$, which is in reduced row-echelon form with $r=n$. Then the augmented matrix $B^\prime$ represents the (extremely simple) system of equations $x_i=\vectorentry{\vect{c}}{i}$, $1\leq i\leq n$. The vector $\vect{c}$ is clearly a solution, so the system is consistent (). With a consistent system, we use to count free variables. We find that there are $n-r=n-n=0$ free variables, and so we therefore know that the solution is unique. (This half of the proof was suggested by Asa Scherer.)

This theorem helps to explain part of our interest in nonsingular matrices. If a matrix is nonsingular, then no matter what vector of constants we pair it with, using the matrix as the coefficient matrix will always yield a linear system of equations with a solution, and the solution is unique. To determine if a matrix has this property (non-singularity) it is enough to just solve one linear system, the homogeneous system with the matrix as coefficient matrix and the zero vector as the vector of constants (or any other vector of constants, see ).
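A hypothetical plain-Python version of this procedure (our own sketch, not FCLA's Sage code): row-reduce the augmented matrix $\augmented{A}{\vect{b}}$ and, when the coefficient columns reduce to the identity matrix, read the unique solution from the final column. The matrix below is Archetype B's coefficient matrix paired with a sample vector of constants.

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form of M, computed exactly over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def unique_solution(A, b):
    """Row-reduce the augmented matrix [A | b].  When A is nonsingular the
    coefficient columns reduce to the identity matrix, so the final column
    of the result is the one and only solution."""
    n = len(A)
    R = rref([row + [c] for row, c in zip(A, b)])
    if any(R[i][j] != (1 if i == j else 0) for i in range(n) for j in range(n)):
        raise ValueError("coefficient matrix is singular; no unique solution")
    return [R[i][n] for i in range(n)]

# Archetype B's coefficient matrix with a sample vector of constants
B = [[-7, -6, -12], [5, 5, 7], [1, 0, 4]]
print(unique_solution(B, [-33, 24, 5]))   # the unique solution (-3, 5, 2)
```

Any other choice of constants for this nonsingular matrix would succeed the same way; only the final column of the reduced matrix changes.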

Formulating the negation of the second part of this theorem is a good exercise. A singular matrix has the property that for some value of the vector $\vect{b}$, the system $\linearsystem{A}{\vect{b}}$ does not have a unique solution (which means that it has no solution or infinitely many solutions). We will be able to say more about this case later (see the discussion following ).
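We can watch this negation happen with a short plain-Python classifier (again our own construction): for the singular coefficient matrix of Archetype A, one choice of $\vect{b}$ yields infinitely many solutions while another yields none.

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form of M, computed exactly over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def solution_count(A, b):
    """Classify LS(A, b) from the reduced row-echelon form of [A | b]:
    'none' if the last column is a pivot column (inconsistent), 'unique'
    if every coefficient column is a pivot column, 'infinite' otherwise."""
    n = len(A[0])
    R = rref([row + [c] for row, c in zip(A, b)])
    leads = [next((j for j, x in enumerate(row) if x != 0), None) for row in R]
    if n in leads:
        return 'none'
    r = len([j for j in leads if j is not None])
    return 'unique' if r == n else 'infinite'

A = [[1, -1, 2], [2, 1, 1], [1, 1, 0]]  # Archetype A's singular matrix
print(solution_count(A, [0, 0, 0]))  # infinite
print(solution_count(A, [0, 0, 1]))  # none
```

With a singular coefficient matrix, "unique" is never the answer; which of the other two outcomes occurs depends on the vector of constants.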

Square matrices that are nonsingular have a long list of interesting properties, which we will start to catalog in the following, recurring, theorem. Of course, singular matrices will then have all of the opposite properties. The following theorem is a list of equivalences.

We want to understand just what is involved in understanding and proving a theorem that says several conditions are equivalent. So have a look at before studying the first in this series of theorems.

Nonsingular Matrix Equivalences, Round 1

Suppose that $A$ is a square matrix. The following are equivalent.

1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$.
4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$.

That $A$ is nonsingular is equivalent to each of the subsequent statements by, in turn, , and . So the statement of this theorem is just a convenient way to organize all these results.

Nonsingular Matrix Equivalences, Round 1

Sage will create random matrices and vectors, sometimes with various properties. These can be very useful for quick experiments, and they are also useful for illustrating that theorems hold for any object satisfying the hypotheses of the theorem. But this will never replace a proof.

We will illustrate using Sage. We will use a variant of the random_matrix() constructor that uses the algorithm='unimodular' keyword. We will have to wait for before we can give a full explanation, but for now, understand that this command will always create a square matrix that is nonsingular. Also realize that there are square nonsingular matrices which will never be the output of this command. In other words, this command creates elements of just a subset of all possible nonsingular matrices.

So we are using random matrices below to illustrate properties predicted by . Execute the first command to create a random nonsingular matrix, and notice that we only have to mark the output of A as random for our automated testing process. After a few runs, notice that you can also edit the value of n to create matrices of different sizes. With a matrix A defined, run the next three cells, which each always produce True as their output, no matter what value A has, so long as A is nonsingular. Read the code and try to determine exactly how they correspond to the parts of the theorem (some commentary along these lines follows).

```
n = 6
A = random_matrix(QQ, n, algorithm='unimodular')
A  # random
```

    [  1  -4   8  14   8   55]
    [  4 -15  29  50  30  203]
    [ -4  17 -34 -59 -35 -235]
    [ -1   3  -8 -16  -5  -48]
    [ -5  16 -33 -66 -16 -195]
    [  1  -2   2   7  -2   10]

```
A.rref() == identity_matrix(QQ, n)
```

    True

```
nsp = A.right_kernel(basis='pivot')
nsp.list() == [zero_vector(QQ, n)]
```

    True

```
b = random_vector(QQ, n)
aug = A.augment(b)
aug.pivots() == tuple(range(n))
```

    True

The only portion of these commands that may be unfamiliar is the last one. The command range(n) is incredibly useful, as it will create a list of the integers from 0 up to, but not including, n. (We saw this command briefly in .) So, for example, range(3) == [0,1,2] is True. Pivots are returned as a tuple, which is very much like a list, except we cannot change the contents. We can see the difference in the way a tuple prints, with parentheses ((,)) rather than brackets ([,]). We can convert the list to a tuple with the tuple() command, in order to make the comparison succeed.

How do we tell if the reduced row-echelon form of the augmented matrix of a system of equations represents a system with a unique solution? First, the system must be consistent, which by means the last column is not a pivot column. Then with a consistent system we need to ensure there are no free variables. This happens if and only if the remaining columns are all pivot columns, according to , thus the test used in the last compute cell.

Finally, you may have wondered why we refer to a matrix as nonsingular when it creates systems of equations with single solutions ()! I've wondered the same thing. We'll have an opportunity to address this when we get to . Can you wait that long?

1. What is the definition of a nonsingular matrix?
2. What is the easiest way to recognize if a square matrix is nonsingular or not?
3. Suppose we have a system of equations and its coefficient matrix is nonsingular. What can you say about the solution set for this system?

In Exercises C30–C33, determine if the matrix is nonsingular or singular. Give reasons for your answer.

C30 \begin{bmatrix} \end{bmatrix}

The matrix row-reduces to \begin{bmatrix} \end{bmatrix} which is the $4\times 4$ identity matrix. By , the original matrix must be nonsingular.

C31 \begin{bmatrix} \end{bmatrix}

Row-reducing the matrix yields \begin{bmatrix} \end{bmatrix} Since this is not the $4\times 4$ identity matrix, tells us the matrix is singular.

C32 \begin{bmatrix} \end{bmatrix}

The matrix is not square, so neither term is applicable. See , which is stated for just square matrices.

C33 \begin{bmatrix} \end{bmatrix}

tells us we can answer this question by simply row-reducing the matrix. Doing this we obtain \begin{bmatrix} \end{bmatrix} Since the reduced row-echelon form of the matrix is the $4\times 4$ identity matrix $I_4$, we know that $B$ is nonsingular.
Each of the archetypes below is a system of equations with a square coefficient matrix, or is itself a square matrix. Determine if these matrices are nonsingular, or singular. Comment on the null space of each matrix.

, , , ,
Find the null space of the matrix $E$ below. \begin{bmatrix} \end{bmatrix}

We form the augmented matrix of the homogeneous system $\homosystem{E}$ and row-reduce the matrix, \begin{bmatrix} \end{bmatrix} \begin{bmatrix} \end{bmatrix} We knew ahead of time that this system would be consistent (), but we can now see there are $n-r=4-2=2$ free variables, namely $x_3$ and $x_4$, since $F=\set{3,4,5}$ (). Based on this analysis, we can rearrange the equation associated with each nonzero row of the reduced row-echelon form into an expression for its dependent variable as a function of the free variables. We arrive at the solution set to this homogeneous system, which is the null space of the matrix by , \nsp{E}=\setparts{\colvector{-2x_3+6x_4\\5x_3-3x_4\\x_3\\x_4}}{x_3,\,x_4\in\complexes}

Let $A$ be the coefficient matrix of the system of equations below. Is $A$ nonsingular or singular? Explain what you could infer about the solution set for the system based only on what you have learned about $A$ being singular or nonsingular.

We row-reduce the coefficient matrix of the system of equations, \begin{bmatrix} \end{bmatrix} \begin{bmatrix} \end{bmatrix} Since the row-reduced version of the coefficient matrix is the $4\times 4$ identity matrix $I_4$ (by ), we know the coefficient matrix is nonsingular. According to we know that the system is guaranteed to have a unique solution, based only on the extra information that the coefficient matrix is nonsingular.

For Exercises M51–M52, say as much as possible about each system's solution set. Be sure to make it clear which theorems you are using to reach your conclusions.

6 equations in 6 variables, singular coefficient matrix.

tells us that the coefficient matrix will not row-reduce to the identity matrix. So if we were to row-reduce the augmented matrix of this system of equations, we would not get a unique solution. So by , the remaining possibilities are no solution at all, or infinitely many solutions.

A system with a nonsingular coefficient matrix, not homogeneous.

Any system with a nonsingular coefficient matrix will have a unique solution by . If the system is not homogeneous, the solution cannot be the zero vector ().
Suppose that $A$ is a square matrix, and $B$ is a matrix in reduced row-echelon form that is row-equivalent to $A$. Prove that if $A$ is singular, then the last row of $B$ is a zero row.

Let $n$ denote the size of the square matrix $A$. By , the hypothesis that $A$ is singular implies that $B$ is not the identity matrix $I_n$. If $B$ had $n$ pivot columns, then it would have to be $I_n$, so $B$ must have fewer than $n$ pivot columns. But the number of nonzero rows in $B$, $r$, is equal to the number of pivot columns. So $B$ has fewer than $n$ nonzero rows among its $n$ rows, and must therefore contain at least one zero row. By , this row must be at the bottom of $B$.

A proof can also be formulated by first forming the contrapositive of the statement () and proving this statement.
Suppose that $A$ is a square matrix. Using the definition of reduced row-echelon form () carefully, give a proof of the following equivalence: every column of $A$ is a pivot column if and only if $A$ is the identity matrix ().

Suppose that $A$ is a nonsingular matrix and $A$ is row-equivalent to the matrix $B$. Prove that $B$ is nonsingular.

Since $A$ and $B$ are row-equivalent matrices, consideration of the three row operations () will show that the augmented matrices $\augmented{A}{\zerovector}$ and $\augmented{B}{\zerovector}$ are also row-equivalent. This says that the two homogeneous systems, $\homosystem{A}$ and $\homosystem{B}$, are equivalent systems. $\homosystem{A}$ has only the zero vector as a solution (), thus $\homosystem{B}$ has only the zero vector as a solution. Finally, by , we see that $B$ is nonsingular.

Form a similar theorem replacing nonsingular by singular in both the hypothesis and the conclusion. Prove this new theorem with an approach just like the one above, and/or employ the result about nonsingular matrices in a proof by contradiction.
Suppose that $A$ is a square matrix of size $n\times n$ and that we know there is a single vector $\vect{b}\in\complex{n}$ such that the system $\linearsystem{A}{\vect{b}}$ has a unique solution. Prove that $A$ is a nonsingular matrix. (Notice that this is very similar to , but is not exactly the same.)

Let $B$ be the reduced row-echelon form of the augmented matrix $\augmented{A}{\vect{b}}$. Because the system $\linearsystem{A}{\vect{b}}$ is consistent, we know by that the last column of $B$ is not a pivot column. Suppose now that $r<n$. Then by the system would have infinitely many solutions. From this contradiction, we see that $r=n$ and the first $n$ columns of $B$ are each pivot columns. Then the sequence of row operations that converts $\augmented{A}{\vect{b}}$ to $B$ will also convert $A$ to $I_n$. Applying we conclude that $A$ is nonsingular.

Provide an alternative for the second half of the proof of , without appealing to properties of the reduced row-echelon form of the coefficient matrix. In other words, prove that if $A$ is nonsingular, then $\linearsystem{A}{\vect{b}}$ has a unique solution for every choice of the constant vector $\vect{b}$. Construct this proof without using or .

We assume $A$ is nonsingular, and try to solve the system $\linearsystem{A}{\vect{b}}$ without making any assumptions about $\vect{b}$. To do this we will begin by constructing a new homogeneous linear system of equations that looks very much like the original. Suppose $A$ has size $n$ (why must it be square?) and write the original system as \quad(*)\quad a_{i1}x_1+a_{i2}x_2+a_{i3}x_3+\cdots+a_{in}x_n=b_i\qquad 1\leq i\leq n Form the new, homogeneous system in $n$ equations with $n+1$ variables, by adding a new variable $y$, whose coefficients are the negatives of the constant terms, \quad(**)\quad a_{i1}x_1+a_{i2}x_2+a_{i3}x_3+\cdots+a_{in}x_n-b_iy=0\qquad 1\leq i\leq n Since this is a homogeneous system with more variables than equations ($n+1>n$), says that the system has infinitely many solutions. We will choose one of these solutions, any one of these solutions, so long as it is not the trivial solution.
Write this solution as x_1=c_1\quad x_2=c_2\quad x_3=c_3\quad \ldots\quad x_n=c_n\quad y=c_{n+1} We know that at least one value of the $c_i$ is nonzero, but we will now show that in particular $c_{n+1}\neq 0$. We do this using a proof by contradiction (). So suppose the $c_i$ form a solution as described, and in addition that $c_{n+1}=0$. Then we can write the $i$-th equation of system $(**)$ as, a_{i1}c_1+a_{i2}c_2+a_{i3}c_3+\cdots+a_{in}c_n-b_i(0)=0 which becomes a_{i1}c_1+a_{i2}c_2+a_{i3}c_3+\cdots+a_{in}c_n=0 Since this is true for each $i$, we have that $x_1=c_1,\,x_2=c_2,\,x_3=c_3,\ldots,\,x_n=c_n$ is a solution to the homogeneous system $\homosystem{A}$ formed with a nonsingular coefficient matrix. This means that the only possible solution is the trivial solution, so $c_1=0,\,c_2=0,\,c_3=0,\,\ldots,\,c_n=0$. So, assuming simply that $c_{n+1}=0$, we conclude that all of the $c_i$ are zero. But this contradicts our choice of the $c_i$ as not being the trivial solution to the system $(**)$. So $c_{n+1}\neq 0$.

We now propose and verify a solution to the original system $(*)$. Set x_i=\frac{c_i}{c_{n+1}}\qquad 1\leq i\leq n Notice how it was necessary that we know that $c_{n+1}\neq 0$ for this step to succeed. Now, evaluate the $i$-th equation of system $(*)$ with this proposed solution, and recognize in the third line that $c_1$ through $c_{n+1}$ appear as if they were substituted into the left-hand side of the $i$-th equation of system $(**)$, \begin{align*} &a_{i1}\frac{c_1}{c_{n+1}}+a_{i2}\frac{c_2}{c_{n+1}}+\cdots+a_{in}\frac{c_n}{c_{n+1}}\\ &\quad=\frac{1}{c_{n+1}}\left(a_{i1}c_1+a_{i2}c_2+\cdots+a_{in}c_n\right)\\ &\quad=\frac{1}{c_{n+1}}\left(a_{i1}c_1+a_{i2}c_2+\cdots+a_{in}c_n-b_ic_{n+1}\right)+b_i\\ &\quad=\frac{1}{c_{n+1}}(0)+b_i\\ &\quad=b_i \end{align*} Since this equation is true for every $i$, we have found a solution to system $(*)$. To finish, we still need to establish that this solution is unique.

With one solution in hand, we will entertain the possibility of a second solution. So assume system $(*)$ has two solutions, x_i=d_i\quad 1\leq i\leq n\qquad\text{and}\qquad x_i=e_i\quad 1\leq i\leq n Then, \begin{align*} &a_{i1}(d_1-e_1)+a_{i2}(d_2-e_2)+\cdots+a_{in}(d_n-e_n)\\ &\quad=\left(a_{i1}d_1+a_{i2}d_2+\cdots+a_{in}d_n\right)-\left(a_{i1}e_1+a_{i2}e_2+\cdots+a_{in}e_n\right)\\ &\quad=b_i-b_i\\ &\quad=0 \end{align*} This is the $i$-th equation of the homogeneous system $\homosystem{A}$ evaluated with $x_j=d_j-e_j$, $1\leq j\leq n$. Since $A$ is nonsingular, we must conclude that this solution is the trivial solution, and so $0=d_j-e_j$, $1\leq j\leq n$. That is, $d_j=e_j$ for all $j$, and the two solutions are identical, meaning any solution to $(*)$ is unique.

Notice that the proposed solution ($x_i=\frac{c_i}{c_{n+1}}$) appeared in this proof with no motivation whatsoever. This is just fine in a proof. A proof should convince you that a theorem is true. It is your job to read the proof and be convinced of every assertion. Questions like "Where did that come from?" or "How would I think of that?" have no bearing on the validity of the proof.