<?xml version="1.0" encoding="UTF-8" ?>
<section acro="S">
<title>Subspaces</title>

<!-- %%%%%%%%%% -->
<!-- % -->
<!-- %  Section S -->
<!-- %  Subspaces -->
<!-- % -->
<!-- %%%%%%%%%% -->
<introduction>
<p>A subspace is a vector space that is contained within another vector space.  So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space.  We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.</p>

</introduction>

<subsection acro="S">
<title>Subspaces</title>

<p>Here's the principal definition for this section.</p>

<definition acro="S" index="subspace">
<title>Subspace</title>
<p>Suppose that $V$ and $W$ are two vector spaces that have identical definitions of vector addition and scalar multiplication, and that $W$ is a subset of $V$, $W\subseteq V$.  Then $W$ is a <define>subspace</define> of $V$.</p>

</definition>

<p>Let's look at an example of a vector space inside another vector space.</p>

<example acro="SC3" index="subspace!verification">
<title>A subspace of $\complex{3}$</title>

<p>We know that $\complex{3}$ is a vector space (<acroref type="example" acro="VSCV" />).  Consider the subset,
<equation>
W=\setparts{\colvector{x_1\\x_2\\x_3}}{2x_1-5x_2+7x_3=0}
</equation></p>

<p>It is clear that $W\subseteq\complex{3}$, since the objects in $W$ are column vectors of size 3.  But is $W$ a vector space?  Does it satisfy the ten properties of <acroref type="definition" acro="VS" /> when we use the same operations?  That is the main question.</p>

<p>Suppose $\vect{x}=\colvector{x_1\\x_2\\x_3}$ and $\vect{y}=\colvector{y_1\\y_2\\y_3}$ are vectors from $W$.  Then we know that these vectors cannot be totally arbitrary, they must have gained membership in $W$ by virtue of meeting the membership test.  For example, we know that $\vect{x}$ must satisfy $2x_1-5x_2+7x_3=0$ while $\vect{y}$ must satisfy $2y_1-5y_2+7y_3=0$.  Our first property (<acroref type="property" acro="AC" />) asks the question, is $\vect{x}+\vect{y}\in W$?  When our set of vectors was $\complex{3}$, this was an easy question to answer.  Now it is not so obvious.  Notice first that
<equation>
\vect{x}+\vect{y}=\colvector{x_1\\x_2\\x_3}+\colvector{y_1\\y_2\\y_3}=
\colvector{x_1+y_1\\x_2+y_2\\x_3+y_3}
</equation>
and we can test this vector for membership in $W$ as follows.  Because $\vect{x}\in W$ we know $2x_1-5x_2+7x_3=0$ and because $\vect{y}\in W$ we know $2y_1-5y_2+7y_3=0$.  Therefore,
<alignmath>
2(x_1+y_1)-5(x_2+y_2)+7(x_3+y_3)
<![CDATA[&=2x_1+2y_1-5x_2-5y_2+7x_3+7y_3\\]]>
<![CDATA[&=(2x_1-5x_2+7x_3)+(2y_1-5y_2+7y_3)\\]]>
<![CDATA[&=0 + 0\\]]>
<![CDATA[&=0]]>
</alignmath>
and by this computation we see that $\vect{x}+\vect{y}\in W$.  One property down, nine to go.</p>

<p>If $\alpha$ is a scalar and $\vect{x}\in W$, is it always true that $\alpha\vect{x}\in W$?  This is what we need to establish <acroref type="property" acro="SC" />.  Again, the answer is not as obvious as it was when our set of vectors was all of $\complex{3}$.  Let's see.  First,
<equation>
\alpha\vect{x}=\alpha\colvector{x_1\\x_2\\x_3}=\colvector{\alpha x_1\\\alpha x_2\\\alpha x_3}
</equation>
and we can test this vector for membership in $W$.  First, note that because $\vect{x}\in W$ we know $2x_1-5x_2+7x_3=0$.  Therefore,
<alignmath>
2(\alpha x_1)-5(\alpha x_2)+7(\alpha x_3)
<![CDATA[&=\alpha(2x_1-5x_2+7x_3)\\]]>
<![CDATA[&=\alpha 0\\]]>
<![CDATA[&=0]]>
</alignmath>
and we see that indeed $\alpha\vect{x}\in W$.  Always.</p>

<p>If $W$ has a zero vector, it will be unique (<acroref type="theorem" acro="ZVU" />).  The zero vector for $\complex{3}$ should also perform the required duties when added to elements of $W$.  So the likely candidate for a zero vector in $W$ is the same zero vector that we know $\complex{3}$ has.  You can check that $\zerovector=\colvector{0\\0\\0}$ is a zero vector in $W$ too (<acroref type="property" acro="Z" />).</p>

<p>With a zero vector, we can now ask about additive inverses (<acroref type="property" acro="AI" />).  As you might suspect, the natural candidate for an additive inverse in $W$ is the same as the additive inverse from $\complex{3}$.  However, we must ensure that these additive inverses actually are elements of $W$.  Given $\vect{x}\in W$, is $\vect{-x}\in W$?
<equation>
\vect{-x}=\colvector{-x_1\\-x_2\\-x_3}
</equation>
and we can test this vector for membership in $W$.  As before, because $\vect{x}\in W$ we know $2x_1-5x_2+7x_3=0$.
<alignmath>
2(-x_1)-5(-x_2)+7(-x_3)
<![CDATA[&=-(2x_1-5x_2+7x_3)\\]]>
<![CDATA[&=-0\\]]>
<![CDATA[&=0]]>
</alignmath>
and we now believe that $\vect{-x}\in W$.</p>

<p>Is the vector addition in $W$ commutative (<acroref type="property" acro="C" />)?  Is $\vect{x}+\vect{y}=\vect{y}+\vect{x}$?  Of course!  Nothing about restricting the scope of our set of vectors will prevent the operation from still being commutative.  Indeed, the remaining five properties are unaffected by the transition to a smaller set of vectors, and so remain true.  That was convenient.</p>

<p>So $W$ satisfies all ten properties, is therefore a vector space, and thus earns the title of being a subspace of $\complex{3}$.</p>

</example>
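
<p>For readers who like to follow along in Sage (see the Sage notes at the end of this section), here is one way to sketch the two closure checks above computationally.  The membership test for $W$ is recast as the kernel of a $1\times 3$ matrix, an idea we will formalize in <acroref type="theorem" acro="NSMS" />, and the particular vectors $\vect{x}$ and $\vect{y}$ are chosen only for illustration.</p>

<sage>
<input>A = matrix(QQ, [[2, -5, 7]])
W = A.right_kernel()
x = vector(QQ, [-1, 1, 1])
y = vector(QQ, [4, 3, 1])
x in W, y in W, x + y in W, 3*x in W
</input>
<output>(True, True, True, True)
</output>
</sage>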

</subsection>

<subsection acro="TS">
<title>Testing Subspaces</title>

<p>In <acroref type="example" acro="SC3" /> we proceeded through all ten of the vector space properties before believing that a subset was a subspace.  But six of the properties were easy to prove, and we can lean on some of the properties of the vector space (the superset) to make the other four easier.  Here is a theorem that will make it easier to test if a subset is a vector space.  A shortcut if there ever was one.</p>

<theorem acro="TSS" index="subspace!testing">
<title>Testing Subsets for Subspaces</title>
<statement>
<p>Suppose that $V$ is a vector space and $W$ is a subset of $V$, $W\subseteq V$.  Endow $W$ with the same operations as $V$.  Then $W$ is a subspace if and only if three conditions are met
<ol><li> $W$ is non-empty, $W\neq\emptyset$.
</li><li> If $\vect{x}\in W$ and $\vect{y}\in W$, then $\vect{x}+\vect{y}\in W$.
</li><li> If $\alpha\in\complex{\null}$ and $\vect{x}\in W$, then $\alpha\vect{x}\in W$.
</li></ol>
</p>

</statement>

<proof>
<p><implyforward />  We have the hypothesis that $W$ is a subspace, so by <acroref type="property" acro="Z" /> we know that $W$ contains a zero vector.  This is enough to show that $W\neq\emptyset$.  Also, since $W$ is a vector space it satisfies the additive and scalar multiplication closure properties (<acroref type="property" acro="AC" />, <acroref type="property" acro="SC" />), and so exactly meets the second and third conditions.  If that was easy, the other direction might require a bit more work.</p>

<p><implyreverse /> We have three properties for our hypothesis, and from this we should conclude that $W$ has the ten defining properties of a vector space.  The second and third conditions of our hypothesis are exactly <acroref type="property" acro="AC" /> and <acroref type="property" acro="SC" />.
Our hypothesis that $V$ is a vector space implies that
<acroref type="property" acro="C" />,
<acroref type="property" acro="AA" />,
<acroref type="property" acro="SMA" />,
<acroref type="property" acro="DVA" />,
<acroref type="property" acro="DSA" /> and
<acroref type="property" acro="O" />
all hold.  They continue to be true for vectors from $W$ since passing to a subset, and keeping the operation the same, leaves their statements unchanged.  Eight down, two to go.</p>

<p>Suppose $\vect{x}\in W$.  Then by the third part of our hypothesis (scalar closure), we know that $(-1)\vect{x}\in W$.  By <acroref type="theorem" acro="AISM" /> $(-1)\vect{x}=\vect{-x}$, so together these statements show us that $\vect{-x}\in W$.  $\vect{-x}$ is the additive inverse of $\vect{x}$ in $V$, but will continue in this role when viewed as an element of the subset $W$.  So every element of $W$ has an additive inverse that is an element of $W$ and <acroref type="property" acro="AI" /> is completed.  Just one property left.</p>

<p>While we have implicitly discussed the zero vector in the previous paragraph, we need to be certain that the zero vector (of $V$) really lives in $W$.   Since $W$ is non-empty, we can choose some vector $\vect{z}\in W$.  Then by the argument in the previous paragraph, we know $\vect{-z}\in W$.  Now by <acroref type="property" acro="AI" /> for $V$ and then by the second part of our hypothesis (additive closure) we see that
<equation>
\zerovector=\vect{z}+(\vect{-z})\in W
</equation>
</p>

<p>So $W$ contains the zero vector from $V$.  Since this vector performs the required duties of a zero vector in $V$, it will continue in that role as an element of $W$.  This gives us <acroref type="property" acro="Z" />, the final property of the ten required.  (<contributorname code="sarahfellez" /> contributed to this proof.)</p>

</proof>
</theorem>

<p>So just three conditions, plus being a subset of a known vector space, gets us all ten properties.  Fabulous!
This theorem can be paraphrased by saying that a subspace is <q>a non-empty subset (of a vector space) that is closed under vector addition and scalar multiplication.</q></p>

<p>You might want to go back and rework <acroref type="example" acro="SC3" /> in light of this result, perhaps seeing where we can now economize or where the work done in the example mirrored the proof and where it did not.  We will press on and apply this theorem in a slightly more abstract setting.</p>

<example acro="SP4" index="subspace!in $P_4$">
<title>A subspace of $P_4$</title>

<p>$P_4$ is the vector space of polynomials with degree at most $4$ (<acroref type="example" acro="VSP" />).  Define a subset $W$ as
<equation>
W=\setparts{p(x)}{p\in P_4,\ p(2)=0}
</equation>
so $W$ is the collection of those polynomials (with degree 4 or less) whose graphs  cross the $x$-axis at $x=2$.  Whenever we encounter a new set it is a good idea to gain a better understanding of the set by finding a few elements in the set, and a few outside it.  For example $x^2-x-2\in W$, while $x^4+x^3-7\not\in W$.</p>

<p>Is $W$ nonempty?  Yes, $x-2\in W$.</p>

<p>Additive closure?  Suppose $p\in W$ and $q\in W$.  Is $p+q\in W$?  $p$ and $q$ are not totally arbitrary, we know that $p(2)=0$ and $q(2)=0$.  Then we can check $p+q$ for membership in $W$,
<alignmath>
<![CDATA[(p+q)(2)&=p(2)+q(2)&&\text{Addition in }P_4\\]]>
<![CDATA[&=0+0&&p\in W,\,q\in W\\]]>
<![CDATA[&=0]]>
</alignmath>
so we see that $p+q$ qualifies for membership in $W$.</p>

<p>Scalar multiplication closure?  Suppose that $\alpha\in\complex{\null}$ and $p\in W$.  Then we know that $p(2)=0$.  Testing $\alpha p$ for membership,
<alignmath>
<![CDATA[(\alpha p)(2)&=\alpha p(2)&&\text{Scalar multiplication in }P_4\\]]>
<![CDATA[&=\alpha 0&&p\in W\\]]>
<![CDATA[&=0]]>
</alignmath>
so $\alpha p\in W$.</p>

<p>We have shown that $W$ meets the three conditions of <acroref type="theorem" acro="TSS" /> and so qualifies as a subspace of $P_4$.  Notice that by <acroref type="definition" acro="S" /> we now know that $W$ is also a vector space.  So all the properties of a vector space (<acroref type="definition" acro="VS" />) and the theorems of <acroref type="section" acro="VS" /> apply in full.</p>

</example>
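
<p>If you have Sage at hand, the closure computations in this example can be sketched quickly with Sage's ring of polynomials over the rationals.  The polynomials $p$ and $q$ below are illustrative choices of elements of $W$; the final evaluation exhibits a polynomial that does not belong to $W$.</p>

<sage>
<input>P = PolynomialRing(QQ, 'x')
x = P.gen()
p = x^2 - x - 2
q = x - 2
(p + q)(2), (5*p)(2), (x^4 + x^3 - 7)(2)
</input>
<output>(0, 0, 17)
</output>
</sage>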

<p>Much of the power of <acroref type="theorem" acro="TSS" /> is that we can easily establish new vector spaces if we can locate them as subsets of other vector spaces, such as the ones presented in <acroref type="subsection" acro="VS.EVS" />.</p>

<p>It can be as instructive to consider some subsets that are <em>not</em> subspaces.  Since <acroref type="theorem" acro="TSS" /> is an equivalence (see <acroref type="technique" acro="E" />) we can be assured that a subset is not a subspace if it violates one of the three conditions, and in any example of interest this will not be the <q>non-empty</q> condition.  However, since a subspace has to be a vector space in its own right, we can also search for a violation of any one of the ten defining properties in <acroref type="definition" acro="VS" /> or any inherent property of a vector space, such as those given by the basic theorems of <acroref type="subsection" acro="VS.VSP" />.  Notice also that a violation need only be for a specific vector or pair of vectors.</p>

<example acro="NSC2Z" index="subspace!not, zero vector">
<title>A non-subspace in $\complex{2}$, zero vector</title>

<p>Consider the subset $W$ below as a candidate for being a subspace of $\complex{2}$
<equation>
W=\setparts{\colvector{x_1\\x_2}}{3x_1-5x_2=12}
</equation>
</p>

<p>The zero vector of $\complex{2}$, $\zerovector=\colvector{0\\0}$, will need to be the zero vector in $W$ also.  However, $\zerovector\not\in W$ since $3(0)-5(0)=0\neq 12$.  So $W$ has no zero vector and fails <acroref type="property" acro="Z" /> of <acroref type="definition" acro="VS" />.  This set also fails to be closed under addition and scalar multiplication.  Can you find examples of this?</p>

</example>

<example acro="NSC2A" index="subspace!not, additive closure">
<title>A non-subspace in $\complex{2}$, additive closure</title>

<p>Consider the subset $X$ below as a candidate for being a subspace of $\complex{2}$
<equation>
X=\setparts{\colvector{x_1\\x_2}}{x_1x_2=0}
</equation>
</p>

<p>You can check that $\zerovector\in X$, so the approach of the last example will not get us anywhere.  However, notice that $\vect{x}=\colvector{1\\0}\in X$ and $\vect{y}=\colvector{0\\1}\in X$.  Yet
<equation>
\vect{x}+\vect{y}=\colvector{1\\0}+\colvector{0\\1}=\colvector{1\\1}\not\in X
</equation>
</p>

<p>So $X$ fails the additive closure requirement of either <acroref type="property" acro="AC" /> or <acroref type="theorem" acro="TSS" />, and is therefore not a subspace.</p>

</example>

<example acro="NSC2S" index="subspace!not, scalar closure">
<title>A non-subspace in $\complex{2}$, scalar multiplication closure</title>

<p>Consider the subset $Y$ below as a candidate for being a subspace of $\complex{2}$
<equation>
Y=\setparts{\colvector{x_1\\x_2}}{x_1\in{\mathbb Z},\,x_2\in{\mathbb Z}}
</equation>
${\mathbb Z}$ is the set of integers, so we are only allowing <q>whole numbers</q> as the constituents of our vectors.  Now, $\zerovector\in Y$, and additive closure also holds (can you prove these claims?).  So we will have to try something different.  Note that $\alpha = \frac{1}{2}\in\complex{\null}$ and $\vect{x}=\colvector{2\\3}\in Y$, but
<equation>
\alpha\vect{x}=\frac{1}{2}\colvector{2\\3}=\colvector{1\\\frac{3}{2}}\not\in Y
</equation>
So $Y$ fails the scalar multiplication closure requirement of either <acroref type="property" acro="SC" /> or <acroref type="theorem" acro="TSS" />, and is therefore not a subspace.</p>

</example>

<p>There are two examples of subspaces that are trivial.  Suppose that $V$ is any vector space.  Then $V$ is a subset of itself and is a vector space.  By <acroref type="definition" acro="S" />, $V$ qualifies as a subspace of itself.  The set containing just the zero vector $Z=\set{\zerovector}$ is also a subspace as can be seen by applying <acroref type="theorem" acro="TSS" /> or by simple modifications of the techniques hinted at in <acroref type="example" acro="VSS" />.  Since these subspaces are so obvious (and therefore not too interesting) we will refer to them as being trivial.</p>

<definition acro="TS" index="subspace!trivial">
<title>Trivial Subspaces</title>
<p>Given the vector space $V$, the subspaces $V$ and $\set{\zerovector}$ are each called a <define>trivial subspace</define>.</p>

</definition>

<p>We can also use <acroref type="theorem" acro="TSS" /> to prove more general statements about subspaces, as illustrated in the next theorem.</p>

<theorem acro="NSMS" index="null space!subspace">
<title>Null Space of a Matrix is a Subspace</title>
<statement>
<p>Suppose that $A$ is an $m\times n$ matrix.  Then the null space of $A$, $\nsp{A}$, is a subspace of $\complex{n}$.</p>

</statement>

<proof>
<p>We will examine the three requirements of <acroref type="theorem" acro="TSS" />.  Recall that <acroref type="definition" acro="NSM" /> can be formulated as $\nsp{A}=\setparts{\vect{x}\in\complex{n}}{A\vect{x}=\zerovector}$.</p>

<p>First, $\zerovector\in\nsp{A}$, which can be inferred as a consequence of <acroref type="theorem" acro="HSC" />.  So $\nsp{A}\neq\emptyset$.</p>

<p>Second, check additive closure by supposing that $\vect{x}\in\nsp{A}$ and $\vect{y}\in\nsp{A}$.  So we know a little something about $\vect{x}$ and $\vect{y}$:  $A\vect{x}=\zerovector$ and $A\vect{y}=\zerovector$, and that is all we know.  Question:  Is $\vect{x}+\vect{y}\in\nsp{A}$?  Let's check.
<alignmath>
<![CDATA[A(\vect{x}+\vect{y})&=A\vect{x}+A\vect{y}&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
<![CDATA[&=\zerovector+\zerovector&&\vect{x}\in\nsp{A},\ \vect{y}\in\nsp{A}\\]]>
<![CDATA[&=\zerovector&&]]>\text{<acroref type="theorem" acro="VSPCV" />}
</alignmath>
So, yes, $\vect{x}+\vect{y}$ qualifies for membership in $\nsp{A}$.</p>

<p>Third, check scalar multiplication closure by supposing that $\alpha\in\complex{\null}$ and $\vect{x}\in\nsp{A}$.  So we know a little something about $\vect{x}$:  $A\vect{x}=\zerovector$, and that is all we know.  Question:  Is $\alpha\vect{x}\in\nsp{A}$?  Let's check.
<alignmath>
<![CDATA[A(\alpha\vect{x})&=\alpha(A\vect{x})&&]]>\text{<acroref type="theorem" acro="MMSMM" />}\\
<![CDATA[&=\alpha\zerovector&&\vect{x}\in\nsp{A}\\]]>
<![CDATA[&=\zerovector&&]]>\text{<acroref type="theorem" acro="ZVSM" />}
</alignmath>
So, yes, $\alpha\vect{x}$ qualifies for membership in $\nsp{A}$.</p>

<p>Having met the three conditions in <acroref type="theorem" acro="TSS" /> we can now say that the null space of a matrix is a subspace (and hence a vector space in its own right!).</p>

</proof>
</theorem>

<p>Here is an example where we can exercise <acroref type="theorem" acro="NSMS" />.</p>

<example acro="RSNS" index="subspace!as null space">
<title>Recasting a subspace as a null space</title>

<p>Consider the subset of $\complex{5}$ defined as
<equation>
W =\setparts{\colvector{x_1\\x_2\\x_3\\x_4\\x_5}}{
\begin{array}{l}
3x_1+x_2-5x_3+7x_4+x_5=0,\\
4x_1+6x_2+3x_3-6x_4-5x_5=0,\\
-2x_1+4x_2+7x_4+x_5=0
\end{array}
}
</equation></p>

<p>It is possible to show that $W$ is a subspace of $\complex{5}$ by checking the three conditions of <acroref type="theorem" acro="TSS" /> directly, but it will get tedious rather quickly.  Instead, give $W$ a fresh look and notice that it is a set of solutions to a homogeneous system of equations.  Define the matrix
<equation>
A=\begin{bmatrix}
<![CDATA[3&1&-5&7&1\\]]>
<![CDATA[4&6&3&-6&-5\\]]>
<![CDATA[-2&4&0&7&1]]>
\end{bmatrix}
</equation>
and then recognize that $W=\nsp{A}$.  By <acroref type="theorem" acro="NSMS" /> we can immediately see that $W$ is a subspace.  Boom!</p>

</example>
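
<p>A minimal Sage sketch of this recasting appears below.  The element $3\vect{u}-2\vect{v}$, built from two basis vectors of the kernel, is just one illustrative member of $W$.</p>

<sage>
<input>A = matrix(QQ, [[ 3, 1, -5,  7,  1],
                [ 4, 6,  3, -6, -5],
                [-2, 4,  0,  7,  1]])
W = A.right_kernel()
u, v = W.basis()
w = 3*u - 2*v
A*w == zero_vector(QQ, 3), w in W
</input>
<output>(True, True)
</output>
</sage>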

</subsection>

<subsection acro="TSS">
<title>The Span of a Set</title>

<p>The span of a set of column vectors got a heavy workout in <acroref type="chapter" acro="V" /> and <acroref type="chapter" acro="M" />.  The definition of the span depended only on being able to formulate linear combinations.  In any of our more general vector spaces we always have a definition of vector addition and of scalar multiplication.  So we can build linear combinations and manufacture spans.  This subsection contains two definitions that are just mild variants of definitions we have seen earlier for column vectors.  If you haven't already, compare them with <acroref type="definition" acro="LCCV" /> and  <acroref type="definition" acro="SSCV" />.</p>

<definition acro="LC" index="linear combination">
<title>Linear Combination</title>
<p>Suppose that $V$ is a vector space.
Given $n$ vectors $\vectorlist{u}{n}$ and $n$ scalars $\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_n$, their <define>linear combination</define> is the vector
<equation>
\lincombo{\alpha}{u}{n}.
</equation>
</p>

</definition>

<example acro="LCM" index="linear combination!matrices">
<title>A linear combination of matrices</title>

<p>In the vector space $M_{23}$ of $2\times 3$ matrices, we have the vectors
<alignmath>
<![CDATA[\vect{x}&=]]>
\begin{bmatrix}
<![CDATA[1&3&-2\\]]>
<![CDATA[2&0&7]]>
\end{bmatrix}
<![CDATA[&]]>
<![CDATA[\vect{y}&=]]>
\begin{bmatrix}
<![CDATA[3&-1&2\\]]>
<![CDATA[5&5&1]]>
\end{bmatrix}
<![CDATA[&]]>
<![CDATA[\vect{z}&=]]>
\begin{bmatrix}
<![CDATA[4&2&-4\\]]>
<![CDATA[1&1&1]]>
\end{bmatrix}
</alignmath>
and we can form linear combinations such as
<alignmath>
<![CDATA[2\vect{x}+4\vect{y}+(-1)\vect{z}&=]]>
2
\begin{bmatrix}
<![CDATA[1&3&-2\\]]>
<![CDATA[2&0&7]]>
\end{bmatrix}
+4
\begin{bmatrix}
<![CDATA[3&-1&2\\]]>
<![CDATA[5&5&1]]>
\end{bmatrix}
+(-1)
\begin{bmatrix}
<![CDATA[4&2&-4\\]]>
<![CDATA[1&1&1]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[2&6&-4\\]]>
<![CDATA[4&0&14]]>
\end{bmatrix}
+
\begin{bmatrix}
<![CDATA[12&-4&8\\]]>
<![CDATA[20&20&4]]>
\end{bmatrix}
+
\begin{bmatrix}
<![CDATA[-4&-2&4\\]]>
<![CDATA[-1&-1&-1]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[10&0&8\\]]>
<![CDATA[23&19&17]]>
\end{bmatrix}
<intertext>or,</intertext>
<![CDATA[4\vect{x}-2\vect{y}+3\vect{z}&=]]>
4
\begin{bmatrix}
<![CDATA[1&3&-2\\]]>
<![CDATA[2&0&7]]>
\end{bmatrix}
-2
\begin{bmatrix}
<![CDATA[3&-1&2\\]]>
<![CDATA[5&5&1]]>
\end{bmatrix}
+3
\begin{bmatrix}
<![CDATA[4&2&-4\\]]>
<![CDATA[1&1&1]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[4&12&-8\\]]>
<![CDATA[8&0&28]]>
\end{bmatrix}
+
\begin{bmatrix}
<![CDATA[-6&2&-4\\]]>
<![CDATA[-10&-10&-2]]>
\end{bmatrix}
+
\begin{bmatrix}
<![CDATA[12&6&-12\\]]>
<![CDATA[3&3&3]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[10&20&-24\\]]>
<![CDATA[1&-7&29]]>
\end{bmatrix}
</alignmath>
</p>

</example>
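
<p>As a quick arithmetic check, here is a minimal Sage sketch of the first linear combination computed above (the names $\vect{x}$, $\vect{y}$, $\vect{z}$ simply mirror the example).</p>

<sage>
<input>x = matrix(QQ, [[1, 3, -2], [2, 0, 7]])
y = matrix(QQ, [[3, -1, 2], [5, 5, 1]])
z = matrix(QQ, [[4, 2, -4], [1, 1, 1]])
2*x + 4*y - z
</input>
<output>[10  0  8]
[23 19 17]
</output>
</sage>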

<p>When we realize that we can form linear combinations in any vector space, then it is natural to revisit our definition of the span of a set, since it is the set of <em>all</em> possible linear combinations of a set of vectors.</p>

<definition acro="SS" index="span">
<title>Span of a Set</title>
<p>Suppose that $V$ is a vector space.
Given a set of vectors $S=\{\vectorlist{u}{t}\}$, their <define>span</define>, $\spn{S}$, is the set of all possible linear combinations of $\vectorlist{u}{t}$.  Symbolically,
<alignmath>
<![CDATA[\spn{S}&=\setparts{\lincombo{\alpha}{u}{t}}{\alpha_i\in\complex{\null},\,1\leq i\leq t}\\]]>
<![CDATA[&=\setparts{\sum_{i=1}^{t}\alpha_i\vect{u}_i}{\alpha_i\in\complex{\null},\,1\leq i\leq t}]]>
</alignmath>
</p>

</definition>

<theorem acro="SSS" index="span!subspace">
<title>Span of a Set is a Subspace</title>
<statement>
<p>Suppose $V$ is a vector space.  Given a set of vectors $S=\{\vectorlist{u}{t}\}\subseteq V$, their span, $\spn{S}$, is a subspace.</p>

</statement>

<proof>
<p>By <acroref type="definition" acro="SS" />, the span contains linear combinations of vectors from the vector space $V$, so by repeated use of the closure properties, <acroref type="property" acro="AC" /> and <acroref type="property" acro="SC" />, $\spn{S}$ can be seen to be a subset of $V$.</p>

<p>We will then verify the three conditions of <acroref type="theorem" acro="TSS" />.  First,
<alignmath>
\zerovector
<![CDATA[&=\zerovector+\zerovector+\zerovector+\ldots+\zerovector&&]]>\text{<acroref type="property" acro="Z" /> for $V$}\\
<![CDATA[&=0\vect{u}_1+0\vect{u}_2+0\vect{u}_3+\cdots+0\vect{u}_t&&]]>\text{<acroref type="theorem" acro="ZSSM" />}
</alignmath>
</p>

<p>So we have written $\zerovector$ as a linear combination of the vectors in $S$ and by <acroref type="definition" acro="SS" />, $\zerovector\in\spn{S}$ and therefore $\spn{S}\neq\emptyset$.</p>

<p>Second, suppose $\vect{x}\in\spn{S}$ and $\vect{y}\in\spn{S}$.  Can we conclude that $\vect{x}+\vect{y}\in\spn{S}$?  What do we know about $\vect{x}$ and $\vect{y}$ by virtue of their membership in $\spn{S}$?  There must be scalars from $\complex{\null}$,
$\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_t$ and
$\beta_1,\,\beta_2,\,\beta_3,\,\ldots,\,\beta_t$ so that
<alignmath>
<![CDATA[\vect{x}&=\lincombo{\alpha}{u}{t}\\]]>
<![CDATA[\vect{y}&=\lincombo{\beta}{u}{t}]]>
</alignmath>
Then
<alignmath>
<![CDATA[\vect{x}+\vect{y}&=\lincombo{\alpha}{u}{t}\\]]>
<![CDATA[&\quad\quad+\lincombo{\beta}{u}{t}\\]]>
<![CDATA[&=\alpha_1\vect{u}_1+\beta_1\vect{u}_1+\alpha_2\vect{u}_2+\beta_2\vect{u}_2\\]]>
<![CDATA[&\quad\quad+\alpha_3\vect{u}_3+\beta_3\vect{u}_3+\cdots+\alpha_t\vect{u}_t+\beta_t\vect{u}_t&&]]>\text{<acroref type="property" acro="AA" />, <acroref type="property" acro="C" />}\\
<![CDATA[&=(\alpha_1+\beta_1)\vect{u}_1+(\alpha_2+\beta_2)\vect{u}_2\\]]>
<![CDATA[&\quad\quad+(\alpha_3+\beta_3)\vect{u}_3+\cdots+(\alpha_t+\beta_t)\vect{u}_t&&]]>\text{<acroref type="property" acro="DSA" />}
</alignmath>
Since each $\alpha_i+\beta_i$ is again a scalar from $\complex{\null}$ we have expressed the vector sum $\vect{x}+\vect{y}$ as a linear combination of the vectors from $S$, and therefore by <acroref type="definition" acro="SS" /> we can say that $\vect{x}+\vect{y}\in\spn{S}$.</p>

<p>Third, suppose $\alpha\in\complex{\null}$ and $\vect{x}\in\spn{S}$.  Can we conclude that $\alpha\vect{x}\in\spn{S}$?  What do we know about $\vect{x}$  by virtue of its membership in $\spn{S}$?  There must be scalars from $\complex{\null}$,
$\alpha_1,\,\alpha_2,\,\alpha_3,\,\ldots,\,\alpha_t$ so that
<alignmath>
<![CDATA[\vect{x}&=\lincombo{\alpha}{u}{t}\\]]>
</alignmath>
Then
<alignmath>
<![CDATA[\alpha\vect{x}&=\alpha\left(\lincombo{\alpha}{u}{t}\right)\\]]>
<![CDATA[&=\alpha(\alpha_1\vect{u}_1)+\alpha(\alpha_2\vect{u}_2)+\alpha(\alpha_3\vect{u}_3)+\cdots+\alpha(\alpha_t\vect{u}_t)&&]]>\text{<acroref type="property" acro="DVA" />}\\
<![CDATA[&=(\alpha\alpha_1)\vect{u}_1+(\alpha\alpha_2)\vect{u}_2+(\alpha\alpha_3)\vect{u}_3+\cdots+(\alpha\alpha_t)\vect{u}_t&&]]>\text{<acroref type="property" acro="SMA" />}\\
</alignmath>
Since each $\alpha\alpha_i$ is again a scalar from $\complex{\null}$ we have expressed the scalar multiple $\alpha\vect{x}$ as a linear combination of the vectors from $S$, and therefore by <acroref type="definition" acro="SS" /> we can say that $\alpha\vect{x}\in\spn{S}$.</p>

<p>With the three conditions of <acroref type="theorem" acro="TSS" /> met, we can say that $\spn{S}$ is a subspace (and so is also a vector space, <acroref type="definition" acro="VS" />).
(See <acroref type="exercise" acro="SS.T20" />, <acroref type="exercise" acro="SS.T21" />, <acroref type="exercise" acro="SS.T22" />.)</p>

</proof>
</theorem>

<example acro="SSP" index="span!set of polynomials">
<title>Span of a set of polynomials</title>

<p>In <acroref type="example" acro="SP4" /> we proved that
<equation>
W=\setparts{p(x)}{p\in P_4,\ p(2)=0}
</equation>
is a subspace of $P_4$, the vector space of polynomials of degree at most 4.  Since $W$ is a vector space itself, let's construct a span within $W$.  First let
<equation>
S=\set{x^4-4x^3+5x^2-x-2,\,2x^4-3x^3-6x^2+6x+4}
</equation>
and verify that $S$ is a subset of $W$ by checking that each of these two polynomials has $x=2$ as a root.  Now, if we define $U=\spn{S}$, then <acroref type="theorem" acro="SSS" /> tells us that $U$ is a subspace of $W$.  So quite quickly we have built a chain of subspaces, $U$ inside $W$, and $W$ inside $P_4$.</p>

<p>Rather than dwell on how quickly we can build subspaces, let's try to gain a better understanding of just how the span construction creates subspaces, in the context of this example.  We can quickly build representative elements of $U$,
<equation>
3(x^4-4x^3+5x^2-x-2)+5(2x^4-3x^3-6x^2+6x+4)=13x^4-27x^3-15x^2+27x+14
</equation>
and
<equation>
(-2)(x^4-4x^3+5x^2-x-2)+8(2x^4-3x^3-6x^2+6x+4)=14x^4-16x^3-58x^2+50x+36
</equation>
and each of these polynomials must be in $W$ since $W$ is closed under addition and scalar multiplication.  You might also check for yourself that both of these polynomials have $x=2$ as a root.</p>

<p>I can tell you that $\vect{y}=3x^4-7x^3-x^2+7x-2$ is not in $U$, but would you believe me?  A first check shows that $\vect{y}$ does have $x=2$ as a root, but that only shows that $\vect{y}\in W$.  What does $\vect{y}$ have to do to gain membership in $U=\spn{S}$?  It must be a linear combination of the vectors in $S$, $x^4-4x^3+5x^2-x-2$ and $2x^4-3x^3-6x^2+6x+4$.  So let's suppose that $\vect{y}$ is such a linear combination,
<alignmath>
\vect{y}
<![CDATA[&=3x^4-7x^3-x^2+7x-2\\]]>
<![CDATA[&=\alpha_1(x^4-4x^3+5x^2-x-2)+\alpha_2(2x^4-3x^3-6x^2+6x+4)\\]]>
<![CDATA[&=]]>
(\alpha_1+2\alpha_2)x^4+
(-4\alpha_1-3\alpha_2)x^3+
(5\alpha_1-6\alpha_2)x^2\\
<![CDATA[&\quad\quad+]]>
(-\alpha_1+6\alpha_2)x+
(-2\alpha_1+4\alpha_2)
</alignmath>
</p>

<p>Notice that the operations above are done in accordance with the definition of the vector space of polynomials (<acroref type="example" acro="VSP" />).  Now, if we equate coefficients, which is the definition of equality for polynomials, then we obtain the system of five linear equations in two variables
<alignmath>
<![CDATA[\alpha_1+2\alpha_2&=3\\]]>
<![CDATA[-4\alpha_1-3\alpha_2&=-7\\]]>
<![CDATA[5\alpha_1-6\alpha_2&=-1\\]]>
<![CDATA[-\alpha_1+6\alpha_2&=7\\]]>
<![CDATA[-2\alpha_1+4\alpha_2&=-2]]>
</alignmath>
</p>

<p>Build an augmented matrix from the system and row-reduce,
<equation>
\begin{bmatrix}
<![CDATA[1 & 2 & 3\\]]>
<![CDATA[-4 & -3 & -7\\]]>
<![CDATA[5 & -6 & -1\\]]>
<![CDATA[-1 & 6 & 7\\]]>
<![CDATA[-2 & 4 & -2]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 0\\]]>
<![CDATA[0 & \leading{1} & 0\\]]>
<![CDATA[0 & 0 & \leading{1}\\]]>
<![CDATA[0 & 0 & 0\\]]>
<![CDATA[0 & 0 & 0]]>
\end{bmatrix}
</equation>
</p>

<p>With a leading 1 in the final column of the row-reduced augmented matrix, <acroref type="theorem" acro="RCLS" /> tells us the system of equations is inconsistent.  Therefore, there are no scalars, $\alpha_1$ and $\alpha_2$, to establish $\vect{y}$ as a linear combination of the elements in $S$.  So $\vect{y}\not\in U$.</p>

</example>
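
<p>One way to have Sage confirm the conclusion of this example is to coordinatize each polynomial by its vector of coefficients (constant term first), form the span of the two coordinate vectors inside $\complex{5}$ (the rationals suffice for these particular polynomials), and then ask about membership.  This is just a computational restatement of the inconsistent system above.</p>

<sage>
<input>P = PolynomialRing(QQ, 'x')
x = P.gen()
p1 = x^4 - 4*x^3 + 5*x^2 - x - 2
p2 = 2*x^4 - 3*x^3 - 6*x^2 + 6*x + 4
y = 3*x^4 - 7*x^3 - x^2 + 7*x - 2
V = QQ^5
U = V.span([V(p1.list()), V(p2.list())])
y(2), V(y.list()) in U
</input>
<output>(0, False)
</output>
</sage>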

<p>Let's again examine membership in a span.</p>

<example acro="SM32" index="subspace!verification">
<title>A subspace of $M_{32}$</title>

<p>The set of all $3\times 2$ matrices forms a vector space when we use the operations of matrix addition (<acroref type="definition" acro="MA" />) and scalar matrix multiplication (<acroref type="definition" acro="MSM" />), as was shown in <acroref type="example" acro="VSM" />.  Consider the subset
<equation>
S=\set{
\begin{bmatrix}
<![CDATA[3 & 1 \\ 4 & 2 \\ 5 & -5]]>
\end{bmatrix},\,
\begin{bmatrix}
<![CDATA[1 & 1 \\ 2 &-1 \\ 14 & -1]]>
\end{bmatrix},\,
\begin{bmatrix}
<![CDATA[3 & -1 \\ -1&2 \\ -19 & -11]]>
\end{bmatrix},\,
\begin{bmatrix}
<![CDATA[4 & 2 \\ 1 & -2 \\ 14 & -2]]>
\end{bmatrix},\,
\begin{bmatrix}
<![CDATA[3 & 1 \\ -4 & 0 \\ -17 & 7]]>
\end{bmatrix}
}
</equation>
and define a new subset of vectors $W$ in $M_{32}$ using the span (<acroref type="definition" acro="SS" />), $W=\spn{S}$.  So by <acroref type="theorem" acro="SSS" /> we know that $W$ is a subspace of $M_{32}$.  While $W$ is an infinite set, and $W=\spn{S}$ is a precise description of it, it is still worthwhile to investigate whether $W$ contains certain particular elements.</p>

<p>First, is
<equation>
\vect{y}=\begin{bmatrix}
<![CDATA[9 & 3 \\ 7 & 3 \\ 10 & -11]]>
\end{bmatrix}
</equation>
in $W$?  To answer this, we want to determine if $\vect{y}$ can be written as a linear combination of the five matrices in $S$.  Can we find scalars, $\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5$ so that
<alignmath>
<![CDATA[&\begin{bmatrix}]]>
<![CDATA[9 & 3 \\ 7&3 \\ 10 & -11]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\alpha_1
\begin{bmatrix}
<![CDATA[3 & 1 \\ 4 & 2 \\ 5 & -5]]>
\end{bmatrix}
+\alpha_2
\begin{bmatrix}
<![CDATA[1 & 1 \\ 2 & -1 \\ 14 & -1]]>
\end{bmatrix}
+\alpha_3
\begin{bmatrix}
<![CDATA[3 & -1 \\ -1 & 2 \\ -19 & -11]]>
\end{bmatrix}
+\alpha_4
\begin{bmatrix}
<![CDATA[4 & 2 \\ 1 & -2 \\ 14 & -2]]>
\end{bmatrix}
+\alpha_5
\begin{bmatrix}
<![CDATA[3 & 1 \\ -4 & 0 \\ -17 & 7]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5 &]]>
\alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5\\
<![CDATA[4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5&]]>
2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 \\
<![CDATA[5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5&]]>
-5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5
\end{bmatrix}
</alignmath>
</p>

<p>Using our definition of matrix equality (<acroref type="definition" acro="ME" />) we can translate this statement into six equations in the five unknowns,
<alignmath>
<![CDATA[3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5& =9\\]]>
<![CDATA[\alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5& =3\\]]>
<![CDATA[4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5& =7\\]]>
<![CDATA[2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 & =3\\]]>
<![CDATA[5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5& =10\\]]>
<![CDATA[-5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5&=-11]]>
</alignmath>
</p>

<p>This is a linear system of equations, which we can represent with an augmented matrix and row-reduce in search of solutions.  The augmented matrix row-reduces to
<equation>
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 0 & 0 & \frac{5}{8} & 2\\]]>
<![CDATA[0 & \leading{1} & 0 & 0 & \frac{-19}{4} & -1\\]]>
<![CDATA[0 & 0 & \leading{1} & 0 & \frac{-7}{8} & 0\\]]>
<![CDATA[0 & 0 & 0 & \leading{1} & \frac{17}{8} & 1\\]]>
<![CDATA[0 & 0 & 0 & 0 & 0 & 0\\]]>
<![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
</p>

<p>So we recognize that the system is consistent since there is no leading 1 in the final column (<acroref type="theorem" acro="RCLS" />), and compute $n-r=5-4=1$ free variables (<acroref type="theorem" acro="FVCS" />).  While there are infinitely many solutions, we are only in pursuit of a single solution, so let's choose the free variable $\alpha_5=0$ for simplicity's sake.  Then we easily see that $\alpha_1=2$, $\alpha_2=-1$, $\alpha_3=0$, $\alpha_4=1$.  So the scalars $\alpha_1=2$, $\alpha_2=-1$, $\alpha_3=0$, $\alpha_4=1$, $\alpha_5=0$ will provide a linear combination of the elements of $S$ that equals $\vect{y}$, as we can verify by checking,
<alignmath>
\begin{bmatrix}
<![CDATA[9 & 3 \\ 7 & 3 \\ 10 & -11]]>
\end{bmatrix}
=
2
\begin{bmatrix}
<![CDATA[3 & 1 \\ 4 & 2 \\ 5 & -5]]>
\end{bmatrix}
+(-1)
\begin{bmatrix}
<![CDATA[1 & 1 \\ 2 & -1 \\ 14 & -1]]>
\end{bmatrix}
+(1)
\begin{bmatrix}
<![CDATA[4 & 2 \\ 1 & -2 \\ 14 & -2]]>
\end{bmatrix}
</alignmath>
So with one particular linear combination in hand, we are convinced that $\vect{y}$ deserves to be a member of $W=\spn{S}$.</p>

<p>Second, is
<equation>
\vect{x}=\begin{bmatrix}
<![CDATA[2 & 1 \\ 3 & 1 \\ 4 & -2]]>
\end{bmatrix}
</equation>
in $W$?  To answer this, we want to determine if $\vect{x}$ can be written as a linear combination of the five matrices in $S$.  Can we find scalars, $\alpha_1,\,\alpha_2,\,\alpha_3,\,\alpha_4,\,\alpha_5$ so that
<alignmath>
<![CDATA[&\begin{bmatrix}]]>
<![CDATA[2 & 1 \\ 3 & 1 \\ 4 & -2]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\alpha_1
\begin{bmatrix}
<![CDATA[3 & 1 \\ 4 & 2 \\ 5 & -5]]>
\end{bmatrix}
+\alpha_2
\begin{bmatrix}
<![CDATA[1 & 1 \\ 2 & -1 \\ 14 & -1]]>
\end{bmatrix}
+\alpha_3
\begin{bmatrix}
<![CDATA[3 & -1 \\ -1 & 2 \\ -19 & -11]]>
\end{bmatrix}
+\alpha_4
\begin{bmatrix}
<![CDATA[4 & 2 \\ 1 & -2 \\ 14 & -2]]>
\end{bmatrix}
+\alpha_5
\begin{bmatrix}
<![CDATA[3 & 1 \\ -4 & 0 \\ -17 & 7]]>
\end{bmatrix}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5 &]]>
\alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5\\
<![CDATA[4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5&]]>
2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 \\
<![CDATA[5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5&]]>
-5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5
\end{bmatrix}
</alignmath>
Using our definition of matrix equality (<acroref type="definition" acro="ME" />) we can translate this statement into six equations in the five unknowns,
<alignmath>
<![CDATA[3\alpha_1 +\alpha_2 +3\alpha_3 +4\alpha_4 +3\alpha_5& =2\\]]>
<![CDATA[\alpha_1 +\alpha_2 -\alpha_3 +2\alpha_4 +\alpha_5& =1\\]]>
<![CDATA[4\alpha_1 +2\alpha_2 -\alpha_3 +\alpha_4 -4\alpha_5& =3\\]]>
<![CDATA[2\alpha_1 -\alpha_2 +2\alpha_3 -2\alpha_4 & =1\\]]>
<![CDATA[5\alpha_1 +14\alpha_2 -19\alpha_3 +14\alpha_4 -17\alpha_5& =4\\]]>
<![CDATA[-5\alpha_1 -\alpha_2 -11\alpha_3 -2\alpha_4 +7\alpha_5&=-2]]>
</alignmath>
This is a linear system of equations, which we can represent with an augmented matrix and row-reduce in search of solutions.  The augmented matrix row-reduces to
<equation>
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 0 & 0 & \frac{5}{8} & 0\\]]>
<![CDATA[0 & \leading{1} & 0 & 0 & \frac{-19}{4} & 0\\]]>
<![CDATA[0 & 0 & \leading{1} & 0 & \frac{-7}{8} & 0\\]]>
<![CDATA[0 & 0 & 0 & \leading{1} & \frac{17}{8} & 0\\]]>
<![CDATA[0 & 0 & 0 & 0 & 0 & \leading{1}\\]]>
<![CDATA[0 & 0 & 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
With a leading 1 in the last column, <acroref type="theorem" acro="RCLS" /> tells us that the system is inconsistent.  Therefore, there are no values for the scalars that will place $\vect{x}$ in $W$, and so we conclude that $\vect{x}\not\in W$.
</p>

</example>
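
<p>As a check on both membership questions in this example, here is a minimal Sage sketch.  Each $3\times 2$ matrix is flattened, row by row, into a vector of $\complex{6}$ (again the rationals suffice), the span is formed there, and membership is tested; this coordinatization does not change either answer.</p>

<sage>
<input>V = QQ^6
S = [matrix(QQ, 3, 2, [3, 1, 4, 2, 5, -5]),
     matrix(QQ, 3, 2, [1, 1, 2, -1, 14, -1]),
     matrix(QQ, 3, 2, [3, -1, -1, 2, -19, -11]),
     matrix(QQ, 3, 2, [4, 2, 1, -2, 14, -2]),
     matrix(QQ, 3, 2, [3, 1, -4, 0, -17, 7])]
W = V.span([V(M.list()) for M in S])
y = matrix(QQ, 3, 2, [9, 3, 7, 3, 10, -11])
x = matrix(QQ, 3, 2, [2, 1, 3, 1, 4, -2])
V(y.list()) in W, V(x.list()) in W
</input>
<output>(True, False)
</output>
</sage>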

<p>Notice how  <acroref type="example" acro="SSP" /> and <acroref type="example" acro="SM32" /> contained questions about membership in a span, but these questions quickly became questions about solutions to a system of linear equations.  This will be a common theme going forward.</p>

</subsection>

<subsection acro="SC">
<title>Subspace Constructions</title>

<p>Several of the subsets of vector spaces that we worked with in <acroref type="chapter" acro="M" /> are also subspaces <mdash /> they are closed under vector addition and scalar multiplication in $\complex{m}$.</p>

<theorem acro="CSMS" index="column space!subspace">
<title>Column Space of a Matrix is a Subspace</title>
<statement>
<p>Suppose that $A$ is an $m\times n$ matrix.  Then $\csp{A}$ is a subspace of $\complex{m}$.</p>

</statement>

<proof>
<p><acroref type="definition" acro="CSM" /> shows us that $\csp{A}$ is a subset of $\complex{m}$, and that it is defined as the span of a set of vectors from $\complex{m}$ (the columns of the matrix).  Since $\csp{A}$ is a span, <acroref type="theorem" acro="SSS" /> says it is a subspace.</p>

</proof>
</theorem>

<p>That was easy!  Notice that we could have used this same approach to prove that the null space is a subspace, since <acroref type="theorem" acro="SSNS" /> provided a description of the null space of a matrix as the span of a set of vectors.  However, I much prefer the current proof of <acroref type="theorem" acro="NSMS" />.  Speaking of easy, here is a very easy theorem that exposes another of our constructions as creating subspaces.</p>

<theorem acro="RSMS" index="row space!subspace">
<title>Row Space of a Matrix is a Subspace</title>
<statement>
<p>Suppose that $A$ is an $m\times n$ matrix.  Then $\rsp{A}$ is a subspace of $\complex{n}$.</p>

</statement>

<proof>
<p><acroref type="definition" acro="RSM" /> says $\rsp{A}=\csp{\transpose{A}}$, so the row space of a matrix is a column space, and every column space is a subspace by <acroref type="theorem" acro="CSMS" />.  That's enough.</p>

</proof>
</theorem>

<p>One more.</p>

<theorem acro="LNSMS" index="left null space!subspace">
<title>Left Null Space of a Matrix is a Subspace</title>
<statement>
<p>Suppose that $A$ is an $m\times n$ matrix.  Then $\lns{A}$ is a subspace of $\complex{m}$.</p>

</statement>

<proof>
<p><acroref type="definition" acro="LNS" /> says $\lns{A}=\nsp{\transpose{A}}$, so the left null space is a null space, and every null space is a subspace by <acroref type="theorem" acro="NSMS" />.  Done.</p>

</proof>
</theorem>

<p>So the span of a set of vectors, and the null space, column space, row space and left null space of a matrix are all subspaces, and hence are all vector spaces, meaning they have all the properties detailed in <acroref type="definition" acro="VS" /> and in the basic theorems presented in <acroref type="section" acro="VS" />.  We have worked with these objects as just sets in <acroref type="chapter" acro="V" /> and <acroref type="chapter" acro="M" />, but now we understand that they have much more structure.  In particular, being closed under vector addition and scalar multiplication means a subspace is also closed under linear combinations.</p>

<sageadvice acro="VS" index="vector spaces">
<title>Vector Spaces</title>
Our conception of a vector space has become much broader with the introduction of abstract vector spaces <mdash /> those whose elements (<q>vectors</q>) are not just column vectors, but polynomials, matrices, sequences, functions, etc.  Sage is able to perform computations using many different abstract and advanced ideas (such as derivatives of functions), but in the case of linear algebra, Sage will primarily stay with vector spaces of column vectors.  <acroref type="chapter" acro="R" />, and specifically, <acroref type="section" acro="VR" /> and <acroref type="sage" acro="SUTH2" /> will show us that this is not as much of a limitation as it might first appear.<br /><br />
While limited to vector spaces of column vectors, Sage has an impressive range of capabilities for vector spaces, which we will detail throughout this chapter.  You may have already noticed that many questions about abstract vector spaces can be translated into questions about column vectors.  This theme will continue, and Sage commands we already know will often be helpful in answering these questions.<br /><br />
<acroref type="theorem" acro="SSS" />, <acroref type="theorem" acro="NSMS" />, <acroref type="theorem" acro="CSMS" />, <acroref type="theorem" acro="RSMS" /> and  <acroref type="theorem" acro="LNSMS" /> each tells us that a certain set is a subspace.  The first is the abstract version of creating a subspace via the span of a set of vectors, but still applies to column vectors as a special case.  The remaining four all begin with a matrix and create a subspace of column vectors.  We have created these spaces many times already, but notice now that the description Sage outputs explicitly says they are vector spaces, and that there are still some parts of the output that we need to explain.  Here are two reminders, first a span, and then a vector space created from a matrix.
<sage>
<input>V = QQ^4
v1 = vector(QQ, [ 1, -1, 2, 4])
v2 = vector(QQ, [-3,  0, 2, 1])
v3 = vector(QQ, [-1, -2, 6, 9])
W = V.span([v1, v2, v3])
W
</input>
<output>Vector space of degree 4 and dimension 2 over Rational Field
Basis matrix:
[    1     0  -2/3  -1/3]
[    0     1  -8/3 -13/3]
</output>
</sage>

<sage>
<input>A = matrix(QQ, [[1, 2, -4,  0, -4],
                [0, 1, -1, -1, -1],
                [3, 2, -8,  4, -8]])
W = A.column_space()
W
</input>
<output>Vector space of degree 3 and dimension 2 over Rational Field
Basis matrix:
[ 1  0  3]
[ 0  1 -4]
</output>
</sage>
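
The row space and the left null space fit the same pattern.  As a quick check on <acroref type="theorem" acro="RSMS" /> and <acroref type="theorem" acro="LNSMS" />, the minimal sketch below reuses the matrix A from above and compares each construction with the corresponding construction applied to the transpose.
<sage>
<input>A.row_space() == A.transpose().column_space()
</input>
<output>True
</output>
</sage>

<sage>
<input>A.left_kernel() == A.transpose().right_kernel()
</input>
<output>True
</output>
</sage>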



</sageadvice>
</subsection>

<!--   End  s.tex -->
<readingquestions>
<ol>
<li>Summarize the three conditions that allow us to quickly test if a set is a subspace.
</li>
<li>Consider the set of vectors
<alignmath>
<![CDATA[W&=\setparts{\colvector{a\\b\\c}}{3a-2b+c=5}]]>
</alignmath>
Is the set $W$ a subspace of $\complex{3}$?  Explain your answer.
</li>
<li>Name five general constructions of sets of column vectors (subsets of $\complex{m}$) that we now know as subspaces.
</li></ol>
</readingquestions>

<exercisesubsection>

<exercise type="C" number="15" rough="Is vector in span of 4 vectors in R^3?">
<problem contributor="chrisblack">Working within the vector space $\complex{3}$, determine if
$\vect{b} = \colvector{4\\3\\1}$ is in the subspace $W$,
<equation>
W =
\spn{\set{
\colvector{3\\2\\3},
\colvector{1\\0\\3},
\colvector{1\\1\\0},
\colvector{2\\1\\3}
}}
</equation>
</problem>
<solution contributor="chrisblack">For $\vect{b}$ to be an element of $W=\spn{S}$ there must be linear combination of the vectors in $S$ that equals $\vect{b}$ (<acroref type="definition" acro="SSCV" />).  The existence of such scalars is equivalent to the linear system $\linearsystem{A}{\vect{b}}$ being consistent, where $A$ is the matrix whose columns are the vectors from $S$ (<acroref type="theorem" acro="SLSLC" />).
<alignmath>
\begin{bmatrix}
<![CDATA[3 & 1 & 1 & 2 & 4\\]]>
<![CDATA[2 & 0 & 1 & 1 & 3\\]]>
<![CDATA[3 & 3 & 0 & 3 & 1]]>
\end{bmatrix}
<![CDATA[&\rref]]>
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 1/2 & 1/2 & 0\\]]>
<![CDATA[0 & \leading{1} & -1/2 & 1/2 & 0\\]]>
<![CDATA[0 & 0 & 0 & 0 & \leading{1}]]>
\end{bmatrix}
</alignmath>
So by <acroref type="theorem" acro="RCLS" /> the system is inconsistent, which indicates that $\vect{b}$ is not an element of the subspace $W$.
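As an independent check, here is a minimal Sage sketch of the same membership question (any equivalent formulation would do):
<sage>
<input>V = QQ^3
W = V.span([vector(QQ, [3, 2, 3]), vector(QQ, [1, 0, 3]),
            vector(QQ, [1, 1, 0]), vector(QQ, [2, 1, 3])])
vector(QQ, [4, 3, 1]) in W
</input>
<output>False
</output>
</sage>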
</solution>
</exercise>

<exercise type="C" number="16" rough="Is vector in span of 3 vectors in R^4?">
<problem contributor="chrisblack">Working within the vector space $\complex{4}$, determine if
$\vect{b} = \colvector{1\\1\\0\\1}$
is in the subspace $W$,
<equation>
W =\spn{\set{
\colvector{1\\2\\-1\\1},
\colvector{1\\0\\3\\1},
\colvector{2\\1\\1\\2}
}}
</equation>
</problem>
<solution contributor="chrisblack">For $\vect{b}$ to be an element of $W=\spn{S}$ there must be linear combination of the vectors in $S$ that equals $\vect{b}$ (<acroref type="definition" acro="SSCV" />).  The existence of such scalars is equivalent to the linear system $\linearsystem{A}{\vect{b}}$ being consistent, where $A$ is the matrix whose columns are the vectors from $S$ (<acroref type="theorem" acro="SLSLC" />).
<alignmath>
\begin{bmatrix}
<![CDATA[1 & 1 & 2 & 1\\]]>
<![CDATA[2 & 0 & 1 & 1\\]]>
<![CDATA[-1 & 3 & 1 & 0\\]]>
<![CDATA[1 & 1 & 2 & 1]]>
\end{bmatrix}
<![CDATA[&\rref]]>
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 0 & 1/3\\]]>
<![CDATA[0 & \leading{1} & 0 & 0 \\]]>
<![CDATA[0 & 0 & \leading{1} & 1/3 \\]]>
<![CDATA[0 & 0 & 0& 0]]>
\end{bmatrix}
</alignmath>
So by <acroref type="theorem" acro="RCLS" /> the system is consistent, which indicates that $\vect{b}$ is in the subspace $W$.
</solution>
</exercise>

<exercise type="C" number="17" rough="Is vector in span of 4 vectors in R^4?">
<problem contributor="chrisblack">Working within the vector space $\complex{4}$, determine if
$\vect{b} = \colvector{2\\1\\2\\1}$ is in the subspace $W$,
<equation>
W = \spn{\set{
\colvector{1\\2\\0\\2},
\colvector{1\\0\\3\\1},
\colvector{0\\1\\0\\2},
\colvector{1\\1\\2\\0}
}}
</equation>
</problem>
<solution contributor="chrisblack">For $\vect{b}$ to be an element of $W=\spn{S}$ there must be linear combination of the vectors in $S$ that equals $\vect{b}$ (<acroref type="definition" acro="SSCV" />).  The existence of such scalars is equivalent to the linear system $\linearsystem{A}{\vect{b}}$ being consistent, where $A$ is the matrix whose columns are the vectors from $S$ (<acroref type="theorem" acro="SLSLC" />).
<alignmath>
\begin{bmatrix}
<![CDATA[1 & 1 & 0 & 1 & 2\\]]>
<![CDATA[2 & 0 & 1 & 1 & 1\\]]>
<![CDATA[0 & 3 & 0 & 2 & 2\\]]>
<![CDATA[2 & 1 & 2 & 0 & 1]]>
\end{bmatrix}
<![CDATA[&\rref]]>
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 0 & 0 & 3/2\\]]>
<![CDATA[0 & \leading{1} & 0 & 0 & 1\\]]>
<![CDATA[0 & 0 & \leading{1} & 0 & -3/2 \\]]>
<![CDATA[0 & 0 & 0& \leading{1} & -1/2]]>
\end{bmatrix}
</alignmath>
So by <acroref type="theorem" acro="RCLS" /> the system is consistent, which indicates that $\vect{b}$ is in the subspace $W$.
</solution>
</exercise>

<exercise type="C" number="20" rough="Dim 3 subspace of polynomials, in span?">
<problem contributor="robertbeezer">Working within the vector space $P_3$ of polynomials of degree 3 or less, determine if $p(x)=x^3+6x+4$ is in the subspace $W$ below.
<equation>
W=\spn{\set{x^3+x^2+x,\,x^3+2x-6,\,x^2-5}}
</equation>
</problem>
<solution contributor="robertbeezer">The question is if $p$ can be written as a linear combination of the vectors in $W$.  To check this, we set $p$ equal to a linear combination and massage with the definitions of vector addition and scalar multiplication that we get with $P_3$ (<acroref type="example" acro="VSP" />)
<alignmath>
<![CDATA[p(x)&=a_1(x^3+x^2+x)+a_2(x^3+2x-6)+a_3(x^2-5)\\]]>
<![CDATA[x^3+6x+4&=(a_1+a_2)x^3+(a_1+a_3)x^2+(a_1+2a_2)x+(-6a_2-5a_3)\\]]>
</alignmath>
Equating coefficients of equal powers of $x$, we get the system of equations,
<alignmath>
<![CDATA[a_1+a_2&=1\\]]>
<![CDATA[a_1+a_3&=0\\]]>
<![CDATA[a_1+2a_2&=6\\]]>
<![CDATA[-6a_2-5a_3&=4]]>
</alignmath>
The augmented matrix of this system of equations row-reduces to
<equation>
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & 0 & 0\\]]>
<![CDATA[0 & \leading{1} & 0 & 0\\]]>
<![CDATA[0 & 0 & \leading{1} & 0\\]]>
<![CDATA[0 & 0 & 0 & \leading{1}]]>
\end{bmatrix}
</equation>
There is a leading 1 in the last column, so <acroref type="theorem" acro="RCLS" /> implies that the system is inconsistent.  So there is no way for $p$ to gain membership in $W$, so $p\not\in W$.
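As an independent check, here is a minimal Sage sketch of the same membership question, recording each polynomial by its coordinate vector of coefficients relative to $1,\,x,\,x^2,\,x^3$ (constant term first):
<sage>
<input>V = QQ^4
W = V.span([V([0, 1, 1, 1]), V([-6, 2, 0, 1]), V([-5, 0, 1, 0])])
V([4, 6, 0, 1]) in W
</input>
<output>False
</output>
</sage>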
</solution>
</exercise>

<exercise type="C" number="21" rough="2x2 matrix, in span?  Yes.">
<problem contributor="robertbeezer">Consider the subspace
<equation>
W=\spn{\set{
\begin{bmatrix}
<![CDATA[2 & 1\\3 & -1]]>
\end{bmatrix}
,\,
\begin{bmatrix}
<![CDATA[4 & 0\\2 & 3]]>
\end{bmatrix}
,\,
\begin{bmatrix}
<![CDATA[-3 & 1\\2 & 1]]>
\end{bmatrix}
}}
</equation>
of the vector space of $2\times 2$ matrices, $M_{22}$.  Is
$
C=
\begin{bmatrix}
<![CDATA[-3 & 3\\6 & -4]]>
\end{bmatrix}
$
an element of $W$?
</problem>
<solution contributor="robertbeezer">In order to belong to $W$, we must be able to express $C$ as a linear combination of the elements in the spanning set of $W$.  So we begin with such an expression, using the unknowns $a,\,b,\,c$ for the scalars in the linear combination.
<equation>
C=
\begin{bmatrix}
<![CDATA[-3 & 3\\6 & -4]]>
\end{bmatrix}
=
a
\begin{bmatrix}
<![CDATA[2 & 1\\3 & -1]]>
\end{bmatrix}
+b
\begin{bmatrix}
<![CDATA[4 & 0\\2 & 3]]>
\end{bmatrix}
+c
\begin{bmatrix}
<![CDATA[-3 & 1\\2 & 1]]>
\end{bmatrix}
</equation>
Massaging the right-hand side, according to the definition of the vector space operations in $M_{22}$ (<acroref type="example" acro="VSM" />), we find the matrix equality,
<equation>
\begin{bmatrix}
<![CDATA[-3 & 3\\6 & -4]]>
\end{bmatrix}
=
\begin{bmatrix}
<![CDATA[2a+4b-3c & a+c\\ 3a+2b+2c & -a+3b+c]]>
\end{bmatrix}
</equation>
Matrix equality allows us to form a system of four equations in three variables, whose augmented matrix row-reduces as follows,
<equation>
\begin{bmatrix}
<![CDATA[ 2 & 4 & -3 & -3 \\]]>
<![CDATA[ 1 & 0 & 1 & 3 \\]]>
<![CDATA[ 3 & 2 & 2 & 6 \\]]>
<![CDATA[ -1 & 3 & 1 & -4]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[ \leading{1} & 0 & 0 & 2 \\]]>
<![CDATA[ 0 & \leading{1} & 0 & -1 \\]]>
<![CDATA[ 0 & 0 & \leading{1} & 1 \\]]>
<![CDATA[ 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
Since this system of equations is consistent (<acroref type="theorem" acro="RCLS" />), a solution will provide values for $a,\,b$ and $c$ that allow us to recognize $C$ as an element of $W$.
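Specifically, the final column of the reduced matrix provides the solution $a=2$, $b=-1$, $c=1$, and a quick check confirms that
<equation>
2
\begin{bmatrix}
<![CDATA[2 & 1\\3 & -1]]>
\end{bmatrix}
+(-1)
\begin{bmatrix}
<![CDATA[4 & 0\\2 & 3]]>
\end{bmatrix}
+(1)
\begin{bmatrix}
<![CDATA[-3 & 1\\2 & 1]]>
\end{bmatrix}
=
\begin{bmatrix}
<![CDATA[-3 & 3\\6 & -4]]>
\end{bmatrix}
=C
</equation>
so $C\in W$.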
</solution>
</exercise>

<exercise type="C" number="25" rough="Example NSC2Z fails both closure axioms">
<problem contributor="robertbeezer">Show that the set
$W=\setparts{\colvector{x_1\\x_2}}{3x_1-5x_2=12}$
from <acroref type="example" acro="NSC2Z" /> fails <acroref type="property" acro="AC" /> and <acroref type="property" acro="SC" />.
</problem>
</exercise>

<exercise type="C" number="26" rough="Example NSC2S has additive closure">
<problem contributor="robertbeezer">Show that the set
$Y=\setparts{\colvector{x_1\\x_2}}{x_1\in{\mathbb Z},\,x_2\in{\mathbb Z}}$
from <acroref type="example" acro="NSC2S" /> has <acroref type="property" acro="AC" />.
</problem>
</exercise>

<exercise type="M" number="20" rough="Subspace verification, 1 restriction on C^3">
<problem contributor="robertbeezer">In $\complex{3}$, the vector space of column vectors of size 3, prove that the set $Z$ is a subspace.
<equation>
Z=\setparts{\colvector{x_1\\x_2\\x_3}}{4x_1-x_2+5x_3=0}
</equation>
</problem>
<solution contributor="robertbeezer">The membership criteria for $Z$ is a single linear equation, which comprises a homogeneous system of equations.  As such, we can recognize $Z$ as the solutions to this system, and therefore $Z$ is a null space.  Specifically,
<![CDATA[$Z=\nsp{\begin{bmatrix}4&-1&5\end{bmatrix}}$.]]>
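This identification is just a matter of reading the defining equation in matrix-vector form, with a one-row coefficient matrix multiplying the column vector of variables,
<equation>
<![CDATA[\begin{bmatrix}4&-1&5\end{bmatrix}\colvector{x_1\\x_2\\x_3}=\begin{bmatrix}0\end{bmatrix}]]>
</equation>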
Every null space is a subspace by <acroref type="theorem" acro="NSMS" />.<br /><br />
A less direct solution appeals to <acroref type="theorem" acro="TSS" />.<br /><br />
First, we want to be certain $Z$ is non-empty.  The zero vector of $\complex{3}$, $\zerovector=\colvector{0\\0\\0}$, is a good candidate, since if it fails to be in $Z$, we will know that $Z$ is <em>not</em> a vector space.  Check that
<equation>
4(0)-(0)+5(0)=0
</equation>
so that $\zerovector\in Z$.<br /><br />
Suppose $\vect{x}=\colvector{x_1\\x_2\\x_3}$ and $\vect{y}=\colvector{y_1\\y_2\\y_3}$ are vectors from $Z$.  Then we know that these vectors cannot be totally arbitrary; they must have gained membership in $Z$ by meeting the membership test.  In particular, $\vect{x}$ must satisfy $4x_1-x_2+5x_3=0$ while $\vect{y}$ must satisfy $4y_1-y_2+5y_3=0$.  Our second criterion asks:  is $\vect{x}+\vect{y}\in Z$?  Notice first that
<equation>
\vect{x}+\vect{y}=\colvector{x_1\\x_2\\x_3}+\colvector{y_1\\y_2\\y_3}=
\colvector{x_1+y_1\\x_2+y_2\\x_3+y_3}
</equation>
and we can test this vector for membership in $Z$ as follows,
<alignmath>
<![CDATA[&\ 4(x_1+y_1)-1(x_2+y_2)+5(x_3+y_3)\\]]>
<![CDATA[&=4x_1+4y_1-x_2-y_2+5x_3+5y_3\\]]>
<![CDATA[&=(4x_1-x_2+5x_3)+(4y_1-y_2+5y_3)\\]]>
<![CDATA[&=0 + 0&&\vect{x}\in Z,\ \vect{y}\in Z\\]]>
<![CDATA[&=0]]>
</alignmath>
and by this computation we see that $\vect{x}+\vect{y}\in Z$.<br /><br />
If $\alpha\in\complexes$ is a scalar and $\vect{x}\in Z$, is it always true that $\alpha\vect{x}\in Z$?  To check our third criterion, we examine
<equation>
\alpha\vect{x}=\alpha\colvector{x_1\\x_2\\x_3}=\colvector{\alpha x_1\\\alpha x_2\\\alpha x_3}
</equation>
and we can test this vector for membership in $Z$ with
<alignmath>
<![CDATA[&4(\alpha x_1)-(\alpha x_2)+5(\alpha x_3)\\]]>
<![CDATA[&\quad\quad=\alpha(4x_1-x_2+5x_3)\\]]>
<![CDATA[&\quad\quad=\alpha 0&&\vect{x}\in Z\\]]>
<![CDATA[&\quad\quad=0]]>
</alignmath>
and we see that indeed $\alpha\vect{x}\in Z$.  With the three conditions of <acroref type="theorem" acro="TSS" /> fulfilled, we can conclude that $Z$ is a subspace of $\complex{3}$.
</solution>
</exercise>

<exercise type="T" number="20" rough="n x n, upper triangulars is a subspace">
<problem contributor="robertbeezer">A square matrix $A$ of size $n$ is upper triangular if $\matrixentry{A}{ij}=0$ whenever $i>j$.  Let $UT_n$ be the set of all upper triangular matrices of size $n$.  Prove that $UT_n$ is a subspace of the vector space of all square matrices of size $n$, $M_{nn}$.
</problem>
<solution contributor="robertbeezer">Apply <acroref type="theorem" acro="TSS" />.<br /><br />
First, the zero vector of $M_{nn}$ is the zero matrix, $\zeromatrix$, whose entries are all zero (<acroref type="definition" acro="ZM" />).  This matrix then meets the condition that $\matrixentry{\zeromatrix}{ij}=0$ for $i>j$ and so is an element of $UT_n$.<br /><br />
Suppose $A,B\in UT_n$.  Is $A+B\in UT_n$?  We examine the entries of $A+B$ <q>below</q> the diagonal.  That is, in the following, assume that $i>j$.
<alignmath>
\matrixentry{A+B}{ij}
<![CDATA[&=\matrixentry{A}{ij}+\matrixentry{B}{ij}&&]]>\text{<acroref type="definition" acro="MA" />}\\
<![CDATA[&=0 + 0&&\text{$A,B\in UT_n$}\\]]>
<![CDATA[&=0]]>
</alignmath>
which qualifies $A+B$ for membership in $UT_n$.<br /><br />
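For a concrete illustration of this computation (with a pair of upper triangular matrices chosen arbitrarily for the case $n=3$),
<equation>
\begin{bmatrix}
<![CDATA[1 & 2 & 3\\0 & 4 & 5\\0 & 0 & 6]]>
\end{bmatrix}
+
\begin{bmatrix}
<![CDATA[7 & 0 & 1\\0 & 2 & 0\\0 & 0 & 3]]>
\end{bmatrix}
=
\begin{bmatrix}
<![CDATA[8 & 2 & 4\\0 & 6 & 5\\0 & 0 & 9]]>
\end{bmatrix}
</equation>
and each entry below the diagonal of the sum is $0+0=0$, just as in the general argument.<br /><br />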
Suppose $\alpha\in\complexes$ and $A\in UT_n$.  Is $\alpha A\in UT_n$?  We examine the entries of $\alpha A$ <q>below</q> the diagonal.  That is, in the following, assume that $i>j$.
<alignmath>
\matrixentry{\alpha A}{ij}
<![CDATA[&=\alpha\matrixentry{A}{ij}&&]]>\text{<acroref type="definition" acro="MSM" />}\\
<![CDATA[&=\alpha 0&&\text{$A\in UT_n$}\\]]>
<![CDATA[&=0]]>
</alignmath>
which qualifies $\alpha A$ for membership in $UT_n$.<br /><br />
Having fulfilled the three conditions of <acroref type="theorem" acro="TSS" /> we see that $UT_n$ is a subspace of $M_{nn}$.
</solution>
</exercise>

<exercise type="T" number="30" rough="polys with only even terms are a subspace of the set of all polys">
<problem contributor="chrisblack">Let $P$ be the set of all polynomials, of any degree.  The set $P$ is a vector space.  Let $E$ be the subset of $P$ consisting of all polynomials with only terms of even degree.  Prove or disprove:  the set $E$ is a subspace of $P$.
</problem>
<solution contributor="chrisblack"><b>Proof:</b> Let $E$ be the subset of $P$ comprised of all polynomials with all terms of even degree.  Clearly the set $E$ is non-empty, as $z(x) = 0$ is a polynomial of even degree.  Let $p(x)$ and $q(x)$ be arbitrary elements of $E$.  Then there exist nonnegative integers $m$ and $n$ so that
<alignmath>
<![CDATA[p(x) &= a_0 + a_2 x^2 + a_4 x^4 + \cdots + a_{2n}x^{2n}\\]]>
<![CDATA[q(x) &= b_0 + b_2 x^2 + b_4 x^4 + \cdots + b_{2m}x^{2m}]]>
</alignmath>
for some constants $a_0, a_2, \ldots, a_{2n}$ and $b_0, b_2, \ldots, b_{2m}$.  Without loss of generality, we can assume that $m \le n$.  Thus, we have
<alignmath>
p(x) + q(x)
<![CDATA[&= (a_0 + b_0) + (a_2 + b_2)x^2 + \cdots + (a_{2m} + b_{2m})x^{2m} + a_{2m +2} x^{2m+2} + \cdots + a_{2n} x^{2n}]]>
</alignmath>
so $p(x) + q(x)$ has only terms of even degree, and thus $p(x) + q(x) \in E$.  Now let $\alpha$ be a scalar.  Then
<alignmath>
<![CDATA[\alpha p(x) &= \alpha (a_0 + a_2 x^2 + a_4 x^4 + \cdots + a_{2n}x^{2n}) \\]]>
<![CDATA[&= \alpha a_0 + (\alpha a_2) x^2 + (\alpha a_4) x^4 + \cdots + (\alpha a_{2n})x^{2n}]]>
</alignmath>
so that $\alpha p(x)$ also has only terms of even degree, and $\alpha p(x) \in E$.  With the three conditions of <acroref type="theorem" acro="TSS" /> fulfilled, $E$ is a subspace of $P$.
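For a concrete instance (with polynomials chosen arbitrarily from $E$), taking $p(x)=1+3x^2$ and $q(x)=2x^2+x^4$ gives
<alignmath>
<![CDATA[p(x)+q(x)&=1+5x^2+x^4\\]]>
<![CDATA[4\,p(x)&=4+12x^2]]>
</alignmath>
and both results again have only terms of even degree.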
</solution>
</exercise>

<exercise type="T" number="31" rough="Are all polys with only odd terms a subspace of the set of all polys?">
<problem contributor="chrisblack">Let $P$ be the set of all polynomials, of any degree.  The set $P$ is a vector space.  Let $F$ be the subset of $P$ consisting of all polynomials with only terms of odd degree.  Prove or disprove:  the set $F$ is a subspace of $P$.
</problem>
<solution contributor="chrisblack">This conjecture is false.  We know that the zero vector in $P$ is the polynomial $z(x) = 0$, which does not have odd degree.  Thus, the set $F$ does not contain the zero vector, and cannot be a vector space.
</solution>
</exercise>

</exercisesubsection>

</section>