fcla / src / section-LT.xml
<?xml version="1.0" encoding="UTF-8" ?>
<section acro="LT">
<title>Linear Transformations</title>

<!-- %%%%%%%%%% -->
<!-- % -->
<!-- %  Section LT -->
<!-- %  Linear Transformations -->
<!-- % -->
<!-- %%%%%%%%%% -->
<introduction>
<p>Early in <acroref type="chapter" acro="VS" /> we prefaced the definition of a vector space with the comment that it was <q>one of the two most important definitions in the entire course.</q>  Here comes the other.  Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces.  Here we go.</p>

</introduction>

<subsection acro="LT">
<title>Linear Transformations</title>

<definition acro="LT" index="linear transformation">
<title>Linear Transformation</title>
<p>A <define>linear transformation</define>, $\ltdefn{T}{U}{V}$, is a function that carries elements of the vector space $U$ (called the <define>domain</define>) to the vector space $V$ (called the <define>codomain</define>), and which has two additional properties
<ol><li> $\lt{T}{\vect{u}_1+\vect{u}_2}=\lt{T}{\vect{u}_1}+\lt{T}{\vect{u}_2}$ for all $\vect{u}_1,\,\vect{u}_2\in U$
</li><li> $\lt{T}{\alpha\vect{u}}=\alpha\lt{T}{\vect{u}}$ for all $\vect{u}\in U$ and all $\alpha\in\complex{\null}$
</li></ol>
</p>

<notation acro="LT" index="linear transformation">
<title>Linear Transformation</title>
<usage>$\ltdefn{T}{U}{V}$</usage>
</notation>
</definition>

<p>The two defining conditions in the definition of a linear transformation should <q>feel linear,</q> whatever that means.  Conversely, these two conditions could be taken as <em>exactly</em> what it means <em>to be</em> linear.  As every vector space property derives from vector addition and scalar multiplication, so too, every property of a linear transformation derives from these two defining properties.  While these conditions may be reminiscent of how we test subspaces, they really are quite different, so do not confuse the two.</p>

<p>Here are two diagrams that convey the essence of the two defining properties of a linear transformation.  In each case, begin in the upper left-hand corner, and follow the arrows around the rectangle to the lower right-hand corner, taking two different routes and doing the indicated operations labeled on the arrows.  There are two results there.  For a linear transformation these two expressions are always equal.
<diagram acro="DLTA">
<title>Definition of Linear Transformation, Additive</title>
<tikz>
\matrix (m) [matrix of math nodes, row sep=5em, column sep=10em, text height=1.5ex, text depth=0.25ex]
<![CDATA[{ \vect{u}_1,\,\vect{u}_2 & T(\vect{u}_1),\,T(\vect{u}_2) \\]]>
<![CDATA[\vect{u}_1+\vect{u}_2 & T(\vect{u}_1+\vect{u}_2)=T(\vect{u}_1)+T(\vect{u}_2)\\};]]>
\path[->]
(m-1-1) edge[thick] node[auto] {$T$} (m-1-2)
(m-1-2) edge[thick] node[auto] {$+$} (m-2-2)
(m-1-1) edge[thick] node[auto] {$+$} (m-2-1)
(m-2-1) edge[thick] node[auto] {$T$} (m-2-2);
</tikz>
</diagram>
<diagram acro="DLTM">
<title>Definition of Linear Transformation, Multiplicative</title>
<tikz>
\matrix (m) [matrix of math nodes, row sep=5em, column sep=10em, text height=1.5ex, text depth=0.25ex]
<![CDATA[{ \vect{u} & \lt{T}{\vect{u}} \\]]>
<![CDATA[\alpha\vect{u} & \lt{T}{\alpha\vect{u}}=\alpha\lt{T}{\vect{u}}\\};]]>
\path[->]
(m-1-1) edge[thick] node[auto] {$T$}      (m-1-2)
(m-1-2) edge[thick] node[auto] {$\alpha$} (m-2-2)
(m-1-1) edge[thick] node[auto] {$\alpha$} (m-2-1)
(m-2-1) edge[thick] node[auto] {$T$}      (m-2-2);
</tikz>
</diagram>
</p>

<p>A couple of words about notation.  $T$ is the <em>name</em> of the linear transformation, and should be used when we want to discuss the function as a whole.  $\lt{T}{\vect{u}}$ is how we talk about the output of the function; it is a vector in the vector space $V$.  When we write $\lt{T}{\vect{x}+\vect{y}}=\lt{T}{\vect{x}}+\lt{T}{\vect{y}}$, the plus sign on the left is the operation of vector addition in the vector space $U$, since $\vect{x}$ and $\vect{y}$ are elements of $U$.  The plus sign on the right is the operation of vector addition in the vector space $V$, since $\lt{T}{\vect{x}}$ and $\lt{T}{\vect{y}}$ are elements of the vector space $V$.  These two instances of vector addition might be wildly different.</p>

<p>Let's examine several examples and begin to form a catalog of known linear transformations to work with.</p>

<example acro="ALT" index="linear transformation!checking">
<title>A linear transformation</title>

<p>Define $\ltdefn{T}{\complex{3}}{\complex{2}}$ by describing the output of the function for a generic input with the formula
<equation>
\lt{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1+x_3\\-4x_2}
</equation>
and check the two defining properties.
<alignmath>
\lt{T}{\vect{x}+\vect{y}}
<![CDATA[&=\lt{T}{\colvector{x_1\\x_2\\x_3}+\colvector{y_1\\y_2\\y_3}}\\]]>
<![CDATA[&=\lt{T}{\colvector{x_1+y_1\\x_2+y_2\\x_3+y_3}}\\]]>
<![CDATA[&=\colvector{2(x_1+y_1)+(x_3+y_3)\\-4(x_2+y_2)}\\]]>
<![CDATA[&=\colvector{(2x_1+x_3)+(2y_1+y_3)\\-4x_2+(-4)y_2}\\]]>
<![CDATA[&=\colvector{2x_1+x_3\\-4x_2}+\colvector{2y_1+y_3\\-4y_2}\\]]>
<![CDATA[&=\lt{T}{\colvector{x_1\\x_2\\x_3}}+\lt{T}{\colvector{y_1\\y_2\\y_3}}\\]]>
<![CDATA[&=\lt{T}{\vect{x}}+\lt{T}{\vect{y}}]]>
<intertext>and</intertext>
\lt{T}{\alpha\vect{x}}
<![CDATA[&=\lt{T}{\alpha\colvector{x_1\\x_2\\x_3}}\\]]>
<![CDATA[&=\lt{T}{\colvector{\alpha x_1\\\alpha x_2\\\alpha x_3}}\\]]>
<![CDATA[&=\colvector{2(\alpha x_1)+(\alpha x_3)\\-4(\alpha x_2)}\\]]>
<![CDATA[&=\colvector{\alpha(2x_1+x_3)\\\alpha(-4x_2)}\\]]>
<![CDATA[&=\alpha\colvector{2x_1+x_3\\-4x_2}\\]]>
<![CDATA[&=\alpha\lt{T}{\colvector{x_1\\x_2\\x_3}}\\]]>
<![CDATA[&=\alpha\lt{T}{\vect{x}}]]>
</alignmath>
</p>

<p>So by <acroref type="definition" acro="LT" />, $T$ is a linear transformation.</p>

</example>
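The algebraic verification above can also be spot-checked numerically.  The following sketch is plain Python, not Sage; the helpers <code>add</code> and <code>scale</code> are our own names for the vector space operations, and a numeric check at one choice of vectors is of course evidence, not proof.

```python
# Spot-check of the two defining properties for the map in Example ALT:
# T(x1, x2, x3) = (2*x1 + x3, -4*x2).
def T(x):
    x1, x2, x3 = x
    return (2*x1 + x3, -4*x2)

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(alpha, u):
    return tuple(alpha * a for a in u)

x, y, alpha = (1, 2, 3), (4, -5, 6), 7

# Additivity: T(x + y) = T(x) + T(y)
assert T(add(x, y)) == add(T(x), T(y))
# Scalar compatibility: T(alpha x) = alpha T(x)
assert T(scale(alpha, x)) == scale(alpha, T(x))
```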

<p>It can be just as instructive to look at functions that are <em>not</em> linear transformations.  Since the defining conditions must be true for <em>all</em> vectors and scalars, it is enough to find just one situation where the properties fail.</p>

<example acro="NLT" index="linear transformation!not">
<title>Not a linear transformation</title>

<p>Define $\ltdefn{S}{\complex{3}}{\complex{3}}$ by
<equation>
\lt{S}{\colvector{x_1\\x_2\\x_3}}=\colvector{4x_1+2x_2\\0\\x_1+3x_3-2}
</equation>
</p>

<p>This function <q>looks</q> linear, but consider
<alignmath>
<![CDATA[3\,\lt{S}{\colvector{1\\2\\3}}&=3\,\colvector{8\\0\\8}=\colvector{24\\0\\24}]]>
<intertext>while</intertext>
<![CDATA[\lt{S}{3\,\colvector{1\\2\\3}}&=\lt{S}{\colvector{3\\6\\9}}=\colvector{24\\0\\28}]]>
</alignmath>
</p>

<p>So the second required property fails for the choice of $\alpha=3$ and $\vect{x}=\colvector{1\\2\\3}$ and by <acroref type="definition" acro="LT" />, $S$ is not a linear transformation.  It is just about as easy to find an example where the first defining property fails (try it!).  Notice that it is the <q>-2</q> in the third component of the definition of $S$ that prevents the function from being a linear transformation.</p>

</example>
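The failing check can be reproduced mechanically.  This plain-Python sketch mirrors the definition of $S$ and confirms that the two computed outputs from the example disagree.

```python
# The map S from Example NLT, checked at alpha = 3 and x = (1, 2, 3).
def S(x):
    x1, x2, x3 = x
    return (4*x1 + 2*x2, 0, x1 + 3*x3 - 2)

x = (1, 2, 3)
lhs = tuple(3*a for a in S(x))        # 3 S(x)
rhs = S(tuple(3*a for a in x))        # S(3 x)
assert lhs == (24, 0, 24)
assert rhs == (24, 0, 28)
assert lhs != rhs                     # the second defining property fails
```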

<example acro="LTPM" index="linear transformation!polynomials to matrices">
<title>Linear transformation, polynomials to matrices</title>

<p>Define a linear transformation $\ltdefn{T}{P_3}{M_{22}}$ by
<equation>
<![CDATA[\lt{T}{a+bx+cx^2+dx^3}=\begin{bmatrix}a+b&a-2c\\d&b-d\end{bmatrix}]]>
</equation>
</p>

<p>We verify the two defining conditions of a linear transformation.
<alignmath>
<![CDATA[\lt{T}{\vect{x}+\vect{y}}&=]]>
\lt{T}{(a_1+b_1x+c_1x^2+d_1x^3)+(a_2+b_2x+c_2x^2+d_2x^3)}\\
<![CDATA[&=\lt{T}{(a_1+a_2)+(b_1+b_2)x+(c_1+c_2)x^2+(d_1+d_2)x^3}\\]]>
<![CDATA[&=\begin{bmatrix}]]>
<![CDATA[(a_1+a_2)+(b_1+b_2)&(a_1+a_2)-2(c_1+c_2)\\]]>
<![CDATA[d_1+d_2&(b_1+b_2)-(d_1+d_2)]]>
\end{bmatrix}\\
<![CDATA[&=\begin{bmatrix}]]>
<![CDATA[(a_1+b_1)+(a_2+b_2)&(a_1-2c_1)+(a_2-2c_2)\\]]>
<![CDATA[d_1+d_2&(b_1-d_1)+(b_2-d_2)]]>
\end{bmatrix}\\
<![CDATA[&=\begin{bmatrix}a_1+b_1&a_1-2c_1\\d_1&b_1-d_1\end{bmatrix}+]]>
<![CDATA[     \begin{bmatrix}a_2+b_2&a_2-2c_2\\d_2&b_2-d_2\end{bmatrix}\\]]>
<![CDATA[&=\lt{T}{a_1+b_1x+c_1x^2+d_1x^3}+\lt{T}{a_2+b_2x+c_2x^2+d_2x^3}\\]]>
<![CDATA[&=\lt{T}{\vect{x}}+\lt{T}{\vect{y}}]]>
<intertext>and</intertext>
<![CDATA[\lt{T}{\alpha\vect{x}}&=\lt{T}{\alpha(a+bx+cx^2+dx^3)}\\]]>
<![CDATA[&=\lt{T}{(\alpha a)+(\alpha b)x+(\alpha c)x^2+(\alpha d)x^3}\\]]>
<![CDATA[&=\begin{bmatrix}]]>
<![CDATA[(\alpha a)+(\alpha b)&(\alpha a)-2(\alpha c)\\]]>
<![CDATA[\alpha d&(\alpha b)-(\alpha d)]]>
\end{bmatrix}\\
<![CDATA[&=\begin{bmatrix}]]>
<![CDATA[\alpha(a+b)&\alpha(a-2c)\\]]>
<![CDATA[\alpha d&\alpha(b-d)]]>
\end{bmatrix}\\
<![CDATA[&=\alpha\begin{bmatrix}a+b&a-2c\\d&b-d\end{bmatrix}\\]]>
<![CDATA[&=\alpha\lt{T}{a+bx+cx^2+dx^3}\\]]>
<![CDATA[&=\alpha\lt{T}{\vect{x}}]]>
</alignmath>
</p>

<p>So by <acroref type="definition" acro="LT" />, $T$ is a linear transformation.</p>

</example>
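To see the additivity computation concretely, here is a plain-Python sketch of $T$.  The encodings are our own choices for illustration: the polynomial $a+bx+cx^2+dx^3$ becomes the tuple <code>(a, b, c, d)</code>, and the $2\times 2$ output matrix is flattened row-by-row into a 4-tuple, so matrix addition is entrywise tuple addition.

```python
# The map T from Example LTPM, with polynomial a + bx + cx^2 + dx^3
# encoded as (a, b, c, d) and the 2x2 output matrix flattened
# row-by-row into the 4-tuple (a+b, a-2c, d, b-d).
def T(p):
    a, b, c, d = p
    return (a + b, a - 2*c, d, b - d)

p, q = (1, 2, 3, 4), (5, 6, 7, 8)
p_plus_q = tuple(a + b for a, b in zip(p, q))

# Additivity: T(p + q) = T(p) + T(q), entry by entry
assert T(p_plus_q) == tuple(a + b for a, b in zip(T(p), T(q)))
```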

<example acro="LTPP" index="linear transformation!polynomials to polynomials">
<title>Linear transformation, polynomials to polynomials</title>

<p>Define a function $\ltdefn{S}{P_4}{P_5}$ by
<equation>
S(p(x))=(x-2)p(x)
</equation>
</p>

<p>Then
<alignmath>
<![CDATA[\lt{S}{p(x)+q(x)}&=(x-2)(p(x)+q(x))=(x-2)p(x)+(x-2)q(x)=\lt{S}{p(x)}+\lt{S}{q(x)}\\]]>
<![CDATA[\lt{S}{\alpha p(x)}&=(x-2)(\alpha p(x))=(x-2)\alpha p(x)=\alpha(x-2)p(x)=\alpha\lt{S}{p(x)}]]>
</alignmath>
</p>

<p>So by <acroref type="definition" acro="LT" />, $S$ is a linear transformation.</p>

</example>
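Multiplication by $x-2$ is easy to model on coefficient lists, which makes the linearity of $S$ plausible at a glance: both steps (shifting coefficients for the factor $x$, scaling them by $-2$) act coefficient-by-coefficient.  In this sketch a polynomial is represented by its list of coefficients, constant term first (a representation chosen here for illustration).

```python
# The map S(p) = (x - 2) p(x) from Example LTPP, on coefficient lists.
def S(p):
    times_x  = [0] + p                      # x * p(x): shift coefficients up
    minus_2p = [-2*a for a in p] + [0]      # -2 * p(x), padded to match length
    return [a + b for a, b in zip(times_x, minus_2p)]

# (x - 2)(1 + 3x^2) = -2 + x - 6x^2 + 3x^3
assert S([1, 0, 3]) == [-2, 1, -6, 3]
```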

<p>Linear transformations have many amazing properties, which we will investigate through the next few sections.  However, as a taste of things to come, here is a theorem we can prove now and put to use immediately.</p>

<theorem acro="LTTZZ" index="linear transformation!zero vector">
<title>Linear Transformations Take Zero to Zero</title>
<statement>
<p>Suppose $\ltdefn{T}{U}{V}$ is a linear transformation.  Then $\lt{T}{\zerovector}=\zerovector$.</p>

</statement>

<proof>
<p>The two zero vectors in the conclusion of the theorem are different.  The first is from $U$ while the second is from $V$.  We will subscript the zero vectors in this proof to highlight the distinction.  Think about your objects.  (This proof is contributed by <contributorname code="markshoemaker" />).
<alignmath>
\lt{T}{\zerovector_U}
<![CDATA[&=\lt{T}{0\zerovector_U}]]>
<![CDATA[&&]]>\text{<acroref type="theorem" acro="ZSSM" /> in $U$}\\
<![CDATA[&=0\lt{T}{\zerovector_U}]]>
<![CDATA[&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=\zerovector_V]]>
<![CDATA[&&]]>\text{<acroref type="theorem" acro="ZSSM" /> in $V$}
</alignmath>
</p>

</proof>
</theorem>

<p>Return to <acroref type="example" acro="NLT" /> and compute $\lt{S}{\colvector{0\\0\\0}}=\colvector{0\\0\\-2}$ to quickly see again that $S$ is not a linear transformation, while in <acroref type="example" acro="LTPM" />  compute
<![CDATA[$\lt{T}{0+0x+0x^2+0x^3}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$]]>
as an example of <acroref type="theorem" acro="LTTZZ" /> at work.</p>

<sageadvice acro="LTS" index="linear transformation!symbolic">
<title>Linear Transformations, Symbolic</title>
There are several ways to create a linear transformation in Sage, and many natural operations on these objects are possible.  As previously, rather than use the complex numbers as our number system, we will instead use the rational numbers, since Sage can model them exactly.  We will also use the following transformation repeatedly for examples, when no special properties are required:
<alignmath>
<![CDATA[&\ltdefn{T}{\complex{3}}{\complex{4}}\\]]>
<![CDATA[&\lt{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{-x_1 + 2x_3\\x_1 + 3x_2 + 7x_3\\x_1 + x_2 + x_3\\2x_1 + 3x_2 + 5x_3}]]>
</alignmath>
To create this linear transformation in Sage, we first create a <q>symbolic</q> function, which requires that we also first define some symbolic variables which are <code>x1</code>, <code>x2</code> and <code>x3</code> in this case.  (You can bypass the intermediate variable <code>outputs</code> in your own work.  We will use it consistently to allow us to spread the definition across several lines without the Sage preparser getting in the way.  In other words, it is safe to combine the two lines below and not use <code>outputs</code>.)
<sage>
<input>x1, x2, x3 = var('x1, x2, x3')
outputs = [ -x1        + 2*x3,
             x1 + 3*x2 + 7*x3,
             x1 +   x2 +   x3,
           2*x1 + 3*x2 + 5*x3]
T_symbolic(x1, x2, x3) = outputs
</input>
</sage>

You can experiment with <code>T_symbolic</code>, evaluating it at triples of rational numbers, and perhaps doing things like calculating its partial derivatives.  We will use it as input to the <code>linear_transformation()</code> constructor.  We just need to specify carefully the domain and codomain, now as vector spaces over the rationals rather than the complex numbers.
<sage>
<input>T = linear_transformation(QQ^3, QQ^4, T_symbolic)
</input>
</sage>

You can now, of course, experiment with <code>T</code> via tab-completion, but we will explain the various properties of Sage linear transformations as we work through this chapter.  Even some seemingly simple operations, such as printing <code>T</code> will require some explanation.  But for starters, we can evaluate <code>T</code>.
<sage>
<input>u = vector(QQ, [3, -1, 2])
w = T(u)
w
</input>
<output>(1, 14, 4, 13)
</output>
</sage>

<sage>
<input>w.parent()
</input>
<output>Vector space of dimension 4 over Rational Field
</output>
</sage>

Here is a small verification of <acroref type="theorem" acro="LTTZZ" />.
<sage>
<input>zero_in = zero_vector(QQ, 3)
zero_out = zero_vector(QQ, 4)
T(zero_in) == zero_out
</input>
<output>True
</output>
</sage>

Note that Sage will recognize some symbolic functions as not being linear transformations (in other words, inconsistent with <acroref type="definition" acro="LT" />), but this detection is fairly easy to fool.  We will see some safer ways to create a linear transformation shortly.


</sageadvice>
</subsection>

<subsection acro="LTC">
<title>Linear Transformation Cartoons</title>

<p>Throughout this chapter, and <acroref type="chapter" acro="R" />, we will include drawings of linear transformations.  We will call them <q>cartoons,</q> not because they are humorous, but because they will only expose a portion of the truth.  A Bugs Bunny cartoon might give us some insights on human nature, but the rules of physics and biology are routinely (and grossly) violated.  So it will be with our <define>linear transformation cartoons</define>.  Here is our first, followed by a guide to help you understand how these are meant to describe fundamental truths about linear transformations, while simultaneously violating other truths.
<diagram acro="GLT">
<title>General Linear Transformation</title>
<tikz>
\tikzset{ltvect/.style={shape=circle, minimum size=0.30em, inner sep=0pt, draw, fill=black}}
<![CDATA[\tikzset{ltedge/.style={->, bend left=20, thick, shorten <=0.1em, shorten >=0.1em}}]]>
<!--  base generic picture, equal ovals -->
<!--  vertical axes at x = 5, x = 20  space is [x=10 to x=15] -->
\draw ( 5em, 8em) circle [x radius=5em, y radius=8em, thick];
\draw (20em, 8em) circle [x radius=5em, y radius=8em, thick];
\node (U) at ( 5em, -1em) {$U$};
\node (V) at (20em, -1em) {$V$};
\draw[->, thick, draw] (U) to node[auto] {$T$} (V);
<!--  inputs -->
\node (w)     [ltvect, label=left:$\vect{w}$]      at (5em, 13em) {};
\node (u)     [ltvect, label=left:$\vect{u}$]      at (5em, 11em) {};
\node (zeroU) [ltvect, label=left:$\zerovector_U$] at (5em,  8em) {};
\node (x)     [ltvect, label=left:$\vect{x}$]      at (5em,  5em) {};
<!--  outputs -->
\node (v)     [ltvect, label=right:$\vect{v}$]      at (20em, 12em) {};
\node (zeroV) [ltvect, label=right:$\zerovector_V$] at (20em,  8em) {};
\node (y)     [ltvect, label=right:$\vect{y}$]      at (20em,  5em) {};
\node (t)     [ltvect, label=right:$\vect{t}$]      at (20em,  3em) {};
<!--  associations -->
\draw[ltedge] (u)     to (v);
\draw[ltedge] (w)     to (v);
\draw[ltedge] (zeroU) to (zeroV);
\draw[ltedge] (x)     to (y);
</tikz>
</diagram>
</p>

<p>Here we picture a linear transformation $\ltdefn{T}{U}{V}$, where this information will be consistently displayed along the bottom edge.  The ovals are meant to represent the vector spaces, in this case $U$, the domain, on the left and $V$, the codomain, on the right.  Of course, vector spaces are typically infinite sets, so you'll have to imagine that characteristic of these sets.  A small dot inside of an oval will represent a vector within that vector space, sometimes with a name, sometimes not (in this case every vector has a name).  The sizes of the ovals are meant to be proportional to the dimensions of the vector spaces.  However, when we make no assumptions about the dimensions, we will draw the ovals as the same size, as we have done here (which is not meant to suggest that the dimensions have to be equal).</p>

<p>To convey that the linear transformation associates a certain input with a certain output, we will draw an arrow from the input to the output.  So, for example, in this cartoon we suggest that $\lt{T}{\vect{x}}=\vect{y}$.  Nothing in the definition of a linear transformation prevents two different inputs being sent to the same output and we see this in $\lt{T}{\vect{u}}=\vect{v}=\lt{T}{\vect{w}}$.  Similarly, an output may not have any input being sent its way, as illustrated by no arrow pointing at $\vect{t}$.  In this cartoon, we have captured the essence of our one general theorem about linear transformations, <acroref type="theorem" acro="LTTZZ" />, $\lt{T}{\zerovector_U}=\zerovector_V$.  On occasion we might include this basic fact when it is relevant, at other times maybe not.  Note that the definition of a linear transformation requires that it be a function, so every element of the domain should be associated with some element of the codomain.  This will be reflected by never having an element of the domain without an arrow originating there.</p>

<p>These cartoons are of course no substitute for careful definitions and proofs, but they can be a handy way to think about the various properties we will be studying.</p>

</subsection>

<subsection acro="MLT">
<title>Matrices and Linear Transformations</title>

<p>If you give me a matrix, then I can quickly build you a linear transformation.  Always.  First a motivating example and then the theorem.</p>

<example acro="LTM" index="linear transformation!defined by a matrix">
<title>Linear transformation from a matrix</title>

<p>Let
<equation>
A=
\begin{bmatrix}
<![CDATA[3&-1&8&1\\]]>
<![CDATA[2&0&5&-2\\]]>
<![CDATA[1&1&3&-7]]>
\end{bmatrix}
</equation>
and define a function $\ltdefn{P}{\complex{4}}{\complex{3}}$ by
<equation>
\lt{P}{\vect{x}}=A\vect{x}
</equation>
</p>

<p>So we are using an old friend, the matrix-vector product (<acroref type="definition" acro="MVP" />) as a way to convert a vector with 4 components into a vector with 3 components.  Applying <acroref type="definition" acro="MVP" /> allows us to write the defining formula for $P$ in a slightly different form,
<equation>
\lt{P}{\vect{x}}=A\vect{x}=
\begin{bmatrix}
<![CDATA[3&-1&8&1\\]]>
<![CDATA[2&0&5&-2\\]]>
<![CDATA[1&1&3&-7]]>
\end{bmatrix}
\colvector{x_1\\x_2\\x_3\\x_4}
=
x_1\colvector{3\\2\\1}+
x_2\colvector{-1\\0\\1}+
x_3\colvector{8\\5\\3}+
x_4\colvector{1\\-2\\-7}
</equation>
</p>

<p>So we recognize the action of the function $P$ as using the components of the vector ($x_1,\,x_2,\,x_3,\,x_4$) as scalars to form the output of $P$ as a linear combination of the four columns of the matrix $A$, which are all members of $\complex{3}$, so the result is a vector in $\complex{3}$.  We can rearrange this expression further, using our definitions of operations in $\complex{3}$ (<acroref type="section" acro="VO" />).
<alignmath>
\lt{P}{\vect{x}}
<![CDATA[&=A\vect{x}&&\text{Definition of $P$}\\]]>
<![CDATA[&=]]>
x_1\colvector{3\\2\\1}+
x_2\colvector{-1\\0\\1}+
x_3\colvector{8\\5\\3}+
<![CDATA[x_4\colvector{1\\-2\\-7}&&]]>\text{<acroref type="definition" acro="MVP" />}\\
<![CDATA[&=]]>
\colvector{3x_1\\2x_1\\x_1}+
\colvector{-x_2\\0\\x_2}+
\colvector{8x_3\\5x_3\\3x_3}+
<![CDATA[\colvector{x_4\\-2x_4\\-7x_4}&&]]>\text{<acroref type="definition" acro="CVSM" />}\\
<![CDATA[&=\colvector{3x_1-x_2+8x_3+x_4\\2x_1+5x_3-2x_4\\x_1+x_2+3x_3-7x_4}&&]]>\text{<acroref type="definition" acro="CVA" />}
</alignmath>
</p>

<p>You might recognize this final expression as being similar in style to some previous examples (<acroref type="example" acro="ALT" />) and some linear transformations defined in the archetypes (<acroref type="archetype" acro="M" /> through <acroref type="archetype" acro="R" />).  But the expression that says the output of this linear transformation is a linear combination of the columns of $A$ is probably the most powerful way of thinking about examples of this type.</p>

<p>Almost forgot <mdash /> we should verify that $P$ is indeed a linear transformation.  This is easy with two matrix properties from <acroref type="section" acro="MM" />.
<alignmath>
\lt{P}{\vect{x}+\vect{y}}
<![CDATA[&=A\left(\vect{x}+\vect{y}\right)&&\text{Definition of $P$}\\]]>
<![CDATA[&=A\vect{x}+A\vect{y}&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
<![CDATA[&=\lt{P}{\vect{x}}+\lt{P}{\vect{y}}&&\text{Definition of $P$}]]>
<intertext>and</intertext>
\lt{P}{\alpha\vect{x}}
<![CDATA[&=A\left(\alpha\vect{x}\right)&&\text{Definition of $P$}\\]]>
<![CDATA[&=\alpha\left(A\vect{x}\right)&&]]>\text{<acroref type="theorem" acro="MMSMM" />}\\
<![CDATA[&=\alpha\lt{P}{\vect{x}}&&\text{Definition of $P$}]]>
</alignmath>
</p>

<p>So by <acroref type="definition" acro="LT" />, $P$ is a linear transformation.</p>

</example>
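The <q>linear combination of the columns</q> view of $P$ can be written out directly.  This plain-Python sketch (no libraries; the names are ours) accumulates $x_1,\,x_2,\,x_3,\,x_4$ times the respective columns of $A$, exactly the rearrangement performed in the example.

```python
# P(x) = A x computed as a linear combination of the columns of A,
# mirroring the rearrangement in Example LTM.
A = [[3, -1, 8,  1],
     [2,  0, 5, -2],
     [1,  1, 3, -7]]

def P(x):
    result = (0, 0, 0)
    for j in range(4):
        col = tuple(A[i][j] for i in range(3))               # column j of A
        result = tuple(r + x[j]*c for r, c in zip(result, col))
    return result

assert P((1, 0, 0, 0)) == (3, 2, 1)      # picks out the first column
assert P((1, 1, 1, 1)) == (11, 5, -2)    # sum of all four columns
```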

<p>So the multiplication of a vector by a matrix <q>transforms</q> the input vector into an output vector, possibly of a different size, by performing a linear combination.  And this transformation happens in a <q>linear</q> fashion.  This <q>functional</q> view of the matrix-vector product is the most important shift you can make right now in how you think about linear algebra.  Here's the theorem, whose proof is very nearly an exact copy of the verification in the last example.</p>

<theorem acro="MBLT" index="linear transformations!from matrices">
<title>Matrices Build Linear Transformations</title>
<statement>
<p>Suppose that $A$ is an $m\times n$ matrix.  Define a function $\ltdefn{T}{\complex{n}}{\complex{m}}$ by $\lt{T}{\vect{x}}=A\vect{x}$.  Then $T$ is a linear transformation.</p>

</statement>

<proof>
<p>
<alignmath>
\lt{T}{\vect{x}+\vect{y}}
<![CDATA[&=A\left(\vect{x}+\vect{y}\right)&&\text{Definition of $T$}\\]]>
<![CDATA[&=A\vect{x}+A\vect{y}&&]]>\text{<acroref type="theorem" acro="MMDAA" />}\\
<![CDATA[&=\lt{T}{\vect{x}}+\lt{T}{\vect{y}}&&\text{Definition of $T$}]]>
<intertext>and</intertext>
\lt{T}{\alpha\vect{x}}
<![CDATA[&=A\left(\alpha\vect{x}\right)&&\text{Definition of $T$}\\]]>
<![CDATA[&=\alpha\left(A\vect{x}\right)&&]]>\text{<acroref type="theorem" acro="MMSMM" />}\\
<![CDATA[&=\alpha\lt{T}{\vect{x}}&&\text{Definition of $T$}]]>
</alignmath>
</p>

<p>So by <acroref type="definition" acro="LT" />, $T$ is a linear transformation.</p>

</proof>
</theorem>

<p>So <acroref type="theorem" acro="MBLT" /> gives us a rapid way to construct linear transformations.  Grab an $m\times n$ matrix $A$, define $\lt{T}{\vect{x}}=A\vect{x}$ and <acroref type="theorem" acro="MBLT" /> tells us that $T$ is a linear transformation from $\complex{n}$ to $\complex{m}$, without any further checking.</p>
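<p>To see <acroref type="theorem" acro="MBLT" /> at work computationally, here is a minimal sketch in plain Python (not Sage).  The helper name <code>matvec</code> and the function <code>T</code> are hypothetical, chosen only for this illustration; the matrix is the $4\times 3$ matrix used in the Sage discussion later in this section.</p>

```python
def matvec(A, x):
    # Compute A*x entrywise; equivalently, a linear combination of the
    # columns of A with the entries of x as scalars (Definition MVP).
    m, n = len(A), len(A[0])
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

# The 4 x 3 matrix from the Sage example in this section
A = [[-1, 0, 2],
     [ 1, 3, 7],
     [ 1, 1, 1],
     [ 2, 3, 5]]

def T(x):
    # Theorem MBLT: multiplication by a fixed matrix is a linear
    # transformation from C^3 to C^4 (here, with rational entries)
    return matvec(A, x)
```

<p>Since <code>T</code> is defined purely by a matrix-vector product, the two defining properties of <acroref type="definition" acro="LT" /> hold automatically, exactly as the proof of <acroref type="theorem" acro="MBLT" /> shows.</p>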

<p>We can turn <acroref type="theorem" acro="MBLT" /> around.  You give me a linear transformation and I will give you a matrix.</p>

<example acro="MFLT" index="linear transformation!matrix of">
<title>Matrix from a linear transformation</title>

<p>Define the function $\ltdefn{R}{\complex{3}}{\complex{4}}$ by
<equation>
\lt{R}{\colvector{x_1\\x_2\\x_3}}=
\colvector{2x_1-3x_2+4x_3\\x_1+x_2+x_3\\-x_1+5x_2-3x_3\\x_2-4x_3}
</equation></p>

<p>You could verify that $R$ is a linear transformation by applying the definition, but we will instead massage the expression defining a typical output until we recognize the form of a known class of linear transformations.
<alignmath>
<![CDATA[\lt{R}{\colvector{x_1\\x_2\\x_3}}&=]]>
\colvector{2x_1-3x_2+4x_3\\x_1+x_2+x_3\\-x_1+5x_2-3x_3\\x_2-4x_3}\\
<![CDATA[&=]]>
\colvector{2x_1\\x_1\\-x_1\\0}+
\colvector{-3x_2\\x_2\\5x_2\\x_2}+
<![CDATA[\colvector{4x_3\\x_3\\-3x_3\\-4x_3}&&]]>\text{<acroref type="definition" acro="CVA" />}\\
<![CDATA[&=]]>
x_1\colvector{2\\1\\-1\\0}+
x_2\colvector{-3\\1\\5\\1}+
<![CDATA[x_3\colvector{4\\1\\-3\\-4}&&]]>\text{<acroref type="definition" acro="CVSM" />}\\
<![CDATA[&=]]>
\begin{bmatrix}
<![CDATA[2&-3&4\\]]>
<![CDATA[1&1&1\\]]>
<![CDATA[-1&5&-3\\]]>
<![CDATA[0&1&-4]]>
\end{bmatrix}
<![CDATA[\colvector{x_1\\x_2\\x_3}&&]]>\text{<acroref type="definition" acro="MVP" />}
</alignmath>
</p>

<p>So if we define the matrix
<equation>
B=
\begin{bmatrix}
<![CDATA[2&-3&4\\]]>
<![CDATA[1&1&1\\]]>
<![CDATA[-1&5&-3\\]]>
<![CDATA[0&1&-4]]>
\end{bmatrix}
</equation>
then $\lt{R}{\vect{x}}=B\vect{x}$.  By <acroref type="theorem" acro="MBLT" />, we can easily recognize $R$ as a linear transformation since it has the form described in the hypothesis of the theorem.</p>

</example>

<p><acroref type="example" acro="MFLT" /> was not an accident.  Consider any one of the archetypes where both the domain and codomain are sets of column vectors (<acroref type="archetype" acro="M" /> through <acroref type="archetype" acro="R" />) and you should be able to mimic the previous example.  Here's the theorem, which is notable since it is our first occasion to use the full power of the defining properties of a linear transformation when our hypothesis includes a linear transformation.</p>

<theorem acro="MLTCV" index="matrix!of a linear transformation">
<title>Matrix of a Linear Transformation, Column Vectors</title>
<statement>
<indexlocation index="linear transformation!matrix of" />
<p>Suppose that $\ltdefn{T}{\complex{n}}{\complex{m}}$ is a linear transformation.  Then there is an $m\times n$ matrix $A$ such that $\lt{T}{\vect{x}}=A\vect{x}$.</p>

</statement>

<proof>
<p>The conclusion says a certain matrix exists.  What better way to prove something exists than to actually build it?  So our proof will be constructive (<acroref type="technique" acro="C" />), and the procedure that we will use abstractly in the proof can be used concretely in specific examples.</p>

<p>Let $\vectorlist{e}{n}$ be the columns of the identity matrix of size $n$, $I_n$ (<acroref type="definition" acro="SUV" />).  Evaluate the linear transformation $T$ with each of these standard unit vectors as an input, and record the result.  In other words, define $n$ vectors in $\complex{m}$, $\vect{A}_i$, $1\leq i\leq n$ by
<equation>
\vect{A}_i=\lt{T}{\vect{e}_i}
</equation>
</p>

<p>Then package up these vectors as the columns of a matrix
<equation>
A=\matrixcolumns{A}{n}
</equation>
</p>

<p>Does $A$ have the desired properties?  First, $A$ is clearly an $m\times n$ matrix.  Then
<alignmath>
\lt{T}{\vect{x}}
<![CDATA[&=\lt{T}{I_n\vect{x}}]]>
<![CDATA[&&]]>\text{<acroref type="theorem" acro="MMIM" />}\\
<![CDATA[&=\lt{T}{\matrixcolumns{e}{n}\vect{x}}]]>
<![CDATA[&&]]>\text{<acroref type="definition" acro="SUV" />}\\
<![CDATA[&=\lt{T}{]]>
\vectorentry{\vect{x}}{1}\vect{e}_1+
\vectorentry{\vect{x}}{2}\vect{e}_2+
\vectorentry{\vect{x}}{3}\vect{e}_3+
\cdots+
\vectorentry{\vect{x}}{n}\vect{e}_n
}
<![CDATA[&&]]>\text{<acroref type="definition" acro="MVP" />}\\
<![CDATA[&=]]>
\lt{T}{\vectorentry{\vect{x}}{1}\vect{e}_1}+
\lt{T}{\vectorentry{\vect{x}}{2}\vect{e}_2}+
\lt{T}{\vectorentry{\vect{x}}{3}\vect{e}_3}+
\cdots+
\lt{T}{\vectorentry{\vect{x}}{n}\vect{e}_n}
<![CDATA[&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=]]>
\vectorentry{\vect{x}}{1}\lt{T}{\vect{e}_1}+
\vectorentry{\vect{x}}{2}\lt{T}{\vect{e}_2}+
\vectorentry{\vect{x}}{3}\lt{T}{\vect{e}_3}+
\cdots+
\vectorentry{\vect{x}}{n}\lt{T}{\vect{e}_n}
<![CDATA[&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=]]>
\vectorentry{\vect{x}}{1}{\vect{A}_1}+
\vectorentry{\vect{x}}{2}{\vect{A}_2}+
\vectorentry{\vect{x}}{3}{\vect{A}_3}+
\cdots+
\vectorentry{\vect{x}}{n}{\vect{A}_n}
<![CDATA[&&\text{Definition of $\vect{A}_i$}\\]]>
<![CDATA[&=A\vect{x}]]>
<![CDATA[&&]]>\text{<acroref type="definition" acro="MVP" />}
</alignmath>
as desired.</p>

</proof>
</theorem>

<p>So if we were to restrict our study of linear transformations to those where the domain and codomain are both vector spaces of column vectors (<acroref type="definition" acro="VSCV" />), every matrix leads to a linear transformation of this type (<acroref type="theorem" acro="MBLT" />), while every such linear transformation leads to a matrix (<acroref type="theorem" acro="MLTCV" />).  So matrices and linear transformations are fundamentally the same.  We call the matrix $A$ of <acroref type="theorem" acro="MLTCV" /> the <define>matrix representation</define> of $T$.</p>

<p>We have defined linear transformations for more general vector spaces than just $\complex{m}$; can we extend this correspondence between linear transformations and matrices to more general linear transformations (more general domains and codomains)?  Yes, and this is the main theme of <acroref type="chapter" acro="R" />.  Stay tuned.  For now, let's illustrate <acroref type="theorem" acro="MLTCV" /> with an example.</p>

<example acro="MOLT" index="linear transformation!matrix of">
<title>Matrix of a linear transformation</title>

<p>Suppose $\ltdefn{S}{\complex{3}}{\complex{4}}$ is defined by
<equation>
\lt{S}{\colvector{x_1\\x_2\\x_3}}=\colvector{3x_1-2x_2+5x_3\\x_1+x_2+x_3\\9x_1-2x_2+5x_3\\4x_2}
</equation>
</p>

<p>Then
<alignmath>
<![CDATA[\vect{C}_1&=\lt{S}{\vect{e}_1}=\lt{S}{\colvector{1\\0\\0}}=\colvector{3\\1\\9\\0}\\]]>
<![CDATA[\vect{C}_2&=\lt{S}{\vect{e}_2}=\lt{S}{\colvector{0\\1\\0}}=\colvector{-2\\1\\-2\\4}\\]]>
<![CDATA[\vect{C}_3&=\lt{S}{\vect{e}_3}=\lt{S}{\colvector{0\\0\\1}}=\colvector{5\\1\\5\\0}]]>
</alignmath>
so define
<equation>
C=\left[C_1|C_2|C_3\right]=
\begin{bmatrix}
<![CDATA[3&-2&5\\]]>
<![CDATA[1&1&1\\]]>
<![CDATA[9&-2&5\\]]>
<![CDATA[0&4&0]]>
\end{bmatrix}
</equation>
and <acroref type="theorem" acro="MLTCV" /> guarantees that $\lt{S}{\vect{x}}=C\vect{x}$.</p>

<p>As an illuminating exercise, let $\vect{z}=\colvector{2\\-3\\3}$ and compute $\lt{S}{\vect{z}}$ two different ways.  First, return to the definition of $S$ and evaluate $\lt{S}{\vect{z}}$ directly.  Then do the matrix-vector product $C\vect{z}$.  In both cases you should obtain the vector $\lt{S}{\vect{z}}=\colvector{27\\2\\39\\-12}$.</p>

</example>

<sageadvice acro="LTM" index="linear transformation!matrices">
<title>Linear Transformations, Matrices</title>
A second way to build a linear transformation is to use a matrix, as motivated by <acroref type="theorem" acro="MBLT" />.  But there is one caveat.  We have seen that Sage has a preference for rows, so when defining a linear transformation with a product of a matrix and a vector, <em>Sage forms a linear combination of the rows of the matrix with the scalars of the vector</em>.  This is expressed by writing the vector on the left of the matrix, where if we were to interpret the vector as a 1-row matrix, then the definition of matrix multiplication would do the right thing.<br /><br />
Remember throughout, a linear transformation has very little to do with the mechanism by which we define it.  Whether we use matrix multiplication with vectors on the left (Sage internally) or matrix multiplication with vectors on the right (your text), what matters is the <em>function</em> that results.  One concession to the <q>vector on the right</q> approach is that we can tell Sage that we mean for the matrix to define the linear transformation by multiplication with the vector on the right.  Here is our running example again <mdash /> with some explanation following.
<sage>
<input>A = matrix(QQ, [[-1, 0, 2],
                [ 1, 3, 7],
                [ 1, 1, 1],
                [ 2, 3, 5]])
T = linear_transformation(QQ^3, QQ^4, A, side='right')
T
</input>
<output>Vector space morphism represented by the matrix:
[-1  1  1  2]
[ 0  3  1  3]
[ 2  7  1  5]
Domain: Vector space of dimension 3 over Rational Field
Codomain: Vector space of dimension 4 over Rational Field
</output>
</sage>

The way <code>T</code> prints reflects the way Sage carries <code>T</code> internally.  But notice that we defined <code>T</code> in a way that is consistent with the text, via the use of the optional <code>side='right'</code> keyword.  If you rework examples from the text, or use Sage to assist with exercises, be sure to remember this option.  In particular, when the matrix is square it is easy to miss that you have forgotten it.  Note too that Sage uses a more general term for a linear transformation, <q>vector space morphism.</q>  Just mentally translate from Sage-speak to the terms we use here in the text.<br /><br />
If the standard way that Sage prints a linear transformation is too confusing, you can get all the relevant information with a handful of commands.
<sage>
<input>T.matrix(side='right')
</input>
<output>[-1  0  2]
[ 1  3  7]
[ 1  1  1]
[ 2  3  5]
</output>
</sage>

<sage>
<input>T.domain()
</input>
<output>Vector space of dimension 3 over Rational Field
</output>
</sage>

<sage>
<input>T.codomain()
</input>
<output>Vector space of dimension 4 over Rational Field
</output>
</sage>

So we can build a linear transformation in Sage from a matrix, as promised by <acroref type="theorem" acro="MBLT" />.  Furthermore, <acroref type="theorem" acro="MLTCV" /> says there is a matrix associated with every linear transformation.  This matrix is provided in Sage by the <code>.matrix()</code> method, where we use the option <code>side='right'</code> to be consistent with the text.  Here is <acroref type="example" acro="MOLT" /> reprised, where we define the linear transformation via a Sage symbolic function and then recover the matrix of the linear transformation.
<sage>
<input>x1, x2, x3 = var('x1, x2, x3')
outputs = [3*x1 - 2*x2 + 5*x3,
             x1 +   x2 +   x3,
           9*x1 - 2*x2 + 5*x3,
                  4*x2       ]
S_symbolic(x1, x2, x3) = outputs
S = linear_transformation(QQ^3, QQ^4, S_symbolic)
C = S.matrix(side='right'); C
</input>
<output>[ 3 -2  5]
[ 1  1  1]
[ 9 -2  5]
[ 0  4  0]
</output>
</sage>

<sage>
<input>x = vector(QQ, [2, -3, 3])
S(x) == C*x
</input>
<output>True
</output>
</sage>



</sageadvice>
</subsection>

<subsection acro="LTLC">
<title>Linear Transformations and Linear Combinations</title>

<p>It is the interaction between linear transformations and linear combinations that lies at the heart of many of the important theorems of linear algebra.  The next theorem distills the essence of this.  The proof is not deep, the result is hardly startling, but it will be referenced frequently.  We have already passed by one occasion to employ it, in the proof of <acroref type="theorem" acro="MLTCV" />.  Paraphrasing, this theorem says that we can <q>push</q> linear transformations <q>down into</q> linear combinations, or <q>pull</q> linear transformations <q>up out</q> of linear combinations.  We'll have opportunities to both push and pull.</p>

<theorem acro="LTLC" index="linear transformation!linear combination">
<title>Linear Transformations and Linear Combinations</title>
<statement>
<indexlocation index="linear combination!linear transformation" />
<p>Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation, $\vectorlist{u}{t}$ are vectors from $U$ and $\scalarlist{a}{t}$ are scalars from $\complex{\null}$.  Then
<equation>
\lt{T}{\lincombo{a}{u}{t}}
=
a_1\lt{T}{\vect{u}_1}+
a_2\lt{T}{\vect{u}_2}+
a_3\lt{T}{\vect{u}_3}+\cdots+
a_t\lt{T}{\vect{u}_t}
</equation>
</p>

</statement>

<proof>
<p>
<alignmath>
<![CDATA[&\lt{T}{\lincombo{a}{u}{t}}\\]]>
<![CDATA[&\quad\quad=]]>
\lt{T}{a_1\vect{u}_1}+
\lt{T}{a_2\vect{u}_2}+
\lt{T}{a_3\vect{u}_3}+\cdots+
<![CDATA[\lt{T}{a_t\vect{u}_t}&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&\quad\quad=]]>
a_1\lt{T}{\vect{u}_1}+
a_2\lt{T}{\vect{u}_2}+
a_3\lt{T}{\vect{u}_3}+\cdots+
<![CDATA[a_t\lt{T}{\vect{u}_t}&&]]>\text{<acroref type="definition" acro="LT" />}
</alignmath>
</p>

</proof>
</theorem>

<p>Some authors, especially in more advanced texts, take the conclusion of <acroref type="theorem" acro="LTLC" /> as the defining condition of a linear transformation.  This has the appeal of being a single condition, rather than the two-part condition of <acroref type="definition" acro="LT" />.  (See <acroref type="exercise" acro="LT.T20" />).</p>
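<p>The push/pull equality of <acroref type="theorem" acro="LTLC" /> is easy to check numerically.  The sketch below uses a hypothetical matrix-built transformation and made-up vectors and scalars, purely for illustration; the helper names <code>T</code> and <code>lincombo</code> are not from the text.</p>

```python
# A 2 x 3 matrix, so T maps (rational) 3-vectors to 2-vectors
A = [[2, -3, 4],
     [1, 1, 1]]

def T(x):
    # A linear transformation built from A (Theorem MBLT)
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]

def lincombo(scalars, vectors):
    # Form a_1 v_1 + a_2 v_2 + ... + a_t v_t (Definition LC)
    n = len(vectors[0])
    return [sum(a * v[j] for a, v in zip(scalars, vectors)) for j in range(n)]

u = [[1, 0, 2], [0, 3, -1], [5, 5, 5]]   # some input vectors
a = [2, -1, 3]                           # some scalars
```

<p>Applying <code>T</code> to the linear combination of the <code>u</code> vectors gives the same result as the linear combination of the outputs <code>T(u_i)</code>, just as the theorem asserts.</p>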

<p>Our next theorem says, informally, that it is enough to know how a linear transformation behaves for inputs from any basis of the domain, and <em>all</em> the other outputs are described by a linear combination of these few values.  Again, the statement of the theorem, and its proof, are not remarkable, but the insight that goes along with it is very fundamental.</p>

<theorem acro="LTDB" index="linear transformation!defined on a basis">
<title>Linear Transformation Defined on a Basis</title>
<statement>
<p>Suppose $B=\set{\vectorlist{u}{n}}$ is a basis for the vector space $U$ and $\vectorlist{v}{n}$ is a list of vectors from the vector space $V$ (which are not necessarily distinct).   Then there is a unique linear transformation, $\ltdefn{T}{U}{V}$, such that $\lt{T}{\vect{u}_i}=\vect{v}_i$, $1\leq i\leq n$.</p>

</statement>

<proof>
<p>To prove the existence of $T$, we construct a function and show that it is a linear transformation (<acroref type="technique" acro="C" />).  Suppose $\vect{w}\in U$ is an arbitrary element of the domain.  Then by <acroref type="theorem" acro="VRRB" /> there are unique scalars $\scalarlist{a}{n}$ such that
<equation>
\vect{w}=\lincombo{a}{u}{n}
</equation>
</p>

<p>Then <em>define</em> the function $T$ by
<equation>
\lt{T}{\vect{w}}=\lincombo{a}{v}{n}
</equation>
</p>

<p>It should be clear that $T$ behaves as required for $n$ inputs from $B$.  Since the scalars provided by <acroref type="theorem" acro="VRRB" /> are unique, there is no ambiguity in this definition, and $T$ qualifies as a function with domain $U$ and codomain $V$ (<ie /> $T$ is well-defined).  But is $T$ a linear transformation as well?</p>

<p>Let $\vect{x}\in U$ be a second element of the domain, and suppose the scalars provided by <acroref type="theorem" acro="VRRB" /> (relative to $B$) are $\scalarlist{b}{n}$.  Then
<alignmath>
<![CDATA[\lt{T}{\vect{w}+\vect{x}}&=]]>
\lt{T}{
a_1\vect{u}_1+
a_2\vect{u}_2+
\cdots+
a_n\vect{u}_n+
b_1\vect{u}_1+
b_2\vect{u}_2+
\cdots+
b_n\vect{u}_n
}\\
<![CDATA[&=]]>
\lt{T}{
\left(a_1+b_1\right)\vect{u}_1+
\left(a_2+b_2\right)\vect{u}_2+
\cdots+
\left(a_n+b_n\right)\vect{u}_n
}
<![CDATA[&&]]>\text{<acroref type="definition" acro="VS" />}\\
<![CDATA[&=]]>
\left(a_1+b_1\right)\vect{v}_1+
\left(a_2+b_2\right)\vect{v}_2+
\cdots+
\left(a_n+b_n\right)\vect{v}_n
<![CDATA[&&\text{Definition of $T$}\\]]>
<![CDATA[&=]]>
a_1\vect{v}_1+
a_2\vect{v}_2+
\cdots+
a_n\vect{v}_n+
b_1\vect{v}_1+
b_2\vect{v}_2+
\cdots+
b_n\vect{v}_n
<![CDATA[&&]]>\text{<acroref type="definition" acro="VS" />}\\
<![CDATA[&=\lt{T}{\vect{w}}+\lt{T}{\vect{x}}]]>
</alignmath>
</p>

<p>Let $\alpha\in\complexes$ be any scalar.  Then
<alignmath>
<![CDATA[\lt{T}{\alpha\vect{w}}&=]]>
\lt{T}{\alpha\left(\lincombo{a}{u}{n}\right)}\\
<![CDATA[&=]]>
\lt{T}{\lincombo{\alpha a}{u}{n}}
<![CDATA[&&]]>\text{<acroref type="definition" acro="VS" />}\\
<![CDATA[&=\lincombo{\alpha a}{v}{n}]]>
<![CDATA[&&\text{Definition of $T$}\\]]>
<![CDATA[&=\alpha\left(\lincombo{a}{v}{n}\right)]]>
<![CDATA[&&]]>\text{<acroref type="definition" acro="VS" />}\\
<![CDATA[&=\alpha\lt{T}{\vect{w}}]]>
</alignmath></p>

<p>So by <acroref type="definition" acro="LT" />, $T$ is a linear transformation.</p>

<p>Is $T$ unique (among all linear transformations that take the $\vect{u}_i$ to the $\vect{v}_i$)?  Applying <acroref type="technique" acro="U" />, we posit the existence of a second linear transformation, $\ltdefn{S}{U}{V}$ such that $\lt{S}{\vect{u}_i}=\vect{v}_i$, $1\leq i\leq n$.  Again, let $\vect{w}\in U$ represent an arbitrary element of $U$ and let $\scalarlist{a}{n}$ be the scalars provided by <acroref type="theorem" acro="VRRB" /> (relative to $B$).  We have,
<alignmath>
<![CDATA[\lt{T}{\vect{w}}&=]]>
\lt{T}{\lincombo{a}{u}{n}}
<![CDATA[&&]]>\text{<acroref type="theorem" acro="VRRB" />}\\
<![CDATA[&=]]>
a_1\lt{T}{\vect{u}_1}+
a_2\lt{T}{\vect{u}_2}+
a_3\lt{T}{\vect{u}_3}+
\cdots+
a_n\lt{T}{\vect{u}_n}
<![CDATA[&&]]>\text{<acroref type="theorem" acro="LTLC" />}\\
<![CDATA[&=]]>
a_1\vect{v}_1+
a_2\vect{v}_2+
a_3\vect{v}_3+
\cdots+
a_n\vect{v}_n
<![CDATA[&&\text{Definition of $T$}\\]]>
<![CDATA[&=]]>
a_1\lt{S}{\vect{u}_1}+
a_2\lt{S}{\vect{u}_2}+
a_3\lt{S}{\vect{u}_3}+
\cdots+
a_n\lt{S}{\vect{u}_n}
<![CDATA[&&\text{Definition of $S$}\\]]>
<![CDATA[&=]]>
\lt{S}{\lincombo{a}{u}{n}}
<![CDATA[&&]]>\text{<acroref type="theorem" acro="LTLC" />}\\
<![CDATA[&=]]>
\lt{S}{\vect{w}}
<![CDATA[&&]]>\text{<acroref type="theorem" acro="VRRB" />}
</alignmath>
</p>

<p>So the output of $T$ and $S$ agree on every input, which means they are equal as functions, $T=S$.  So $T$ is unique.</p>

</proof>
</theorem>

<p>You might recall facts from analytic geometry, such as <q>any two points determine a line</q> and <q>any three non-collinear points determine a parabola.</q>  <acroref type="theorem" acro="LTDB" /> has much of the same feel.  By specifying the $n$ outputs for inputs from a basis, an entire linear transformation is determined.  The analogy is not perfect, but the style of these facts is not very dissimilar from <acroref type="theorem" acro="LTDB" />.</p>

<p>Notice that the statement of <acroref type="theorem" acro="LTDB" /> asserts the <em>existence</em> of a linear transformation with certain properties, while the proof shows us exactly how to define the desired linear transformation.  The next examples show how to work with linear transformations that we find this way.</p>

<example acro="LTDB1" index="linear transformation!defined on a basis">
<title>Linear transformation defined on a basis</title>

<p>Consider the linear transformation $\ltdefn{T}{\complex{3}}{\complex{2}}$ that is required to have the following three values,
<alignmath>
<![CDATA[\lt{T}{\colvector{1\\0\\0}}=\colvector{2\\1}&&]]>
<![CDATA[\lt{T}{\colvector{0\\1\\0}}=\colvector{-1\\4}&&]]>
\lt{T}{\colvector{0\\0\\1}}=\colvector{6\\0}
</alignmath>
</p>

<p>Because
<equation>
B=\set{
\colvector{1\\0\\0},\,
\colvector{0\\1\\0},\,
\colvector{0\\0\\1}
}
</equation>
is a basis for $\complex{3}$ (<acroref type="theorem" acro="SUVB" />), <acroref type="theorem" acro="LTDB" /> says there is a unique linear transformation $T$ that behaves this way.</p>

<p>How do we compute other values of $T$?  Consider the input
<equation>
\vect{w}=\colvector{2\\-3\\1}=(2)\colvector{1\\0\\0}+(-3)\colvector{0\\1\\0}+(1)\colvector{0\\0\\1}
</equation>
</p>

<p>Then
<equation>
\lt{T}{\vect{w}}=(2)\colvector{2\\1}+ (-3)\colvector{-1\\4}+ (1)\colvector{6\\0}=\colvector{13\\-10}
</equation>
</p>

<p>Doing it again,
<equation>
\vect{x}=\colvector{5\\2\\-3}=(5)\colvector{1\\0\\0}+(2)\colvector{0\\1\\0}+(-3)\colvector{0\\0\\1}
</equation>
so
<equation>
\lt{T}{\vect{x}}=(5)\colvector{2\\1}+ (2)\colvector{-1\\4}+ (-3)\colvector{6\\0}=\colvector{-10\\13}
</equation>
</p>

<p>Any other value of $T$ could be computed in a similar manner.  So rather than being given a <em>formula</em> for the outputs of $T$, the <em>requirement</em> that $T$ behave in a certain way on inputs chosen from a basis of the domain is just as good as a formula for computing any value of the function.  You might notice some parallels between this example and <acroref type="example" acro="MOLT" /> or <acroref type="theorem" acro="MLTCV" />.</p>

</example>
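<p>The computations of <acroref type="example" acro="LTDB1" /> are especially simple because the basis is the standard basis: the scalars of <acroref type="theorem" acro="VRRB" /> are just the entries of the input vector.  Here is a short Python sketch of this (the function name <code>T</code> is hypothetical; the three images are those specified in the example).</p>

```python
# Images of the three standard unit vectors, from Example LTDB1
images = [[2, 1], [-1, 4], [6, 0]]

def T(w):
    # Since B is the standard basis, the scalars of Theorem VRRB are the
    # entries of w, so T(w) is that linear combination of the images
    return [sum(w[i] * images[i][j] for i in range(3)) for j in range(2)]
```

<p>Evaluating <code>T</code> at the two inputs of the example reproduces the outputs computed in the text.</p>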

<example acro="LTDB2" index="linear transformation!defined on a basis">
<title>Linear transformation defined on a basis</title>

<p>Consider the linear transformation $\ltdefn{R}{\complex{3}}{\complex{2}}$ with the three values,
<alignmath>
<![CDATA[\lt{R}{\colvector{1\\2\\1}}=\colvector{5\\-1}&&]]>
<![CDATA[\lt{R}{\colvector{-1\\5\\1}}=\colvector{0\\4}&&]]>
\lt{R}{\colvector{3\\1\\4}}=\colvector{2\\3}
</alignmath>
</p>

<p>You can check that
<equation>
D=\set{
\colvector{1\\2\\1},\,
\colvector{-1\\5\\1},\,
\colvector{3\\1\\4}
}
</equation>
is a basis for $\complex{3}$ (make the vectors the columns of a square matrix and check that the matrix is nonsingular,  <acroref type="theorem" acro="CNMB" />).  By <acroref type="theorem" acro="LTDB" /> we know there is a unique linear transformation $R$ with the three specified outputs.  However, we have to work just a bit harder to take an input vector and express it as a linear combination of the vectors in $D$.</p>

<p>For example, consider,
<equation>
\vect{y}=\colvector{8\\-3\\5}
</equation>
</p>

<p>Then we must first write $\vect{y}$ as a linear combination of the vectors in $D$ and solve for the unknown scalars, to arrive at
<equation>
\vect{y}=\colvector{8\\-3\\5}= (3)\colvector{1\\2\\1}+ (-2)\colvector{-1\\5\\1}+ (1)\colvector{3\\1\\4}
</equation>
</p>

<p>Then the proof of <acroref type="theorem" acro="LTDB" /> gives us
<equation>
\lt{R}{\vect{y}}=(3)\colvector{5\\-1}+ (-2)\colvector{0\\4}+ (1)\colvector{2\\3}= \colvector{17\\-8}
</equation>
</p>

<p>Any other value of $R$ could be computed in a similar manner.</p>

</example>
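<p>With the non-standard basis of <acroref type="example" acro="LTDB2" /> we must first solve for the scalars before combining the images.  The sketch below does this with exact rational arithmetic; the helper names <code>coords</code> and <code>R</code> are hypothetical, while the basis and images are those of the example.</p>

```python
from fractions import Fraction

basis = [[1, 2, 1], [-1, 5, 1], [3, 1, 4]]   # the basis D
images = [[5, -1], [0, 4], [2, 3]]           # the specified outputs

def coords(y):
    # Solve [d1|d2|d3] a = y by Gauss-Jordan elimination (Theorem VRRB
    # guarantees a unique solution since D is a basis)
    n = 3
    M = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(y[i])]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [e / M[c][c] for e in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def R(y):
    # Combine the images with the scalars, per the proof of Theorem LTDB
    a = coords(y)
    return [sum(a[i] * images[i][j] for i in range(3)) for j in range(2)]
```

<p>For the input $\vect{y}$ of the example, the scalars come out to $3$, $-2$, $1$ and the output matches the text.</p>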

<p>Here is a third example of a linear transformation defined by its action on a basis, only with more abstract vector spaces involved.</p>

<example acro="LTDB3" index="linear transformation!defined on a basis">
<title>Linear transformation defined on a basis</title>

<p>The set $W=\set{p(x)\in P_3\mid p(1)=0, p(3)=0}\subseteq P_3$ is a subspace of the vector space of polynomials $P_3$.  This subspace has $C=\set{3-4x+x^2,\,12-13x+x^3}$ as a basis (check this!).  Suppose we consider the linear transformation $\ltdefn{S}{P_3}{M_{22}}$ with values
<alignmath>
<![CDATA[\lt{S}{3-4x+x^2}=\begin{bmatrix}1&-3\\2&0\end{bmatrix}&&]]>
<![CDATA[\lt{S}{12-13x+x^3}=\begin{bmatrix}0&1\\1&0\end{bmatrix}]]>
</alignmath>
</p>

<p>By <acroref type="theorem" acro="LTDB" /> we know there is a unique linear transformation with these two values.  To illustrate a sample computation of $S$, consider $q(x)=9-6x-5x^2+2x^3$.  Verify that $q(x)$ is an element of $W$ (does it have roots at $x=1$ and $x=3$?), then find the scalars needed to write it as a linear combination of the basis vectors in $C$.  We find
<equation>
q(x)=9-6x-5x^2+2x^3=(-5)(3-4x+x^2)+(2)(12-13x+x^3)
</equation>
</p>

<p>The proof of <acroref type="theorem" acro="LTDB" /> gives us
<equation>
<![CDATA[\lt{S}{q}=(-5)\begin{bmatrix}1&-3\\2&0\end{bmatrix}]]>
+
<![CDATA[(2)\begin{bmatrix}0&1\\1&0\end{bmatrix}]]>
=
<![CDATA[\begin{bmatrix}-5&17\\-8&0\end{bmatrix}]]>
</equation>
</p>

<p>And all the other outputs of $S$ could be computed in the same manner.  Every output of $S$ will have a zero in the second row, second column.  Can you see why this is so?</p>

</example>

<p>Informally, we can describe <acroref type="theorem" acro="LTDB" /> by saying <q>it is enough to know what a linear transformation does to a basis (of the domain).</q></p>

<sageadvice acro="LTB" index="linear transformation!bases">
<title>Linear Transformations, Bases</title>
A third way to create a linear transformation in Sage is to provide a list of images for a basis, as motivated by <acroref type="theorem" acro="LTDB" />.  The default is to use the standard basis as the inputs (<acroref type="definition" acro="SUV" />).
We will, once again, create our running example.
<sage>
<input>U = QQ^3
V = QQ^4
v1 = vector(QQ, [-1, 1, 1, 2])
v2 = vector(QQ, [ 0, 3, 1, 3])
v3 = vector(QQ, [ 2, 7, 1, 5])
T = linear_transformation(U, V, [v1, v2, v3])
T
</input>
<output>Vector space morphism represented by the matrix:
[-1  1  1  2]
[ 0  3  1  3]
[ 2  7  1  5]
Domain: Vector space of dimension 3 over Rational Field
Codomain: Vector space of dimension 4 over Rational Field
</output>
</sage>

Notice that there is no requirement that the list of images (in Sage or in <acroref type="theorem" acro="LTDB" />) be a basis.  They do not even have to be different.  They could all be the zero vector (try it).<br /><br />
If we want to use an alternate basis for the domain, it is possible, but there are two caveats.  The first caveat is that we must be sure to provide a basis for the domain: Sage will give an error if the proposed basis is not linearly independent, and we are responsible for providing the right number of vectors (which should be easy).<br /><br />
We have seen that vector spaces can have alternate bases, which print as a <q>user basis.</q>  Here we will provide the domain with an alternate basis.  The relevant command will create a subspace, but for now we need to provide a big enough set to create the entire domain.  It is possible to use fewer linearly independent vectors and create a proper subspace, but then we would not be able to use this proper subspace to build the linear transformation we want.
<sage>
<input>u1 = vector(QQ, [ 1,  3, -4])
u2 = vector(QQ, [-1, -2,  3])
u3 = vector(QQ, [ 1,  1, -3])
U = (QQ^3).subspace_with_basis([u1, u2, u3])
U == QQ^3
</input>
<output>True
</output>
</sage>

<sage>
<input>U.basis_matrix()
</input>
<output>[ 1  3 -4]
[-1 -2  3]
[ 1  1 -3]
</output>
</sage>

<sage>
<input>U.echelonized_basis_matrix()
</input>
<output>[1 0 0]
[0 1 0]
[0 0 1]
</output>
</sage>

We can use this alternate version of <code>U</code> to create a linear transformation from specified images.  Superficially, there is nothing really special about our choices for <code>v1, v2, v3</code>.
<sage>
<input>V = QQ^4
v1 = vector(QQ, [-9, -18,  0, -9])
v2 = vector(QQ, [ 7,  14,  0,  7])
v3 = vector(QQ, [-7, -17, -1, -10])
</input>
</sage>

Now we create the linear transformation.  Here is the second caveat:  the matrix of the linear transformation is no longer that provided by <acroref type="theorem" acro="MLTCV" />.  It may be obvious where the matrix comes from, but a full understanding of its interpretation will have to wait until <acroref type="section" acro="MR" />.
<sage>
<input>S = linear_transformation(U, V, [v1, v2, v3])
S.matrix(side='right')
</input>
<output>[ -9   7  -7]
[-18  14 -17]
[  0   0  -1]
[ -9   7 -10]
</output>
</sage>

We suggested our choices for <code>v1, v2, v3</code> were <q>random.</q>  Not so <mdash /> the linear transformation <code>S</code> just created is equal to the linear transformation <code>T</code> above.  If you have run all the input in this subsection, in order, then you should be able to compare the <em>functions</em> <code>S</code> and <code>T</code>.  The next command should <em>always</em> produce <code>True</code>.
<sage>
<input>u = random_vector(QQ, 3)
T(u) == S(u)
</input>
<output>True
</output>
</sage>

Notice that <code>T == S</code> may not do what you expect here.  Instead, the linear transformation method <code>.is_equal_function()</code> will perform a conclusive check of equality of two linear transformations as functions.
<sage>
<input>T.is_equal_function(S)
</input>
<output>True
</output>
</sage>

Can you reproduce this example?  In other words, define some linear transformation, any way you like.  Then give the domain an alternate basis and concoct the correct images to create a second linear transformation (by the method of this subsection) which is equal to the first.


</sageadvice>
</subsection>

<subsection acro="PI">
<title>Pre-Images</title>

<p>The definition of a function requires that for each input in the domain there is <em>exactly</em> one output in the codomain.  However, the correspondence does not have to behave the other way around.  A member of the codomain might have many inputs from the domain that create it, or it may have none at all.  To formalize our discussion of this aspect of linear transformations, we define the pre-image.</p>

<definition acro="PI" index="pre-image">
<title>Pre-Image</title>
<p>Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation.  For each $\vect{v}$, define the <define>pre-image</define> of $\vect{v}$ to be the subset of $U$ given by
<equation>
\preimage{T}{\vect{v}}=\setparts{\vect{u}\in U}{\lt{T}{\vect{u}}=\vect{v}}
</equation>
</p>

</definition>

<p>In other words, $\preimage{T}{\vect{v}}$ is the set of all those vectors in the domain $U$ that get <q>sent</q> to the vector $\vect{v}$.</p>

<!--  TODO:  All preimages form a partition of $U$, an equivalence relation is about.  Maybe to exercises. -->
<example acro="SPIAS" index="pre-images">
<title>Sample pre-images, Archetype S</title>

<p><acroref type="archetype" acro="S" /> is the linear transformation defined by
<equation>
<archetypepart acro="S" part="ltdefn" /></equation>
</p>

<p>We could compute a pre-image for every element of the codomain $M_{22}$.  However, even in a free textbook, we do not have the room to do that, so we will compute just two.</p>

<p>Choose
<equation>
\vect{v}=
\begin{bmatrix}
<![CDATA[2&1\\3&2]]>
\end{bmatrix}
\in M_{22}
</equation>
for no particular reason.  What is $\preimage{T}{\vect{v}}$?  Suppose $\vect{u}=\colvector{u_1\\u_2\\u_3}\in\preimage{T}{\vect{v}}$.  The condition that $\lt{T}{\vect{u}}=\vect{v}$ becomes
<equation>
\begin{bmatrix}
<![CDATA[2&1\\3&2]]>
\end{bmatrix}
=\vect{v}
=\lt{T}{\vect{u}}
=\lt{T}{\colvector{u_1\\u_2\\u_3}}\\
=\begin{bmatrix}
<![CDATA[u_1-u_2&2u_1+2u_2+u_3\\]]>
<![CDATA[3u_1+u_2+u_3&-2u_1-6u_2-2u_3]]>
\end{bmatrix}
</equation>
</p>

<p>Using matrix equality (<acroref type="definition" acro="ME" />), we arrive at a system of four equations in the three unknowns $u_1,\,u_2,\,u_3$ with an augmented matrix that we can row-reduce in the hunt for solutions,
<equation>
\begin{bmatrix}
<![CDATA[1 & -1 & 0 & 2\\]]>
<![CDATA[2 & 2 & 1 & 1\\]]>
<![CDATA[3 & 1 & 1 & 3\\]]>
<![CDATA[-2 & -6 & -2 & 2]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & \frac{1}{4} &  \frac{5}{4}\\]]>
<![CDATA[0 & \leading{1} & \frac{1}{4} &  -\frac{3}{4}\\]]>
<![CDATA[0 & 0 & 0 &  0\\]]>
<![CDATA[0 & 0 & 0 &  0]]>
\end{bmatrix}
</equation>
</p>

<p>We recognize this system as having infinitely many solutions described by the single free variable $u_3$.  Eventually obtaining the vector form of the solutions (<acroref type="theorem" acro="VFSLS" />), we can describe the preimage precisely as,
<alignmath>
<![CDATA[\preimage{T}{\vect{v}}&=\setparts{\vect{u}\in\complex{3}}{\lt{T}{\vect{u}}=\vect{v}}\\]]>
<![CDATA[&=\setparts{\colvector{u_1\\u_2\\u_3}}{u_1=\frac{5}{4}-\frac{1}{4}u_3,\,u_2=-\frac{3}{4}-\frac{1}{4}u_3}\\]]>
<![CDATA[&=\setparts{\colvector{\frac{5}{4}-\frac{1}{4}u_3\\-\frac{3}{4}-\frac{1}{4}u_3\\u_3}}{u_3\in\complex{\null}}\\]]>
<![CDATA[&=\setparts{\colvector{\frac{5}{4}\\-\frac{3}{4}\\0}+u_3\colvector{-\frac{1}{4}\\-\frac{1}{4}\\1}}{u_3\in\complex{\null}}\\]]>
<![CDATA[&=\colvector{\frac{5}{4}\\-\frac{3}{4}\\0}+\spn{\set{\colvector{-\frac{1}{4}\\-\frac{1}{4}\\1}}}]]>
</alignmath>
</p>

<p>This last line is merely a suggestive way of describing the set on the previous line.  You might create three or four vectors in the preimage, and evaluate $T$ with each.  Was the result what you expected?  For a hint of things to come, you might try evaluating $T$ with just the lone vector in the spanning set above.  What was the result?  Now take a look back at <acroref type="theorem" acro="PSPHS" />.  Hmmmm.</p>

<p>OK, let's compute another preimage, but with a different outcome this time.
Choose
<equation>
\vect{v}=
\begin{bmatrix}
<![CDATA[1&1\\2&4]]>
\end{bmatrix}
\in M_{22}
</equation>
</p>

<p>What is $\preimage{T}{\vect{v}}$?  Suppose $\vect{u}=\colvector{u_1\\u_2\\u_3}\in\preimage{T}{\vect{v}}$.  The condition that $\lt{T}{\vect{u}}=\vect{v}$ becomes
<equation>
\begin{bmatrix}
<![CDATA[1&1\\2&4]]>
\end{bmatrix}
=\vect{v}
=\lt{T}{\vect{u}}
=\lt{T}{\colvector{u_1\\u_2\\u_3}}\\
=\begin{bmatrix}
<![CDATA[u_1-u_2&2u_1+2u_2+u_3\\]]>
<![CDATA[3u_1+u_2+u_3&-2u_1-6u_2-2u_3]]>
\end{bmatrix}
</equation>
</p>

<p>Using matrix equality (<acroref type="definition" acro="ME" />), we arrive at a system of four equations in the three unknowns $u_1,\,u_2,\,u_3$ with an augmented matrix that we can row-reduce in the hunt for solutions,
<equation>
\begin{bmatrix}
<![CDATA[1 & -1 & 0 & 1\\]]>
<![CDATA[2 & 2 & 1 & 1\\]]>
<![CDATA[3 & 1 & 1 & 2\\]]>
<![CDATA[-2 & -6 & -2 & 4]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[\leading{1} & 0 & \frac{1}{4} &  0\\]]>
<![CDATA[0 & \leading{1} & \frac{1}{4} &  0\\]]>
<![CDATA[0 & 0 & 0 &  \leading{1}\\]]>
<![CDATA[0 & 0 & 0 &  0]]>
\end{bmatrix}
</equation>
</p>

<p>By <acroref type="theorem" acro="RCLS" /> we recognize this system as inconsistent.  So no vector $\vect{u}$ is a member of $\preimage{T}{\vect{v}}$ and so
<equation>
\preimage{T}{\vect{v}}=\emptyset
</equation>
</p>

</example>
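The description of the first preimage above is easy to check numerically.  Here is a minimal plain-Python sketch (not Sage; the names <code>T</code>, <code>p</code> and <code>d</code> are ours, chosen for illustration) that evaluates the linear transformation of Archetype S, with its $2\times 2$ output flattened into a 4-tuple, at several members of the preimage.

```python
from fractions import Fraction as F

# Archetype S applied to u = (u1, u2, u3); the 2x2 output matrix is
# flattened row-by-row into a 4-tuple
def T(u1, u2, u3):
    return (u1 - u2, 2*u1 + 2*u2 + u3,
            3*u1 + u2 + u3, -2*u1 - 6*u2 - 2*u3)

v = (2, 1, 3, 2)   # the first target matrix, flattened

# Particular solution and spanning vector read off from the row-reduced system
p = (F(5, 4), F(-3, 4), F(0))
d = (F(-1, 4), F(-1, 4), F(1))

# Any choice of the free variable u3 should land on v
for u3 in (F(0), F(1), F(-7), F(1, 3)):
    u = tuple(pi + u3*di for pi, di in zip(p, d))
    assert T(*u) == v

# The lone spanning vector by itself is sent to the zero matrix
assert T(*d) == (0, 0, 0, 0)
```

The last assertion is the hint promised in the example: the spanning vector lies in the set of vectors sent to the zero vector of the codomain, foreshadowing Theorem PSPHS.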

<p>The preimage is just a set; it is almost never a subspace of $U$ (you might think about just when $\preimage{T}{\vect{v}}$ is a subspace, see <acroref type="exercise" acro="ILT.T10" />).  We will describe its properties going forward, and it will be central to the main ideas of this chapter.</p>

<sageadvice acro="PI" index="pre-images">
<title>Pre-Images</title>
Sage handles pre-images just a bit differently than our approach in the text.  For the moment, we can obtain a single vector in the set that is the pre-image via the <code>.preimage_representative()</code> method.  Understand that this method will return <em>just one</em> element of the pre-image set, and we have no real control over which one.  Also, it is certainly possible that a pre-image is the empty set <mdash /> in this case, the method will raise a <code>ValueError</code>.  We will use our running example to illustrate.
<sage>
<input>A = matrix(QQ, [[-1, 0, 2],
                [ 1, 3, 7],
                [ 1, 1, 1],
                [ 2, 3, 5]])
T = linear_transformation(QQ^3, QQ^4, A, side='right')
v = vector(QQ, [1, 2, 0, 1])
u = T.preimage_representative(v)
u
</input>
<output>(-1, 1, 0)
</output>
</sage>

<sage>
<input>T(u) == v
</input>
<output>True
</output>
</sage>

<sage>
<input>T.preimage_representative(vector(QQ, [1, 2, 1, 1]))
</input>
<output>Traceback (most recent call last):
...
ValueError: element is not in the image
</output>
</sage>

Remember, we have defined the pre-image as a set, and Sage just gives us a single element of the set.  We will see in <acroref type="sage" acro="ILT" /> that the upcoming <acroref type="theorem" acro="KPI" /> explains why this is no great shortcoming in Sage.


</sageadvice>
</subsection>

<subsection acro="NLTFO">
<title>New Linear Transformations From Old</title>

<p>We can combine linear transformations in natural ways to create new linear transformations.  So we will define these combinations and then prove that the results really are still linear transformations.  First the sum of two linear transformations.</p>

<definition acro="LTA" index="linear transformation!addition">
<title>Linear Transformation Addition</title>
<p>Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{U}{V}$ are two linear transformations with the same domain and codomain.  Then their <define>sum</define> is the function $\ltdefn{T+S}{U}{V}$ whose outputs are defined by
<equation>
\lt{(T+S)}{\vect{u}}=\lt{T}{\vect{u}}+\lt{S}{\vect{u}}
</equation>
</p>

</definition>

<p>Notice that the first plus sign in the definition is the operation being defined, while the second one is the vector addition in $V$.  (Vector addition in $U$ will appear shortly, in the proof that $T+S$ is a linear transformation.)  <acroref type="definition" acro="LTA" /> only provides a function.  It would be nice to know that when the constituents ($T$, $S$) are linear transformations, then so too is $T+S$.</p>

<theorem acro="SLTLT" index="linear transformation!addition">
<title>Sum of Linear Transformations is a Linear Transformation</title>
<statement>
<p>Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{U}{V}$ are two linear transformations with the same domain and codomain.  Then $\ltdefn{T+S}{U}{V}$ is a linear transformation.</p>

</statement>

<proof>
<p>We simply check the defining properties of a linear transformation (<acroref type="definition" acro="LT" />).  This is a good place to consistently ask yourself which objects are being combined with which operations.
<alignmath>
<![CDATA[\lt{(T+S)}{\vect{x}+\vect{y}}&=]]>
<![CDATA[\lt{T}{\vect{x}+\vect{y}}+\lt{S}{\vect{x}+\vect{y}}&&]]>\text{<acroref type="definition" acro="LTA" />}\\
<![CDATA[&=\lt{T}{\vect{x}}+\lt{T}{\vect{y}}+\lt{S}{\vect{x}}+\lt{S}{\vect{y}}&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=\lt{T}{\vect{x}}+\lt{S}{\vect{x}}+\lt{T}{\vect{y}}+\lt{S}{\vect{y}}&&]]>\text{<acroref type="property" acro="C" /> in $V$}\\
<![CDATA[&=\lt{(T+S)}{\vect{x}}+\lt{(T+S)}{\vect{y}}&&]]>\text{<acroref type="definition" acro="LTA" />}\\
<intertext>and</intertext>
<![CDATA[\lt{(T+S)}{\alpha\vect{x}}&=]]>
<![CDATA[\lt{T}{\alpha\vect{x}}+\lt{S}{\alpha\vect{x}}&&]]>\text{<acroref type="definition" acro="LTA" />}\\
<![CDATA[&=\alpha\lt{T}{\vect{x}}+\alpha\lt{S}{\vect{x}}&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=\alpha\left(\lt{T}{\vect{x}}+\lt{S}{\vect{x}}\right)&&]]>\text{<acroref type="property" acro="DVA" /> in $V$}\\
<![CDATA[&=\alpha\lt{(T+S)}{\vect{x}}&&]]>\text{<acroref type="definition" acro="LTA" />}\\
</alignmath>
</p>

</proof>
</theorem>

<example acro="STLT" index="linear transformation!sum">
<title>Sum of two linear transformations</title>

<p>Suppose that $\ltdefn{T}{\complex{2}}{\complex{3}}$ and $\ltdefn{S}{\complex{2}}{\complex{3}}$ are defined by
<alignmath>
\lt{T}{\colvector{x_1\\x_2}}=\colvector{x_1+2x_2\\3x_1-4x_2\\5x_1+2x_2}
<![CDATA[&&]]>
\lt{S}{\colvector{x_1\\x_2}}=\colvector{4x_1-x_2\\x_1+3x_2\\-7x_1+5x_2}
</alignmath>
</p>

<p>Then by <acroref type="definition" acro="LTA" />, we have
<equation>
\lt{(T+S)}{\colvector{x_1\\x_2}}
=
\lt{T}{\colvector{x_1\\x_2}}+\lt{S}{\colvector{x_1\\x_2}}
=
\colvector{x_1+2x_2\\3x_1-4x_2\\5x_1+2x_2}+
\colvector{4x_1-x_2\\x_1+3x_2\\-7x_1+5x_2}
=
\colvector{5x_1+x_2\\4x_1-x_2\\-2x_1+7x_2}
</equation>
and by <acroref type="theorem" acro="SLTLT" /> we know $T+S$ is also a linear transformation from $\complex{2}$ to $\complex{3}$.
</p>

</example>
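The formula for $T+S$ in the previous example can be spot-checked by evaluating each side at a few inputs.  A minimal plain-Python sketch (not Sage; the function names are ours):

```python
# T and S from Example STLT, and the claimed formula for their sum
def T(x1, x2):
    return (x1 + 2*x2, 3*x1 - 4*x2, 5*x1 + 2*x2)

def S(x1, x2):
    return (4*x1 - x2, x1 + 3*x2, -7*x1 + 5*x2)

def TplusS(x1, x2):
    return (5*x1 + x2, 4*x1 - x2, -2*x1 + 7*x2)

# Definition LTA says (T+S)(u) = T(u) + S(u), component by component
for x1, x2 in [(1, 0), (0, 1), (3, -2), (5, 7)]:
    componentwise = tuple(t + s for t, s in zip(T(x1, x2), S(x1, x2)))
    assert componentwise == TplusS(x1, x2)
```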

<definition acro="LTSM" index="linear transformation!scalar multiplication">
<title>Linear Transformation Scalar Multiplication</title>
<p>Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation and $\alpha\in\complex{\null}$.  Then the <define>scalar multiple</define> is the function $\ltdefn{\alpha T}{U}{V}$ whose outputs are defined by
<equation>
\lt{(\alpha T)}{\vect{u}}=\alpha\lt{T}{\vect{u}}
</equation>
</p>

</definition>

<p>Given that $T$ is a linear transformation, it would be nice to know that $\alpha T$ is also a linear transformation.</p>

<theorem acro="MLTLT" index="linear transformation!scalar multiplication">
<title>Multiple of a Linear Transformation is a Linear Transformation</title>
<statement>
<p>Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation and $\alpha\in\complex{\null}$.  Then $\ltdefn{(\alpha T)}{U}{V}$ is a linear transformation.</p>

</statement>

<proof>
<p>We simply check the defining properties of a linear transformation (<acroref type="definition" acro="LT" />).  This is another good place to consistently ask yourself which objects are being combined with which operations.
<alignmath>
<![CDATA[\lt{(\alpha T)}{\vect{x}+\vect{y}}&=]]>
<![CDATA[\alpha\left(\lt{T}{\vect{x}+\vect{y}}\right)&&]]>\text{<acroref type="definition" acro="LTSM" />}\\
<![CDATA[&=\alpha\left(\lt{T}{\vect{x}}+\lt{T}{\vect{y}}\right)&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=\alpha\lt{T}{\vect{x}}+\alpha\lt{T}{\vect{y}}&&]]>\text{<acroref type="property" acro="DVA" /> in $V$}\\
<![CDATA[&=\lt{(\alpha T)}{\vect{x}}+\lt{(\alpha T)}{\vect{y}}&&]]>\text{<acroref type="definition" acro="LTSM" />}\\
<intertext>and</intertext>
<![CDATA[\lt{(\alpha T)}{\beta\vect{x}}&=]]>
<![CDATA[\alpha\lt{T}{\beta\vect{x}}&&]]>\text{<acroref type="definition" acro="LTSM" />}\\
<![CDATA[&=\alpha\left(\beta\lt{T}{\vect{x}}\right)&&]]>\text{<acroref type="definition" acro="LT" />}\\
<![CDATA[&=\left(\alpha\beta\right)\lt{T}{\vect{x}}&&]]>\text{<acroref type="property" acro="SMA" /> in $V$}\\
<![CDATA[&=\left(\beta\alpha\right)\lt{T}{\vect{x}}&&\text{Commutativity in $\complex{}$}\\]]>
<![CDATA[&=\beta\left(\alpha\lt{T}{\vect{x}}\right)&&]]>\text{<acroref type="property" acro="SMA" /> in $V$}\\
<![CDATA[&=\beta\left(\lt{(\alpha T)}{\vect{x}}\right)&&]]>\text{<acroref type="definition" acro="LTSM" />}\\
</alignmath>
</p>

</proof>
</theorem>

<example acro="SMLT" index="linear transformation!scalar multiple">
<title>Scalar multiple of a linear transformation</title>

<p>Suppose that $\ltdefn{T}{\complex{4}}{\complex{3}}$ is defined by
<equation>
\lt{T}{\colvector{x_1\\x_2\\x_3\\x_4}}=
\colvector{x_1+2x_2-x_3+2x_4\\x_1+5x_2-3x_3+x_4\\-2x_1+3x_2-4x_3+2x_4}
</equation>
</p>

<p>For the sake of an example, choose $\alpha=2$, so by <acroref type="definition" acro="LTSM" />, we have
<equation>
\lt{\alpha T}{\colvector{x_1\\x_2\\x_3\\x_4}}
=
2\lt{T}{\colvector{x_1\\x_2\\x_3\\x_4}}
=
2\colvector{x_1+2x_2-x_3+2x_4\\x_1+5x_2-3x_3+x_4\\-2x_1+3x_2-4x_3+2x_4}
=
\colvector{2x_1+4x_2-2x_3+4x_4\\2x_1+10x_2-6x_3+2x_4\\-4x_1+6x_2-8x_3+4x_4}
</equation>
and by <acroref type="theorem" acro="MLTLT" /> we know $2T$ is also a linear transformation from $\complex{4}$ to $\complex{3}$.</p>

</example>
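As with the sum, the formula for $2T$ can be spot-checked numerically.  A minimal plain-Python sketch (not Sage; the function names are ours):

```python
# T from Example SMLT, and the claimed formula for 2T
def T(x1, x2, x3, x4):
    return (x1 + 2*x2 - x3 + 2*x4,
            x1 + 5*x2 - 3*x3 + x4,
            -2*x1 + 3*x2 - 4*x3 + 2*x4)

def twoT(x1, x2, x3, x4):
    return (2*x1 + 4*x2 - 2*x3 + 4*x4,
            2*x1 + 10*x2 - 6*x3 + 2*x4,
            -4*x1 + 6*x2 - 8*x3 + 4*x4)

# Definition LTSM says (2T)(u) = 2 * T(u), component by component
for x in [(1, 0, 0, 0), (0, 1, 0, 0), (2, -1, 3, 4)]:
    assert tuple(2*t for t in T(*x)) == twoT(*x)
```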

<p>Now, let's imagine we have two vector spaces, $U$ and $V$, and we collect every possible linear transformation from $U$ to $V$ into one big set, and call it $\vslt{U}{V}$.  <acroref type="definition" acro="LTA" /> and <acroref type="definition" acro="LTSM" /> tell us how we can <q>add</q> and <q>scalar multiply</q> two elements of $\vslt{U}{V}$.  <acroref type="theorem" acro="SLTLT" /> and <acroref type="theorem" acro="MLTLT" /> tell us that if we do these operations, then the resulting functions are linear transformations that are also in $\vslt{U}{V}$.   Hmmmm, sounds like a vector space to me!  A set of objects, an addition and a scalar multiplication.  Why not?</p>

<theorem acro="VSLT" index="vector space!linear transformations">
<title>Vector Space of Linear Transformations</title>
<statement>
<indexlocation index="linear transformation!vector space of" />
<p>Suppose that $U$ and $V$ are vector spaces.  Then the set of all linear transformations from $U$ to $V$, $\vslt{U}{V}$, is a vector space when the operations are those given in <acroref type="definition" acro="LTA" /> and <acroref type="definition" acro="LTSM" />.</p>

</statement>

<proof>
<p><acroref type="theorem" acro="SLTLT" /> and <acroref type="theorem" acro="MLTLT" /> provide two of the ten properties in <acroref type="definition" acro="VS" />.  However, we still need to verify the remaining eight properties.  By and large, the proofs are straightforward and rely on concocting the obvious object, or on reducing the question to the same vector space property in the vector space $V$.</p>

<p>The zero vector is of some interest, though. What linear transformation would we add to any other linear transformation, so as to keep the second one unchanged?  The answer is $\ltdefn{Z}{U}{V}$ defined by $\lt{Z}{\vect{u}}=\zerovector_V$ for every $\vect{u}\in U$.  Notice how we do not need to know any of the specifics about $U$ and $V$ to make this definition of $Z$.</p>

</proof>
</theorem>
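The zero transformation $Z$ from the proof really does act as an additive identity.  A minimal plain-Python sketch (not Sage), reusing $T$ from Example STLT as a stand-in for an arbitrary linear transformation:

```python
# An arbitrary linear transformation (here, T from Example STLT) ...
def T(x1, x2):
    return (x1 + 2*x2, 3*x1 - 4*x2, 5*x1 + 2*x2)

# ... and the zero transformation Z: every input goes to the zero vector of V
def Z(x1, x2):
    return (0, 0, 0)

# Adding Z (via Definition LTA) leaves T unchanged at every input we try
for x in [(1, 0), (0, 1), (4, -3), (7, 11)]:
    assert tuple(t + z for t, z in zip(T(*x), Z(*x))) == T(*x)
```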

<definition acro="LTC" index="linear transformation!composition">
<title>Linear Transformation Composition</title>
<p>Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{V}{W}$ are linear transformations.  Then the <define>composition</define> of $S$ and $T$ is the function $\ltdefn{(\compose{S}{T})}{U}{W}$ whose outputs are defined by
<equation>
\lt{(\compose{S}{T})}{\vect{u}}=\lt{S}{\lt{T}{\vect{u}}}
</equation>
</p>

</definition>

<p>Given that $T$ and $S$ are linear transformations, it would be nice to know that $\compose{S}{T}$ is also a linear transformation.</p>

<theorem acro="CLTLT" index="linear transformation!composition">
<title>Composition of Linear Transformations is a Linear Transformation</title>
<statement>
<p>Suppose that $\ltdefn{T}{U}{V}$ and $\ltdefn{S}{V}{W}$ are linear transformations.  Then $\ltdefn{(\compose{S}{T})}{U}{W}$ is a linear transformation.</p>

</statement>

<proof>
<p>We simply check the defining properties of a linear transformation (<acroref type="definition" acro="LT" />).
<alignmath>
\lt{(\compose{S}{T})}{\vect{x}+\vect{y}}
<![CDATA[&=\lt{S}{\lt{T}{\vect{x}+\vect{y}}}&&]]>\text{<acroref type="definition" acro="LTC" />}\\
<![CDATA[&=\lt{S}{\lt{T}{\vect{x}}+\lt{T}{\vect{y}}}&&]]>\text{<acroref type="definition" acro="LT" /> for $T$}\\
<![CDATA[&=\lt{S}{\lt{T}{\vect{x}}}+\lt{S}{\lt{T}{\vect{y}}}&&]]>\text{<acroref type="definition" acro="LT" /> for $S$}\\
<![CDATA[&=\lt{(\compose{S}{T})}{\vect{x}}+\lt{(\compose{S}{T})}{\vect{y}}&&]]>\text{<acroref type="definition" acro="LTC" />}
<intertext>and</intertext>
\lt{(\compose{S}{T})}{\alpha\vect{x}}
<![CDATA[&=\lt{S}{\lt{T}{\alpha\vect{x}}}&&]]>\text{<acroref type="definition" acro="LTC" />}\\
<![CDATA[&=\lt{S}{\alpha\lt{T}{\vect{x}}}&&]]>\text{<acroref type="definition" acro="LT" /> for $T$}\\
<![CDATA[&=\alpha\lt{S}{\lt{T}{\vect{x}}}&&]]>\text{<acroref type="definition" acro="LT" /> for $S$}\\
<![CDATA[&=\alpha\lt{(\compose{S}{T})}{\vect{x}}&&]]>\text{<acroref type="definition" acro="LTC" />}
</alignmath>
</p>

</proof>
</theorem>

<example acro="CTLT" index="linear transformations!compositions">
<title>Composition of two linear transformations</title>

<p>Suppose that $\ltdefn{T}{\complex{2}}{\complex{4}}$ and $\ltdefn{S}{\complex{4}}{\complex{3}}$ are defined by
<alignmath>
\lt{T}{\colvector{x_1\\x_2}}=\colvector{x_1+2x_2\\3x_1-4x_2\\5x_1+2x_2\\6x_1-3x_2}
<![CDATA[&&]]>
\lt{S}{\colvector{x_1\\x_2\\x_3\\x_4}}=
\colvector{2x_1-x_2+x_3-x_4\\5x_1-3x_2+8x_3-2x_4\\-4x_1+3x_2-4x_3+5x_4}
</alignmath></p>

<p>Then by <acroref type="definition" acro="LTC" />
<alignmath>
<![CDATA[\lt{(\compose{S}{T})}{\colvector{x_1\\x_2}}&=]]>
\lt{S}{\lt{T}{\colvector{x_1\\x_2}}}\\
<![CDATA[&=\lt{S}{\colvector{x_1+2x_2\\3x_1-4x_2\\5x_1+2x_2\\6x_1-3x_2}}\\]]>
<![CDATA[&=\colvector{]]>
2(x_1+2x_2)-(3x_1-4x_2)+(5x_1+2x_2)-(6x_1-3x_2)\\
5(x_1+2x_2)-3(3x_1-4x_2)+8(5x_1+2x_2)-2(6x_1-3x_2)\\
-4(x_1+2x_2)+3(3x_1-4x_2)-4(5x_1+2x_2)+5(6x_1-3x_2)
}\\
<![CDATA[&=\colvector{]]>
-2x_1+13x_2\\
24x_1+44x_2\\
15x_1-43x_2
}
</alignmath>
and by <acroref type="theorem" acro="CLTLT" /> $\compose{S}{T}$ is a linear transformation from $\complex{2}$ to $\complex{3}$.</p>

</example>

<p>Here is an interesting exercise that will presage an important result later.
In <acroref type="example" acro="STLT" /> compute (via <acroref type="theorem" acro="MLTCV" />) the matrix of  $T$, $S$ and $T+S$.  Do you see a relationship between these three matrices?</p>

<p>In <acroref type="example" acro="SMLT" /> compute (via <acroref type="theorem" acro="MLTCV" />) the matrix of  $T$ and  $2T$.  Do you see a relationship between these two matrices?</p>

<p>Here is the tough one.  In <acroref type="example" acro="CTLT" /> compute (via <acroref type="theorem" acro="MLTCV" />) the matrix of  $T$, $S$ and $\compose{S}{T}$.  Do you see a relationship between these three matrices?</p>
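One way to explore that last question numerically: the following plain-Python sketch (not Sage; <code>matmul</code> is a helper we introduce for illustration) checks, for the transformations of Example CTLT, that the matrix of $\compose{S}{T}$ is exactly the product of the matrix of $S$ with the matrix of $T$.

```python
# Naive matrix product of B (m x k) with A (k x n), as lists of rows
def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

A = [[1, 2], [3, -4], [5, 2], [6, -3]]                  # matrix of T (Example CTLT)
B = [[2, -1, 1, -1], [5, -3, 8, -2], [-4, 3, -4, 5]]    # matrix of S
C = [[-2, 13], [24, 44], [15, -43]]                     # matrix of S composed with T

# The matrix of the composition is the product of the two matrices
assert matmul(B, A) == C
```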

<sageadvice acro="OLT" index="linear transformation!operations on">
<title>Operations on Linear Transformations</title>
It is possible in Sage to add linear transformations (<acroref type="definition" acro="LTA" />), multiply them by scalars (<acroref type="definition" acro="LTSM" />) and compose (<acroref type="definition" acro="LTC" />) them.  Then <acroref type="theorem" acro="SLTLT" />, <acroref type="theorem" acro="MLTLT" />, and <acroref type="theorem" acro="CLTLT" /> (respectively) tell us the results are again linear transformations.  Here are some examples:
<sage>
<input>U = QQ^4
V = QQ^2
A = matrix(QQ, 2, 4, [[-1, 3, 4,  5],
                      [ 2, 0, 3, -1]])
T = linear_transformation(U, V, A, side='right')
B = matrix(QQ, 2, 4, [[-7, 4, -2,  0],
                      [ 1, 1,  8, -3]])
S = linear_transformation(U, V, B, side='right')
P = S + T
P
</input>
<output>Vector space morphism represented by the matrix:
[-8  3]
[ 7  1]
[ 2 11]
[ 5 -4]
Domain: Vector space of dimension 4 over Rational Field
Codomain: Vector space of dimension 2 over Rational Field
</output>
</sage>

<sage>
<input>Q = S*5
Q
</input>
<output>Vector space morphism represented by the matrix:
[-35   5]
[ 20   5]
[-10  40]
[  0 -15]
Domain: Vector space of dimension 4 over Rational Field
Codomain: Vector space of dimension 2 over Rational Field
</output>
</sage>

Perhaps the only surprise in all this is the necessity of writing scalar multiplication on the right of the linear transformation (rather than on the left, as we do in the text).  We will recycle the linear transformation <code>T</code> from above and redefine <code>S</code> to form an example of composition.
<sage>
<input>W = QQ^3
C = matrix(QQ, [[ 4, -2],
                [-1,  3],
                [-3,  2]])
S = linear_transformation(V, W, C, side='right')
R = S*T
R
</input>
<output>Vector space morphism represented by the matrix:
[ -8   7   7]
[ 12  -3  -9]
[ 10   5  -6]
[ 22  -8 -17]
Domain: Vector space of dimension 4 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field
</output>
</sage>

We use the star symbol (<code>*</code>) to indicate composition of linear transformations.  Notice that the order of the two linear transformations we compose is important, and Sage's order agrees with the text.  The order does not have to agree, and there are good arguments to have it reversed, so be careful if you read about composition elsewhere.<br /><br />
This is a good place to expand on <acroref type="theorem" acro="VSLT" />, which says that with definitions of addition and scalar multiplication of linear transformations we then arrive at a vector space.  A vector space full of linear transformations.  Objects in Sage have <q>parents</q> <mdash /> vectors have vector spaces for parents, fractions of integers have the rationals as parents.  What is the parent of a linear transformation?  Let's see, by investigating the parent of <code>S</code> just defined above.
<sage>
<input>P = S.parent()
P
</input>
<output>Set of Morphisms (Linear Transformations) from
Vector space of dimension 2 over Rational Field to
Vector space of dimension 3 over Rational Field
</output>
</sage>

<q>Morphism</q> is a general term for a function that <q>preserves structure</q> or <q>respects operations.</q>  In Sage a collection of morphisms is referenced as a <q>homset</q> or a <q>homspace.</q>  In this example, we have a homset that is the vector space of linear transformations that go from a dimension 2 vector space over the rationals to a dimension 3 vector space over the rationals.  What can we do with it?  A few things, but not everything you might imagine.  It does have a basis, containing a few very simple linear transformations:
<sage>
<input>P.basis()
</input>
<output>(Vector space morphism represented by the matrix:
[1 0 0]
[0 0 0]
Domain: Vector space of dimension 2 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field,
Vector space morphism represented by the matrix:
[0 1 0]
[0 0 0]
Domain: Vector space of dimension 2 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field,
Vector space morphism represented by the matrix:
[0 0 1]
[0 0 0]
Domain: Vector space of dimension 2 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field,
Vector space morphism represented by the matrix:
[0 0 0]
[1 0 0]
Domain: Vector space of dimension 2 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field,
Vector space morphism represented by the matrix:
[0 0 0]
[0 1 0]
Domain: Vector space of dimension 2 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field,
Vector space morphism represented by the matrix:
[0 0 0]
[0 0 1]
Domain: Vector space of dimension 2 over Rational Field
Codomain: Vector space of dimension 3 over Rational Field)
</output>
</sage>

You can create a set of linear transformations with the <code>Hom()</code> function, simply by giving the domain and codomain.
<sage>
<input>H = Hom(QQ^6, QQ^9)
H
</input>
<output>Set of Morphisms (Linear Transformations) from
Vector space of dimension 6 over Rational Field to
Vector space of dimension 9 over Rational Field
</output>
</sage>

An understanding of Sage's homsets is not critical to understanding the use of Sage during the remainder of this course.  But such an understanding can be very useful in understanding some of Sage's more advanced and powerful features.


</sageadvice>
</subsection>

<!--   End of  lt.tex -->
<readingquestions>
<ol>
<li>Is the function below a linear transformation?  Why or why not?
<equation>
\ltdefn{T}{\complex{3}}{\complex{2}},\quad
\lt{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{3x_1-x_2+x_3\\8x_2-6}
</equation>
</li>
<li>Determine the matrix representation of the linear transformation $S$ below.
<equation>
\ltdefn{S}{\complex{2}}{\complex{3}},\quad
\lt{S}{\colvector{x_1\\x_2}}=\colvector{3x_1+5x_2\\8x_1-3x_2\\-4x_1}
</equation>
</li>
<li><acroref type="theorem" acro="LTLC" /> has a fairly simple proof.  Yet the result itself is very powerful.  Comment on why we might say this.
</li></ol>
</readingquestions>

<exercisesubsection>

<exercise type="C" number="15" rough="matrix reps for column vector lin transf">
<problem contributor="robertbeezer">The archetypes below are all linear transformations whose domains and codomains are vector spaces of column vectors (<acroref type="definition" acro="VSCV" />).  For each one, compute the matrix representation described in the proof of <acroref type="theorem" acro="MLTCV" />.<br /><br />
<acroref type="archetype" acro="M" />,
<acroref type="archetype" acro="N" />,
<acroref type="archetype" acro="O" />,
<acroref type="archetype" acro="P" />,
<acroref type="archetype" acro="Q" />,
<acroref type="archetype" acro="R" />
</problem>
</exercise>

<exercise type="C" number="16" rough="find matrix rep R3 --> R4, given formula">
<problem contributor="chrisblack">Find the matrix representation of $\ltdefn{T}{\complex{3}}{\complex{4}}$ given by
$\lt{T}{\colvector{x\\y\\z}} = \colvector{3x + 2y + z\\ x + y + z \\ x - 3y \\2x + 3y + z }$.
</problem>
<solution contributor="chrisblack"><![CDATA[Answer: $A_T = \begin{bmatrix} 3 & 2 & 1\\ 1 & 1 & 1\\ 1 & -3  & 0\\ 2 & 3 & 1\end{bmatrix}$.]]>
</solution>
</exercise>

<exercise type="C" number="20" rough="lin trans value twice for Example MOLT">
<problem contributor="robertbeezer">Let $\vect{w}=\colvector{-3\\1\\4}$.  Referring to <acroref type="example" acro="MOLT" />, compute $\lt{S}{\vect{w}}$ two different ways.  First use the definition of $S$, then compute the matrix-vector product $C\vect{w}$ (<acroref type="definition" acro="MVP" />).
</problem>
<solution contributor="robertbeezer">In both cases the result will be $\lt{S}{\vect{w}}=\colvector{9\\2\\-9\\4}$.
</solution>
</exercise>

<exercise type="C" number="25" rough="verify a l.t.">
<problem contributor="robertbeezer">Define the linear transformation
<equation>
\ltdefn{T}{\complex{3}}{\complex{2}},\quad
\lt{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}
</equation>
Verify that $T$ is a linear transformation.
</problem>
<solution contributor="robertbeezer">We can rewrite $T$ as follows:
<equation>
\lt{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}
=x_1\colvector{2\\-4}+x_2\colvector{-1\\2}+x_3\colvector{5\\-10}
=\begin{bmatrix}
<![CDATA[2 & -1 & 5\\]]>
<![CDATA[-4 & 2 & -10]]>
\end{bmatrix}
\colvector{x_1\\x_2\\x_3}
</equation>
and <acroref type="theorem" acro="MBLT" /> tells us that any function of this form is a linear transformation.
</solution>
</exercise>

<exercise type="C" number="26" rough="verify a l.t.">
<problem contributor="robertbeezer">Verify that the function below is a linear transformation.
<equation>
\ltdefn{T}{P_2}{\complex{2}},\quad \lt{T}{a+bx+cx^2}=\colvector{2a-b\\b+c}
</equation>
</problem>
<solution contributor="robertbeezer">Check the two conditions of <acroref type="definition" acro="LT" />.
<alignmath>
\lt{T}{\vect{u}+\vect{v}}
<![CDATA[&=\lt{T}{\left(a+bx+cx^2\right)+\left(d+ex+fx^2\right)}\\]]>
<![CDATA[&=\lt{T}{\left(a+d\right)+\left(b+e\right)x+\left(c+f\right)x^2}\\]]>
<![CDATA[&=\colvector{2(a+d)-(b+e)\\(b+e)+(c+f)}\\]]>
<![CDATA[&=\colvector{(2a-b)+(2d-e)\\(b+c)+(e+f)}\\]]>
<![CDATA[&=\colvector{2a-b\\b+c}+\colvector{2d-e\\e+f}\\]]>
<![CDATA[&=\lt{T}{\vect{u}}+\lt{T}{\vect{v}}]]>
<intertext>and</intertext>
\lt{T}{\alpha\vect{u}}
<![CDATA[&=\lt{T}{\alpha\left(a+bx+cx^2\right)}\\]]>
<![CDATA[&=\lt{T}{\left(\alpha a\right)+\left(\alpha b\right)x+\left(\alpha c\right)x^2}\\]]>
<![CDATA[&=\colvector{2(\alpha a)-(\alpha b)\\(\alpha b)+(\alpha c)}\\]]>
<![CDATA[&=\colvector{\alpha(2a-b)\\\alpha(b+c)}\\]]>
<![CDATA[&=\alpha\colvector{2a-b\\b+c}\\]]>
<![CDATA[&=\alpha\lt{T}{\vect{u}}]]>
</alignmath>
So $T$ is indeed a linear transformation.
</solution>
</exercise>

<exercise type="C" number="30" rough="compute two pre-images, one empty, one big">
<problem contributor="robertbeezer">Define the linear transformation
<equation>
\ltdefn{T}{\complex{3}}{\complex{2}},\quad
\lt{T}{\colvector{x_1\\x_2\\x_3}}=\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}
</equation>
Compute the preimages, $\preimage{T}{\colvector{2\\3}}$ and $\preimage{T}{\colvector{4\\-8}}$.
</problem>
<solution contributor="robertbeezer">For the first pre-image, we want $\vect{x}\in\complex{3}$ such that $\lt{T}{\vect{x}}=\colvector{2\\3}$.  This becomes,
<equation>
\colvector{2x_1-x_2+5x_3\\-4x_1+2x_2-10x_3}=\colvector{2\\3}
</equation>
Vector equality gives a system of two linear equations in three variables, represented by the augmented matrix
<equation>
\begin{bmatrix}
<![CDATA[2 & -1 & 5 & 2\\]]>
<![CDATA[-4 & 2 & -10 & 3]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[\leading{1} & -\frac{1}{2} & \frac{5}{2} & 0\\]]>
<![CDATA[0 & 0 & 0 & \leading{1}]]>
\end{bmatrix}
</equation>
so the system is inconsistent and the pre-image is the empty set.  For the second pre-image the same procedure leads to an augmented matrix with a different vector of constants
<equation>
\begin{bmatrix}
<![CDATA[2 & -1 & 5 & 4\\]]>
<![CDATA[-4 & 2 & -10 & -8]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[\leading{1} & -\frac{1}{2} & \frac{5}{2} & 2\\]]>
<![CDATA[0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
This system is consistent and has infinitely many solutions, as we can see from the presence of the two free variables ($x_2$ and $x_3$).  We apply <acroref type="theorem" acro="VFSLS" /> to obtain
<equation>
\preimage{T}{\colvector{4\\-8}}=
\setparts{
\colvector{2\\0\\0}+
x_2\colvector{\frac{1}{2}\\1\\0}+
x_3\colvector{-\frac{5}{2}\\0\\1}
}{
x_2,\,x_3\in\complex{\null}
}
</equation>
</solution>
</exercise>

<exercise type="C" number="31" rough="compute two pre-images, none and 1-D">
<problem contributor="robertbeezer">For the linear transformation $S$ compute the pre-images.
<alignmath>
\ltdefn{S}{\complex{3}}{\complex{3}},\quad \lt{S}{\colvector{a\\b\\c}}=
\colvector{a-2b-c\\3a-b+2c\\a+b+2c }
</alignmath>
<alignmath>
<![CDATA[\preimage{S}{\colvector{-2\\5\\3}}&]]>
<![CDATA[&]]>
<![CDATA[\preimage{S}{\colvector{-5\\5\\7}}&]]>
</alignmath>
</problem>
<solution contributor="robertbeezer">We work from the definition of the pre-image, <acroref type="definition" acro="PI" />.   Setting
<equation>
\lt{S}{\colvector{a\\b\\c}}=\colvector{-2\\5\\3}
</equation>
we arrive at a system of three equations in three variables, with an augmented matrix that we row-reduce in a search for solutions,
<equation>
\begin{bmatrix}
<![CDATA[ 1 & -2 & -1 & -2 \\]]>
<![CDATA[ 3 & -1 & 2 & 5 \\]]>
<![CDATA[ 1 & 1 & 2 & 3]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[ \leading{1} & 0 & 1 & 0 \\]]>
<![CDATA[ 0 & \leading{1} & 1 & 0 \\]]>
<![CDATA[ 0 & 0 & 0 & \leading{1}]]>
\end{bmatrix}
</equation>
With a leading 1 in the last column, this system is inconsistent (<acroref type="theorem" acro="RCLS" />), and there are no values of $a$, $b$ and $c$ that will create an element of the pre-image.  So the preimage is the empty set.<br /><br />
We work from the definition of the pre-image, <acroref type="definition" acro="PI" />.   Setting
<equation>
\lt{S}{\colvector{a\\b\\c}}=\colvector{-5\\5\\7}
</equation>
we arrive at a system of three equations in three variables, with an augmented matrix that we row-reduce in a search for solutions,
<equation>
\begin{bmatrix}
<![CDATA[ 1 & -2 & -1 & -5 \\]]>
<![CDATA[ 3 & -1 & 2 & 5 \\]]>
<![CDATA[ 1 & 1 & 2 & 7]]>
\end{bmatrix}
\rref
\begin{bmatrix}
<![CDATA[ \leading{1} & 0 & 1 & 3 \\]]>
<![CDATA[ 0 & \leading{1} & 1 & 4 \\]]>
<![CDATA[ 0 & 0 & 0 & 0]]>
\end{bmatrix}
</equation>
The solution set to this system, which is also the desired pre-image, can be expressed using the vector form of the solutions (<acroref type="theorem" acro="VFSLS" />)
<equation>
\preimage{S}{\colvector{-5\\5\\7}}
=\setparts{\colvector{3\\4\\0}+c\colvector{-1\\-1\\1}}{c\in\complex{}}
=\colvector{3\\4\\0}+\spn{\set{\colvector{-1\\-1\\1}}}
</equation>
Does the final expression for this set remind you of <acroref type="theorem" acro="KPI" />?
</solution>
</exercise>

<exercise type="C" number="40" rough="Given T on a basis, find T(v)">
<problem contributor="chrisblack">If $\ltdefn{T}{\complex{2}}{\complex{2}}$ satisfies
$\lt{T}{\colvector{2\\1}} = \colvector{3\\4}$
and
$\lt{T}{\colvector{1\\1}} = \colvector{-1\\2}$,
find
$\lt{T}{\colvector{4\\3}}$.
</problem>
<solution contributor="chrisblack">Since $\colvector{4\\3}= \colvector{2\\1} + 2\colvector{1\\1}$, we have
<alignmath>
\lt{T}{\colvector{4\\3}}
<![CDATA[&= \lt{T}{ \colvector{2\\1} + 2\colvector{1\\1}}]]>
= \lt{T}{\colvector{2\\1}} + 2\,\lt{T}{\colvector{1\\1}}\\
<![CDATA[&= \colvector{3\\4} + 2\colvector{-1\\2}\\]]>
<![CDATA[&= \colvector{1\\8}.]]>
</alignmath>
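The arithmetic above can be double-checked with a short Python sketch (illustrative only; it simply replays the linear-combination step):

```python
# Coefficients expressing (4,3) as a combination of (2,1) and (1,1).
a, b = 1, 2
assert [a * 2 + b * 1, a * 1 + b * 1] == [4, 3]   # combination reproduces (4,3)

# By linearity, apply the same combination to the given outputs of T.
Tv = [a * 3 + b * (-1), a * 4 + b * 2]
assert Tv == [1, 8]
```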
</solution>
</exercise>

<exercise type="C" number="41" rough="Find matrix rep given T on a basis">
<problem contributor="chrisblack">If $\ltdefn{T}{\complex{2}}{\complex{3}}$ satisfies
$\lt{T}{\colvector{2\\3}} = \colvector{2\\2\\1}$
and
$\lt{T}{\colvector{3\\4}} = \colvector{-1\\0\\2}$,
find the matrix representation of $T$.
</problem>
<solution contributor="chrisblack">First, we need to write the standard basis vectors $\vect{e}_1$ and $\vect{e}_2$ as linear combinations of $\colvector{2\\3}$ and $\colvector{3\\4}$.  Starting with $\vect{e}_1$, we see that
$\vect{e}_1 = -4\colvector{2\\3} + 3\colvector{3\\4}$, so we have
<alignmath>
\lt{T}{\vect{e}_1}
<![CDATA[&= \lt{T}{-4\colvector{2\\3} + 3\colvector{3\\4}}]]>
= -4\,\lt{T}{\colvector{2\\3}} + 3 \,\lt{T}{\colvector{3\\4}}\\
<![CDATA[&= -4\colvector{2\\2\\1} + 3\colvector{-1\\0\\2}]]>
= \colvector{-11\\-8\\2}.
</alignmath>
Repeating the process for $\vect{e}_2$,  we have
$\vect{e}_2 = 3\colvector{2\\3} - 2\colvector{3\\4}$, and we then see that
<alignmath>
\lt{T}{\vect{e}_2}
<![CDATA[&= \lt{T}{3\colvector{2\\3} -2 \colvector{3\\4}}]]>
= 3\,\lt{T}{\colvector{2\\3}} - 2 \,\lt{T}{\colvector{3\\4}}\\
<![CDATA[&= 3\colvector{2\\2\\1} -2 \colvector{-1\\0\\2}]]>
= \colvector{8\\6\\-1}.
</alignmath>
Thus, the matrix representation of $T$ is
<![CDATA[$A_T = \begin{bmatrix} -11 & 8\\ -8 & 6\\2 & -1 \end{bmatrix}$.]]>
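As a sanity check (a Python sketch, not part of the solution proper), the matrix representation found above should reproduce the two given values of $T$:

```python
# The matrix representation computed above, columns T(e1) and T(e2).
A_T = [[-11, 8],
       [-8, 6],
       [2, -1]]

def apply(A, x):
    # Matrix-vector product, computed entry by entry.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

assert apply(A_T, [2, 3]) == [2, 2, 1]     # matches T(2,3)
assert apply(A_T, [3, 4]) == [-1, 0, 2]    # matches T(3,4)
```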
</solution>
</exercise>

<exercise type="C" number="42" rough="Find preimage T: M22 --> R">
<problem contributor="chrisblack">Define $\ltdefn{T}{M_{2,2}}{\complexes}$ by
<![CDATA[$\lt{T}{\begin{bmatrix} a & b \\ c & d \end{bmatrix}} = a + b + c - d$.]]>
Find the pre-image $\preimage{T}{3}$.
</problem>
<solution contributor="chrisblack">The preimage $\preimage{T}{3}$ is the set of all matrices
<![CDATA[$\begin{bmatrix} a & b\\c & d\end{bmatrix}$]]>
so that
<![CDATA[$\lt{T}{\begin{bmatrix}a & b \\ c & d \end{bmatrix}} = 3$.]]>
A matrix
<![CDATA[$\begin{bmatrix} a & b\\ c & d \end{bmatrix}$]]>
is in the preimage if
$a + b + c - d = 3$,
<ie /> $d = a + b + c - 3$.
This yields the set
<alignmath>
<![CDATA[\preimage{T}{3} &=]]>
<![CDATA[\setparts{\begin{bmatrix} a & b \\ c & a + b + c - 3\end{bmatrix}}{a,b,c\in\complexes}]]>
</alignmath>
(Note that this set is <em>not</em> a vector space. Why not?)
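A quick spot-check in Python (illustrative only): any matrix of the form given above should satisfy $T(M)=3$.

```python
# T as defined in the problem, acting on a 2x2 matrix given as nested lists.
def T(M):
    (a, b), (c, d) = M
    return a + b + c - d

# Sample members of the pre-image, with d = a + b + c - 3.
for a, b, c in [(0, 0, 0), (1, 2, 3), (-5, 7, 0)]:
    M = [[a, b], [c, a + b + c - 3]]
    assert T(M) == 3
```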
</solution>
</exercise>

<exercise type="C" number="43" rough="Find preimage of 0 for derivative T:P3 --> P2">
<problem contributor="chrisblack">Define $\ltdefn{T}{P_3}{P_2}$ by
$\lt{T}{a + bx + cx^2 + dx^3} = b + 2cx + 3dx^2$.
Find the pre-image of $\mathbf{0}$.
Does this linear transformation seem familiar?
</problem>
<solution contributor="chrisblack">The preimage $\preimage{T}{0}$ is the set of all polynomials $a + bx + cx^2 + dx^3$ so that
$\lt{T}{a + bx + cx^2 + dx^3} = 0$.  Thus, $b + 2cx + 3dx^2 = 0$, where the $0$ represents the zero polynomial.  In order to satisfy this equation, we must have $b = 0$, $c = 0$, and $d = 0$.  Thus, $\preimage{T}{0}$ is precisely the set of all constant polynomials.  Symbolically, this is $\preimage{T}{0} = \setparts{a}{a\in\complexes}$.<br /><br />
Does this seem familiar?  What other operation sends constant functions to 0?
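The behavior of $T$ can be mimicked in a short Python sketch (illustrative only), encoding $a + bx + cx^2 + dx^3$ as the coefficient list $[a, b, c, d]$:

```python
# T sends a + bx + cx^2 + dx^3 to b + 2cx + 3dx^2, returned here as the
# coefficient list [b, 2c, 3d].
def T(p):
    return [p[1], 2 * p[2], 3 * p[3]]

assert T([7, 0, 0, 0]) == [0, 0, 0]    # a constant polynomial maps to zero
assert T([1, 2, 3, 4]) == [2, 6, 12]   # matches b + 2cx + 3dx^2
```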
</solution>
</exercise>

<exercise type="M" number="10" rough="discover matrix rep of composition as product">
<problem contributor="robertbeezer">Define two linear transformations, $\ltdefn{T}{\complex{4}}{\complex{3}}$ and $\ltdefn{S}{\complex{3}}{\complex{2}}$ by
<alignmath>
\lt{S}{\colvector{x_1\\x_2\\x_3}}
<![CDATA[&=]]>
\colvector{
x_1-2x_2+3x_3\\
5x_1+4x_2+2x_3
}
<![CDATA[&]]>
\lt{T}{\colvector{x_1\\x_2\\x_3\\x_4}}
<![CDATA[&=]]>
\colvector{
-x_1+3x_2+x_3+9x_4\\
2x_1+x_3+7x_4\\
4x_1+2x_2+x_3+2x_4
}
</alignmath>
Using the proof of <acroref type="theorem" acro="MLTCV" />, compute the matrix representations of the three linear transformations $T$, $S$ and $\compose{S}{T}$.  Discover and comment on the relationship between these three matrices.
</problem>
<solution contributor="robertbeezer">The matrix representation of $S$ times the matrix representation of $T$ equals the matrix representation of the composition $\compose{S}{T}$,
<equation>
\begin{bmatrix}
<![CDATA[1 & -2 & 3\\]]>
<![CDATA[5 & 4 & 2]]>
\end{bmatrix}
\begin{bmatrix}
<![CDATA[-1 & 3 & 1 & 9 \\]]>
<![CDATA[ 2 & 0 & 1 & 7 \\]]>
<![CDATA[ 4 & 2 & 1 & 2]]>
\end{bmatrix}
=
\begin{bmatrix}
<![CDATA[ 7 & 9 & 2 & 1 \\]]>
<![CDATA[ 11 & 19 & 11 & 77]]>
\end{bmatrix}
</equation>
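This relationship can be tested numerically with a Python sketch (illustrative only): applying $S$ after $T$ to a vector should agree with multiplying by the product matrix.

```python
# Matrix representations of S, T, and their product, as computed above.
S_mat = [[1, -2, 3], [5, 4, 2]]
T_mat = [[-1, 3, 1, 9], [2, 0, 1, 7], [4, 2, 1, 2]]
ST = [[7, 9, 2, 1], [11, 19, 11, 77]]

def apply(A, x):
    # Matrix-vector product, computed entry by entry.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x = [1, -2, 0, 3]                       # an arbitrary test vector
assert apply(S_mat, apply(T_mat, x)) == apply(ST, x)
```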
</solution>
</exercise>

<exercise type="M" number="60" rough="zero linear transformation">
<problem contributor="robertbeezer">Suppose $U$ and $V$ are vector spaces and define a function $\ltdefn{Z}{U}{V}$ by $\lt{Z}{\vect{u}}=\zerovector_{V}$ for every $\vect{u}\in U$.  Prove that $Z$ is a (stupid) linear transformation.  (See <acroref type="exercise" acro="ILT.M60" />, <acroref type="exercise" acro="SLT.M60" />, <acroref type="exercise" acro="IVLT.M60" />.)
</problem>
</exercise>

<exercise type="T" number="20" rough="Alternate definition of linear transformation">
<problem contributor="robertbeezer">Use the conclusion of <acroref type="theorem" acro="LTLC" /> to motivate a new definition of a linear transformation.  Then prove that your new definition is equivalent to <acroref type="definition" acro="LT" />.  (<acroref type="technique" acro="D" /> and <acroref type="technique" acro="E" /> might be helpful if you are not sure what you are being asked to prove here.)
</problem>
</exercise>

<exercisegroup>
<p><acroref type="theorem" acro="SER" /> established three properties of matrix similarity that are collectively known as the defining properties of an <q>equivalence relation</q>.  Exercises T30 and T31 extend this idea to linear transformations.</p>

<exercise type="T" number="30" rough="Linear transformation equivalence relation">
<problem contributor="robertbeezer">Suppose that $\ltdefn{T}{U}{V}$ is a linear transformation.  Say that two vectors from $U$, $\vect{x}$ and $\vect{y}$, are <define>related</define> exactly when $\lt{T}{\vect{x}}=\lt{T}{\vect{y}}$ in $V$.  Prove the three properties of an equivalence relation on $U$: (a) for any $\vect{x}\in U$, $\vect{x}$ is related to $\vect{x}$, (b) if $\vect{x}$ is related to $\vect{y}$, then $\vect{y}$ is related to $\vect{x}$, and (c) if $\vect{x}$ is related to $\vect{y}$ and $\vect{y}$ is related to $\vect{z}$, then $\vect{x}$ is related to $\vect{z}$.
</problem>
</exercise>

<exercise type="T" number="31" rough="Linear transformation equivalence relation partition">
<problem contributor="robertbeezer">Equivalence relations always create a partition of the set they are defined on, via a construction called equivalence classes.  For the relation in the previous problem, the equivalence classes are the pre-images.  Prove directly that the collection of pre-images partition $U$ by showing that (a) every $\vect{x}\in U$ is contained in some pre-image, and that (b) any two different pre-images do not have any elements in common.
</problem>
<solution contributor="robertbeezer">
Choose  $\vect{x}\in U$, then $\lt{T}{\vect{x}}\in V$ and we can form $\preimage{T}{\lt{T}{\vect{x}}}$.  Almost trivially, $\vect{x}\in\preimage{T}{\lt{T}{\vect{x}}}$, so every vector in $U$ is in <em>some</em> preimage.  For (b), suppose that $\preimage{T}{\vect{v}_1}$ and $\preimage{T}{\vect{v}_2}$ are two <em>different</em> preimages, and the vector $\vect{u}\in U$ is an element of both.  Then $\lt{T}{\vect{u}}=\vect{v}_1$ and $\lt{T}{\vect{u}}=\vect{v}_2$.  But because $T$ is a function, we conclude that $\vect{v}_1=\vect{v}_2$.  It then follows that $\preimage{T}{\vect{v}_1}=\preimage{T}{\vect{v}_2}$, contrary to our assumption that they were different.  So there cannot be a common element $\vect{u}$.
</solution>
</exercise>

</exercisegroup>

</exercisesubsection>

</section>