<?xml version="1.0" encoding="UTF-8"?>
<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <?dbhtml filename="Tutorial 17.html" ?>
    <title>Spotlight on Textures</title>
    <para>Previously, we have seen textures used to vary surface parameters. But we can use textures
        to vary something else: light intensity. In this way, we can simulate light sources whose
        intensity changes with something more than just distance from the light.</para>
    <para>Our first effort in varying light intensity with textures will be to build an incandescent
        flashlight. The light beam from a flashlight is not a single solid intensity, due to the way
        the mirrors focus the light. A texture is the simplest way to define this pattern of light
        intensity.</para>
    <section>
        <?dbhtml filename="Tut17 Post Projection Space.html" ?>
        <title>Post-Projection Space</title>
        <para>Before we can look at how to use a texture to make a flashlight, we need to make a
            short digression. Perspective projection will be an important part of how we make a
            texture into a flashlight, so we need to revisit perspective projection. Specifically,
            we need to look at what happens when transforming after a perspective projection
            operation.</para>
        <para>Open up the project called <phrase role="propername">Double Projection</phrase>. It
            renders four objects, using various textures, in a scene with a single directional light
            and a green point light.</para>
        <figure>
            <title>Double Projection</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="DoubleProjection.png"/>
                </imageobject>
            </mediaobject>
        </figure>
        <para>This tutorial displays two images of the same scene. The image on the left is the view
            of the scene from one camera, while the image on the right is the view of the scene from
            another camera. The difference between the two cameras is mainly in where the camera
            transformation matrices are applied.</para>
        <para>The left camera works normally. It is controlled by the left mouse button, the mouse
            wheel, and the WASD keys, as normal. The right camera, however, provides the view
            direction that is applied after the perspective projection matrix. The sequence of
            transforms thus looks like this: Model -> Left Camera -> Projection -> Right Camera. The
            right camera is controlled by the right mouse button; only orientation controls work on
            it.</para>
        <para>The idea is to be able to look at the shape of objects in normalized device coordinate
                (<acronym>NDC</acronym>) space after a perspective projection. NDC space is a [-1,
            1] box centered at the origin; by rotating objects in NDC space, you will be able to see
            what those objects look like from a different perspective. Pressing the
                <keycap>SpaceBar</keycap> will reset the right camera back to a neutral view.</para>
        <para>Note that post-perspective projection space objects are very distorted, particularly
            in the Z direction. Also, recall one of the fundamental tricks of the perspective
            projection: it rescales objects based on their Z-distance from the camera. Thus, objects
            that are farther away are physically smaller in NDC space than closer ones. Therefore,
            rotating NDC space around will produce results that are not intuitively what you might
            expect and may be very disorienting at first.</para>
        <para>For example, if we rotate the right camera to an above view, relative to whatever the
            left camera is, we see that all of the objects seem to shrink down into a very small
            width.</para>
        <figure>
            <title>Top View Projection</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="ProjectTopView.png"/>
                </imageobject>
            </mediaobject>
        </figure>
        <para>This is due to the particulars of the perspective projection's work on the Z
            coordinate. The Z coordinate in NDC space is the result of the clip-space Z divided by
            the negative of the camera-space Z. This forces it into the [-1, 1] range, but the
            clip-space Z also is affected by the zNear and zFar of the perspective matrix. The wider
            these are, the more narrowly the Z is compressed. Objects farther from the camera are
            compressed into smaller ranges of Z; we saw this in our look at the effect of the camera
            Z-range on precision. Close objects use more of the [-1, 1] range than those farther
            away.</para>
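        <para>To make this Z compression concrete, here is a small illustrative sketch (not part of
            the tutorial's code) that computes the NDC-space Z produced by the standard perspective
            matrix for a given camera-space Z:</para>
            <programlisting language="cpp">//Illustrative sketch only: the NDC-space Z that a standard perspective matrix
//produces for a camera-space Z. Not part of the tutorial's source.
float CameraZToNdcZ(float cameraZ, float zNear, float zFar)
{
    //Clip-space Z and W, per the usual OpenGL perspective matrix.
    float clipZ = (-(zFar + zNear) / (zFar - zNear)) * cameraZ
        - (2.0f * zFar * zNear) / (zFar - zNear);
    float clipW = -cameraZ;
    return clipZ / clipW;    //The perspective divide.
}

//For zNear = 1 and zFar = 1000: a camera Z of -1 maps to -1.0, -2 to roughly 0.0,
//-10 to roughly 0.8, and -100 to roughly 0.98. Distant objects get squeezed into
//a tiny slice of the [-1, 1] range.</programlisting>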
        <para>This can be seen by moving the left camera close to an object. The right camera, from
            a top-down view, has a much thicker view of that object in the Z direction.</para>
        <figure>
            <title>Near View Projection</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="ProjectCloseObject.png"/>
                </imageobject>
            </mediaobject>
        </figure>
        <para>Pressing the <keycap>Y</keycap> key will toggle depth clamping in the right camera.
            This can explain some of the unusual things that will be seen there. Sometimes the wrong
            objects will appear on top of each other; when this happens, it is almost always due to
            a clamped depth.</para>
        <para>The reason why depth clamping matters so much in the right screen is obvious if you
            think about it. NDC space is a [-1, 1] cube. But that is not the NDC space we actually
            render to. We are rendering to a rotated portion of this space. So the actual [-1, 1]
            cube that gets clipped or clamped is different from the one we see. We are effectively
            rotating a cube and cutting off any parts of it that happen to be outside of the
            cube-shaped viewing area. This is easy to see in the X and Y directions, but in Z, it
            results in some unusual views.</para>
        <section>
            <title>Scene Graphs</title>
            <para>This is the first code in the tutorial to use the scene graph part of the
                framework. The term <glossterm>scene graph</glossterm> refers to a piece of software
                that manages a collection of objects, typically in some kind of object hierarchy. In
                this case, the <filename>Scene.h</filename> part of the framework contains a class
                that loads an XML description of a scene. This description includes meshes, shaders,
                and textures to load. These assets are then associated with named objects within the
                scene. So a mesh combined with a shader can be rendered with one or more
                textures.</para>
            <para>The purpose of this system is to remove a <emphasis>lot</emphasis> of the
                boilerplate code from the tutorial files. The setup work for the scene graph is far
                less complicated than the setup work seen in previous tutorials.</para>
            <para>As an example, here is the scene graph to load and link a particular
                shader:</para>
            <example>
                <title>Scene Graph Shader Definition</title>
                <programlisting language="xml">&lt;prog
    xml:id="p_unlit"
    vert="Unlit.vert"
    frag="Unlit.frag"
    model-to-camera="modelToCameraMatrix">
    &lt;block name="Projection" binding="0"/>
&lt;/prog></programlisting>
            </example>
            <para>The <literal>xml:id</literal> gives it a name; this is used by objects in the
                scene to refer to this program. It also provides a way for other code to talk to it.
                Most of the rest is self-explanatory. <literal>model-to-camera</literal> deserves
                some explanation.</para>
            <para>Rendering the scene graph is done by calling the scene graph's render function
                with a camera matrix. Since the objects in the scene graph store their own
                transformations, the scene graph combines each object's local transform with the
                given camera matrix. But it still needs to know how to provide that matrix to the
                shader. Thus, <literal>model-to-camera</literal> specifies the name of the
                    <type>mat4</type> uniform that receives the model-to-camera transformation
                matrix. There is a similar matrix for normals that is given the inverse-transpose of
                the model-to-camera matrix.</para>
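            <para>As a rough sketch (not the framework's actual code), that inverse-transpose can
                be computed with GLM like this:</para>
            <programlisting language="cpp">//Sketch only: deriving the normal matrix as the inverse-transpose of the
//model-to-camera matrix, which is what the scene graph feeds the normal-matrix uniform.
glm::mat3 ComputeNormalMatrix(const glm::mat4 &amp;modelToCameraMatrix)
{
    return glm::transpose(glm::inverse(glm::mat3(modelToCameraMatrix)));
}</programlisting>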
            <para>The <literal>block</literal> element is the way we associate a uniform block in
                the program with a uniform block binding point. There is a similar element for
                    <literal>sampler</literal> that specifies which texture unit that a particular
                GLSL sampler is bound to.</para>
            <para>Objects in scene graph systems are traditionally called <quote>nodes,</quote> and
                this scene graph is no exception.</para>
            <example>
                <title>Scene Graph Node Definition</title>
                <programlisting language="xml">&lt;node
    name="spinBar"
    mesh="m_longBar"
    prog="p_lit"
    pos="-7 0 8"
    orient="-0.148446 0.554035 0.212003 0.791242"
    scale="4">
    &lt;texture name="t_stone_pillar" unit="0" sampler="anisotropic"/>
&lt;/node></programlisting>
            </example>
            <para>Nodes have a number of properties. They have a name, so that other code can
                reference them. They have a mesh that they render and a program they use to render
                that mesh. They have a position, orientation, and scale transform. The orientation
                is specified as a quaternion, with the W component specified last (this is different
                from how <type>glm::fquat</type> specifies it. The W there comes first). The order
                of these transforms is scale, then orientation, then translation.</para>
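            <para>To illustrate that ordering, a node's local transform could be composed with GLM
                roughly like this (an illustrative sketch, not the framework's actual code):</para>
            <programlisting language="cpp">//Sketch only: composing a node transform in the order scale, then orientation,
//then translation. Note that glm::fquat takes W first, while the XML lists it last.
glm::mat4 NodeLocalTransform(const glm::vec3 &amp;pos, const glm::fquat &amp;orient, float scale)
{
    glm::mat4 translateMat = glm::translate(glm::mat4(1.0f), pos);
    glm::mat4 rotateMat = glm::mat4_cast(orient);
    glm::mat4 scaleMat = glm::scale(glm::mat4(1.0f), glm::vec3(scale));
    //Matrices apply right-to-left: scale first, then rotate, then translate.
    return translateMat * rotateMat * scaleMat;
}</programlisting>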
            <para>This node also has a texture bound to it. <literal>t_stone_pillar</literal> was a
                texture that was loaded in a <literal>texture</literal> command. The
                    <literal>unit</literal> property specifies the texture unit to use. And the
                    <literal>sampler</literal> property defines which of the predefined samplers to
                 use. In this case, it uses a sampler with anisotropic filtering to the maximum
                degree allowed by the hardware. The texture wrapping modes of this sampler are to
                wrap the S and T coordinates.</para>
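            <para>For reference, a sampler along those lines could be created directly in OpenGL
                with something like the following sketch (this assumes the
                EXT_texture_filter_anisotropic extension and is not the framework's code):</para>
            <programlisting language="cpp">//Sketch: a sampler with repeat wrapping and maximal anisotropic filtering,
//roughly what the "anisotropic" sampler name refers to.
GLuint sampler;
glGenSamplers(1, &amp;sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_REPEAT);

GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &amp;maxAniso);
glSamplerParameterf(sampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);</programlisting>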
            <para>This is what the C++ setup code looks like for the entire scene:</para>
            <example>
                <title>Double Projection LoadAndSetupScene</title>
                <programlisting language="cpp">std::auto_ptr&lt;Framework::Scene> pScene(new Framework::Scene("dp_scene.xml"));

std::vector&lt;Framework::NodeRef> nodes;
nodes.push_back(pScene->FindNode("cube"));
nodes.push_back(pScene->FindNode("rightBar"));
nodes.push_back(pScene->FindNode("leaningBar"));
nodes.push_back(pScene->FindNode("spinBar"));

AssociateUniformWithNodes(nodes, g_lightNumBinder, "numberOfLights");
SetStateBinderWithNodes(nodes, g_lightNumBinder);

GLuint unlit = pScene->FindProgram("p_unlit");
Framework::Mesh *pSphereMesh = pScene->FindMesh("m_sphere");

//No more things that can throw.
g_spinBarOrient = nodes[3].NodeGetOrient();
g_unlitProg = unlit;
g_unlitModelToCameraMatrixUnif = glGetUniformLocation(unlit, "modelToCameraMatrix");
g_unlitObjectColorUnif = glGetUniformLocation(unlit, "objectColor");

std::swap(nodes, g_nodes);
nodes.clear();	//If something was there already, delete it.

std::swap(pSphereMesh, g_pSphereMesh);

Framework::Scene *pOldScene = g_pScene;
g_pScene = pScene.release();
pScene.reset(pOldScene);	//If something was there already, delete it.</programlisting>
            </example>
            <para>This code does some fairly simple things. The scene graph system is good, but we
                still need to be able to control uniforms not in blocks manually from external code.
                Specifically in this case, the number of lights is a uniform, not a uniform block.
                To do this, we need to use a uniform state binder,
                    <varname>g_lightNumBinder</varname>, and set it into all of the nodes in the
                scene. This binder allows us to set the uniform for all of the objects (regardless
                of which program they use).</para>
            <para>The <literal>p_unlit</literal> shader is never actually used in the scene graph;
                we just use the scene graph as a convenient way to load the shader. Similarly, the
                    <literal>m_sphere</literal> mesh is not used in a scene graph node. We pull
                references to both of these out of the graph and use them ourselves where needed. We
                extract some uniform locations from the unlit shader, so that we can draw unlit
                objects with colors.</para>
            <note>
                <para>The code as written here is designed to be exception safe. Most of the
                    functions that find nodes by name will throw if the name is not found. What this
                    exception safety means is that it is easy to make the scene reloadable. It only
                    replaces the old values in the global variables after executing all of the code
                    that could throw an exception. This way, the entire scene, along with all
                    meshes, textures, and shaders, can be reloaded by pressing
                        <keycap>Enter</keycap>. If something goes wrong, the new scene will not be
                    loaded and an error message is displayed.</para>
            </note>
            <para>Two of the objects in the scene rotate. This is easily handled using our list of
                objects. In the <function>display</function> method, we access certain nodes and
                change their transforms:</para>
            <programlisting language="cpp">g_nodes[0].NodeSetOrient(glm::rotate(glm::fquat(),
    360.0f * g_timer.GetAlpha(), glm::vec3(0.0f, 1.0f, 0.0f)));

g_nodes[3].NodeSetOrient(g_spinBarOrient * glm::rotate(glm::fquat(),
    360.0f * g_timer.GetAlpha(), glm::vec3(0.0f, 0.0f, 1.0f)));</programlisting>
            <para>We simply set the orientation based on a timer. For the second one, we previously
                stored the object's orientation after loading it, and use that as the reference.
                This allows us to rotate about its local Z axis.</para>
        </section>
        <section>
            <title>Multiple Scenes</title>
            <para>The split-screen trick used here is actually quite simple to pull off. It's also
                one of the advantages of the scene graph: the ability to easily re-render the same
                scene multiple times.</para>
            <para>The first thing that must change is that the projection matrix cannot be set in
                the old <function>reshape</function> function. That function now only sets the new
                width and height of the screen into global variables. This is important because we
                will be using two projection matrices.</para>
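            <para>The idea is no more than this sketch (the globals match those used below, though
                the actual function may differ slightly):</para>
            <programlisting language="cpp">//Sketch of the idea: reshape only records the window size; the projection matrices
//and viewports are set up each frame in display.
void reshape(int w, int h)
{
    g_displayWidth = w;
    g_displayHeight = h;
    glutPostRedisplay();
}</programlisting>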
            <para>The projection matrix used for the left scene is set up like this:</para>
            <example>
                <title>Left Projection Matrix</title>
                <programlisting language="cpp">glm::ivec2 displaySize(g_displayWidth / 2, g_displayHeight);

{
    glutil::MatrixStack persMatrix;
    persMatrix.Perspective(60.0f, (displaySize.x / (float)displaySize.y), g_fzNear, g_fzFar);
    
    ProjectionBlock projData;
    projData.cameraToClipMatrix = persMatrix.Top();
    
    glBindBuffer(GL_UNIFORM_BUFFER, g_projectionUniformBuffer);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(ProjectionBlock), &amp;projData, GL_STREAM_DRAW);
    glBindBuffer(GL_UNIFORM_BUFFER, 0);
}

glViewport(0, 0, (GLsizei)displaySize.x, (GLsizei)displaySize.y);
g_pScene->Render(modelMatrix.Top());</programlisting>
            </example>
            <para>Notice that <varname>displaySize</varname> uses only half of the width. And this
                half width is passed into the <function>glViewport</function> call. It is also used
                to generate the aspect ratio for the perspective projection matrix. It is the
                    <function>glViewport</function> function that causes our window to be split into
                two halves.</para>
            <para>What is more interesting is the right projection matrix computation:</para>
            <example>
                <title>Right Projection Matrix</title>
                <programlisting language="cpp">{
    glutil::MatrixStack persMatrix;
    persMatrix.ApplyMatrix(glm::mat4(glm::mat3(g_persViewPole.CalcMatrix())));
    persMatrix.Perspective(60.0f, (displaySize.x / (float)displaySize.y), g_fzNear, g_fzFar);
    
    ProjectionBlock projData;
    projData.cameraToClipMatrix = persMatrix.Top();
    
    glBindBuffer(GL_UNIFORM_BUFFER, g_projectionUniformBuffer);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(ProjectionBlock), &amp;projData, GL_STREAM_DRAW);
    glBindBuffer(GL_UNIFORM_BUFFER, 0);
}

if(!g_bDepthClampProj)
    glDisable(GL_DEPTH_CLAMP);
glViewport(displaySize.x + (g_displayWidth % 2), 0,
    (GLsizei)displaySize.x, (GLsizei)displaySize.y);
g_pScene->Render(modelMatrix.Top());
glEnable(GL_DEPTH_CLAMP);</programlisting>
            </example>
            <para>Notice that we first take the camera matrix from the perspective view and apply it
                to the matrix stack before the perspective projection itself. Remember that
                transforms applied to the stack happen in <emphasis>reverse</emphasis> order. This
                means that vertices are projected into 4D homogeneous clip-space coordinates, then
                are transformed by a matrix. Only the rotation portion of the right camera matrix is
                used. The translation is removed by converting the matrix to a <type>mat3</type>
                (which takes only the top-left 3x3 portion of the matrix, the part that contains
                the rotation) and then back into a <type>mat4</type>.</para>
            <para>Notice also that the viewport's X location is biased based on whether the
                display's width is odd or not (<literal>g_displayWidth % 2</literal> is 0 if it is
                even, 1 if it is odd). This means that if the width is stretched to an odd number,
                there will be a one pixel gap between the two views of the scene.</para>
        </section>
        <section>
            <title>Intermediate Projection</title>
            <para>One question may occur to you: how is it possible for our right camera to provide
                a rotation in NDC space, if it is being applied to the end of the projection matrix?
                After all, the projection matrix goes from camera space to
                <emphasis>clip</emphasis>-space. The clip-space to NDC space transform is done by
                OpenGL after our vertex shader has done this matrix multiply. Do we not need the
                shader to divide the clip-space values by W, then do the rotation?</para>
            <para>Obviously not, since this code works. But just because code happens to work
                doesn't mean that it should. So let's see if we can prove that it does. To do this,
                we must prove this:</para>
            <informalequation>
                <mediaobject>
                    <imageobject>
                        <imagedata fileref="PostProjectTransform.svg"/>
                    </imageobject>
                </mediaobject>
            </informalequation>
            <para>This might look like a simple proof by inspection due to the associative nature
                of matrix multiplication, but it is not. The reason is quite simple: w and w' may
                not be the same. The
                value of w is the fourth component of v; w' is the fourth component of what results
                from T*v. If T changes w, then the equation is not true. But at the same time, if T
                doesn't change w, if w == w', then the equation is true.</para>
            <para>Well, that makes things quite simple. We simply need to ensure that our T does not
                alter w. Matrix multiplication tells us that w' is the dot product of v and the
                bottom row of T.</para>
            <informalequation>
                <mediaobject>
                    <imageobject>
                        <imagedata fileref="PostProjectTransform_2.svg"/>
                    </imageobject>
                </mediaobject>
            </informalequation>
            <para>Therefore, if the bottom row of T is (0, 0, 0, 1), then w == w'. And therefore, we
                can use T before the division. Fortunately, the only matrix we have that has a
                different bottom row is the projection matrix, and T is the rotation matrix we apply
                after projection.</para>
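            <para>We can sanity-check this numerically with a small GLM sketch (standalone, not
                part of the tutorial's code):</para>
            <programlisting language="cpp">//Sketch: a transform whose bottom row is (0, 0, 0, 1) commutes with the
//perspective divide. Any pure rotation/translation/scale matrix qualifies.
glm::mat4 T = glm::rotate(glm::mat4(1.0f), 45.0f, glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec4 clipPos(1.0f, 2.0f, 3.0f, 4.0f);    //some clip-space position, w = 4

glm::vec4 divideThenTransform = T * (clipPos / clipPos.w);

glm::vec4 transformed = T * clipPos;          //transformed.w is still 4
glm::vec4 transformThenDivide = transformed / transformed.w;

//divideThenTransform and transformThenDivide are the same vector, because
//(T * clipPos).w == clipPos.w.</programlisting>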
            <para>So this works, as long as we use the right matrices. We can rotate, translate, and
                scale post-projective clip-space exactly as we would post-projective NDC space.
                Which is good, because we get to preserve the w component for perspective-correct
                interpolation.</para>
            <para>The take-home lesson here is very simple: projections are not that special as far
                as transforms are concerned. Post-projective space is mostly just another space. It
                may be a 4-dimensional homogeneous coordinate system, and that may be an odd thing
                to fully understand. But that does not mean that you can't apply a regular matrix to
                objects in this space.</para>
        </section>
    </section>
    <section>
        <?dbhtml filename="Tut17 Projective Texture.html" ?>
        <title>Projective Texture</title>
        <para>In order to create our flashlight effect, we need to do something called
                <glossterm>projective texturing</glossterm>. Projective texturing is a special form
            of texture mapping. It is a way of generating texture coordinates for a texture, such
            that it appears that the texture is being projected onto a scene, in much the same way
            that a film projector projects light. Therefore, we need to do two things: implement
            projective texturing, and then use the value we sample from the projected texture as the
            light intensity.</para>
        <para>The key to understanding projected texturing is to think backwards, compared to the
            visual effect we are trying to achieve. We want to take a 2D texture and make it look
            like it is projected onto the scene. To do this, we therefore do the opposite: we
            project the <emphasis>scene</emphasis> onto the 2D texture. We want to take the vertex
            positions of every object in the scene and project them into the space of the
            texture.</para>
        <para>Since this is a perspective projection operation, and it involves transforming vertex
            positions, naturally we need a matrix. This is math we already know: we have vertex
            positions in model space. We transform them to a camera space, one that is different
            from the one we use to view the scene. Then we use a perspective projection matrix to
            transform them to clip-space; both the matrix and this clip-space are again different
            spaces from what we use to render the scene. One perspective divide later, and we're
            done.</para>
        <para>That last part is the small stumbling block. See, after the perspective divide, the
            visible world, the part of the world that is projected onto the texture, lives in a [-1,
            1] sized cube. That is the size of NDC space, though it is again a different NDC space
            from the one we use to render. The problem is that the range of the texture coordinates,
            the space of the 2D texture itself, is [0, 1].</para>
        <para>This is why we needed the prior discussion of post-projective transforms. Because we
            need to do a post-projective transform here: we have to transform the XY coordinates of
            the projected position from [-1, 1] to [0, 1] space. And again, we do not want to have
            to perform the perspective divide ourselves; OpenGL has special functions for texture
            accesses with a divide. Therefore, we encode the translation and scale as a
            post-projective transformation. As previously demonstrated, this is mathematically
            identical to doing the transform after the division.</para>
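        <para>In matrix form, this post-projective adjustment is nothing more than a translation
            and a scale. As an illustrative GLM sketch (the tutorial's actual code builds it on a
            matrix stack, as we will see below):</para>
            <programlisting language="cpp">//Sketch: the [-1, 1] to [0, 1] remapping of X and Y, expressed as one matrix
//applied after the projection matrix. Z is left alone; we discard it later.
glm::mat4 ndcToTexCoord =
    glm::translate(glm::mat4(1.0f), glm::vec3(0.5f, 0.5f, 0.0f)) *
    glm::scale(glm::mat4(1.0f), glm::vec3(0.5f, 0.5f, 1.0f));</programlisting>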
        <para>This entire process represents a new kind of light. We have seen directional lights,
            which are represented by a light intensity coming from a single direction. And we have
            seen point lights, which are represented by a position in the world which casts light in
            all directions. What we are defining now is typically called a
                <glossterm>spotlight</glossterm>: a light that has a position, direction, and
            oftentimes a few other fields that limit the size and nature of the spot effect.
            Spotlights cast light on a cone-shaped area.</para>
        <para>We implement spotlights via projected textures in the <phrase role="propername"
                >Projected Light</phrase> project. This tutorial uses a similar scene to the one
            before, though with slightly different numbers for lighting. The main difference,
            scene-wise, is the addition of a textured background box.</para>
        <figure>
            <title>Projected Light</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="Projected%20Light.png"/>
                </imageobject>
            </mediaobject>
        </figure>
        <para>The camera controls work the same way as before. The projected flashlight, represented
            by the red, green, and blue axes, is moved with the IJKL keyboard keys, with O and U
            moving up and down, respectively. The right mouse button rotates the flashlight around;
            the blue line points in the direction of the light. The flashlight's position and
            orientation are built around the camera controls, so it rotates around a point in front
            of the flashlight. It translates relative to its current facing as well. As usual,
            holding down the <keycap>Shift</keycap> key will cause the flashlight to move more
            slowly.</para>
        <para>Pressing the <keycap>G</keycap> key will toggle all of the regular lighting on and
            off. This makes it easier to see just the light from our projected texture.</para>
        <section>
            <title>Flashing the Light</title>
            <para>Let us first look at how we achieve the projected texture effect. We want to take
                the model space positions of the vertices and project them onto the texture.
                However, there is one minor problem: the scene graph system provides a transform
                from model space into the visible camera space. We need a transform to our special
                projected texture camera space, which has a different position and
                orientation.</para>
            <para>We resolve this by being clever. We already have positions in the viewing camera
                space. So we simply start there and construct a matrix from view camera space into
                our texture camera space.</para>
            <example>
                <title>View Camera to Projected Texture Transform</title>
                <programlisting language="cpp">glutil::MatrixStack lightProjStack;
//Texture-space transform
lightProjStack.Translate(0.5f, 0.5f, 0.0f);
lightProjStack.Scale(0.5f, 0.5f, 1.0f);
//Project. Z-range is irrelevant.
lightProjStack.Perspective(g_lightFOVs[g_currFOVIndex], 1.0f, 1.0f, 100.0f);
//Transform from main camera space to light camera space.
lightProjStack.ApplyMatrix(lightView);
lightProjStack.ApplyMatrix(glm::inverse(cameraMatrix));

g_lightProjMatBinder.SetValue(lightProjStack.Top());</programlisting>
            </example>
            <para>Reading the modifications to <varname>lightProjStack</varname> in bottom-to-top
                order, we begin by using the inverse of the view camera matrix. This transforms all
                of our vertex positions back to world space, since the view camera matrix is a
                world-to-camera matrix. We then apply the world-to-texture-camera matrix. This is
                followed by a projection matrix, which uses an aspect ratio of 1.0. The last two
                transforms move us from [-1, 1] NDC space to the [0, 1] texture space.</para>
            <para>The zNear and zFar for the projection matrix are almost entirely irrelevant. They
                need to be within the allowed ranges (strictly greater than 0, and zFar must be
                larger than zNear), but the values themselves are meaningless. We will discard the Z
                coordinate entirely later on.</para>
            <para>We use a matrix uniform binder to associate that transformation matrix with all of
                the objects in the scene. This is all we need to do to set up the projection, as far
                as the matrix math is concerned.</para>
            <para>Our vertex shader (<filename>projLight.vert</filename>) takes care of things in
                the obvious way:</para>
            <programlisting language="glsl">lightProjPosition = cameraToLightProjMatrix * vec4(cameraSpacePosition, 1.0);</programlisting>
            <para>Note that this line is part of the vertex shader;
                    <varname>lightProjPosition</varname> is passed to the fragment shader. One might
                think that the projection would work best in the fragment shader, but doing it
                per-vertex is actually just fine. The only time one would need to do the projection
                per-fragment would be if one was using imposters or was otherwise modifying the
                depth of the fragment. Indeed, because it works per-vertex, projected textures were
                a preferred way of doing cheap lighting in many situations.</para>
            <para>In the fragment shader, <filename>projLight.frag</filename>, we want to use the
                projected texture as a light. We have the <function>ComputeLighting</function>
                function in this shader from prior tutorials. All we need to do is make our
                projected light appear to be a regular light.</para>
            <programlisting language="glsl">PerLight currLight;
currLight.cameraSpaceLightPos = vec4(cameraSpaceProjLightPos, 1.0);
currLight.lightIntensity =
    textureProj(lightProjTex, lightProjPosition.xyw) * 4.0;

currLight.lightIntensity = lightProjPosition.w > 0 ?
	currLight.lightIntensity : vec4(0.0);</programlisting>
            <para>We create a simple structure that we fill in. Later, we pass this structure to
                    <function>ComputeLighting</function>, and it does the usual thing.</para>
            <para>The view camera space position of the projected light is passed in as a uniform.
                It is necessary for our flashlight to properly obey attenuation, as well as to find
                the direction towards the light.</para>
            <para>The next line is where we do the actual texture projection. The
                    <function>textureProj</function> is a texture accessing function that does
                projective texturing. Even though <varname>lightProjTex</varname> is a
                    <type>sampler2D</type> (for 2D textures), the texture coordinate has three
                dimensions. All forms of <function>textureProj</function> take one extra texture
                coordinate compared to the regular <function>texture</function> function. This extra
                texture coordinate is divided into the previous ones before being used to access the
                texture. Thus, it performs the perspective divide for us.</para>
            <note>
                <para>Mathematically, there is virtually no difference between using
                        <function>textureProj</function> and doing the divide ourselves and calling
                        <function>texture</function> with the results. While there may not be a
                    mathematical difference, there very well may be a performance difference. There
                    may be specialized hardware that does the division much faster than the
                    general-purpose opcodes in the shader. Then again, there may not. However, using
                        <function>textureProj</function> will certainly be no slower than
                        <function>texture</function> in the general case, so it's still a good
                    idea.</para>
            </note>
            <para>Notice that the value pulled from the texture is scaled by 4.0. This is done
                because the color values stored in the texture are clamped to the [0, 1] range. To
                bring it up to our high dynamic range, we need to scale the intensity
                appropriately.</para>
            <para>The texture being projected is bound to a known texture unit globally; the scene
                graph already associates the projective shader with that texture unit. So there is
                no need to do any special work in the scene graph to make objects use the
                texture.</para>
            <para>The last statement is special. It compares the W component of the interpolated
                position against zero, and sets the light intensity to zero if the W component is
                less than or equal to 0. What is the purpose of this?</para>
            <para>It stops this from happening:</para>
            <figure>
                <title>Back Projected Light</title>
                <mediaobject>
                    <imageobject>
                        <imagedata fileref="LightBackProjection.png"/>
                    </imageobject>
                </mediaobject>
            </figure>
            <para>The projection math doesn't care what side of the center of projection an object
                is on; it will work either way. And since we do not actually do clipping on our
                texture projection, we need some way to prevent this from happening. We effectively
                need to do some form of clipping.</para>
            <para>Recall that, given the standard projection transform, the W component is the
                negation of the camera-space Z. Since the camera in our camera space is looking down
                the negative Z axis, all positions that are in front of the camera must have a W >
                0. Therefore, if W is less than or equal to 0, then the position is behind the
                camera.</para>
        </section>
        <section>
            <title>Spotlight Tricks</title>
            <para>The size of the flashlight can be changed simply by modifying the field of view in
                the texture projection matrix. Pressing the <keycap>Y</keycap> key will increase the
                FOV, and pressing the <keycap>N</keycap> key will decrease it. An increase to the
                FOV means that the light is projected over a greater area. At a large FOV, we
                effectively have an entire hemisphere of light.</para>
            <para>Another interesting trick we can play is to have multi-colored lights. Press the
                    <keycap>2</keycap> key; this will change to a texture that contains spots of
                various different colors.</para>
            <figure>
                <title>Colored Spotlight</title>
                <mediaobject>
                    <imageobject>
                        <imagedata fileref="ColoredProjectLights.png"/>
                    </imageobject>
                </mediaobject>
            </figure>
            <para>This kind of complex light emitter would not be possible without using a texture.
                Well, it could be possible without textures, but it would require a lot more
                processing power than a few matrix multiplies, a division in the fragment shader,
                and a texture access. Press the <keycap>1</keycap> key to go back to the flashlight
                texture.</para>
            <para>There is one final issue that can and will crop up with projected textures: what
                happens when the texture coordinates are outside of the [0, 1] boundary. With
                previous textures, we used either <literal>GL_CLAMP_TO_EDGE</literal> or
                    <literal>GL_REPEAT</literal> for the S and T texture coordinate wrap modes.
                Repeat is obviously not a good idea here; thus far, our sampler objects have been
                clamping to the texture's edge. That worked fine because our edge texels have all
                been zero. To see what happens when they are not, press the <keycap>3</keycap>
                key.</para>
            <figure>
                <title>Edge Clamped Light</title>
                <mediaobject>
                    <imageobject>
                        <imagedata fileref="EdgeClampLight.png"/>
                    </imageobject>
                </mediaobject>
            </figure>
            <para>That rather ruins the effect. Fortunately, OpenGL does provide a way to resolve
                this. It gives us a way to say that texels fetched outside of the [0, 1] range
                should return a particular color. As before, this is set up with the sampler
                object:</para>
            <example>
                <title>Border Clamp Sampler Objects</title>
                <programlisting language="cpp">glSamplerParameteri(g_samplers[1], GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glSamplerParameteri(g_samplers[1], GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);

float color[4] = {0.0f, 0.0f, 0.0f, 1.0f};
glSamplerParameterfv(g_samplers[1], GL_TEXTURE_BORDER_COLOR, color);</programlisting>
            </example>
            <para>The S and T wrap modes are set to <literal>GL_CLAMP_TO_BORDER</literal>. Then the
                border's color is set to black. To toggle between the edge clamping sampler and the
                border clamping one, press the <keycap>H</keycap> key.</para>
            <figure>
                <title>Border Clamped Light</title>
                <mediaobject>
                    <imageobject>
                        <imagedata fileref="BorderClampLight.png"/>
                    </imageobject>
                </mediaobject>
            </figure>
            <para>That's much better now.</para>
            <sidebar>
                <title>Line Drawing</title>
                <para>You may have noticed that the position and orientation of the light were shown
                    by three lines forming the three directions of an axis. These are a new
                    primitive type: lines.</para>
                <para>Lines have a uniform width no matter how close or far away they are from the
                    camera. Point primitives are defined by one vertex, triangle primitives by 3. So
                    it makes sense that lines are defined by two vertices.</para>
                <para>Just as triangles can come in strips and fans, lines have their own
                    variations. <literal>GL_LINES</literal> are like
                    <literal>GL_TRIANGLES</literal>: a list of independent lines, with each line
                    coming from individual pairs of vertices. <literal>GL_LINE_STRIP</literal>
                    represents a sequence of lines attached head to tail; every vertex has a line to
                    the previous vertex and the next in the list. <literal>GL_LINE_LOOP</literal> is
                    like a strip, except the last and first vertices are also connected by a
                    line.</para>
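                <para>As an illustrative sketch (not the <function>Mesh</function> class's actual
                    code), drawing such an axis with raw OpenGL calls might look like this:</para>
                <programlisting language="cpp">//Sketch only: three axis lines drawn with GL_LINES, one line (two vertices) per axis.
const float axisVerts[] = {
    0.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,    //X axis line
    0.0f, 0.0f, 0.0f,   0.0f, 1.0f, 0.0f,    //Y axis line
    0.0f, 0.0f, 0.0f,   0.0f, 0.0f, 1.0f,    //Z axis line
};

GLuint vbo, vao;
glGenBuffers(1, &amp;vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(axisVerts), axisVerts, GL_STATIC_DRAW);

glGenVertexArrays(1, &amp;vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

glDrawArrays(GL_LINES, 0, 6);    //6 vertices -> 3 independent lines</programlisting>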
                <para>This is all encapsulated in the Framework's <function>Mesh</function> class.
                    The axis used here (and later on in the tutorials) is a simple </para>
            </sidebar>
        </section>
    </section>
    <section>
        <?dbhtml filename="Tut17 Pointing Projections.html" ?>
        <title>Pointing Projections</title>
        <para>Spotlights represent a light that has position, direction, and perhaps an FOV and some
            kind of aspect ratio. Through projective texturing, we can make spotlights that have
            arbitrary light intensities, rather than relying on uniform values or shader functions
            to compute light intensity. That is all well and good for spotlights, but there are
            other forms of light that might want varying intensities.</para>
        <para>It doesn't really make sense to vary the light intensity from a directional light.
            After all, the whole point of directional lights is that they are infinitely far away,
            so all of the light from them is uniform, in both intensity and direction.</para>
        <para>Varying the intensity of a point light is a more reasonable possibility. We can vary
            the point light's intensity based on one of two possible parameters: the position of the
            light and the direction from the light towards a point in the scene. The latter seems
            far more useful; it represents a light that may cast more or less brightly in different
            directions.</para>
        <para>To do this, what we need is a texture that we can effectively access via a direction.
            While there are ways to convert a 3D vector direction into a 2D texture coordinate, we
            will not use any of them. We will instead use a special texture type created
            specifically for exactly this sort of thing.</para>
        <para>The common term for this kind of texture is <glossterm>cube map</glossterm>, even
            though it is a texture rather than a mapping of a texture. A cube map texture is a
            texture where every mipmap level consists of 6 2D images, not merely one. Each of the 6 images
            represents one of the 6 faces of a cube. The texture coordinates for a cube map are a 3D
            vector direction; the texture sampling hardware selects which face to sample from and
            which texel to pick based on the direction.</para>
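        <para>Conceptually, the face selection works like the sketch below. This is only an
            illustration of the idea; the hardware does it for you, and it also computes the S and
            T coordinates within the chosen face:</para>
            <programlisting language="cpp">//Sketch: how a cube map conceptually chooses a face from a 3D direction.
//The component with the largest magnitude picks the axis; its sign picks the face.
GLenum PickCubeMapFace(const glm::vec3 &amp;dir)
{
    glm::vec3 mag = glm::abs(dir);
    if(mag.x >= mag.y &amp;&amp; mag.x >= mag.z)
        return dir.x >= 0.0f ? GL_TEXTURE_CUBE_MAP_POSITIVE_X : GL_TEXTURE_CUBE_MAP_NEGATIVE_X;
    if(mag.y >= mag.z)
        return dir.y >= 0.0f ? GL_TEXTURE_CUBE_MAP_POSITIVE_Y : GL_TEXTURE_CUBE_MAP_NEGATIVE_Y;
    return dir.z >= 0.0f ? GL_TEXTURE_CUBE_MAP_POSITIVE_Z : GL_TEXTURE_CUBE_MAP_NEGATIVE_Z;
}</programlisting>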
        <para>It is important to know how the 6 faces of the cube map fit together. OpenGL defines
            the 6 faces based on the X, Y, and Z axes, in the positive and negative directions. This
            diagram explains the orientation of the S and T coordinate axes of each of the faces,
            relative to the direction of the faces in the cube.</para>
        <figure>
            <title>Cube Map Face Orientation</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="CubeMapAxes.svg"/>
                </imageobject>
            </mediaobject>
        </figure>
        <para>This information is vital for knowing how to construct the various faces of a cube
            map.</para>
        <para>To use a cube map to specify the light intensity changes for a point light, we simply
            need to do the following. First, we get the direction from the light to the surface
            point of interest. Then we use that direction to sample from the cube map. From there,
            everything is normal.</para>
        <para>The issue is getting the direction from the light to the surface point. Before, a
            point light had no orientation, and this made sense. It cast light uniformly in all
            directions, so even if it had an orientation, you would never be able to tell it was
            there. Now that our light intensity can vary, the point light now needs to be able to
            orient the cube map.</para>
        <para>The easiest way to handle this is a simple transformation trick. The position and
            orientation of the light represents a space. If we transform the position of objects
            into that space, then the direction from the light can easily be obtained. The light's
            position relative to itself is zero, after all. So we need to transform positions from
            some space into the light's space. We will see exactly how this is done
            momentarily.</para>
        <para>Cube map point lights are implemented in the <phrase role="propername">Cube Point
                Light</phrase> project. This puts a fixed point light using a cube map in the middle
            of the scene. The orientation of the light can be changed with the right mouse
            button.</para>
        <figure>
            <title>Cube Point Light</title>
            <mediaobject>
                <imageobject>
                    <imagedata fileref="Cube%20Point%20Light.png"/>
                </imageobject>
            </mediaobject>
        </figure>
        <para>This cube texture has various different light arrangements on the different sides. One
            side even has green text on it. As before, you can use the <keycap>G</keycap> key to
            toggle the non-cube map lights off.</para>
        <para>Pressing the <keycap>2</keycap> key switches to a texture that somewhat resembles a
            planetarium show. Pressing <keycap>1</keycap> switches back to the first
            texture.</para>
        <section>
            <title>Cube Texture Loading</title>
            <para>We have seen how 2D textures get loaded over the course of three tutorials now,
                so we normally use GL Image's functions for creating a texture directly from an
                    <type>ImageSet</type>. Cube map textures, however, require special handling, so
                let's look at that now.</para>
            <example>
                <title>Cube Texture Loading</title>
                <programlisting language="cpp">std::string filename(Framework::FindFileOrThrow(g_texDefs[tex].filename));
std::auto_ptr&lt;glimg::ImageSet> pImageSet(glimg::loaders::dds::LoadFromFile(filename.c_str()));

glBindTexture(GL_TEXTURE_CUBE_MAP, g_lightTextures[tex]);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAX_LEVEL, 0);

glimg::Dimensions dims = pImageSet->GetDimensions();
GLenum imageFormat = (GLenum)glimg::GetInternalFormat(pImageSet->GetFormat(), 0);

for(int face = 0; face &lt; 6; ++face)
{
    glimg::SingleImage img = pImageSet->GetImage(0, 0, face);
    glCompressedTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
        0, imageFormat, dims.width, dims.height, 0,
        img.GetImageByteSize(), img.GetImageData());
}

glBindTexture(GL_TEXTURE_CUBE_MAP, 0);</programlisting>
            </example>
            <para>The DDS format is one of the few image file formats that can actually store all of
                the faces of a cube map. Similarly, the <type>glimg::ImageSet</type> class can store
                cube map faces.</para>
            <para>The first step after loading the cube map faces is to bind the texture to the
                    <literal>GL_TEXTURE_CUBE_MAP</literal> texture binding target. Since this cube
                map is not mipmapped (yes, cube maps can have mipmaps), we set the base and max
                mipmap levels to zero. The call to <function>glimg::GetInternalFormat</function> is
                used to allow GL Image to tell us the OpenGL image format that corresponds to the
                format of the loaded texture data.</para>
            <para>From there, we loop over the 6 faces of the texture, get the
                    <type>SingleImage</type> for that face, and load each face into the OpenGL
                texture. For the moment, pretend the call to
                    <function>glCompressedTexImage2D</function> is a call to
                    <function>glTexImage2D</function>; they do similar things, but the final few
                parameters are different. It may seem odd to call a TexImage2D function when we are
                uploading to a cube map texture. After all, a cube map texture is a completely
                different texture type from 2D textures.</para>
            <para>However, the <quote>TexImage</quote> family of functions specify the
                dimensionality of the image data they are allocating and uploading, not the specific
                texture type. Since a cube map is simply 6 sets of 2D images, it uses the
                    <quote>TexImage2D</quote> functions to allocate the faces and mipmaps. Which
                face is specified by the first parameter.</para>
            <para>OpenGL has six enumerators of the form
                    <literal>GL_TEXTURE_CUBE_MAP_POSITIVE/NEGATIVE_X/Y/Z</literal>. These
                enumerators are ordered, starting with positive X, so we can loop through all of
                them by adding the numbers [0, 5] to the positive X enumerator. That is what we do
                above. The order of these enumerators is:</para>
            <orderedlist>
                <listitem>
                    <para>POSITIVE_X</para>
                </listitem>
                <listitem>
                    <para>NEGATIVE_X</para>
                </listitem>
                <listitem>
                    <para>POSITIVE_Y</para>
                </listitem>
                <listitem>
                    <para>NEGATIVE_Y</para>
                </listitem>
                <listitem>
                    <para>POSITIVE_Z</para>
                </listitem>
                <listitem>
                    <para>NEGATIVE_Z</para>
                </listitem>
            </orderedlist>
            <para>This mirrors the order that the <type>ImageSet</type> stores them in (and DDS
                files, for that matter).</para>
            <para>The samplers for cube map textures also need some adjustment:</para>
            <programlisting language="cpp">glSamplerParameteri(g_samplers[0], GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glSamplerParameteri(g_samplers[0], GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glSamplerParameteri(g_samplers[0], GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);</programlisting>
            <para>Cube maps take 3D texture coordinates, so wrap modes must be specified for each of
                the three dimensions of texture coordinates. Since this cube map has no mipmaps, the
                filtering is simply set to <literal>GL_LINEAR</literal>.</para>
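            <para>As a hedged sketch, the <literal>GL_LINEAR</literal> filtering mentioned above
                amounts to nothing more than setting the magnification and minification filters on
                the same sampler object:</para>
            <programlisting language="cpp">glSamplerParameteri(g_samplers[0], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(g_samplers[0], GL_TEXTURE_MIN_FILTER, GL_LINEAR);</programlisting>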
        </section>
        <section>
            <title>Texture Compression</title>
            <para>Now we will take a look at why we are using
                    <function>glCompressedTexImage2D</function>. And that requires a discussion of
                image formats and sizes.</para>
            <para>Images take up a lot of memory. And while disk space and even main memory are
                fairly generous these days, GPU memory is always at a premium. Especially if you
                have lots of textures and those textures are quite large. The smaller that texture
                data can be, the more and larger textures you can have in a complex scene.</para>
            <para>The first stop for making this data smaller is to use a smaller image format. For
                example, the standard RGB color format stores each channel as an 8-bit unsigned
                integer. This is usually padded out to make it 4-byte aligned, or a fourth component
                (alpha) is added, making for an RGBA color. That's 32-bits per texel, which is what
                    <literal>GL_RGBA8</literal> specifies. A first pass for making this data smaller
                is to store it with fewer bits. OpenGL provides <literal>GL_RGB565</literal> for
                those who do not need the fourth component, or <literal>GL_RGBA4</literal> for those
                who do. Both of these use 16-bits per texel.</para>
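            <para>As an illustrative sketch, not taken from the tutorial's code, uploading data in
                one of these 16-bit formats with <function>glTexImage2D</function> might look like
                the following, where <varname>width</varname>, <varname>height</varname>, and
                    <varname>pixelData</varname> are assumed to be defined elsewhere:</para>
            <programlisting language="cpp">//Each texel is a single 16-bit value: 4 bits each of red, green, blue, and alpha.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
    GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, pixelData);</programlisting>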
            <para>Both <literal>GL_RGB565</literal> and <literal>GL_RGBA4</literal> can also
                produce unpleasant visual artifacts in the textures. Plus, OpenGL does not allow
                such textures to be in the sRGB colorspace; there is no
                    <literal>GL_SRGB565</literal> format.</para>
            <para>For files, this is a solved problem. There are a number of traditional compressed
                image formats: PNG, JPEG, GIF, etc. Some are lossless, meaning that the exact input
                image can be reconstructed. Others are lossy, which means that only an approximation
                of the image can be returned. Either way, all of these formats have their benefits
                and downsides. But they are all better, in terms of visual quality and storage
                space, than 16-bit-per-texel image formats.</para>
            <para>They also have one other thing in common: they are absolutely terrible for
                    <emphasis>textures</emphasis>, in terms of GPU hardware. These formats are
                designed to be decompressed all at once; you decompress the entire image when you
                want to see it. GPUs don't want to do that. GPUs generally access textures in
                pieces; they access certain sections of a mipmap level, then access other sections,
                etc. GPUs gain their performance by being incredibly parallel: multiple different
                invocations of fragment shaders can be running simultaneously. All of them can be
                accessing different textures and so forth.</para>
            <para>Stopping that process to decompress a 50KB PNG would pretty much destroy
                rendering performance entirely. These formats may be fine for storing files on disk.
                But they are simply not good formats for being stored compressed in graphics
                memory.</para>
            <para>Instead, there are special formats designed specifically for compressing textures.
                These <glossterm>texture compression</glossterm> formats are designed specifically
                to be friendly for texture accesses. It is easy to find the exact piece of memory
                that stores the data for a specific texel. It takes no more than 64 bits of data to
                decompress any one texel. And so forth. These all combine to make texture
                compression formats useful for saving graphics card memory, while maintaining
                reasonable image quality.</para>
            <para>The regular <function>glTexImage2D</function> function is not capable of directly
                uploading compressed texture data. The pixel transfer information, the last three
                parameters of <function>glTexImage2D</function>, is simply not appropriate for
                dealing with compressed texture data. Therefore, OpenGL uses a different function
                for uploading texture data that is already compressed.</para>
            <programlisting language="cpp">glCompressedTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
    0, imageFormat, dims.width, dims.height, 0,
    img.GetImageByteSize(), img.GetImageData());</programlisting>
            <para>Instead of ending with OpenGL enums that describe the format and data type of the
                pixel data being uploaded, <function>glCompressedTexImage2D</function>'s last two
                parameters are very simple. They specify how big the compressed image data is in
                bytes and provide a pointer to that image data. That is because
                    <function>glCompressedTexImage2D</function> does not allow for format
                conversion; the format of the pixel data passed to it must exactly match what the
                image format says it is. This also means that the
                    <literal>GL_UNPACK_ALIGNMENT</literal> setting has no effect on compressed
                texture uploads.</para>
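            <para>For contrast, here is a hedged sketch (not the tutorial's code) of what an
                equivalent <emphasis>uncompressed</emphasis> upload of a face might look like if
                the image data were plain 8-bit RGBA. Note how the final parameters describe the
                pixel data's format and type rather than its size in bytes:</para>
            <programlisting language="cpp">glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
    0, GL_RGBA8, dims.width, dims.height, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, img.GetImageData());</programlisting>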
        </section>
        <section>
            <title>Cube Texture Space</title>
            <para>Creating the cube map texture was just the first step. The next step is to do the
                necessary transformations. Recall that the goal is to transform the vertex positions
                into the space of the texture, defined relative to world space by a position and
                orientation. However, we ran into a problem previously, because the scene graph only
                provides a model-to-camera transformation matrix.</para>
            <para>This problem still exists, and we will solve it in exactly the same way. We will
                generate a matrix that goes from camera space to our cube map light's space.</para>
            <example>
                <title>View Camera to Light Cube Texture</title>
                <programlisting language="cpp">glutil::MatrixStack lightProjStack;
lightProjStack.ApplyMatrix(glm::inverse(lightView));
lightProjStack.ApplyMatrix(glm::inverse(cameraMatrix));

g_lightProjMatBinder.SetValue(lightProjStack.Top());

glm::vec4 worldLightPos = lightView[3];
glm::vec3 lightPos = glm::vec3(cameraMatrix * worldLightPos);

g_camLightPosBinder.SetValue(lightPos);</programlisting>
            </example>
            <para>This code is rather simpler than the prior time. Again reading bottom up, we
                transform by the inverse of the world-to-camera matrix, then we transform by the
                inverse of the light matrix. The <varname>lightView</varname> matrix is inverted
                because the matrix is ordinarily designed to go from light space to world space. So
                we invert it to get the world-to-light transform. The light's position in world
                space is taken similarly.</para>
            <para>The vertex shader (cubeLight.vert) is about what you would expect:</para>
            <programlisting language="glsl">lightSpacePosition = (cameraToLightProjMatrix * vec4(cameraSpacePosition, 1.0)).xyz;</programlisting>
            <para>The <varname>lightSpacePosition</varname> is output from the vertex shader and
                interpolated. Again we find that this interpolates just fine, so there is no need to
                do this transformation per-fragment.</para>
            <para>The fragment shader code (<filename>cubeLight.frag</filename>) is pretty simple.
                First, we have to define our GLSL samplers:</para>
            <programlisting language="glsl">uniform sampler2D diffuseColorTex;
uniform samplerCube lightCubeTex;</programlisting>
            <para>Because cube maps are a different texture type, they have a different GLSL sampler
                type as well. Attempting to use a texture of one type with a sampler of a different
                type results in unpleasantness. It's usually easy enough to keep these things
                straight, but it can be a source of errors or non-rendering.</para>
            <para>The code that fetches from the cube texture is as follows:</para>
            <programlisting language="glsl">PerLight currLight;
currLight.cameraSpaceLightPos = vec4(cameraSpaceProjLightPos, 1.0);
	
vec3 dirFromLight = normalize(lightSpacePosition);
currLight.lightIntensity =
    texture(lightCubeTex, dirFromLight) * 6.0f;</programlisting>
            <para>We simply normalize the light-space position, since the cube map's space has the
                light position at the origin. We then use the <function>texture</function> function
                to access the cube map, the same function we used for 2D textures. This is possible
                because GLSL overloads <function>texture</function> based on the type of sampler. So
                when <function>texture</function> is passed a <type>samplerCube</type>, it expects a
                    <type>vec3</type> texture coordinate.</para>
        </section>
    </section>
    <section>
        <?dbhtml filename="Tut17 In Review.html" ?>
        <title>In Review</title>
        <para>In this tutorial, you have learned the following:</para>
        <itemizedlist>
            <listitem>
                <para>Vertex positions can be further manipulated after a perspective projection.
                    Thus the perspective transform is not special. The shape of objects in
                    post-projective space can be unusual and unexpected.</para>
            </listitem>
            <listitem>
                <para>Textures can be projected onto meshes. This is done by transforming those
                    meshes into the space of the texture, which is equivalent to transforming the
                    texture into the space of the meshes. The transform is governed by its own
                    camera matrix, as well as a projection matrix and a post-projective transform
                    that transforms it into the [0, 1] range of the texture.</para>
            </listitem>
            <listitem>
                <para>Cube maps are textures that have 6 face images for every mipmap level. The 6
                    faces are arranged in a cube. Texture coordinates are effectively directions of
                    a vector centered within the cube. Thus a cube map can provide a varying value
                    based on a direction in space.</para>
            </listitem>
        </itemizedlist>
        <section>
            <title>Further Study</title>
            <para>Try doing these things with the given programs.</para>
            <itemizedlist>
                <listitem>
                    <para>In the spotlight project, change the projection texture coordinate from a
                        full 4D coordinate to a 2D one. Do this by performing the divide-by-W step
                        directly in the vertex shader, and simply pass the ST coordinates to the
                        fragment shader. Just use <function>texture</function> instead of
                            <function>textureProj</function> in the fragment shader. See how that
                        affects things. Also, try doing the perspective divide in the fragment
                        shader and see how this differs from doing it in the vertex shader.</para>
                </listitem>
                <listitem>
                    <para>In the spotlight project, change the interpolation style from
                            <literal>smooth</literal> to <literal>noperspective</literal>. See how
                        non-perspective-correct interpolation changes the projection.</para>
                </listitem>
                <listitem>
                    <para>Instead of using a projective texture, build a lighting system for spot
                        lights entirely within the shader. It should have a maximum angle; the
                        larger the angle, the wider the spotlight. It should also have an inner
                        angle that is smaller than the maximum angle. This is the point where the
                        light starts falling off. At the maximum angle, the light intensity goes to
                        zero; at the inner angle, the light intensity is full. The key here is
                        remembering that the dot product between the spotlight's direction and the
                        direction from the surface to the light is the cosine of the angle between
                        the two vectors. The <function>acos</function> function can be used to
                        compute the angle (in radians) from the cosine. A minimal sketch of one way
                        to compute this falloff appears after this list.</para>
                </listitem>
            </itemizedlist>
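            <para>For the last exercise, here is a minimal GLSL sketch of one way the falloff
                could be computed. Everything named here is hypothetical and not part of the
                tutorial's shaders: <varname>cosInnerAngle</varname> and
                    <varname>cosOuterAngle</varname> would be uniforms holding the cosines of the
                inner and maximum angles, while <varname>spotlightDirection</varname>,
                    <varname>cameraSpaceLightPos</varname>, <varname>cameraSpacePosition</varname>,
                and <varname>lightIntensity</varname> would come from the rest of the lighting
                code.</para>
            <programlisting language="glsl">vec3 dirToLight = normalize(cameraSpaceLightPos - cameraSpacePosition);

//Cosine of the angle between the spotlight's direction and the
//direction from the light to the surface.
float cosAngle = dot(spotlightDirection, -dirToLight);

//Full intensity at the inner angle, falling to zero at the maximum angle.
//Working with cosines directly avoids calling acos for every fragment;
//acos(cosAngle) would recover the angle in radians if it were needed.
float spotFactor = clamp((cosAngle - cosOuterAngle) / (cosInnerAngle - cosOuterAngle),
    0.0, 1.0);

vec4 spotIntensity = lightIntensity * spotFactor;</programlisting>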
        </section>
        <section>
            <title>Further Research</title>
            <para>Cube maps are fairly old technology. The version used in GPUs today derives from
                the Renderman standard and earlier works. However, before hardware that supported
                cube maps became widely available, there were alternative techniques used to
                achieve similar effects.</para>
            <para>The basic idea behind all of these is to transform a 3D vector direction into a 2D
                texture coordinate. Note that mapping a 3D direction onto a 2D plane is a problem
                that was encountered long before computer graphics. It is effectively the global
                mapping problem: how to create a 2D map of a 3D spherical surface. All of these
                techniques introduce some distance distortion into the 2D map. Some distortion is
                more acceptable in certain circumstances than others.</para>
            <para>One of the more common pre-cube map techniques was sphere mapping. This required a
                very heavily distorted 2D texture, so the results left something to be desired. But
                the 3D-to-2D computations were simple enough to be encoded into early graphics
                hardware, or performed quickly on the CPU, so it was acceptable as a stop-gap. Other
                techniques, such as dual paraboloid mapping, were also used. The latter used a pair
                of textures, so they ate up more resources. But they required less heavy distortions
                of the texture, so in some cases, they were a better tradeoff.</para>
        </section>
        <section>
            <title>OpenGL Functions of Note</title>
            <glosslist>
                <glossentry>
                    <glossterm>glCompressedTexImage2D</glossterm>
                    <glossdef>
                        <para>Allocates a 2D image of the given size and mipmap for the current
                            texture, using the given compressed image format, and uploads compressed
                            pixel data. The pixel data must exactly match the format of the data
                            defined by the compressed image format.</para>
                    </glossdef>
                </glossentry>
            </glosslist>
        </section>
        <section>
            <title>GLSL Functions of Note</title>
            <funcsynopsis>
                <funcprototype>
                    <funcdef>vec4 <function>textureProj</function></funcdef>
                    <paramdef>sampler <parameter>texSampler</parameter></paramdef>
                    <paramdef>vec <parameter>texCoord</parameter></paramdef>
                </funcprototype>
            </funcsynopsis>
            <para>Accesses the texture associated with <parameter>texSampler</parameter>, using
                post-projective texture coordinates specified by <parameter>texCoord</parameter>.
                The <type>sampler</type> type can be many of the sampler types, but not
                    <type>samplerCube</type>, among a few others. The texture coordinates are in
                homogeneous space, so they have one more component than the number of dimensions of
                the texture. Thus, the <parameter>texCoord</parameter> for a sampler of type
                    <type>sampler1D</type> is a <type>vec2</type>. For <type>sampler2D</type>, it is
                a <type>vec3</type>.</para>
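            <para>As a hedged illustration of the behavior (not a formal definition), for a
                    <type>sampler2D</type> the projective lookup is roughly equivalent to dividing
                the coordinate by its last component and then performing an ordinary
                    <function>texture</function> access:</para>
            <programlisting language="glsl">//texCoord is a vec3 here; both lookups produce effectively the same result.
vec4 colorA = textureProj(texSampler, texCoord);
vec4 colorB = texture(texSampler, texCoord.st / texCoord.p);</programlisting>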
        </section>
        
    </section>
    <section>
        <?dbhtml filename="Tut17 Glossary.html" ?>
        <title>Glossary</title>
        <glosslist>
            <glossentry>
                <glossterm>scene graph</glossterm>
                <glossdef>
                    <para>The general term for a data structure that holds the objects within a
                        particular scene. Objects in a scene graph often have parent-child
                        relationships for their transforms, as well as references to the shaders,
                        meshes, and textures needed to render them.</para>
                </glossdef>
            </glossentry>
            <glossentry>
                <glossterm>projective texturing</glossterm>
                <glossdef>
                    <para>A texture mapping technique that generates texture coordinates to make a
                        2D texture appear to have been projected onto a surface. This is done by
                        transforming the vertex positions of objects in the scene through a
                        projective series of transformations into the space of the texture
                        itself.</para>
                </glossdef>
            </glossentry>
            <glossentry>
                <glossterm>spotlight source</glossterm>
                <glossdef>
                    <para>A light source that emits from a position in the world in a generally
                        conical shape along a particular direction. Some spot lights have a full
                        orientation, while others only need a direction. Spotlights can be
                        implemented in shader code, or more generally via projective texturing
                        techniques.</para>
                </glossdef>
            </glossentry>
            <glossentry>
                <glossterm>cube map texture</glossterm>
                <glossdef>
                    <para>A type of texture that uses 6 2D images to represent faces of a cube. It
                        takes 3D texture coordinates that represent a direction from the center of a
                        cube onto one of these faces. Thus, each texel on each of the 6 faces comes
                        from a unique direction. Cube maps allow data based on directions to vary
                        based on stored texture data.</para>
                </glossdef>
            </glossentry>
            <glossentry>
                <glossterm>texture compression</glossterm>
                <glossdef>
                    <para>A set of image formats that stores texel data in a small format that is
                        optimized for texture access. These formats are not as small as specialized
                        image file formats, but they are designed for fast GPU texture fetch access,
                        while still saving significant graphics memory.</para>
                </glossdef>
            </glossentry>
        </glosslist>
        
    </section>
</chapter>