# Commits

committed c6e7e59

Some copyediting and added some Tut10 text.

• Participants
• Parent commits a532a85

# Documents/Illumination/Tutorial 08.xml

`                 be interpolated across the surface of the triangle. This process is called`
`                     <glossterm>Gouraud shading.</glossterm></para>`
`             <para>Gouraud shading is a pretty decent approximation, when using the diffuse lighting`
`-                model. It usually looks OK, and was commonly used for a good decade or so.`
`-                Interpolation of vertex outputs is a very fast process, and not having to compute`
`-                lighting at every fragment generated from the triangle raises the performance`
`-                substantially.</para>`
`+                model. It usually looks OK so long as we remain using that lighting model, and was`
`+                commonly used for a good decade or so. Interpolation of vertex outputs is a very`
`+                fast process, and not having to compute lighting at every fragment generated from`
`+                the triangle raises the performance substantially.</para>`
`             <para>That being said, modern games have essentially abandoned this technique. Part of`
`                 that is because the per-fragment computation isn't as slow and limited as it used to`
`                 be. And part of it is simply that games tend to not use just diffuse lighting`
`             <title>Vector Dot Product</title>`
`             <para>We glossed over an important point in looking at the vertex shader. Namely, how`
`                 the cosine of the angle of incidence is computed.</para>`
`-            <para>Given two vectors, one can certainly compute the angle of incidence, then take the`
`-                cosine of it. But both of these computations are quite expensive. Instead, we elect`
`-                to use a vector math trick: the <glossterm>dot product.</glossterm></para>`
`+            <para>Given two vectors, one could certainly compute the angle of incidence, then take`
`+                the cosine of it. But both computing that angle and taking its cosine are quite`
`+                expensive. Instead, we elect to use a vector math trick: the <glossterm>vector dot`
`+                    product.</glossterm></para>`
`             <para>Geometrically, the vector dot product of two direction vectors represents the`
`                 length of the projection of one vector onto the other.</para>`
`             <!--TODO: Show two vectors, and the projection onto each other.-->`
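The shortcut this hunk describes, using the dot product directly instead of recovering an angle and then taking its cosine, can be sketched in plain Python (an illustration only; the tutorial's actual shaders use GLSL's built-in `dot` function):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Two unit-length direction vectors, 30 degrees apart.
to_light = (0.0, 1.0, 0.0)
normal = (0.0, math.cos(math.radians(30.0)), math.sin(math.radians(30.0)))

# Expensive route: recover the angle of incidence, then take its cosine.
angle = math.acos(dot(to_light, normal))
cos_via_angle = math.cos(angle)

# Cheap route: for unit vectors, the dot product already is that cosine.
cos_via_dot = dot(to_light, normal)

assert abs(cos_via_angle - cos_via_dot) < 1e-9
```

In a shader the cheap route is a single instruction, which is why the trick matters.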
`             inverse is its transpose, and doing a transpose twice on a matrix doesn't change its`
`             value, the inverse-transpose of a rotation matrix is a no-op.</para>`
`         <para>Adding to that, since the values in pure-scale matrices are along the diagonal, a`
`-            transpose operation on scale matrices does nothing. With thee two facts in hand, we can`
`+            transpose operation on scale matrices does nothing. With these two facts in hand, we can`
`             re-express the matrix we want to compute as:</para>`
`         <!--TODO: Show (R1^T)^-1 (S^T)^-1 (R2^T)^-1-->`
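The reason the inverse-transpose matters can be checked numerically. This Python sketch uses hand-rolled 3x3 helpers (illustrative only, not the tutorial's GLSL matrices) to show that a non-uniform scale breaks a normal's perpendicularity unless the inverse-transpose is used:

```python
# Hand-rolled 3x3 helpers for illustration; the tutorial's code uses GLSL.
def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(m):
    return tuple(tuple(m[c][r] for c in range(3)) for r in range(3))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Non-uniform scale matrix: stretch x by 2, leave y and z alone.
S = ((2.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
S_inv = ((0.5, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))

# A surface tangent and its normal, perpendicular before transforming.
tangent = (1.0, 1.0, 0.0)
normal = (1.0, -1.0, 0.0)
assert dot(tangent, normal) == 0.0

# Transforming the normal by the model matrix itself breaks perpendicularity.
bad = mat_vec(S, normal)
assert dot(mat_vec(S, tangent), bad) != 0.0

# The inverse-transpose restores it, which is why normals use that matrix.
good = mat_vec(transpose(S_inv), normal)
assert dot(mat_vec(S, tangent), good) == 0.0
```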
`         <para>Using matrix algebra, we can factor the transposes out, but it requires reversing the`
`             actually somewhat rare to use this kind of scale factor. We do it in these tutorials, so`
`             that it is easier to build models from simple geometric components. But when you have an`
`             actual modeller creating objects for a specific purpose, non-uniform scales generally`
`-            are not used. It's better to just get the modeller to adjust the model as needed.</para>`
`-        <para>Uniform scales are commonly used. So you still need to normalize the normal after`
`-            transforming it with the model-to-camera matrix.</para>`
`+            are not used. At least, not in the output mesh. It's better to just get the modeller to`
`+            adjust the model as needed in their modelling application.</para>`
`+        <para>Uniform scales are more commonly used. So you still need to normalize the normal after`
`+            transforming it with the model-to-camera matrix, even if you aren't using the`
`+            inverse-transpose.</para>`
`     </section>`
`     <section>`
`         <?dbhtml filename="Tut08 Global Illumination.html" ?>`
`             example, take this image:</para>`
`         <!--TODO: Show an image of the cylinder with half lit and half unlit-->`
`         <para>The unlit portions of the cylinder are completely, 100% black. This almost never`
`-            happens in real life. The reason for this is somewhat complicated.</para>`
`+            happens in real life, even for objects we perceive as being <quote>black</quote> in`
`+            color. The reason for this is somewhat complicated.</para>`
`         <para>Consider a scene of the outdoors. In normal daylight, there is exactly one light`
`             source: the sun. Objects that are in direct sunlight appear to be bright, and objects`
`             that have some object between them and the sun are in shadow.</para>`
`         <para>But think about what those shadows look like. They're not 100% black. They're`
`             certainly darker than the surrounding area, but they still have some color. And`
`-            remember: we only see anything because our eyes detect light. This means, in order to`
`-            see an object in the shadow of a light source, that object must either be emitting light`
`-            directly or reflecting light that came from somewhere else. Grass is not known for`
`-            emitting light, so where does the light come from?</para>`
`+            remember: we only see anything because our eyes detect light. In order to see an object`
`+            in the shadow of a light source, that object must either be emitting light directly or`
`+            reflecting light that came from somewhere else. Grass is not known for its`
`+            light-emitting qualities, so where does the light come from?</para>`
`         <para>Think about it. We see because an object reflects light into our eyes. But our eyes`
`             are not special; the object does not reflect light <emphasis>only</emphasis> into our`
`-            eyes. It reflects light in all directions. Not necessarily at the same intensity, but`
`-            objects that reflect light tend to do so in all directions to some degree. What happens`
`-            when that light hits another surface?</para>`
`+            eyes. It reflects light in all directions. Not necessarily at the same intensity in each`
`+            direction, but objects that reflect light tend to do so in all directions to some`
`+            degree. What happens when that light hits another surface?</para>`
`         <para>The same thing that happens when light hits any surface: some of it is absorbed, and`
`             some is reflected in some way.</para>`
`         <para>The light being cast in shadows from the sun comes from many places. Part of it is an`
`             what we have been doing up until this point.</para>`
`         <para>As you might imagine, modelling global illumination is hard. <emphasis>Very</emphasis>`
`             hard. It is typically a subtle effect, but in many scenes, particularly outdoor scenes,`
`-            it is almost a necessity to some at least basic global illumination modelling in order`
`-            to achieve a decent degree of photorealism. Incidentally, this is a good part of the`
`-            reason why most games tend to avoid outdoor scenes or light outdoor scenes as though the`
`-            sky were cloudy or overcast. This neatly avoids needing to do complex global`
`+            it is almost a necessity to provide at least basic global illumination modelling in`
`+            order to achieve a decent degree of photorealism. Incidentally, this is a good part of`
`+            the reason why most games tend to avoid outdoor scenes or light outdoor scenes as though`
`+            the sky were cloudy or overcast. This neatly avoids needing to do complex global`
`            illumination modelling by damping down the brightness of the sun to levels where`
`             interreflection would be difficult to notice.</para>`
`         <para>Having this completely black area in our rendering looks incredibly fake. Since doing`
`             technique: <glossterm>ambient lighting.</glossterm></para>`
`         <para>The ambient lighting <quote>model</quote><footnote>`
`                 <para>I put model in quotations because ambient lighting is so divorced from`
`-                    anything in reality that it doesn't really deserve to be called a model. This`
`-                    doesn't mean that it isn't <emphasis>useful</emphasis>, however.</para>`
`+                    anything in reality that it doesn't really deserve to be called a model. Just`
`+                    because it does not actually model global illumination in any real way doesn't`
`+                    mean that it isn't <emphasis>useful</emphasis>.</para>`
`             </footnote> is quite simple. It assumes that, on every object in the scene, there is a`
`             light of a certain intensity that emanates from everywhere. It comes from all directions`
`             equally, so there is no angle of incidence in our diffuse calculation. It is simply the`
`     <section>`
`         <?dbhtml filename="Tut08 Intensity of Light.html" ?>`
`         <title>Intensity of Light</title>`
`-        <para>There are many, many things wrong with the primitive lighting model introduced here.`
`-            But one of the most important is the treatment of the lighting intensity.</para>`
`+        <para>There are many, many things wrong with the rather primitive lighting models introduced`
`+            thus far. But one of the most important is the treatment of the lighting`
`+            intensity.</para>`
`         <para>Thus far, we have used light intensity like a color. We clamp it to the range [0, 1].`
`             We even make sure that combined intensities from different lighting models always are`
`             within that range.</para>`
`             number on the range [0, 1] can only ever make a number smaller (or the same).</para>`
`         <para>This is of course not realistic. In reality, there is no such thing as a`
`                 <quote>maximum</quote> illumination or brightness. There is no such thing as a (1,`
`-            1, 1) light intensity. The actual range of light intensity per wavelength is [0, ∞).`
`-            This also means that the range of intensity for reflected light is [0, ∞); after all, if`
`-            you shine a really bright light on a surface, it will reflect a lot of it. A surface`
`-            that looks dark blue under dim light can appear light blue under very bright`
`+            1, 1) light intensity. The actual range of light intensity per wavelength is`
`+            [0, ∞). This also means that the range of intensity for reflected light is [0, ∞);`
`+            after all, if you shine a really bright light on a surface, it will reflect a lot of it.`
`+            A surface that looks dark blue under dim light can appear light blue under very bright`
`             light.</para>`
`         <para>Of course in the real world, things tend to catch on fire if you shine <emphasis>too`
`                 much</emphasis> light at them, but that's not something we need to model.</para>`
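The dark-blue/light-blue observation in this hunk can be checked with a few lines of Python; the reflectance values here are made up purely for illustration:

```python
def clamp(x):
    return max(0.0, min(1.0, x))

# Made-up reflectance for a surface that reflects mostly blue light.
reflectance = (0.1, 0.1, 0.5)

def shade(intensity):
    # Reflected light is per-channel reflectance times incoming intensity,
    # clamped afterwards to the displayable [0, 1] range.
    return tuple(clamp(r * intensity) for r in reflectance)

dim = shade(1.0)     # (0.1, 0.1, 0.5): a dark blue
bright = shade(8.0)  # (0.8, 0.8, 1.0): blue hits the clamp first, a light blue
```

The same surface reads as dark blue or light blue depending only on the unbounded incoming intensity, exactly as the paragraph argues.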

# Documents/Illumination/Tutorial 09.xml

`         <para>Well, consider what we are doing. We are computing the lighting at every triangle's`
`                 <emphasis>vertex</emphasis>, and then interpolating the results across the surface`
`             of the triangle. The ground plane is made up of precisely four vertices: the four`
`-            corners. And those are all very far from the light position. Since none of the vertices`
`-            are close to the light, none of the colors that are interpolated across the surface are`
`-            bright.</para>`
`+            corners. And those are all very far from the light position and have a very large`
`+            angle of incidence. Since none of them have a small angle of incidence, none of the`
`+            colors that are interpolated across the surface are bright.</para>`
`        <para>You can see that this is the case by putting the light position next to the cylinder. If the`
`             light is at the top or bottom of the cylinder, then the area near the light will be`
`            bright. But if you move the light to the middle of the cylinder, far from the top or bottom`
`         <para>This is not the only problem with doing per-vertex lighting. For example, run the`
`             tutorial again and don't move the light. Just watch how the light behaves on the`
`             cylinder's surface as it animates around. Unlike with directional lighting, you can very`
`-            easily see the triangles on the cylinder's surface. This has issues for similar reasons,`
`-            but it also introduces a new problem: interpolation artifacts.</para>`
`-        <para>If you move the light source farther away, you can see that the triangles smooth out.`
`-            But this is simply because, if the light source is far enough away, the results are`
`-            indistinguishable from a directional light. Each vertex's direction to the light is`
`-            almost the same as each other vertex's direction to the light.</para>`
`+            easily see the triangles on the cylinder's surface. Though the per-vertex computations`
`+            aren't helping matters, the main problem here has to do with interpolating the`
`+            values.</para>`
`+        <para>If you move the light source farther away, you can see that the triangles smooth out`
`+            and become indistinct from one another. But this is simply because, if the light source`
`+            is far enough away, the results are indistinguishable from a directional light. Each`
`+            vertex's direction to the light is almost the same as each other vertex's direction to`
`+            the light.</para>`
`         <para>Per-vertex lighting was reasonable when dealing with directional lights. But it simply`
`             is not a good idea for point lighting. The question arises: why was per-vertex lighting`
`             good with directional lights to begin with?</para>`
`             is called per-fragment lighting or just <glossterm>fragment lighting.</glossterm></para>`
`         <para>There is a problem that needs to be dealt with first. Normals do not interpolate well.`
`             Or rather, wildly different normals do not interpolate well. And light directions can be`
`-            very different.</para>`
`+            very different if the light source is close to the triangle relative to that triangle's`
`+            size.</para>`
`         <para>Consider the large plane we have. The direction toward the light will be very`
`             different at each vertex, so long as our light remains in relatively close proximity to`
`             the plane.</para>`
`            triangle. This means that points near the diagonal will be basically doing a linear`
`             interpolation between the two values on either end of that diagonal. This is not the`
`             same thing as doing interpolation between all 4 values.</para>`
`-        <!--TODO: Show bilinear interpolation vs. triangular interpolation.-->`
`+        <!--TODO: Show bilinear interpolation vs. barycentric interpolation.-->`
`         <para>In our case, this means that for points along the main diagonal, the light direction`
`             will only be composed of the direction values from the two vertices on that diagonal.`
`             This is not good. This wouldn't be much of a problem if the light direction did not`
`             change much along the surface, but that is not the case here.</para>`
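The failure of interpolating normalized directions can be seen numerically. In this Python sketch (illustrative names, not the tutorial's code), a light hovers close to a large surface; averaging the two per-vertex unit directions yields a nearly zero-length vector, while interpolating the position first yields the correct direction:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def towards(target, source):
    return tuple(t - s for t, s in zip(target, source))

light = (0.0, 1.0, 0.0)                       # light hovering near the surface
v0, v1 = (-10.0, 0.0, 0.0), (10.0, 0.0, 0.0)  # two far-apart vertices

# Per-vertex style: normalize at each vertex, then interpolate the directions.
interp_dir = lerp(normalize(towards(light, v0)), normalize(towards(light, v1)), 0.5)

# Per-fragment style: interpolate the position, then compute the direction.
true_dir = normalize(towards(light, lerp(v0, v1, 0.5)))

# The interpolated direction has nearly collapsed to zero length.
assert math.sqrt(sum(x * x for x in interp_dir)) < 0.2
assert abs(true_dir[1] - 1.0) < 1e-9
```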
`         <para>Since we cannot interpolate the light direction very well, we need to interpolate`
`-            something else. Something that does exhibit the characteristics we need.</para>`
`+            something else. Something that does exhibit the characteristics we need when`
`+            interpolated.</para>`
`         <para>Positions interpolate linearly quite well. So instead of interpolating the light`
`             direction, we interpolate the components of the light direction. Namely, the two`
`             positions. The light position is a constant, so we only need to interpolate the vertex`
`                 </keycombo> key to move the light really close to the cylinder, but without putting`
`                 the light inside the cylinder. You should see something like this:</para>`
`             <!--TODO: Picture of the cylinder with a close light.-->`
`-            <para>This looks like the same problem we had before. Wasn't doing lighting at the`
`-                fragment level supposed to fix this?</para>`
`+            <para>This looks like the same interpolation problem we had before. Wasn't doing`
`+                lighting at the fragment level supposed to fix this?</para>`
`             <para>Actually, this is a completely different problem. And it is one that is`
`                 essentially impossible to solve. Well, not without changing our geometry.</para>`
`             <para>The source of the problem is this: the light finally caught us in our lie.`
`                 solution is to add more vertices to the approximation of the cylinder. It should`
`                 also be noted that adding more triangles would also make per-vertex lighting look`
`                 more correct. Thus making the whole exercise in using per-fragment lighting somewhat`
`-                pointless; if the mesh is fine enough so that each vertex effectively becomes a`
`-                fragment, then there is no difference between them.</para>`
`+                pointless; if the mesh is finely-divided enough so that each vertex effectively`
`+                becomes a single fragment, then there is no difference between the two`
`+                techniques.</para>`
`             <para>For our simple case, adding triangles is easy, since a cylinder is a`
`                 mathematically defined object. For a complicated case like a human being, this is`
`                 usually rather more difficult. It also takes up more performance, since there are`
`                 <para>The use of this reverse-transformation technique here should not be taken as a`
`                     suggestion to use it in all, or even most cases like this. In all likelihood, it`
`                     will be much slower than just passing the camera space position to the fragment`
`-                    shader. It is here primarily for demonstration purposes; it is useful for other`
`-                    techniques that we will see in the future.</para>`
`+                    shader. It is here primarily for demonstration purposes, though it is useful for`
`+                    other techniques that we will see in the future.</para>`
`             </note>`
`             <para>The sequence of transformations that take a position from camera space to window`
`                 space is as follows:</para>`
`                 outputs, or inputs and outputs.</para>`
`             <para>Parameters designated with <literal>in</literal> are input parameters. Functions`
`                 can change these values, but they will have no effect on the variable or expression`
`-                used in the function call. So any changes . This is much like the default in C/C++,`
`-                where parameter changes are local. Naturally, this is the default with GLSL`
`-                parameters if you do not specify a qualifier.</para>`
`+                used in the function call. This is much like the default in C/C++, where parameter`
`+                changes are local. Naturally, this is the default with GLSL parameters if you do not`
`+                specify a qualifier.</para>`
`            <para>Parameters designated with <literal>out</literal> can be written to, and their`
`                values will be returned to the calling function. These are similar to non-const reference`
`                 parameter types in C++. And just as with reference parameters, the caller of a`
`                     <emphasis>not</emphasis> initialized from the calling function. This means that`
`                 the initial value is uninitialized and therefore undefined (ie: it could be`
`                 anything). Because of this, you can pass shader stage outputs as`
`-                    <literal>out</literal> parameters. Shader stage output variables can be written`
`-                to, but <emphasis>never</emphasis> read from.</para>`
`+                    <literal>out</literal> parameters. Recall that shader stage output variables can`
`+                be written to, but <emphasis>never</emphasis> read from.</para>`
`            <para>Parameters designated as <literal>inout</literal> will have their values initialized`
`                by the caller and have the final values returned to the caller. These are exactly`
`                 like non-const reference parameters in C++. The main difference is that the value is`
`                 initialized with the one that the user passed in, which forbids the passing of`
`                 shader stage outputs as <literal>inout</literal> parameters.</para>`
`-            <para>This function is semi-complex, as an optimization. Previously, our functions`
`-                simply normalized the difference between the vertex position and the light position.`
`-                In computing the attenuation, we need the distance between the two. And the process`
`-                of normalization computes the distance. So instead of calling the GLSL function to`
`-                normalize the direction, we do it ourselves, so that the distance is not computed`
`-                twice (once in the GLSL function and once for us).</para>`
`+            <para>This particular function is semi-complex, as an optimization. Previously, our`
`+                functions simply normalized the difference between the vertex position and the light`
`+                position. In computing the attenuation, we need the distance between the two. And`
`+                the process of normalization computes the distance. So instead of calling the GLSL`
`+                function to normalize the direction, we do it ourselves, so that the distance is not`
`+                computed twice (once in the GLSL function and once for us).</para>`
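The optimization described above, written out in Python (the function name and the inverse-square attenuation term here are illustrative assumptions, not the tutorial's exact GLSL):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Illustrative function; the tutorial performs the same steps in its shader.
def light_direction_and_attenuation(surface_pos, light_pos, k):
    diff = tuple(l - s for l, s in zip(light_pos, surface_pos))
    # normalize() would compute this same square root internally; doing it
    # by hand yields the distance for free, so it is not computed twice.
    dist = math.sqrt(dot(diff, diff))
    direction = tuple(d / dist for d in diff)
    attenuation = 1.0 / (1.0 + k * dist * dist)  # inverse-square style falloff
    return direction, attenuation
```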
`             <para>The second line performs a dot product with the same vector. Remember that the dot`
`                 product between two vectors is the cosine of the angle between them, multiplied by`
`                 each of the lengths of the vectors. Well, the angle between a vector and itself is`
`                     0, the light has full intensity. When the distance is beyond a given distance,`
`                    the maximum light range (which varies per-light), the intensity is 0.</para>`
`                 <para>Note that <quote>reasonably good</quote> depends on your needs. The closer you`
`-                    get in other way to providing physically accurate lighting, the closer you get`
`+                    get in other ways to providing physically accurate lighting, the closer you get`
`                     to photorealism, the less you can rely on less accurate phenomena. It does no`
`                     good to implement a complicated sub-surface scattering lighting model that`
`                     includes Fresnel factors and so forth, while simultaneously using a simple`

# Documents/Illumination/Tutorial 10.xml

`+<?xml version="1.0" encoding="UTF-8"?>`
`+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>`
`+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>`
`+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"`
`+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">`
`+    <?dbhtml filename="Tutorial 10.html" ?>`
`+    <title>Shinies</title>`
`+    <para>Thus far, our lighting models </para>`
`+    <section>`
`+        <?dbhtml filename="Tut10 In Review.html" ?>`
`+        <title>In Review</title>`
`+        <para>In this tutorial, you have learned the following:</para>`
`+        <itemizedlist>`
`+            <listitem>`
`+                <para/>`
`+            </listitem>`
`+            <listitem>`
`+                <para/>`
`+            </listitem>`
`+        </itemizedlist>`
`+        <section>`
`+            <title>Further Study</title>`
`+            <para>Try doing these things with the given programs.</para>`
`+            <itemizedlist>`
`+                <listitem>`
`+                    <para/>`
`+                </listitem>`
`+            </itemizedlist>`
`+        </section>`
`+        <section>`
`+            <title>GLSL Functions of Note</title>`
`+            <para/>`
`+        </section>`
`+    </section>`
`+    <section>`
`+        <?dbhtml filename="Tut10 Glossary.html" ?>`
`+        <title>Glossary</title>`
`+        <glosslist>`
`+            <glossentry>`
`+                <glossterm/>`
`+                <glossdef>`
`+                    <para/>`
`+                </glossdef>`
`+            </glossentry>`
`+        </glosslist>`
`+    </section>`
`+</chapter>`

# Documents/Tutorial Documents.xpr

`         <folder name="Illumination">`
`             <file name="Illumination/Tutorial%2008.xml"/>`
`             <file name="Illumination/Tutorial%2009.xml"/>`
`+            <file name="Illumination/Tutorial%2010.xml"/>`
`         </folder>`
`         <folder name="Positioning">`
`             <file name="Positioning/Tutorial%2003.xml"/>`

# Documents/Tutorials.xml

`         <partintro>`
`             <para>One of the most important aspects of rendering is lighting. Thus far, all of our`
`                 objects have had a color that is entirely part of the mesh data, pulled from a`
`-                uniform variable, or computed in an arbitrary way. This is not how color works in`
`-                the real world.</para>`
`+                uniform variable, or computed in an arbitrary way. This makes all of our objects`
`+                look very flat and unrealistic.</para>`
`             <para>Properly modeling the interaction between light and a surface is vital in creating`
`-                a convincing world. The tutorials in this section will introduce some simple`
`-                light/surface models and explain how to implement them.</para>`
`+                a convincing world. Lighting defines how we see and understand shapes to a large`
`+                degree. This is the reason why the objects we have used thus far look fairly flat. A`
`+                curved surface appears curved to us because of how the light plays over the surface.`
`+                The same goes for a flat surface. Without this visual hinting, surfaces appear flat`
`+                even when they are modeled with many triangles and yield a seemingly curved`
`+                polygonal mesh.</para>`
`+            <para>A proper lighting model makes objects appear real. A poor or inconsistent lighting`
`+                model shows the virtual world to be the forgery that it is. The tutorials in this`
`+                section will introduce some light/surface models and explain how to implement`
`+                them.</para>`
`         </partintro>`
`         <xi:include href="Illumination/tutorial 08.xml"/>`
`         <xi:include href="Illumination/tutorial 09.xml"/>`
`+        <xi:include href="Illumination/tutorial 10.xml"/>`
`     </part>`
`     <part>`
`         <?dbhtml filename="Texturing.html" dir="Texturing" ?>`