Commits

Anonymous committed 25f4087

Split parts into individual files.
Copyediting

  • Parent commits 1756ad0

Files changed (11)

File Documents/Advanced Lighting.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<part>
+    <?dbhtml filename="Advanced Lighting.html" dir="Adv Lighting" ?>
+    <title>Advanced Lighting</title>
+    <partintro>
+        <para>Simple diffuse lighting and directional shadows are useful, but better, more
+            effective lighting models and patterns exist. These tutorials will explore those,
+            from Phong lighting to reflections to HDR and blooming.</para>
+    </partintro>
+</part>

File Documents/Basics.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<part>
+    <?dbhtml filename="Basics.html" dir="Basics"?>
+    <info>
+        <title>The Basics</title>
+    </info>
+    <partintro>
+        <para>These tutorials involve the most basic operations for OpenGL. They deal with the
+            core of the OpenGL pipeline, providing fundamental information about how OpenGL
+            works, how its pipeline works, and what the basic flow of information within OpenGL
+            is.</para>
+    </partintro>
+    <xi:include href="Basics/tutorial 00.xml"/>
+    <xi:include href="Basics/tutorial 01.xml"/>
+    <xi:include href="Basics/tutorial 02.xml"/>
+</part>

File Documents/Framebuffer.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<part>
+    <?dbhtml filename="Framebuffer.html" dir="Framebuffer" ?>
+    <title>Framebuffer</title>
+    <partintro>
+        <para>Render targets and framebuffer blending are key components of many advanced
+            effects. These tutorials will cover many per-framebuffer operations, from blending
+            to render targets.</para>
+    </partintro>
+</part>

File Documents/Illumination.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<part>
+    <?dbhtml filename="Illumination.html" dir="Illumination" ?>
+    <info>
+        <title>Illumination</title>
+    </info>
+    <partintro>
+        <para>One of the most important aspects of rendering is lighting. Thus far, all of our
+            objects have had a color that is entirely part of the mesh data, pulled from a
+            uniform variable, or computed in an arbitrary way. This makes all of our objects
+            look very flat and unrealistic.</para>
+        <para>Properly modeling the interaction between light and a surface is vital in creating
+            a convincing world. Lighting defines how we see and understand shapes to a large
+            degree. This is the reason why the objects we have used thus far look fairly flat. A
+            curved surface appears curved to us because of how the light plays over the surface.
+            The same goes for a flat surface. Without this visual hinting, surfaces appear flat
+            even when they are modeled with many triangles that form a seemingly curved
+            polygonal mesh.</para>
+        <para>A proper lighting model makes objects appear real. A poor or inconsistent lighting
+            model shows the virtual world to be the forgery that it is. The tutorials in this
+            section will introduce some light/surface models and explain how to implement
+            them.</para>
+    </partintro>
+    <xi:include href="Illumination/tutorial 09.xml"/>
+    <xi:include href="Illumination/tutorial 10.xml"/>
+    <xi:include href="Illumination/tutorial 11.xml"/>
+    <xi:include href="Illumination/tutorial 12.xml"/>
+    <xi:include href="Illumination/tutorial 13.xml"/>
+</part>

File Documents/Positioning.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<part>
+    <?dbhtml filename="Positioning.html" dir="Positioning" ?>
+    <info>
+        <title>Positioning</title>
+    </info>
+    <partintro>
+        <para>These tutorials give the reader information about how objects are positioned in 3D
+            graphics and OpenGL. They deal with transforming the position of objects, as well
+            as doing what is necessary to make those objects appear as though they are in a
+            3-dimensional space.</para>
+    </partintro>
+    <xi:include href="Positioning/tutorial 03.xml"/>
+    <xi:include href="Positioning/tutorial 04.xml"/>
+    <xi:include href="Positioning/tutorial 05.xml"/>
+    <xi:include href="Positioning/tutorial 06.xml"/>
+    <xi:include href="Positioning/tutorial 07.xml"/>
+    <xi:include href="Positioning/tutorial 08.xml"/>
+</part>

File Documents/Texturing/Tutorial 14.xml

     <section>
         <?dbhtml filename="Tut14 The First Texture.html" ?>
         <title>The First Texture</title>
-        <para>A <glossterm>texture</glossterm> is an object that contains one or more arrays of some
-            dimensionality. The storage for a texture is owned by OpenGL and the GPU, much like they
-            own the storage for buffer objects. Textures can be accessed in a shader, which fetches
-            data from the texture at a specific location within the texture's arrays. The process of
-            fetching data from a texture is called <glossterm>sampling.</glossterm></para>
+        <para>A <glossterm>texture</glossterm> is an object that contains one or more arrays of
+            data, with all of the arrays having some dimensionality. The storage for a texture is
+            owned by OpenGL and the GPU, much like they own the storage for buffer objects. Textures
+            can be accessed in a shader, which fetches data from the texture at a specific location
+            within the texture's arrays.</para>
         <para>The arrays within a texture are called <glossterm>images</glossterm>; this is a legacy
             term, but it is what they are called. Textures have a <glossterm>texture
                 type</glossterm>; this defines characteristics of the texture as a whole, like the
                 The first parameter specifies the texture's type. Note that once you have bound a
                 texture to the context with a certain type, it must <emphasis>always</emphasis> be
                 bound with that same type. <literal>GL_TEXTURE_1D</literal> means that the texture
-                contains one-dimensional arrays of data.</para>
-            <para>The next function, <function>glTexImage1D</function> is how we pass data to the
-                texture. It has a lot of parameters. The first specifies the type of the currently
-                bound texture. As with buffer objects, multiple textures can be bound to different
-                texture type locations. So you could have a texture bound to
-                    <literal>GL_TEXTURE_1D</literal> and another boudn to
+                contains one-dimensional images.</para>
+            <para>The next function, <function>glTexImage1D</function>, is how we allocate storage
+                for the texture and pass data to the texture. It is similar to
+                    <function>glBufferData</function>, though it has many more parameters. The first
+                specifies the type of the currently bound texture. As with buffer objects, multiple
+                textures can be bound to different texture type locations. So you could have a
+                texture bound to <literal>GL_TEXTURE_1D</literal> and another bound to
                     <literal>GL_TEXTURE_2D</literal>. But it's really bad form to try to exploit
                 this. It is best to just have one target bound at a time.</para>
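             <para>As a minimal sketch of the pattern just described (the object name
                 <literal>textureObject</literal> is a hypothetical stand-in), creating a texture
                 object and binding it to the <literal>GL_TEXTURE_1D</literal> target looks like
                 this:</para>
             <programlisting language="cpp">// textureObject is a hypothetical name; the tutorial's own code may differ.
 GLuint textureObject;
 glGenTextures(1, &amp;textureObject);
 glBindTexture(GL_TEXTURE_1D, textureObject);</programlisting>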
             <para>The second parameter is something we will talk about in the next tutorial.
                 Despite the fact that a texture object can contain multiple images, the major
                 difference between textures and buffer objects is the arrangement of data as it is
                 stored by the GPU.</para>
             <para>Buffer objects are linear arrays of memory. The data stored by OpenGL must be
-                binary-identical to how the user specifies the data with
+                binary-identical to the data that the user specifies with
                     <function>glBuffer(Sub)Data</function> calls. The format of the data stored in a
-                buffer object is defined external to the buffer object itself. Buffer objects used
+                buffer object is defined externally to the buffer object itself. Buffer objects used
                 for vertex attributes have their formats defined by
                     <function>glVertexAttribPointer</function>. The format for buffer objects that
                 store uniform data is defined by the arrangement of types in a GLSL uniform
                 block.</para>
             <para>There are other ways of using buffer objects that allow OpenGL calls to fill them
-                with data. But even in these cases, the binary format of the data to be stored is
-                very strictly controlled by the user. In all cases, it is the
-                    <emphasis>user's</emphasis> responsibility to make sure that the data stored
-                there uses the format that OpenGL was told to expect. Even when OpenGL itself is
-                generating the data.</para>
+                with data. But in all cases, the binary format of the data to be stored is very
+                strictly controlled by the user. It is the <emphasis>user's</emphasis>
+                responsibility to make sure that the data stored there uses the format that OpenGL
+                was told to expect, even when OpenGL itself is generating the data being
+                stored.</para>
             <para>Textures do not work this way. The format of an image stored in a texture is
                 controlled by OpenGL itself. The user tells it what format to use, but the specific
                 arrangement of bytes is up to OpenGL. This allows different hardware to store
                 components. Because this parameter does not end in <quote>_INTEGER</quote>, OpenGL
                 knows that the data we are uploading is either a floating-point value or a
                 normalized integer value (which converts to a float when accessed by the
-                user).</para>
+                shader).</para>
             <para>The parameter <literal>GL_UNSIGNED_BYTE</literal> says that each component that we
                 are uploading is stored in an 8-bit unsigned byte. This, plus the pointer to the
                 data, is all OpenGL needs to read our data.</para>
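             <para>Putting these parameters together, the full upload call might look like the
                 following sketch; the names <literal>NUM_TEXELS</literal> and
                 <literal>textureData</literal> are hypothetical stand-ins for our generated
                 data:</para>
             <programlisting language="cpp">// NUM_TEXELS and textureData stand in for the generated Gaussian data.
 glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, NUM_TEXELS, 0,
     GL_RED, GL_UNSIGNED_BYTE, textureData);</programlisting>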
                 format would use <literal>GL_R32F</literal>.</para>
             <para>Note that this perfectly matches the texture data that we generated. We tell
                 OpenGL to make the texture store unsigned normalized 8-bit integers, and we provide
-                unsigned normalized 8-bit integers.</para>
+                unsigned normalized 8-bit integers as the input data.</para>
             <para>This is not strictly necessary. We could have used <literal>GL_R16</literal> as
                 our format instead. OpenGL would have created a texture that contained 16-bit
                 unsigned normalized integers. OpenGL would then have had to convert our input data
                     <type>sampler1D</type>.</para>
             <para>The GLSL sampler type is very unusual. Indeed, it is probably best if you do not
                 think of it as a normal basic type. Think of it instead as a specific hook into
-                the shader that the user can use to supply a texture. The restrictions on sampler
-                types are:</para>
+                the shader that the user can use to supply a texture. The restrictions on variables
+                of sampler types are:</para>
             <itemizedlist>
                 <listitem>
                     <para>Samplers can only be declared at the global scope as
                 creating a sampler:</para>
             <programlisting language="glsl">uniform sampler1D gaussianTexture;</programlisting>
             <para>This creates a sampler for a 1D texture type; the user cannot use any other type
-                of texture with this sampler. This sampler is used in our lighting computation
-                function:</para>
+                of texture with this sampler.</para>
+            <formalpara>
+                <title>Texture Sampling</title>
+                <para>The process of fetching data from a texture, at a particular location, is
+                    called <glossterm>sampling.</glossterm> This is done in the shader as part of
+                    the lighting computation:</para>
+            </formalpara>
             <example>
                 <title>Shader Texture Access</title>
                 <programlisting language="glsl">vec3 halfAngle = normalize(lightDir + viewDirection);
 float texCoord = dot(halfAngle, surfaceNormal);
 float gaussianTerm = texture(gaussianTexture, texCoord).r;
 
 gaussianTerm = cosAngIncidence != 0.0 ? gaussianTerm : 0.0;</programlisting>
             </example>
-            <para>The third line is where the texture is accessed. The value used to access a
-                texture is called a <glossterm>texture coordinate</glossterm>. Since our texture has
-                only one dimension, our texture coordinate also has one dimension. The first
-                parameter to the <function>texture</function> function is the sampler to fetch from;
-                the second parameter is the texture coordinate that determines from where in that
-                texture to fetch.</para>
+            <para>The third line is where the texture is accessed. The function
+                    <function>texture</function> accesses the texture denoted by the first parameter
+                (the sampler to fetch from). It accesses the value of the texture from the location
+                specified by the second parameter. This second parameter, the location to fetch
+                from, is called the <glossterm>texture coordinate</glossterm>. Since our texture has
+                only one dimension, our texture coordinate also has one dimension.</para>
             <para>The <function>texture</function> function for 1D textures expects the texture
                 coordinate to be normalized. This means something similar to normalizing integer
                 values. A normalized texture coordinate is a texture coordinate where the coordinate
                 Samplers use the same prefixes as <type>vec</type> types. An <type>ivec4</type>
                 represents a vector of 4 integers, while a <type>vec4</type> represents a vector of
                 4 floats. Thus, an <type>isampler1D</type> represents a texture that returns
-                integers, while a <type>sampler1D</type> is a texture that returns floats. Since our
-                texture's format uses 8-bit normalized unsigned integers, which is just a cheap way
-                to store floats, this matches everything correctly.</para>
+                integers, while a <type>sampler1D</type> is a texture that returns floats. Recall
+                that 8-bit normalized unsigned integers are just a cheap way to store floats, so
+                this matches everything correctly.</para>
         </section>
         <section>
             <title>Texture Binding</title>
-            <para>At this point, we have a texture object, an OpenGL object that holds our image
-                data with a specific format. We have a shader that contains a sampler uniform that
-                represents a texture being accessed by our shader. How do we associate a texture
-                object with a sampler in the shader?</para>
+            <para>We have a texture object, an OpenGL object that holds our image data with a
+                specific format. We have a shader that contains a sampler uniform that represents a
+                texture being accessed by our shader. How do we associate a texture object with a
+                sampler in the shader?</para>
             <para>Although the API is slightly more obfuscated due to legacy issues, this
-                association is made essentially the same way as UBOs.</para>
+                association is made essentially the same way as with uniform buffer objects.</para>
             <para>The OpenGL context has an array of slots called <glossterm>texture image
                     units</glossterm>, also known as <glossterm>image units</glossterm> or
                     <glossterm>texture units</glossterm>. Each image unit represents a single
             <para>Though the idea is essentially the same, there are many API differences between
                 the UBO mechanism and the texture mechanism. We will start with setting the sampler
                 uniform to an image unit.</para>
-            <para>With UBOs, this used a different API from regular uniforms. With texture objects,
-                it does not:</para>
+            <para>With UBOs, this used a different API from regular uniforms. Because samplers are
+                actual uniforms, the sampler API is just the uniform API:</para>
             <programlisting language="cpp">GLuint gaussianTextureUnif = glGetUniformLocation(data.theProgram, "gaussianTexture");
 glUseProgram(data.theProgram);
 glUniform1i(gaussianTextureUnif, g_gaussTexUnit);</programlisting>
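             <para>The other half of the association, attaching a texture object and a sampler
                 object to the image unit itself, might be sketched as follows, assuming the
                 hypothetical object names <literal>gaussianTexture</literal> and
                 <literal>gaussianSampler</literal>:</para>
             <programlisting language="cpp">// gaussianTexture and gaussianSampler are hypothetical object names.
 glActiveTexture(GL_TEXTURE0 + g_gaussTexUnit);
 glBindTexture(GL_TEXTURE_1D, gaussianTexture);
 glBindSampler(g_gaussTexUnit, gaussianSampler);</programlisting>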
                 coordinates should be clamped to the range of the texture.</para>
             <para>OpenGL names the components of the texture coordinate <quote>strq</quote> rather
                 than <quote>xyzw</quote> or <quote>uvw</quote> as is common. Indeed, OpenGL has two
-                different names for the components: <quote>strq</quote> is used in the API, but
-                    <quote>stpq</quote> is used in shaders. Much like <quote>rgba</quote>, you can
-                use <quote>stpq</quote> as swizzle selectors for any vector instead of the
+                different names for the components: <quote>strq</quote> is used in the main API, but
+                    <quote>stpq</quote> is used in GLSL shaders. Much like <quote>rgba</quote>, you
+                can use <quote>stpq</quote> as swizzle selectors for any vector instead of the
                 traditional <quote>xyzw</quote>.</para>
             <note>
                 <para>The reason for the odd naming is that OpenGL tries to keep vector suffixes
                     from conflicting. <quote>uvw</quote> does not work because <quote>w</quote> is
-                    already part of the <quote>xyzw</quote> suffix. In GLSL, <quote>strq</quote>
-                    conflicts with <quote>rgba</quote>, so they had to go with <quote>stpq</quote>
-                    instead.</para>
+                    already part of the <quote>xyzw</quote> suffix. In GLSL, the <quote>r</quote> in
+                        <quote>strq</quote> conflicts with <quote>rgba</quote>, so they had to go
+                    with <quote>stpq</quote> instead.</para>
             </note>
             <para>The <literal>GL_TEXTURE_WRAP_S</literal> parameter defines how the
                     <quote>s</quote> component of the texture coordinate will be adjusted if it
             <note>
                 <para>Technically, we do not have to use a sampler object. The parameters we use for
                     samplers could have been set into the texture object directly with
-                    glTexParameter. Sampler objects have a lot of advantages over setting the value
-                    in the texture, and binding a sampler object overrides parameters set in the
-                    texture. There are still some parameters that must be in the texture, and those
-                    are not overridden by the sampler object.</para>
+                        <function>glTexParameter</function>. Sampler objects have a lot of
+                    advantages over setting the value in the texture, and binding a sampler object
+                    overrides parameters set in the texture. There are still some parameters that
+                    must be in the texture object, and those are not overridden by the sampler
+                    object.</para>
             </note>
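             <para>As a sketch of sampler object usage (the object name is hypothetical, and the
                 parameter values are merely examples), creating and filling in a sampler might
                 look like this:</para>
             <programlisting language="cpp">// A hypothetical sampler setup; the tutorial's own parameters may differ.
 GLuint gaussianSampler;
 glGenSamplers(1, &amp;gaussianSampler);
 glSamplerParameteri(gaussianSampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 glSamplerParameteri(gaussianSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glSamplerParameteri(gaussianSampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);</programlisting>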
         </section>
         <section>
                 now.</para>
             <para>Previously, we assumed that the specular shininess was a fixed value for the
                 entire surface. Now that our shininess values can come from a texture, this is not
-                the case. With the fixed shininess, we had a function that took one variable: the
+                the case. With the fixed shininess, we had a function that took one parameter: the
                 dot-product of the half-angle vector with the normal. But with a variable shininess,
-                we have a function of two variables. Functions of two variables are often called
+                we have a function of two parameters. Functions of two variables are often called
                     <quote>two dimensional.</quote></para>
             <para>It is therefore not surprising that we model such a function with a
                 two-dimensional texture. The S texture coordinate represents the dot-product, while
                 </glossdef>
             </glossentry>
             <glossentry>
-                <glossterm>sampling</glossterm>
-                <glossdef>
-                    <para>The process of accessing a particular memory location in one of the images
-                        of a texture.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
                 <glossterm>image</glossterm>
                 <glossdef>
                     <para>An array of data of a particular dimensionality. Images can be 1D, 2D, or
-                        3D in size.</para>
+                        3D in size. The points of data in an image are 4-vector values, which can
+                        be floating-point or integer values.</para>
                 </glossdef>
             </glossentry>
             <glossentry>
             <glossentry>
                 <glossterm>texel</glossterm>
                 <glossdef>
-                    <para>A pixel within a texture. Used to distinguish between a pixel in a
-                        destination image.</para>
+                    <para>A pixel within a texture image. The term is used to distinguish pixels in
+                        texture images from pixels in a destination image.</para>
                 </glossdef>
             </glossentry>
             <glossentry>
                 </glossdef>
             </glossentry>
             <glossentry>
+                <glossterm>sampling</glossterm>
+                <glossdef>
+                    <para>The process of accessing data from one or more of the images of the
+                        texture, using a specific texture coordinate.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
                 <glossterm>texture coordinate</glossterm>
                 <glossdef>
                     <para>A value that is used to access locations within a texture. Each texture
                         type defines what dimensionality of texture coordinate it takes (note that
-                        the texture type may define a different dimensionality from the image
-                        dimensionality). Texture coordinates are often normalized on the range [0,
-                        1]. This allows texture coordinates to ignore the size of the specific
-                        texture they are used with.</para>
+                        the texture type may define a different texture coordinate dimensionality
+                        from the image dimensionality). Texture coordinates are often normalized on
+                        the range [0, 1]. This allows texture coordinates to ignore the size of the
+                        specific texture they are used with.</para>
                     <para>Texture coordinates are composed of the S, T, R, and Q components, much
                         like regular vectors are composed of X, Y, Z, and W components. In GLSL, the
                         R component is called <quote>P</quote> instead.</para>

File Documents/Texturing/Tutorial 15.xml

                 </imageobject>
             </mediaobject>
         </figure>
-        <para>The dot represents the texture coordinate's location on the texture. The square is the
+        <para>The dot represents the texture coordinate's location on the texture. The box is the
             area that the fragment covers. The problem happens because a fragment area mapped into
             the texture's space may cover some white area and some black area. Since nearest only
             picks a single texel, which is either black or white, it does not accurately represent
                 </imageobject>
             </mediaobject>
         </figure>
-        <para>The inner square represents the nearest texels, while the outer square represents the
-            entire fragment mapped area. We can see that the value we get with nearest sampling will
-            be pure white, since the four nearest values are white. But the value we should get
-            based on the covered area is some shade of gray.</para>
+        <para>The inner box represents the nearest texels, while the outer box represents the entire
+            fragment mapped area. We can see that the value we get with nearest sampling will be
+            pure white, since the four nearest values are white. But the value we should get based
+            on the covered area is some shade of gray.</para>
         <para>In order to accurately represent this area of the texture, we would need to sample
             from more than just 4 texels. The GPU is certainly capable of detecting the fragment
             area and sampling enough values from the texture to be representative. But this would be
 
 for(int mipmapLevel = 0; mipmapLevel &lt; pImageSet->GetMipmapCount(); mipmapLevel++)
 {
-    std::auto_ptr&lt;glimg::SingleImage> pImage(pImageSet->GetImage(mipmapLevel, 0, 0));
+    glimg::SingleImage image = pImageSet->GetImage(mipmapLevel, 0, 0);
     glimg::Dimensions dims = image.GetDimensions();
     
     glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_RGB8, dims.width, dims.height, 0,
-        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pImage->GetImageData());
+        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, image.GetImageData());
 }
 
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
                 <function>GetDimensions</function> member of
                 <classname>glimg::SingleImage</classname> returns the size of the particular
             mipmap.</para>
-        <para>The <function>glTexImage2D</function> function takes a mipmap level as the second
-            parameter. The width and height parameters represent the size of the mipmap in question,
-            not the size of the base level.</para>
+        <para>The <function>glTexImage2D</function> function takes the mipmap level to load as the
+            second parameter. The width and height parameters represent the size of the mipmap in
+            question, not the size of the base level.</para>
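         <para>Each mipmap is half the size of the previous one in each dimension, rounded down,
             but never smaller than one texel. A hypothetical helper for computing the size to pass
             for a given level might look like this:</para>
         <programlisting language="cpp">// A hypothetical helper, not part of the tutorial's code; needs &lt;algorithm&gt; for std::max.
 int MipmapSize(int baseSize, int mipmapLevel)
 {
     return std::max(baseSize >> mipmapLevel, 1);
 }</programlisting>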
         <para>Notice that the last statements have changed. The
                 <literal>GL_TEXTURE_BASE_LEVEL</literal> and <literal>GL_TEXTURE_MAX_LEVEL</literal>
             parameters tell OpenGL what mipmaps in our texture can be used. This represents a closed
 glSamplerParameteri(g_samplers[2], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);</programlisting>
         <para>The <literal>GL_LINEAR_MIPMAP_NEAREST</literal> minification filter means the
             following. For a particular call to the GLSL <function>texture</function> function, it
-            will detect which mipmap is the one that is closest to our fragment area. This detection
+            will detect which mipmap is the one that is nearest to our fragment area. This detection
             is based on the angle of the surface relative to the camera's view<footnote>
                 <para>This is a simplification; a more thorough discussion is forthcoming.</para>
             </footnote>. Then, when it samples from that mipmap, it will use linear filtering of the
-            four nearest samples within that mipmap.</para>
+            four nearest samples within that one mipmap.</para>
         <para>If you press the <keycap>3</keycap> key in the tutorial, you can see the effects of
             this filtering mode.</para>
         <figure>
         </figure>
         <para>Now we can really see where the different mipmaps are. They don't quite line up on the
             corners. But remember: this just shows the mipmap boundaries, not the texture
-            coordinates.</para>
+            coordinates themselves.</para>
         <section>
             <title>Special Texture Generation</title>
-            <para>The special mipmap viewing texture is interesting, as it shows an issue you may
-                need to work with when uploading certain textures. Alignment.</para>
+            <para>The special mipmap viewing texture is interesting, as it demonstrates an issue you
+                may need to work with when uploading certain textures: alignment.</para>
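             <para>By default, OpenGL assumes that each row of pixel data it is given begins on a
                 4-byte boundary. When that is not the case, one common remedy (shown here only as a
                 sketch, not necessarily how this tutorial's code handles it) is to loosen the
                 unpack alignment before uploading:</para>
             <programlisting language="cpp">// Tell OpenGL that rows in our pixel data are tightly packed (the default alignment is 4).
 glPixelStorei(GL_UNPACK_ALIGNMENT, 1);</programlisting>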
             <para>The checkerboard texture, though it only stores black and white values, actually
                 has all three color channels, plus a fourth value. Since each channel is stored as
                 8-bit unsigned normalized integers, each pixel takes up 4 * 8 or 32 bits, which is 4
             <para>OpenGL actually allows all combinations of <literal>NEAREST</literal> and
                     <literal>LINEAR</literal> in minification filtering. Using nearest filtering
                 within a mipmap level while linearly filtering between levels
-                    (<literal>GL_NEAREST_MIPMAP_LINEAR</literal>) is not terribly useful
-                however.</para>
+                    (<literal>GL_NEAREST_MIPMAP_LINEAR</literal>) is possible but not terribly
+                useful in practice.</para>
             <sidebar>
                 <title>Filtering Nomenclature</title>
                 <para>If you are familiar with texture filtering from other sources, you may have
                         <literal>GL_LINEAR_MIPMAP_LINEAR</literal> always has a well-defined meaning
                     regardless of the texture's type.</para>
                 <para>Unlike geometry shaders, which ought to have been called primitive shaders,
-                    OpenGL does not enshrine this misnomer into its API. There is no
-                        <literal>GL_TRILINEAR_FILTERING</literal> enum. Therefore, in this book, we
-                    can and will use the proper terms for these.</para>
+                    OpenGL does not enshrine this common misnomer into its API. There is no
+                        <literal>GL_TRILINEAR</literal> enum. Therefore, in this book, we can and
+                    will use the proper terms for these.</para>
             </sidebar>
         </section>
     </section>
                 </imageobject>
             </mediaobject>
         </figure>
-        <para>The large square represents the effective filtering box, while the smaller area is the
-            one that we are actually sampling from. Mipmap filtering can often combine texel values
-            from outside of the sample area, and in this particularly degenerate case, it pulls in
-            texel values from very far outside of the sample area.</para>
+        <para>The large square represents the effective filtering box, while the diagonal area is
+            the one that we are actually sampling from. Mipmap filtering can often combine texel
+            values from outside of the sample area, and in this particularly degenerate case, it
+            pulls in texel values from very far outside of the sample area.</para>
         <para>This happens when the filter box is not a square. A square filter box is said to be
             isotropic: uniform in all directions. Therefore, a non-square filter box is anisotropic.
             Filtering that takes into account the anisotropic nature of a particular filter box is
         <para>Very carefully.</para>
         <para>Imagine a 2x2 pixel area of the screen. Now imagine that four fragment shaders, all
             from the same triangle, are executing for that screen area. Since the fragment shaders
-            are all guaranteed to have the same uniforms and the same code, the only thing that is
-            different is the fragment inputs. And because they are executing the same code, you can
-            conceive of them executing in lockstep. That is, each of them executes the same
-            instruction, on their individual dataset, at the same time.</para>
+            from the same triangle are all guaranteed to have the same uniforms and the same code,
+            the only thing that is different among them is the fragment inputs. And because they are
+            executing the same code, you can conceive of them executing in lockstep. That is, each
+            of them executes the same instruction, on their individual dataset, at the same
+            time.</para>
         <para>Under that assumption, for any particular value in a fragment shader, you can pick the
             corresponding 3 other values in the other fragment shaders executing alongside it. If
             that value is based solely on uniform or constant data, then each shader will have the
-            same value. But if it is based in part on input values, then each shader may have a
-            different value, based on how it was computed and what those inputs were.</para>
+            same value. But if it is based on input values (in part or in whole), then each shader
+            may have a different value, based on how it was computed and what those inputs
+            were.</para>
         <para>So, let's look at the texture coordinate value; the particular value used to access
-            the texture. Each shader has one. If that value is associated with the position of the
-            object, via perspective-correct interpolation and so forth, then the
+            the texture. Each shader has one. If that value is associated with the triangle's
+            vertices, via perspective-correct interpolation and so forth, then the
                 <emphasis>difference</emphasis> between the shaders' values will represent the
             window space geometry of the triangle. There are two dimensions for a difference, and
             therefore there are two differences: the difference in the window space X axis, and the
             that you have 4 fragment shaders all running in lock-step. There are two circumstances
             where that might not happen.</para>
         <para>The most obvious is on the edge of a triangle, where a 2x2 block of neighboring
-            fragments is not possible without being outside of the fragment. This case is actually
-            trivially covered by GPUs. No matter what, the GPU will rasterize each triangle in 2x2
-            blocks. Even if some of those blocks are not actually part of the triangle of interest,
-            they will still get fragment shader time. This may seem inefficient, but it's reasonable
-            enough in cases where triangles are not incredibly tiny or thin, which is quite often.
-            The results produced by fragment shaders outside of the triangle are discarded.</para>
+            fragments is not possible without being outside of the triangle area. This case is
+            actually trivially covered by GPUs. No matter what, the GPU will rasterize each triangle
+            in 2x2 blocks. Even if some of those blocks are not actually part of the triangle of
+            interest, they will still get fragment shader time. This may seem inefficient, but it's
+            reasonable enough in cases where triangles are not incredibly tiny or thin, which is
+            usually the case. The results produced by fragment shaders outside of the triangle are
+            simply discarded.</para>
         <para>The other circumstance is through deliberate user intervention. Each fragment shader
             running in lockstep has the same uniforms but different inputs. Since they have
             different inputs, it is possible for them to execute a conditional branch based on these
             different code. The 4 fragment shaders are no longer in lock-step. How does the GPU
             handle it?</para>
         <para>Well... it doesn't. Dealing with this requires manual user intervention, and it is a
-            topic we will discuss later. Suffice it to say, it screws everything up.</para>
+            topic we will discuss later. Suffice it to say, it makes everything complicated.</para>
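         <para>To give a flavor of that manual intervention (a sketch only; the names
             <literal>texCoord</literal>, <literal>someSampler</literal>, and
             <literal>someCondition</literal> are hypothetical), one can compute the derivatives
             before any divergent branching and then sample with explicit gradients:</para>
         <programlisting language="glsl">// Compute derivatives while all shaders in the 2x2 block are still in lockstep.
 vec2 dx = dFdx(texCoord);
 vec2 dy = dFdy(texCoord);
 vec4 color = vec4(0.0);
 if(someCondition)
     color = textureGrad(someSampler, texCoord, dx, dy);</programlisting>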
     </section>
     <section>
         <?dbhtml filename="Tut15 Performace.html" ?>
         <title>Performance</title>
         <para>Mipmapping has some unexpected performance characteristics. A texture with a full
             mipmap pyramid will take up ~33% more space than just the base level. So there is some
-            memory overhead. The unexpected part is that this is actually a memory vs. performance
+            memory overhead. The unexpected part is that this is actually a memory vs. speed
             tradeoff, as mipmapping usually improves performance.</para>
         <para>If a texture is going to be minified significantly, providing mipmaps is a performance
             benefit. The reason is this: for a highly minified texture, the texture accesses for
             <listitem>
                 <para>Visual artifacts can appear on objects that have textures mapped to them due
                     to the discrete nature of textures. These artifacts are most pronounced when the
-                    texture's apparent size is larger than its actual size or smaller.</para>
+                    texture's mapped size is larger or smaller than its actual size.</para>
             </listitem>
             <listitem>
                 <para>Filtering techniques can reduce these artifacts, transforming visual popping

File Documents/Texturing/Tutorial 16.xml

                     surface.</para>
             </listitem>
         </itemizedlist>
-        <para>Without this knowledge, one could not effectively use those textures. It is vital to
-            know what data a texture stores and what its texture coordinates mean.</para>
+        <para>It is vital to know what data a texture stores and what its texture coordinates mean.
+            Without this knowledge, one could not effectively use those textures.</para>
        <para>Earlier, we discussed how important it is to work with colors in a linear colorspace
            in order to get accurate color reproduction in lighting and rendering. Gamma
            correction was applied to
             the output color, to map the linear RGB values to the gamma-correct RGB values the
             values.</para>
         <para>This means that the color values have <emphasis>already been</emphasis> gamma
             corrected. They cannot be in a linear colorspace, because the person creating the image
-            selected colors based on their appearance. Since the appearance of a color is affected
-            by the non-linearity of the display, the texture artist was effectively selected
-            post-gamma corrected color values. To put it simply, the colors in the texture are
-            already in a non-linear color space.</para>
+            selected those colors based on their appearance. Since the appearance of a color is
+            affected by the non-linearity of the display, the texture artist was effectively
+            selecting post-gamma corrected color values. To put it simply, the colors in the texture
+            are already in a non-linear color space.</para>
         <para>Since the top rectangle does not use gamma correction, it is simply passing the
            pre-gamma corrected color values to the display. It all works itself out. The bottom
             rectangle effectively performs gamma correction twice.</para>
             surface as part of the lighting equation, then there is a major problem. The color
             values retrieved from the texture are non-linear, and all of our lighting equations
                 <emphasis>need</emphasis> the input values to be linear.</para>
-        <para>We could gamma uncorrect the texture values manually, either at load time or in the
+        <para>We could un-gamma correct the texture values manually, either at load time or in the
             shader. But that is entirely unnecessary and wasteful. Instead, we can just tell OpenGL
             the truth: that the texture is not in a linear colorspace.</para>
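        <para>One way to do that, sketched here with the variable names from the earlier loading
            loop, is to declare the texture's internal format to be sRGB:</para>
        <programlisting language="cpp">// GL_SRGB8 tells OpenGL that the stored colors are in the sRGB colorspace.
 glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_SRGB8, dims.width, dims.height, 0,
     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, image.GetImageData());</programlisting>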
         <para>Virtually every image editing program you will ever encounter, from the almighty
         </figure>
        <para>This still looks different from the last tutorial, which naturally tells us that not
             rendering with gamma correction before was actually a problem, as this version looks
-            much better. The take-home point here is that ensuring linearity in all stages of the
-            pipeline is always important. This includes mipmap generation.</para>
+            much better. The grey blends more naturally with the checkerboard, as the grey is now
+            correctly halfway between white and black. The take-home point here is that ensuring
+            linearity in all stages of the pipeline is always important. This includes mipmap
+            generation.</para>
     </section>
     <section>
         <?dbhtml filename="Tut16 Free Gamma Correction.html" ?>

File Documents/Tutorial Documents.xpr

     </meta>
     <projectTree name="Tutorial%20Documents.xpr">
         <folder name="1_Basics">
+            <file name="Basics.xml"/>
             <file name="Basics/Tutorial%2000.xml"/>
             <file name="Basics/Tutorial%2001.xml"/>
             <file name="Basics/Tutorial%2002.xml"/>
         </folder>
         <folder name="2_Positioning">
+            <file name="Positioning.xml"/>
             <file name="Positioning/Tutorial%2003.xml"/>
             <file name="Positioning/Tutorial%2004.xml"/>
             <file name="Positioning/Tutorial%2005.xml"/>
             <file name="Positioning/Tutorial%2008.xml"/>
         </folder>
         <folder name="3_Illumination">
+            <file name="Illumination.xml"/>
             <file name="Illumination/Tutorial%2009.xml"/>
             <file name="Illumination/Tutorial%2010.xml"/>
             <file name="Illumination/Tutorial%2011.xml"/>
             <file name="chunked.css"/>
             <file name="standard.css"/>
         </folder>
+        <file name="Advanced%20Lighting.xml"/>
         <file name="Building%20the%20Tutorials.xml"/>
         <file name="cssDoc.txt"/>
+        <file name="Framebuffer.xml"/>
         <file name="meshFormat.rnc"/>
         <file name="Outline.xml"/>
         <file name="preface.xml"/>

File Documents/Tutorials.xml

     <info>
         <title>Learning Modern 3D Graphics Programming</title>
         <copyright>
-            <year>2011</year>
+            <year>2012</year>
             <holder>Jason L. McKesson</holder>
         </copyright>
         <author>
     </info>
     <xi:include href="preface.xml"/>
     <xi:include href="Building the Tutorials.xml"/>
-    <part>
-        <?dbhtml filename="Basics.html" dir="Basics"?>
-        <info>
-            <title>The Basics</title>
-        </info>
-        <partintro>
-            <para>These tutorials involve the most basic operations for OpenGL. They deal with the
-                core of the OpenGL pipeline. These provide core information about how OpenGL works,
-                how its pipeline works, what the basic flow of information is within OpenGL.</para>
-        </partintro>
-        <xi:include href="Basics/tutorial 00.xml"/>
-        <xi:include href="Basics/tutorial 01.xml"/>
-        <xi:include href="Basics/tutorial 02.xml"/>
-    </part>
-    <part>
-        <?dbhtml filename="Positioning.html" dir="Positioning" ?>
-        <info>
-            <title>Positioning</title>
-        </info>
-        <partintro>
-            <para>These tutorials give the reader information about how objects are positioned in 3D
-                graphics and OpenGL. These deal with transforming the position of objects, as well
-                as doing what is necessary to make those objects appear as though they are in a
-                3-dimensional space.</para>
-        </partintro>
-        <xi:include href="Positioning/tutorial 03.xml"/>
-        <xi:include href="Positioning/tutorial 04.xml"/>
-        <xi:include href="Positioning/tutorial 05.xml"/>
-        <xi:include href="Positioning/tutorial 06.xml"/>
-        <xi:include href="Positioning/tutorial 07.xml"/>
-        <xi:include href="Positioning/tutorial 08.xml"/>
-    </part>
-    <part>
-        <?dbhtml filename="Illumination.html" dir="Illumination" ?>
-        <info>
-            <title>Illumination</title>
-        </info>
-        <partintro>
-            <para>One of the most important aspects of rendering is lighting. Thus far, all of our
-                objects have had a color that is entirely part of the mesh data, pulled from a
-                uniform variable, or computed in an arbitrary way. This makes all of our objects
-                look very flat and unrealistic.</para>
-            <para>Properly modeling the interaction between light and a surface is vital in creating
-                a convincing world. Lighting defines how we see and understand shapes to a large
-                degree. This is the reason why the objects we have used thus far look fairly flat. A
-                curved surface appears curved to us because of how the light plays over the surface.
-                The same goes for a flat surface. Without this visual hinting, surfaces appear flat
-                even when they are modeled with many triangles and yield a seemingly-curved
-                polygonal mesh.</para>
-            <para>A proper lighting model makes objects appear real. A poor or inconsistent lighting
-                model shows the virtual world to be the forgery that it is. The tutorials in this
-                section will introduce some light/surface models and explain how to implement
-                them.</para>
-        </partintro>
-        <xi:include href="Illumination/tutorial 09.xml"/>
-        <xi:include href="Illumination/tutorial 10.xml"/>
-        <xi:include href="Illumination/tutorial 11.xml"/>
-        <xi:include href="Illumination/tutorial 12.xml"/>
-        <xi:include href="Illumination/tutorial 13.xml"/>
-    </part>
+    <xi:include href="Basics.xml"/>
+    <xi:include href="Positioning.xml"/>
+    <xi:include href="Illumination.xml"/>
     <xi:include href="Texturing.xml"/>
-    <part>
-        <?dbhtml filename="Framebuffer.html" dir="Framebuffer" ?>
-        <title>Framebuffer</title>
-        <partintro>
-            <para>Render targets and framebuffer blending are key components to many advanced
-                effects. These tutorials will cover many per-framebuffer operations, from blending
-                to render targets.</para>
-        </partintro>
-    </part>
-    <part>
-        <?dbhtml filename="Advanced Lighting.html" dir="Adv Lighting" ?>
-        <title>Advanced Lighting</title>
-        <partintro>
-            <para>Simple diffuse lighting and directional shadows are useful, but better, more
-                effective lighting models and patterns exist. These tutorials will explore those,
-                from Phong lighting to reflections to HDR and blooming.</para>
-        </partintro>
-    </part>
+    <xi:include href="Framebuffer.xml"/>
+    <xi:include href="Advanced Lighting.xml"/>
     <!--<xi:include href="Optimization.xml"/>-->
     <xi:include href="History of Graphics Hardware.xml"/>
     <xi:include href="Getting Started.xml"/>

File Documents/preface.xml

     <title>About this Book</title>
    <para>Three-dimensional graphics hardware is fast becoming, not merely a staple of computer
         systems, but an indispensable component. Many operating systems directly use and even
-        require some degree of 3D rendering hardware. Even in the increasingly relevant mobile
+        require some degree of 3D rendering hardware. Even in the increasingly important mobile
         computing space, 3D graphics hardware is a standard feature of all but the lowest power
         devices.</para>
     <para>Understanding how to make the most of that hardware is a difficult challenge, particularly