Commits

Jason McKesson committed 735ef54 Merge


Files changed (11)

Documents/Getting Started.xml

                     It provides a way to create windows or full-screen displays. It provides ways to
                     get keyboard and mouse input.</para>
             </formalpara>
-            <para>The difference between them is that, while FreeGLUT owns the message processing
-                loop, GLFW does not. GLFW requires that the user poll it to process messages. This
-                allows the user to maintain reasonably strict timings for rendering. This makes GLFW
-                programs a bit more complicated than FreeGLUT ones (which is why these tutorials use
-                FreeGLUT), it does mean that GLFW would be useful in serious applications.</para>
+            <para>The biggest difference between them is that, while FreeGLUT owns the message
+                processing loop, GLFW does not. GLFW requires that the user poll it to process
+                messages. This allows the user to maintain reasonably strict timings for rendering.
+                While this makes GLFW programs a bit more complicated than FreeGLUT ones (which is
+                why these tutorials use FreeGLUT), it does mean that GLFW would be more useful in
+                serious applications.</para>
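The polling model described above can be sketched in plain C. This is not GLFW's actual API (the real calls, such as glfwPollEvents and glfwWindowShouldClose, require a live window); the helper functions here are hypothetical stand-ins so the shape of the loop can stand alone:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real GLFW calls, which need a window. */
static int frames_rendered = 0;
static bool window_should_close(void) { return frames_rendered >= 3; }
static void poll_events(void)  { /* process pending window/input messages */ }
static void render_frame(void) { ++frames_rendered; }

/* The application owns the loop: it polls for messages and renders on its
   own schedule, which is what allows strict control over frame timing. */
int run_message_loop(void)
{
    while (!window_should_close())
    {
        poll_events();
        render_frame();
    }
    return frames_rendered;
}
```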
+            <para>GLFW also provides more robust input support.</para>
             <para>GLFW uses the zLib license.</para>
             <formalpara>
                 <title>Multimedia Libraries</title>

Documents/History of Graphics Hardware.xml

             (again, for its day).</para>
         <para>The functionality of this card was quite bare-bones from a modern perspective.
             Obviously there was no concept of shaders of any kind. Indeed, it did not even have
-            vertex transformation; the Voodoo Graphics pipeline begins with clip-space values. This
+            vertex transformation; the Voodoo Graphics pipeline began with clip-space values. This
             required the CPU to do vertex transformations. This hardware was effectively just a
             triangle rasterizer.</para>
         <para>That being said, it was quite good for its day. As inputs to its rasterization
             the texture's alpha value. The alpha of the output was controlled with a separate math
             function, thus allowing the user to generate the alpha with different math than the RGB
             portion of the output color. This was the sum total of its fragment processing.</para>
-        <para>It even had framebuffer blending support. Its framebuffer could even support a
+        <para>It had framebuffer blending support. Its framebuffer could even support a
             destination alpha value, though you had to give up having a depth buffer to get it.
             Probably not a good tradeoff. Outside of that issue, its blending support was superior
             even to OpenGL 1.1. It could use different source and destination factors for the alpha
             the evolution of graphics hardware.</para>
         <para>Like other graphics cards of the day, the TNT hardware had no vertex processing.
             Vertex data was in clip-space, as normal, so the CPU had to do all of the transformation
-            and lighting. Where the TNT shone was in its fragment processing.</para>
-        <para>The power of the TNT is in it's name; TNT stands for
-                <acronym>T</acronym>wi<acronym>N</acronym>
-            <acronym>T</acronym>exel. Where other graphics cards could only allow a triangle to use
-            a single texture, the TNT allowed it to use two.</para>
-        <para>This meant that its vertex input data was expanded. Two textures meant two texture
-            coordinates, since each texture coordinate was directly bound to a particular texture.
-            While they were allowing two of things, they also allowed for two per-vertex colors. The
-            idea here has to do with lighting equations.</para>
+            and lighting. Where the TNT shone was in its fragment processing. The power of the TNT
+            is in its name; TNT stands for <acronym>T</acronym>wi<acronym>N</acronym>
+            <acronym>T</acronym>exel. It could access two textures at once. And while the
+            Voodoo II could do that as well, the TNT had much more flexibility in its fragment
+            processing pipeline.</para>
+        <para>In order to accommodate two textures, the vertex input was expanded. Two textures
+            meant two texture coordinates, since each texture coordinate was directly bound to a
+            particular texture. And while they were doubling things, NVIDIA also allowed for two
+            per-vertex colors. The idea here has to do with lighting equations.</para>
         <para>For regular diffuse lighting, the CPU-computed color would simply be dot(N, L),
             possibly with attenuation applied. Indeed, it could be any complicated diffuse lighting
             function, since it was all on the CPU. This diffuse light intensity would be multiplied
                 a selling point (no more having to manually sort blended objects). After rendering
                 that tile, it moves on to the next. These operations can of course be executed in
                 parallel; you can have multiple tiles being rasterized at the same time.</para>
-            <para>The idea behind this to avoid having large image buffers. You only need a few 8x8
-                depth buffers, so you can use very fast, on-chip memory for it. Rather than having
-                to deal with caches, DRAM, and large bandwidth memory channels, you just have a
-                small block of memory where you do all of your logic. You still need memory for
+            <para>The idea behind this is to avoid having large image buffers. You only need a few
+                8x8 depth buffers, so you can use very fast, on-chip memory for it. Rather than
+                having to deal with caches, DRAM, and large bandwidth memory channels, you just have
+                a small block of memory where you do all of your logic. You still need memory for
                 textures and the output image, but your bandwidth needs can be devoted solely to
                 textures.</para>
             <para>For a time, these cards were competitive with the other graphics chip makers.
         <?dbhtml filename="History Unified.html" ?>
         <title>Modern Unification</title>
         <para>Welcome to the modern era. All of the examples in this book are designed on and for
-            this era of hardware, though some of them could run on older ones. The release of the
-            Radeon HD 2000 and GeForce 8000 series cards in 2006 represented unification in more
-            ways than one.</para>
+            this era of hardware, though some of them could run on older ones with some alteration.
+            The release of the Radeon HD 2000 and GeForce 8000 series cards in 2006 represented
+            unification in more ways than one.</para>
         <para>With the prior generations, fragment hardware had certain platform-specific
             peculiarities. While the API kinks were mostly ironed out with the development of proper
             shading languages, there were still differences in the behavior of hardware. While 4
         </itemizedlist>
         <para>Various other limitations were expanded as well.</para>
         <sidebar>
-            <title>Tessellation</title>
+            <title>Post-Modern</title>
            <para>This was not the end of hardware evolution; there has been hardware released in
                recent years. The Radeon HD 5000 and GeForce GT 400 series and above have increased
                rendering features. They're just not as big of a difference compared to what came
                before.</para>
-            <para>The biggest new feature in this hardware is tessellation, the ability to take
-                triangles output from a vertex shader and split them into new triangles based on
-                arbitrary (mostly) shader logic. This sounds like what geometry shaders can do, but
-                it is different.</para>
+            <para>One of the biggest new features in this hardware is tessellation, the ability to
+                take triangles output from a vertex shader and split them into new triangles based
+                on arbitrary (mostly) shader logic. This sounds like what geometry shaders can do,
+                but it is different.</para>
             <para>Tessellation is actually something that ATI toyed around with for years. The
                 Radeon 9700 had tessellation support with something they called PN triangles. This
                 was very automated and not particularly useful. The entire Radeon HD 2000-4000 cards
                 primitive, based on the values of the primitive being tessellated. The geometry
                 shader still exists; it is executed after the final tessellation shader
                 stage.</para>
-            <para>Tessellation is not covered in this book for a few reasons. First, there is not as
-                much hardware out there that supports it. Sticking to OpenGL 3.3 meant casting a
-                wider net; requiring OpenGL 4.1 (which includes tessellation) would have meant fewer
-                people could run those tutorials.</para>
-            <para>Second, tessellation is not that important. That's not to say that it is not
-                important or a worthwhile feature. But it really is not something that matters a
-                great deal.</para>
+            <para>Another feature is the ability to have a shader arbitrarily read
+                    <emphasis>and</emphasis> write to images in textures. This is not merely
+                sampling from a texture; it uses a different interface, and it means very different
+                things. This form of image data access breaks many of the rules around OpenGL, and
+                it is very easy to use the feature incorrectly.</para>
+            <para>These are not covered in this book for a few reasons. First, there is not as much
+                hardware out there that supports them (though this is increasing daily). Sticking to
+                OpenGL 3.3 meant casting a wider net; requiring OpenGL 4.2 (which includes these
+                features) would have meant fewer people could run those tutorials.</para>
+            <para>Second, these features are quite complicated to use. Any discussion of
+                tessellation would require discussing tessellation algorithms, which are all quite
+                complicated. Any discussion of image reading/writing would require talking about
+                shader hardware at a level of depth that is pretty well beyond the beginner
+                level.</para>
         </sidebar>
     </section>
 </appendix>

Documents/Illumination/Tutorial 09.xml

                 surface really is curved, we need to do something else.</para>
             <para>Instead of using the triangle's normal, we can assign to each vertex the normal
                 that it <emphasis>would</emphasis> have had on the surface it is approximating. That
-                is, while the mesh is an approximating, the normal for a vertex is the actual normal
+                is, while the mesh is an approximation, the normal for a vertex is the actual normal
                 for that surface. This actually works out surprisingly well.</para>
             <para>This means that we must add to the vertex's information. In past tutorials, we
                 have had a position and sometimes a color. To that information, we add a normal. So
                 transforming positions, the fourth component was 1.0; this was used so that the
                 translation component of the matrix transformation would be added to each
                 position.</para>
-            <para>Vectors represent directions, not absolute positions. And while rotating or
+            <para>Normals represent directions, not absolute positions. And while rotating or
                 scaling a direction is a reasonable operation, translating it is not. Now, we could
                 just adjust the matrix to remove all translations before transforming our light into
                 camera space. But that's highly unnecessary; we can simply put 0.0 in the fourth
         <para>In the pure-diffuse case, the light intensity is full white. But in the ambient case,
             we deliberately set the diffuse intensity to less than full white. This is very
             intentional.</para>
-        <para>We will talk more about this issue in the next future, but it is very critical that
+        <para>We will talk more about this issue in the near future, but it is very critical that
             light intensity values not exceed 1.0. This includes <emphasis>combined</emphasis>
             lighting intensity values. OpenGL clamps colors that it writes to the output image to
             the range [0, 1]. So any light intensity that exceeds 1.0, whether alone or combined

Documents/Illumination/Tutorial 10.xml

             </mediaobject>
         </informalequation>
         <para>This equation computes physically realistic light attenuation for point-lights. But it
-            often does not look very good. Lights seem to have a much sharper intensity falloff than
+            often does not look very good. The equation tends to create a sharper intensity falloff than
             one would expect.</para>
         <para>There is a reason for this, but it is not one we are ready to get into quite yet. What
             is often done is to simply use the inverse rather than the inverse-square of the
                     physically accurate, but it can look reasonably good.</para>
                 <para>We simply do linear interpolation based on the distance. When the distance is
                     0, the light has full intensity. When the distance is beyond a given distance,
-                    the maximum light range (which varies per-light), the intensity is 1.</para>
+                    the maximum light range (which varies per-light), the intensity is 0.</para>
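The interpolation just described can be written directly; a minimal sketch (the function name and parameters are ours, not the tutorial's actual shader code):

```c
/* Linear attenuation: full intensity at distance 0, falling off to 0 at the
   light's maximum range (a per-light value). */
float linear_attenuation(float distance, float max_range)
{
    if (distance >= max_range)
        return 0.0f;
    return 1.0f - (distance / max_range);
}
```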
                 <para>Note that <quote>reasonably good</quote> depends on your needs. The closer you
                     get in other ways to providing physically accurate lighting, the closer you get
                     to photorealism, the less you can rely on less accurate phenomena. It does no

Documents/Illumination/Tutorial 12.xml

         <para>The biggest thing here is that we want the scene to dynamically change lighting
             levels. Specifically, we want a full day/night cycle. The sun will sink, gradually
             losing intensity until it has none. There, it will remain until the dawn of the next
-            day, where it will gain strength until full and rise again. The other lights should be
+            day, where it will gain strength and rise again. The other lights should be
             much weaker in overall intensity than the sun.</para>
         <para>One thing that this requires is a dynamic ambient lighting range. Remember that the
             ambient light is an attempt to resolve the global illumination problem: that light

Documents/Positioning/Tutorial 05.xml

                 <literal>GL_NOTEQUAL</literal>. The test function puts the incoming fragment's depth
             on the left of the equation and on the right is the depth from the depth buffer. So
             GL_LESS means that, when the incoming fragment's depth is less than the depth from the
-            depth buffer, the incoming fragment is not written.</para>
+            depth buffer, the incoming fragment is written.</para>
         <para>With the fragment depth being something that is part of a fragment's output, you might
             imagine that this is something you have to compute in a fragment shader. You certainly
             can, but the fragment's depth is normally just the window-space Z coordinate of the
                 farthest. However, if our clip-space Z values were negated, the depth of 1 would be
                 closest to the view and the depth of 0 would be farthest. Yet, if we flip the
                 direction of the depth test (GL_LESS to GL_GREATER, etc), we get the exact same
-                result. So it's really just a convention. Indeed, flipping the sign of Z and the
-                depth test was once a vital performance optimization for many games.</para>
+                result. Similarly, if we reverse the glDepthRange mapping, so that 1 corresponds to
+                zNear and 0 corresponds to zFar, we get the same result by using GL_GREATER. So it's
+                really just a convention. Indeed, flipping the depth range and the depth test every
+                frame was once a vital performance optimization for many games.</para>
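The convention can be illustrated with a small sketch; the enum and function here are hypothetical, standing in for the fixed-function comparison the hardware performs:

```c
#include <stdbool.h>

/* The incoming fragment's depth goes on the left of the comparison and the
   stored depth-buffer value on the right, as the text describes. */
typedef enum { TEST_LESS, TEST_GREATER } DepthFunc;

bool depth_test_passes(DepthFunc func, float incoming, float stored)
{
    return (func == TEST_LESS) ? (incoming < stored) : (incoming > stored);
}
```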
         </section>
         <section>
             <title>Rendering with Depth</title>

Documents/Positioning/Tutorial 07.xml

                 the square at the same distance from the camera as the camera would be from the
                 target point.</para>
             <para>Being able to work directly in camera space like this is also a quite useful
-                technique. It allows the object to be positions relative to the camera, so that no
+                technique. It allows the object to be positioned relative to the camera, so that no
                 matter how the camera moves relative to the world, the object will always appear
                 fixed. It will also always appear facing the camera.</para>
         </section>

Documents/Texturing.xml

             teach you about textures is that they are not <emphasis>that</emphasis> important. What
             you have learned is how to think about solving graphics problems without
             textures.</para>
-        <para>Many graphics texts overemphasize the importance of textures; most of them introduce
-            textures before even talking about lighting. This is mostly a legacy of the past. In the
-            older days, before the availability real programmable hardware, you needed textures to
-            do anything of real importance in graphics rendering. Textures were used to simulate
-            lighting and various other effects. If you wanted to do anything like per-fragment
-            lighting, you had to use textures to do it.</para>
+        <para>Many graphics texts overemphasize the importance of textures. This is mostly a legacy
+            of the past. In the older days, before the availability of real programmable hardware,
+            you needed textures to do anything of real importance in graphics rendering. Textures
+            were used to simulate lighting and various other effects. If you wanted to do anything
+            like per-fragment lighting, you had to use textures to do it.</para>
         <para>Yes, textures are important for creating details in rendered images. They are
             important for being able to vary material parameters over a polygonal surface. And they
             have value in other areas as well. But there is so much more to rendering than textures,

Documents/Texturing/Tutorial 14.xml

                 represents a texture being accessed by our shader. How do we associate a texture
                 object with a sampler in the shader?</para>
             <para>Although the API is slightly more obfuscated due to legacy issues, this
-                association is made essentially the same was as for UBOs.</para>
+                association is made essentially the same way as it is for UBOs.</para>
             <para>The OpenGL context has an array of slots called <glossterm>texture image
                     units</glossterm>, also known as <glossterm>image units</glossterm> or
                     <glossterm>texture units</glossterm>. Each image unit represents a single

Documents/Texturing/Tutorial 15.xml

-<?xml version="1.0" encoding="UTF-8"?>
-<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
-<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
-<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
-    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
-    <?dbhtml filename="Tutorial 15.html" ?>
-    <title>Many Images</title>
-    <para>In the last tutorial, we looked at textures that were not pictures. Now, we will look at
-        textures that are pictures. However, unlike the last tutorial, where the textures
-        represented some parameter in the light equation, here, we will just be directly outputting
-        the values read from the texture.</para>
-    <sidebar>
-        <title>Graphics Fudging</title>
-        <para>Before we begin however, there is something you may need to do. When you installed
-            your graphics drivers, installed along with it was an application that allows you to
-            provide settings for your graphics driver. This affects facets of how graphics
-            applications render and so forth.</para>
-        <para>Thus far, most of those settings have been irrelevant to us because everything we have
-            done has been entirely in our control. The OpenGL specification defined exactly what
-            could and could not happen, and outside of actual driver bugs, the results we produced
-            are reproducible and exact across hardware.</para>
-        <para>That is no longer the case as of this tutorial.</para>
-        <para>Texturing has long been a place where graphics drivers have been given room to play
-            and fudge results. The OpenGL specification plays fast-and-loose with certain aspects of
-            texturing. And with the driving need for graphics card makers to have high performance
-            and high image quality, graphics driver writers can, at the behest of the user, simply
-            ignore the OpenGL spec with regard to certain aspects of texturing.</para>
-        <para>The image quality settings in your graphics driver provide control over this. They are
-            ways for you to tell graphics drivers to ignore whatever the application thinks it
-            should do and instead do things their way. That is fine for a game, but right now, we
-            are learning how things work. If the driver starts pretending that we set some parameter
-            that we clearly did not, it will taint our results and make it difficult to know what
-            parameters cause what effects.</para>
-        <para>Therefore, you will need to go into your graphics driver application and change all of
-            those settings to the value that means to do what the application says. Otherwise, the
-            visual results you get for the following code may be very different from the given
-            images. This includes settings for antialiasing.</para>
-    </sidebar>
-    <section>
-        <?dbhtml filename="Tut15 Playing Checkers.html" ?>
-        <title>Playing Checkers</title>
-        <para>We will start by drawing a single large, flat plane. The plane will have a texture of
-            a checkerboard drawn on it. The camera will hover above the plane, looking out at the
-            horizon as if the plane were the ground. This is implemented in the <phrase
-                role="propername">Many Images</phrase> tutorial project.</para>
-        <!--TODO: Picture of the Many Images tutorial, basic view.-->
-        <para>The camera is automatically controlled, though its motion can be paused with the
-                <keycap>P</keycap> key. The other functions of the tutorial will be explained as we
-            get to them.</para>
-        <para>If you look at the <filename>BigPlane.xml</filename> file, you will find that the
-            texture coordinates are well outside of the [0, 1] range we are used to. They span from
-            [-64, 64] now, but the texture itself is only valid within the [0, 1] range.</para>
-        <para>Recall from the last tutorial that the sampler object has a parameter that controls
-            what texture coordinates outside of the [0, 1] range mean. This tutorial uses many
-            samplers, but all of our samplers use the same S and T wrap modes:</para>
-        <programlisting>glSamplerParameteri(g_samplers[samplerIx], GL_TEXTURE_WRAP_S, GL_REPEAT);
-glSamplerParameteri(g_samplers[samplerIx], GL_TEXTURE_WRAP_T, GL_REPEAT);</programlisting>
-        <para>We set the S and T wrap modes to <literal>GL_REPEAT</literal>. This means that values
-            outside of the [0, 1] range wrap around to values within the range. So a texture
-            coordinate of 1.1 becomes 0.1, and a texture coordinate of -0.1 becomes 0.9. The idea is
-            to make it as though the texture were infinitely large, with infinitely many copies
-            repeating over and over.</para>
-        <note>
-            <para>It is perfectly legitimate to set the texture coordinate wrapping modes
-                differently for different coordinates. Well, usually; this does not work for certain
-                texture types, but only because they take texture coordinates with special meanings.
-                For them, the wrap modes are ignored entirely.</para>
-        </note>
-        <para>You may toggle between two meshes with the <keycap>Y</keycap> key. The alternative
-            mesh is a long, square corridor.</para>
-        <para>The shaders used here are very simple. The vertex shader takes positions and texture
-            coordinates as inputs and outputs the texture coordinate directly. The fragment shader
-            takes the texture coordinate, fetches a texture with it, and writes that color value as
-            output. Not even gamma correction is used.</para>
-        <para>The texture in question is 128x128 in size, with 4 alternating black and white squares
-            on each side.</para>
-    </section>
-    <section>
-        <?dbhtml filename="Tut15 Magnification.html" ?>
-        <title>Linear Filtering</title>
-        <para>While this example certainly draws a checkerboard, you can see that there are some
-            visual issues. We will start finding solutions to these, with the least obvious glitches
-            first.</para>
-        <para>Take a look at one of the squares at the very bottom of the screen. Notice how the
-            line looks jagged as it moves to the left and right. You can see the pixels of it sort
-            of crawl up and down as it shifts around on the plane.</para>
-        <para>This is caused by the discrete nature of our texture accessing. The texture
-            coordinates are all in floating-point values. The GLSL <function>texture</function>
-            function internally converts these texture coordinates to specific texel values within
-            the texture. So what value do you get if the texture coordinate lands halfway between
-            two texels?</para>
-        <para>That is governed by a process called <glossterm>texture filtering</glossterm>.
-            Filtering can happen in two directions: magnification and minification. Magnification
-            happens when the texture mapping makes the texture appear bigger than its actual
-            resolution. If you get closer to the texture, relative to its mapping, then the texture
-            is magnified relative to its natural resolution. Minification is the opposite: when the
-            texture is being shrunken relative to its natural resolution.</para>
-        <para>In OpenGL, magnification and minification filtering are each set independently. That
-            is what the <literal>GL_TEXTURE_MAG_FILTER</literal> and
-                <literal>GL_TEXTURE_MIN_FILTER</literal> sampler parameters control. We are
-            currently using <literal>GL_NEAREST</literal> for both; this is called
-                <glossterm>nearest filtering</glossterm>. This mode means that each texture
-            coordinate picks the texel value that it is nearest to. For our checkerboard, that means
-            that we will get either black or white.</para>
-        <para>Now this may sound fine, since our texture is a checkerboard and only has two actual
-            colors. However, it is exactly this discrete sampling that gives rise to the pixel crawl
-            effect. A texture coordinate that is half-way between the white and the black is either
-            white or black; a small change in the camera causes an instant pop from black to white
-            or vice-versa.</para>
-        <para>Each fragment being rendered takes up a certain area of space on the screen: the area
-            of the destination pixel for that fragment. The texture mapping of the rendered surface
-            to the texture gives a texture coordinate for each point on the surface. But a pixel is
-            not a single, infinitely small point on the surface; it represents some finite area of
-            the surface.</para>
-        <para>Therefore, we can use the texture mapping in reverse. We can take the four corners of
-            a pixel area and find the texture coordinates from them. The area of this 4-sided
-            figure, in the space of the texture, is the area of the texture that is being mapped to
-            the screen pixel. With a perfect texture accessing system, the total color of that area
-            would be the value we get from the GLSL <function>texture</function> function.</para>
-        <figure>
-            <title>Nearest Sampling</title>
-            <mediaobject>
-                <imageobject>
-                    <imagedata fileref="NearestSampleDiag.svg"/>
-                </imageobject>
-            </mediaobject>
-        </figure>
-        <para>The dot represents the texture coordinate's location on the texture. The square is the
-            area that the fragment covers. The problem happens because a fragment area mapped into
-            the texture's space may cover some white area and some black area. Since nearest only
-            picks a single texel, which is either black or white, it does not accurately represent
-            the mapped area of the fragment.</para>
-        <para>One obvious way to smooth out the differences is to do exactly that. Instead of
-            picking a single sample for each texture coordinate, pick the nearest 4 samples and then
-            interpolate the values based on how close they each are to the texture coordinate. To do
-            this, we set the magnification and minification filters to
-            <literal>GL_LINEAR</literal>.</para>
-        <programlisting>glSamplerParameteri(g_samplers[1], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
-glSamplerParameteri(g_samplers[1], GL_TEXTURE_MIN_FILTER, GL_LINEAR);</programlisting>
-        <para>This is called, surprisingly enough, <glossterm>linear filtering</glossterm>. In our
-            tutorial, press the <keycap>2</keycap> key to see what linear filtering looks like;
-            press <keycap>1</keycap> to go back to nearest sampling.</para>
-        <!--TODO: Picture of linear filtering.-->
-        <para>That looks much better for the squares close to the camera. It creates a bit of
-            fuzziness, but this is generally a lot easier for the viewer to tolerate than pixel
-            crawl. Human vision tends to be attracted to movement, and false movement, like dot
-            crawl, can be distracting.</para>
-    </section>
-    <section>
-        <?dbhtml filename="Tut15 Needs More Pictures.html" ?>
-        <title>Needs More Pictures</title>
-        <para>Speaking of distracting, let's talk about what is going on in the distance. When the
-            camera moves, the more distant parts of the texture look like a jumbled mess. Even when
-            the camera motion is paused, it still doesn't look like a checkerboard.</para>
-        <para>What is going on there is really simple. The way our filtering works is that, for a
-            given texture coordinate, we take either the nearest texel value, or the nearest 4
-            texels and interpolate. The problem is that, for distant areas of our surface, the
-            texture space area covered by our fragment is much larger than 4 texels across.</para>
-        <figure>
-            <title>Large Minification Sampling</title>
-            <mediaobject>
-                <imageobject>
-                    <imagedata fileref="LargeMinificDiag.svg"/>
-                </imageobject>
-            </mediaobject>
-        </figure>
-        <para>The inner square represents the nearest texels, while the outer square represents the
-            entire fragment mapped area. We can see that the value we get with nearest sampling will
-            be pure white, since the four nearest values are white. But the value we should get
-            based on the covered area is some shade of gray.</para>
-        <para>In order to accurately represent this area of the texture, we would need to sample
-            from more than just 4 texels. The GPU would be capable of detecting the fragment area
-            and sampling enough values from the texture to be representative. But this would be
-            exceedingly expensive, both in terms of texture bandwidth and computation.</para>
-        <para>What if, instead of having to sample more texels, we had a number of smaller versions
-            of our texture? The smaller versions effectively pre-compute groups of texels. That way,
-            we could just sample 4 texels from a texture that is close enough to the size of our
-            fragment area.</para>
-        <figure>
-            <title>Mipmapped Minification Sampling</title>
-            <mediaobject>
-                <imageobject>
-                    <imagedata fileref="MipmapDiagram.svg"/>
-                </imageobject>
-            </mediaobject>
-        </figure>
-        <para>These smaller versions of an image are called <glossterm>mipmaps</glossterm>; they are
-            also sometimes called mipmap levels. Previously, it was said that textures can store
-            multiple images. The additional images, for many texture types, are mipmaps. By
-            performing linear sampling against a lower mipmap level, we get a gray value that, while
-            not the exact color the coverage area suggests, is much closer to what we should get
-            than linear filtering on the large mipmap.</para>
-        <para>In OpenGL, mipmaps are numbered starting from 0. The 0 image is the largest mipmap,
-            what is usually considered the main texture image. When people speak of a texture having
-            a certain size, they mean the resolution of mipmap level 0. Each mipmap is half the
-            size of the previous one in each dimension. So if our main image, mipmap level 0, has
-            a size of 128x128, the next mipmap, level 1, is 64x64. The next is 32x32. And so
-            forth, down to 1x1 for the smallest mipmap.</para>
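The length of the chain follows directly from the halving rule: keep dividing the largest dimension by two until it reaches 1. A sketch, using a hypothetical helper that is not part of the tutorial code:

```cpp
#include <algorithm>
#include <cmath>

// Number of mipmap levels in a full chain, down to 1x1.
// A 128x128 texture has levels 0 through 7, i.e. 8 levels.
int MipmapLevelCount(int width, int height)
{
    int largest = std::max(width, height);
    return 1 + (int)std::floor(std::log2((double)largest));
}
```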
-        <para>For textures that are not square (which as we saw in the previous tutorial, is
-            perfectly legitimate), the mipmap chain keeps going until all dimensions are 1. So a
-            texture whose size is 128x16 (remember: the texture's size is the size of the largest
-            mipmap) would have just as many mipmap levels as a 128x128 texture. The mipmap level 4
-            of the 128x16 texture would be 8x1; the next mipmap would be 4x1.</para>
-        <note>
-            <para>It is also perfectly legal to have texture sizes that are not powers of two. For
-                them, mipmap sizes are rounded down. So a 129x129 texture's mipmap 1 will be
-                64x64.</para>
-        </note>
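The size of any given level can be computed directly, with the rounding-down rule falling out of integer division (again, a hypothetical helper):

```cpp
#include <algorithm>

// Size of one dimension of a given mipmap level: each level halves
// the previous one, rounding down, and never drops below 1.
int MipmapDimension(int baseSize, int level)
{
    return std::max(baseSize >> level, 1);
}
```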
-        <para>The DDS image format is one of the few image formats that actually supports storing
-            all of the mipmaps for a texture in the same file. Most image formats only allow one
-            image in a single file. The texture loading code for our 128x128 texture with mipmaps is
-            as follows:</para>
-        <example>
-            <title>DDS Texture Loading with Mipmaps</title>
-            <programlisting language="cpp">std::string filename(LOCAL_FILE_DIR);
-filename += "checker.dds";
-
-std::auto_ptr&lt;glimg::ImageSet> pImageSet(glimg::loaders::dds::LoadFromFile(filename.c_str()));
-
-glGenTextures(1, &amp;g_checkerTexture);
-glBindTexture(GL_TEXTURE_2D, g_checkerTexture);
-
-for(int mipmapLevel = 0; mipmapLevel &lt; pImageSet->GetMipmapCount(); mipmapLevel++)
-{
-    std::auto_ptr&lt;glimg::SingleImage> pImage(pImageSet->GetImage(mipmapLevel, 0, 0));
-    glimg::Dimensions dims = pImage->GetDimensions();
-    
-    glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_RGB8, dims.width, dims.height, 0,
-        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pImage->GetImageData());
-}
-
-glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
-glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, pImageSet->GetMipmapCount() - 1);
-glBindTexture(GL_TEXTURE_2D, 0);</programlisting>
-        </example>
-        <para>Because the file contains multiple mipmaps, we must load each one in turn. The GL
-            Image library considers each mipmap to be its own image. The
-                <function>GetDimensions</function> member of
-                <classname>glimg::SingleImage</classname> returns the size of the particular
-            mipmap.</para>
-        <para>The <function>glTexImage2D</function> function takes a mipmap level as the second
-            parameter. The width and height parameters represent the size of the mipmap in question,
-            not the size of the base level.</para>
-        <para>Notice that the last statements have changed. The
-                <literal>GL_TEXTURE_BASE_LEVEL</literal> and <literal>GL_TEXTURE_MAX_LEVEL</literal>
-            parameters tell OpenGL what mipmaps in our texture can be used. This represents a closed
-            range. Since a 128x128 texture has 8 mipmaps, we use the range [0, 7]. The base level of
-            a texture is the largest usable mipmap level, while the max level is the smallest usable
-            level. It is possible to omit some of the smaller mipmap levels.</para>
-        <para>Filtering based on mipmaps is unsurprisingly named <glossterm>mipmap
-                filtering</glossterm>. This tutorial does not load two checkerboard textures; it
-            only ever uses one checkerboard. The reason mipmaps have not been used until now is
-            because mipmap filtering was not activated. Setting the base and max level is not
-            enough; the sampler object must be told to use mipmap filtering. If it does not, then it
-            will simply use the base level.</para>
-        <para>Mipmap filtering only works for minification, since only minification involves a
-            fragment area that covers multiple texels. To activate this, we use a special
-                <literal>MIPMAP</literal> mode of minification filtering.</para>
-        <programlisting>glSamplerParameteri(g_samplers[2], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
-glSamplerParameteri(g_samplers[2], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);</programlisting>
-        <para>The <literal>GL_LINEAR_MIPMAP_NEAREST</literal> minification filter means the
-            following. For a particular call to the GLSL <function>texture</function> function, it
-            will detect which mipmap is the one that is closest to our fragment area. This detection
-            is based on the angle of the surface relative to the camera's view<footnote>
-                <para>This is a simplification; a more thorough discussion is forthcoming.</para>
-            </footnote>. Then, when it samples from that mipmap, it will use linear filtering of the
-            four nearest samples within that mipmap.</para>
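Roughly speaking (the selection rule in the OpenGL specification is more precise), picking the closest mipmap amounts to rounding the base-2 logarithm of the scale factor: the number of texels the fragment covers per pixel. A hypothetical sketch:

```cpp
#include <algorithm>
#include <cmath>

// Roughly: for a fragment covering `rho` texels per pixel, the level
// whose resolution is the closest match is round(log2(rho)), clamped
// to the levels the texture actually has. The real rule is more exact.
int NearestMipmapLevel(float rho, int levelCount)
{
    float lambda = std::log2(std::max(rho, 1.0f));
    int level = (int)std::lround(lambda);
    return std::min(level, levelCount - 1);
}
```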
-        <para>If you press the <keycap>3</keycap> key in the tutorial, you can see the effects of
-            this filtering mode.</para>
-        <!--TODO: Picture of LINEAR_MIPMAP_NEAREST, #3. Hallway.-->
-        <para>That's a lot more reasonable. It isn't perfect, but it's much better than the random
-            motion in the distance that we have previously seen.</para>
-        <para>It can be difficult to truly understand the effects of mipmap filtering when using
-            normal textures and mipmaps. Therefore, if you press the <keycap>Spacebar</keycap>, the
-            tutorial will switch to a special texture. It is not loaded from a file; it is instead
-            constructed at runtime.</para>
-        <para>Normally, mipmaps are simply smaller versions of larger images, using linear filtering
-            or various other algorithms to compute a reasonable scaled down result. This special
-            texture's mipmaps are all flat colors, but each mipmap has a different color. This makes
-            it much more obvious where each mipmap is.</para>
-        <!--TODO: Picture of the special texture with hallway, linear mipmap nearest.-->
-        <para>Now we can really see where the different mipmaps are.</para>
-        <section>
-            <title>Special Texture Generation</title>
-            <para>The special mipmap viewing texture is interesting, as it shows an issue you may
-                need to work with when uploading certain textures: alignment.</para>
-            <para>The checkerboard texture, though it only stores black and white values, actually
-                has all three color channels, plus a fourth value. Since each channel is stored as
-                8-bit unsigned normalized integers, each pixel takes up 4 * 8 or 32 bits, which is 4
-                bytes.</para>
-            <para>OpenGL image uploading and downloading is based on horizontal rows of image data.
-                Each row is expected to have a certain byte alignment. The OpenGL default is 4
-                bytes; since our pixels are 4 bytes in length, every mipmap will have a line size in
-                bytes that is a multiple of 4 bytes. Even the 1x1 mipmap level is 4 bytes in
-                size.</para>
-            <para>Note that the internal format we provide is <literal>GL_RGB8</literal>, even
-                though the components we are transferring are <literal>GL_BGRA</literal> (the A
-                being the fourth component). This means that OpenGL will, more or less, discard the
-                fourth component we upload. That is fine.</para>
-            <para>The issue with the special texture's pixel data is that it is not 4 bytes in
-                length. The function used to generate a mipmap level of the special texture is as
-                follows:</para>
-            <example>
-                <title>Special Texture Data</title>
-                <programlisting>void FillWithColor(std::vector&lt;GLubyte> &amp;buffer,
-                   GLubyte red, GLubyte green, GLubyte blue,
-                   int width, int height)
-{
-    int numTexels = width * height;
-    buffer.resize(numTexels * 3);
-    
-    std::vector&lt;GLubyte>::iterator it = buffer.begin();
-    while(it != buffer.end())
-    {
-        *it++ = red;
-        *it++ = green;
-        *it++ = blue;
-    }
-}</programlisting>
-            </example>
-            <para>This creates a texture that has 24-bit pixels; each pixel contains 3 bytes.</para>
-            <para>That is fine for any width value that is a multiple of 4. However, if the width is
-                2, then each row of pixel data will be 6 bytes long. That is not a multiple of 4 and
-                therefore breaks alignment.</para>
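The padded row length OpenGL expects can be computed by rounding the raw row size up to the next multiple of the alignment. A hypothetical helper showing the arithmetic:

```cpp
// Bytes OpenGL expects per row of pixel data, after padding the raw
// row size up to the next multiple of GL_UNPACK_ALIGNMENT.
int AlignedRowSize(int width, int bytesPerPixel, int alignment)
{
    int rowBytes = width * bytesPerPixel;
    return ((rowBytes + alignment - 1) / alignment) * alignment;
}
```

With the default alignment of 4, a 2-texel-wide row of 3-byte pixels is assumed to occupy 8 bytes, not the 6 we actually provide; that mismatch is the bug being avoided here.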
-            <para>Therefore, we must change the pixel alignment that OpenGL uses. The
-                    <function>LoadMipmapTexture</function> function is what generates the special
-                texture. One of the first lines is this:</para>
-            <programlisting>GLint oldAlign = 0;
-glGetIntegerv(GL_UNPACK_ALIGNMENT, &amp;oldAlign);
-glPixelStorei(GL_UNPACK_ALIGNMENT, 1);</programlisting>
-            <para>The first two lines get the old alignment, so that we can reset it once we are
-                finished. The last line uses <function>glPixelStorei</function> to set the unpack
-                alignment to 1 byte, which works for rows of any length.</para>
-            <para>Note that the GLImg library does provide an alignment value; it is part of the
-                    <classname>Dimensions</classname> structure of an image. We have simply not used
-                it yet. In the last tutorial, our row widths were aligned to 4 bytes, so there was
-                no chance of a problem. In this tutorial, our image data is 4 bytes per pixel, so
-                it is always intrinsically aligned to 4 bytes.</para>
-            <para>That being said, you should always keep row alignment in mind, particularly when
-                dealing with mipmaps.</para>
-        </section>
-        <section>
-            <title>Filtering Between Mipmaps</title>
-            <para>Our mipmap filtering has been a dramatic improvement over previous efforts.
-                However, it does create artifacts. One of particular concern is the change between
-                mipmap levels. It is abrupt and somewhat easy to notice for a moving scene. Perhaps
-                there is a way to smooth that out.</para>
-            <para>Our current minification filtering picks a single mipmap level and selects a sample
-                from it. It would be better if we could pick the two nearest mipmap levels and blend
-                between the values fetched from the two textures. This would give us a smoother
-                transition from one mipmap level to the next.</para>
-            <para>This is done by using <literal>GL_LINEAR_MIPMAP_LINEAR</literal> minification
-                filtering. The first <literal>LINEAR</literal> represents the filtering done within
-                a single mipmap level, and the second <literal>LINEAR</literal> represents the
-                filtering done between mipmap levels.</para>
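The between-levels blend is an ordinary linear interpolation on the fractional part of the level-of-detail value. A hypothetical sketch, where each entry of levelSamples stands in for a linear sample already taken within that mipmap level:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Blend between the two bracketing mipmap levels, weighted by the
// fractional part of lambda, the (non-negative) level-of-detail value.
float SampleMipmapLinear(const std::vector<float> &levelSamples, float lambda)
{
    int lo = (int)std::floor(lambda);
    int hi = std::min(lo + 1, (int)levelSamples.size() - 1);
    float f = lambda - (float)lo;
    return levelSamples[lo] * (1.0f - f) + levelSamples[hi] * f;
}
```

As lambda moves smoothly from one integer to the next, the result shifts smoothly from one level's sample to the other's, which is what hides the abrupt transitions.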
-            <para>To see this in action, press the <keycap>4</keycap> key.</para>
-            <!--TODO: Picture of linear_mipmap_linear, on a plane, with both textures side-by-side.-->
-            <para>That is an improvement. There are still issues to work out, but it is much harder
-                to see where one mipmap ends and another begins.</para>
-            <para>OpenGL actually allows all combinations of <literal>NEAREST</literal> and
-                    <literal>LINEAR</literal> in minification filtering. Using nearest filtering
-                within levels while filtering between levels
-                    (<literal>GL_NEAREST_MIPMAP_LINEAR</literal>) is not terribly useful
-                however.</para>
-            <sidebar>
-                <title>Filtering Nomenclature</title>
-                <para>If you are familiar with texture filtering from other materials, you may have
-                    heard the terms <quote>bilinear filtering</quote> and <quote>trilinear
-                        filtering</quote> before. Indeed, you may know that linear filtering between
-                    mipmap levels is commonly called trilinear filtering.</para>
-                <para>This book does not use that terminology. And for good reason: <quote>trilinear
-                        filtering</quote> is a misnomer.</para>
-                <para>To understand the problem, it is important to understand what <quote>bilinear
-                        filtering</quote> means. The <quote>bi</quote> in bilinear comes from doing
-                    linear filtering along the two axes of a 2D texture. So there is linear
-                    filtering in the S and T directions (remember: proper OpenGL nomenclature calls
-                    the texture coordinate axes S and T); since that is two directions, it is called
-                        <quote>bilinear filtering</quote>. Thus <quote>trilinear</quote> comes from
-                    adding a third direction of linear filtering: between mipmap levels.</para>
-                <para>Therefore, one could consider using <literal>GL_LINEAR</literal> mag and min
-                    filtering to be bilinear, and using <literal>GL_LINEAR_MIPMAP_LINEAR</literal>
-                    to be trilinear.</para>
-                <para>That's all well and good... for 2D textures. But what about for 1D textures?
-                    Since 1D textures are one dimensional, <literal>GL_LINEAR</literal> mag and min
-                    filtering only filters in one direction: S. Therefore, it would be reasonable to
-                    call 1D <literal>GL_LINEAR</literal> filtering simply <quote>linear
-                        filtering.</quote> Indeed, filtering between mipmap levels of 1D textures
-                    (yes, 1D textures can have mipmaps) would have to be called <quote>bilinear
-                        filtering.</quote></para>
-                <para>And then there are 3D textures. <literal>GL_LINEAR</literal> mag and min
-                    filtering filters in all 3 directions: S, T, and R. Therefore, that would have
-                    to be called <quote>trilinear filtering.</quote> And if you add linear mipmap
-                    filtering on top of that (yes, 3D textures can have mipmaps), it would be
-                        <quote>quadrilinear filtering.</quote></para>
-                <para>Therefore, the term <quote>trilinear filtering</quote> means absolutely
-                    nothing without knowing what the texture's type is. Whereas
-                        <literal>GL_LINEAR_MIPMAP_LINEAR</literal> always has a well-defined meaning
-                    regardless of the texture's type.</para>
-                <para>Unlike with geometry shaders (which ought to have been called primitive
-                    shaders), OpenGL does not enshrine this misnomer in its API. There is no
-                        <literal>GL_TRILINEAR_FILTERING</literal> enum. Therefore, in this book, we
-                    can and will use the proper terms for these.</para>
-            </sidebar>
-        </section>
-    </section>
-    <section>
-        <?dbhtml filename="Tut15 Anisotropy.html" ?>
-        <title>Anisotropy</title>
-        <para>Linear mipmap filtering is good; it eliminates most of the fluttering and oddities in
-            the distance. The problem is that it replaces a lot of that with... grey. Mipmap-based
-            filtering works reasonably well, but it tends to over-compensate.</para>
-        <para>For example, take the diagonal chain of squares at the left or right of the screen.
-            Expand the window horizontally if you need to.</para>
-        <!--TODO: Picture of the linear_mipmap_linear with diagonal in view.-->
-        <para>Pixels that are along this diagonal should be mostly black. As they get farther and
-            farther away, the fragment area becomes more and more distorted length-wise, relative to
-            the texel area:</para>
-        <figure>
-            <title>Long Fragment Area</title>
-            <mediaobject>
-                <imageobject>
-                    <imagedata fileref="DiagonalDiagram.svg"/>
-                </imageobject>
-            </mediaobject>
-        </figure>
-        <para>With perfect filtering, we should get a value that is mostly black. But instead, we
-            get a much lighter shade of grey. The reason has to do with the specifics of mipmapping
-            and mipmap selection.</para>
-        <para>Mipmaps are pre-filtered versions of the main texture. The problem is that they are
-            filtered in both directions equally. This is fine if the fragment area is square, but
-            for oblong shapes, mipmap selection becomes more problematic. The particular algorithm
-            used is very conservative. It selects the smallest mipmap level possible for the
-            fragment area. So long, thin areas, in terms of the values fetched by the texture
-            function, will be no different from larger areas.</para>
-        <figure>
-            <title>Long Fragment with Sample Area</title>
-            <mediaobject>
-                <imageobject>
-                    <imagedata fileref="MipmapDiagonalDiagram.svg"/>
-                </imageobject>
-            </mediaobject>
-        </figure>
-        <para>The large square represents the effective filtering box, while the smaller area is the
-            one that we are actually sampling from. Mipmap filtering can often combine texel values
-            from outside of the sample area, and in this particularly degenerate case, it pulls in
-            texel values from very far outside of the sample area.</para>
-        <para>This happens when the filter box is not a square. A square filter box is said to be
-            isotropic: uniform in all directions. Therefore, a non-square filter box is anisotropic.
-            Filtering that takes into account the anisotropic nature of a particular filter box is
-            naturally called <glossterm>anisotropic filtering.</glossterm></para>
-        <para>The OpenGL specification is usually very particular about most things. It explains the
-            details of which mipmap is selected as well as how closeness is defined for linear
-            interpolation between mipmaps. But for anisotropic filtering, the specification is very
-            loose as to exactly how it works.</para>
-        <para>The general idea is this. The implementation will take some number of samples that
-            approximates the shape of the filter box in the texture. It will select from mipmaps,
-            but only when those mipmaps represent a closer filtered version of the area being
-            sampled. Here is an example:</para>
-        <figure>
-            <title>Parallelogram Sample Area</title>
-            <mediaobject>
-                <imageobject>
-                    <imagedata fileref="ParallelogramDiag.svg"/>
-                </imageobject>
-            </mediaobject>
-        </figure>
-        <para>Some of the samples that are entirely within the sample area can use smaller mipmaps
-            to reduce the number of samples actually taken. The above image only needs four samples
-            to approximate the sample area: the three small boxes, and the larger box in the
-            center.</para>
-        <para>All of the sample values will be averaged together based on a weighting algorithm that
-            best represents that sample's contribution to the filter box. Again, this is all very
-            general; the specific algorithms are implementation dependent.</para>
-        <para>Run the tutorial again. The <keycap>5</keycap> key activates a form of
-            anisotropic filtering.</para>
-        <!--TODO: Picture of anisotropic with plane.-->
-        <para>That's an improvement.</para>
-        <section>
-            <title>Sample Control</title>
-            <para>Anisotropic filtering requires taking multiple samples from the various mipmaps.
-                The control on the quality of anisotropic filtering is in limiting the number of
-                samples used. Raising the maximum number of samples taken will generally make the
-                result look better, but it will also cost performance.</para>
-            <para>This is done by setting the <literal>GL_TEXTURE_MAX_ANISOTROPY_EXT</literal>
-                sampler parameter:</para>
-            <programlisting language="cpp">glSamplerParameteri(g_samplers[4], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
-glSamplerParameteri(g_samplers[4], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
-glSamplerParameterf(g_samplers[4], GL_TEXTURE_MAX_ANISOTROPY_EXT, 4.0f);</programlisting>
-            <para>This represents the maximum number of samples that will be taken for any texture
-                accesses through this sampler. Note that we still use linear mipmap filtering in
-                combination with anisotropic filtering. While you could theoretically use
-                anisotropic filtering without mipmaps, you will get much better performance if you
-                use it in tandem with linear mipmap filtering.</para>
-            <para>The max anisotropy is a floating point value, in part because the specific nature
-                of anisotropic filtering is left up to the hardware. But in general, you can treat
-                it like an integer value.</para>
-            <para>There is a limit to the maximum anisotropy that we can provide. This limit is
-                implementation defined; it can be queried with <function>glGetFloatv</function>,
-                since the value is a float rather than an integer. To set the max anisotropy to the
-                maximum possible value, we do this.</para>
-            <programlisting language="cpp">GLfloat maxAniso = 0.0f;
-glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &amp;maxAniso);
-
-glSamplerParameteri(g_samplers[5], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
-glSamplerParameteri(g_samplers[5], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
-glSamplerParameterf(g_samplers[5], GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);</programlisting>
-            <para>To see the results of this, press the <keycap>6</keycap> key.</para>
-            <!--TODO: Picture of max anisotropic with the plane.-->
-            <para>That looks pretty good now. There are still some issues out in the distance.
-                Remember that your image may not look exactly like this one, since the details of
-                anisotropic filtering are implementation specific.</para>
-            <para>You may be concerned that none of the filtering techniques, even the max
-                anisotropic one, produces perfect results. In the distance, the texture still
-                becomes a featureless grey even along the diagonal. The reason is that rendering a
-                large checkerboard is perhaps one of the most difficult problems from a texture
-                filtering perspective. This becomes even worse when it is viewed edge on, as we do
-                here.</para>
-            <para>Indeed, the repeating checkerboard texture was chosen specifically because it
-                highlights the issues in a very obvious way. A more traditional diffuse color
-                texture typically looks much better with reasonable filtering applied.</para>
-        </section>
-        <section>
-            <title>A Matter of EXT</title>
-            <para>You may have noticed the <quote>EXT</quote> suffix on
-                    <literal>GL_TEXTURE_MAX_ANISOTROPY_EXT</literal>. This suffix means that this
-                enumerator comes from an <glossterm>OpenGL extension</glossterm>. First and
-                foremost, this means that this enumerator is not part of the OpenGL
-                Specification.</para>
-            <para>An OpenGL extension is a modification of OpenGL exposed by a particular
-                implementation. Extensions have published documents that explain how they change the
-                standard GL specification; this allows users to be able to use them correctly.
-                Because different implementations of OpenGL will implement different sets of
-                extensions, there is a mechanism for querying whether an extension is implemented.
-                This allows user code to detect the availability of certain hardware features and
-                use them or not as needed.</para>
-            <para>There are several kinds of extensions. There are proprietary extensions; these are
-                created by a particular vendor and are rarely if ever implemented by another vendor.
-                In some cases, they are based on intellectual property owned by that vendor and thus
-                cannot be implemented without explicit permission. The enums and functions for these
-                extensions end with a suffix based on the proprietor of the extension. An
-                NVIDIA-only extension would end in <quote>NV,</quote> for example.</para>
-            <para>ARB extensions are a special class of extension that is blessed by the OpenGL ARB
-                (which governs OpenGL). These are typically created as a collaboration between
-                multiple members of the ARB. Historically, they have represented functionality that
-                implementations were highly recommended to implement.</para>
-            <para>EXT extensions are a class between the two. They are not proprietary extensions,
-                and in many cases were created through collaboration among ARB members. Yet at the
-                same time, they are not <quote>blessed</quote> by the ARB. Historically, EXT
-                extensions have been used as test beds for functionality and APIs, to ensure that
-                the API is reasonable before promoting the feature to OpenGL core or to an ARB
-                extension.</para>
-            <para>The <literal>GL_TEXTURE_MAX_ANISOTROPY_EXT</literal> enumerator is part of the
-                EXT_texture_filter_anisotropic extension. Since it is an extension rather than core
-                functionality, it is usually necessary for the user to detect if the extension is
-                available and only use it if it was. If you look through the tutorial code, you will
-                find no code that does this test.</para>
-            <para>The reason for that is simply a lack of need. The extension itself dates back to
-                the GeForce 256 (not the GeForce GTS 250; the original GeForce), way back in 1999.
-                Virtually all GPUs since then have implemented anisotropic filtering and exposed it
-                through this extension. That is why the tutorial does not bother to check for the
-                presence of this extension; if your hardware can run these tutorials, then it
-                exposes the extension.</para>
-            <para>If it is so ubiquitous, why has the ARB not adopted the functionality into core
-                OpenGL? Why must anisotropic filtering be an extension that is de facto guaranteed
-                but not fully part of OpenGL? This is because OpenGL must be Open.</para>
-            <para>The <quote>Open</quote> in OpenGL refers to the availability of the specification,
-                but also to the ability for anyone to implement it. As it turns out, anisotropic
-                filtering has intellectual property issues with it. If it were adopted into the
-                core, then core OpenGL would not be able to be implemented without licensing the
-                technology from the holder of the IP. It is not a proprietary extension because none
-                of the ARB members have the IP; it is held by a third party.</para>
-            <para>Therefore, you may assume that anisotropic filtering is available through OpenGL.
-                But it is technically an extension.</para>
-        </section>
-
-    </section>
-    <section>
-        <title>How Mipmap Selection Works</title>
-        <?dbhtml filename="Tut15 How Mipmapping Works.html" ?>
-        <para>Previously, we discussed mipmap selection and interpolation in terms related to the
-            geometry of the object. That is true, but only when we are dealing with simple texture
-            mapping schemes, such as when the texture coordinates are attached directly to vertex
-            positions. But as we saw in our first tutorial on texturing, texture coordinates can be
-            entirely arbitrary. So how does mipmap selection and anisotropic filtering work
-            then?</para>
-        <para>Very carefully.</para>
-        <para>Imagine a 2x2 pixel area of the screen. Now imagine that four fragment shaders, all
-            from the same triangle, are executing for that screen area. Since the fragment shaders
-            are all guaranteed to have the same uniforms and the same code, the only thing that is
-            different is the fragment inputs. And because they are executing the same code, you can
-            conceive of them executing in lockstep. That is, each of them executes the same
-            instruction, on their individual dataset, at the same time.</para>
-        <para>Under that assumption, for any particular value in a fragment shader, you can pick the
-            corresponding 3 other values in the other fragment shaders executing alongside it. If
-            that value is based solely on uniform or constant data, then each shader will have the
-            same value. But if it is based in part on input values, then each shader may have a
-            different value, based on how it was computed and the inputs.</para>
-        <para>So, let's look at the texture coordinate value; the particular value used to access
-            the texture. Each shader has one. If that value is associated with the position of the
-            object, via perspective-correct interpolation and so forth, then the
-                <emphasis>difference</emphasis> between the shaders' values will represent the
-            window space geometry of the triangle. There are two dimensions for a difference, and
-            therefore there are two differences: the difference in the window space X axis, and the
-            window space Y axis.</para>
-        <para>These two differences, sometimes called gradients or derivatives, are how mipmapping
-            actually works. If the texture coordinate used is just an input value, which itself is
-            directly associated with a position, then the gradients represent the geometry of the
-            triangle in window space. If the texture coordinate is computed in more unconventional
-            ways, it still works, as the gradients represent how the texture coordinates are
-            changing across the surface of the triangle.</para>
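The way a mipmap level falls out of these two gradients can be sketched as follows. This is a simplified model, not any particular GPU's implementation: it takes the length of each gradient (in texels per pixel) and uses the base-2 logarithm of the larger one as the level-of-detail.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of mipmap level (LOD) selection from the two texture coordinate
// gradients. (dudx, dvdx) is the per-pixel change along window-space X;
// (dudy, dvdy) is the change along window-space Y. The coordinates here are
// assumed to be pre-scaled by the texture's size, so a gradient length of
// 1.0 means the texture coordinates move one texel per pixel.
float ComputeLod(float dudx, float dvdx, float dudy, float dvdy)
{
    float lenX = std::sqrt(dudx * dudx + dvdx * dvdx);
    float lenY = std::sqrt(dudy * dudy + dvdy * dvdy);
    // One texel per pixel -> LOD 0; two texels per pixel -> LOD 1; and so on.
    return std::log2(std::max(lenX, lenY));
}
```

Note that when the two gradient lengths differ greatly, the access is anisotropic: picking the larger one (as above) trades sharpness for no aliasing, which is exactly the over-filtering that anisotropic filtering is designed to avoid.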
-        <para>Having two gradients allows for the detection of anisotropy. And therefore, it
-            provides enough information to reasonably apply anisotropic filtering algorithms.</para>
-        <para>Now, you may notice that this process is very conditional. Specifically, it requires
-            that you have 4 fragment shaders all running in lock-step. There are two circumstances
-            where that might not happen.</para>
-        <para>The most obvious is on the edge of a triangle, where a 2x2 block of neighboring
-            fragments cannot fit without some of them falling outside of the triangle. This case is actually
-            trivially covered by GPUs. No matter what, the GPU will rasterize each triangle in 2x2
-            blocks. Even if some of those blocks are not actually part of the triangle of interest,
-            they will still get fragment shader time. This may seem inefficient, but it's reasonable
-            enough in cases where triangles are not incredibly tiny or thin, which is quite often.
-            The results produced by fragment shaders outside of the triangle are discarded.</para>
-        <para>The other circumstance is through deliberate user intervention. Each fragment shader
-            running in lockstep has the same uniforms but different inputs. Since they have
-            different inputs, it is possible for them to execute a conditional branch based on these
-            inputs (an if-statement or other conditional). This could cause, for example, the
-            left-half of the 2x2 quad to execute certain code, while the other half executes
-            different code. The 4 fragment shaders are no longer in lock-step. How does the GPU
-            handle it?</para>
-        <para>Well... it doesn't. Dealing with this requires manual user intervention, and it is a
-            topic we will discuss later. Suffice it to say, it screws everything up.</para>
-    </section>
-    <section>
-        <?dbhtml filename="Tut15 Performace.html" ?>
-        <title>Performance</title>
-        <para>Mipmapping has some unexpected performance characteristics. A texture with a full
-            mipmap pyramid will take up ~33% more space than just the base level. So there is some
-            memory overhead. The unexpected part is that this is actually a memory vs. performance
-            tradeoff, as mipmapping improves performance.</para>
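The ~33% figure is easy to verify by summing the texel counts of a full pyramid; the helper below is just that arithmetic, not anything from the tutorial's source.

```cpp
// Sum the texel counts of a full mipmap pyramid, halving (and rounding down)
// each dimension per level until both reach 1.
long TotalPyramidTexels(int width, int height)
{
    long total = 0;
    while (true)
    {
        total += static_cast<long>(width) * height;
        if (width == 1 && height == 1)
            break;
        width = width > 1 ? width / 2 : 1;
        height = height > 1 ? height / 2 : 1;
    }
    return total;
}
// For a 128x128 texture: the base level alone is 16384 texels, while the
// full pyramid is 21845 texels, an overhead of just under 1/3.
```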
-        <para>If a texture is going to be minified significantly, providing mipmaps is a performance
-            benefit. The reason is this: for a minified texture, the texture accesses for adjacent
-            fragment shaders will be very far apart. Texture sampling units like texture access
-            patterns where there is a high degree of locality, where adjacent fragment shaders
-            access texels that are very near one another. The farther apart they are, the less
-            useful the optimizations in the texture samplers are. Indeed, if they are far enough
-            apart, those optimizations start becoming performance penalties.</para>
-        <para>Textures that are used as lookup tables should generally not use mipmaps. But other
-            kinds of textures, like those that provide surface details, can and should where
-            reasonable.</para>
-        <para>While mipmapping is free, mipmap filtering,
-            <literal>GL_LINEAR_MIPMAP_LINEAR</literal>, is generally not free. But the cost of it is
-            rather small these days. For those textures where mipmap interpolation makes sense, it
-            should be used.</para>
-        <para>Anisotropic filtering is even more costly, as one might expect. After all, it means
-            taking more texture samples to cover a particular texture area. However, anisotropic
-            filtering is almost always implemented adaptively. This means that it will only take
-            extra samples for fragments where it detects that this is necessary. And it will only
-            take enough samples to fill out the area, up to the maximum the user provides of course.
-            Therefore, turning on anisotropic filtering, even just 2x or 4x, only hurts for the
-            fragments that need it.</para>
-    </section>
-    
-    <section>
-        <?dbhtml filename="Tut15 In Review.html" ?>
-        <title>In Review</title>
-        <para>In this tutorial, you have learned the following:</para>
-        <itemizedlist>
-            <listitem>
-                <para>Visual artifacts can appear on objects that have textures mapped to them due
-                    to the discrete nature of textures. These artifacts are most pronounced when the
-                    texture's apparent size on screen is much larger or much smaller than its
-                    actual resolution.</para>
-            </listitem>
-            <listitem>
-                <para>Filtering techniques can reduce these artifacts, transforming visual popping
-                    into something more visually palatable. This is most easily done for texture
-                    magnification.</para>
-            </listitem>
-            <listitem>
-                <para>Mipmaps are reduced size versions of images. The purpose behind them is to act
-                    as pre-filtered versions of images, so that texture sampling hardware can
-                    effectively sample and filter lots of texels all at once. The downside is that
-                    it can appear to over-filter textures, causing them to blend down to lower
-                    mipmaps in areas where detail could be retained.</para>
-            </listitem>
-            <listitem>
-                <para>Filtering can be applied between mipmap levels. Mipmap filtering can produce
-                    quite reasonable results with a relatively negligible performance
-                    penalty.</para>
-            </listitem>
-            <listitem>
-                <para>Anisotropic filtering attempts to rectify the over-filtering problems with
-                    mipmapping by filtering based on the coverage area of the texture access.
-                    Anisotropic filtering is controlled with a maximum value, which represents the
-                    maximum number of additional samples the texture access will use to compose the
-                    final color.</para>
-            </listitem>
-        </itemizedlist>
-        <section>
-            <title>Further Study</title>
-            <para>Try doing these things with the given programs.</para>
-            <itemizedlist>
-                <listitem>
-                    <para>Use non-mipmap filtering with anisotropic filtering and compare the
-                        results with the mipmap-based anisotropic version.</para>
-                </listitem>
-                <listitem>
-                    <para>Change the GL_TEXTURE_MAX_LEVEL of the checkerboard texture. Subtract 3
-                        from the computed max level. This will prevent OpenGL from accessing the
-                        bottom 3 mipmaps: 1x1, 2x2, and 4x4. See what happens. Notice how there is
-                        less grey in the distance, but some of the shimmering from our non-mipmapped
-                        version has returned.</para>
-                </listitem>
-                <listitem>
-                    <para>Go back to <phrase role="propername">Basic Texture</phrase> in the
-                        previous tutorial and modify the sampler to use linear mag and min filtering
-                        on the 1D texture. See if the linear filtering makes some of the lower
-                        resolution versions of the table more palatable. If you were to try this
-                        with the 2D lookup texture in <phrase role="propername">Material
-                            Texture</phrase> tutorial, it would cause filtering in both the S and T
-                        coordinates. This would mean that it would filter across the shininess of
-                        the table as well. Try this and see how this affects the results. Also try
-                        using linear filtering on the shininess texture.</para>
-                </listitem>
-            </itemizedlist>
-        </section>
-        
-    </section>
-    <section>
-        <?dbhtml filename="Tut15 Glossary.html" ?>
-        <title>Glossary</title>
-        <glosslist>
-            <glossentry>
-                <glossterm>texture filtering</glossterm>
-                <glossdef>
-                    <para>The process of fetching the value of a texture at a particular texture
-                        coordinate, potentially involving combining multiple texel values
-                        together.</para>
-                    <para>Filtering can happen in two directions: magnification and minification.
-                        Magnification happens when the fragment area projected into a texture is
-                        smaller than the texel itself. Minification happens when the fragment area
-                        projection is larger than a texel.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
-                <glossterm>nearest filtering</glossterm>
-                <glossdef>
-                    <para>Texture filtering where the texel closest to the texture coordinate is the
-                        value returned.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
-                <glossterm>linear filtering</glossterm>
-                <glossdef>
-                    <para>Texture filtering where the closest texel values in each dimension of the
-                        texture are accessed and linearly interpolated, based on how close the texture
-                        coordinate was to those values. For 1D textures, this picks two values and
-                        interpolates. For 2D textures, it picks four; for 3D textures, it selects
-                        8.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
-                <glossterm>mipmap, mipmap level</glossterm>
-                <glossdef>
-                    <para>Subimages of a texture. Each subsequent mipmap of a texture is half the
-                        size, rounded down, of the previous image. The largest mipmap is the base
-                        level. Many texture types can have mipmaps, but some cannot.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
-                <glossterm>mipmap filtering</glossterm>
-                <glossdef>
-                    <para>Texture filtering that uses mipmaps. The mipmap chosen when mipmap
-                        filtering is used is based on how quickly the texture coordinates change
-                        relative to the screen.</para>
-                    <para>Mipmap filtering can be nearest or linear. Nearest mipmap filtering picks
-                        a single mipmap and returns the value pulled from that mipmap. Linear mipmap
-                        filtering picks samples from the two nearest mipmaps and linearly
-                        interpolates between them. The sample returned in either case can have
-                        linear or nearest filtering applied within that mipmap.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
-                <glossterm>anisotropic filtering</glossterm>
-                <glossdef>
-                    <para>Texture filtering that takes into account the anisotropy of the texture
-                        access. This requires taking multiple samples from the texture across the
-                        irregular area of the texture that the fragment covers. This works best in
-                        concert with mipmap filtering.</para>
-                </glossdef>
-            </glossentry>
-            <glossentry>
-                <glossterm>OpenGL extension</glossterm>
-                <glossdef>
-                    <para>Functionality that is not part of OpenGL proper, but can be conditionally
-                        exposed by different implementations of OpenGL.</para>
-                </glossdef>
-            </glossentry>
-        </glosslist>
-        
-    </section>
-</chapter>
+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+    <?dbhtml filename="Tutorial 15.html" ?>
+    <title>Many Images</title>
+    <para>In the last tutorial, we looked at textures that were not pictures. Now, we will look at
+        textures that are pictures. However, unlike the last tutorial, where the textures
+        represented some parameter in the light equation, here, we will just be directly outputting
+        the values read from the texture.</para>
+    <sidebar>
+        <title>Graphics Fudging</title>
+        <para>Before we begin however, there is something you may need to do. When you installed
+            your graphics drivers, installed along with it was an application that allows you to
+            provide settings for your graphics driver. This affects how graphics applications render
+            and so forth.</para>
+        <para>Thus far, most of those settings have been irrelevant to us because everything we have
+            done has been entirely in our control. The OpenGL specification defined almost exactly
+            what could and could not happen, and outside of actual driver bugs, the results we
+            produced are reproducible and nearly identical across hardware.</para>
+        <para>That is no longer the case, as of this tutorial.</para>
+        <para>Texturing has long been a place where graphics drivers have been given room to play
+            and fudge results. The OpenGL specification plays fast-and-loose with certain aspects of
+            texturing. And with the driving need for graphics card makers to have high performance
+            and high image quality, graphics driver writers can, at the behest of the user, simply
+            ignore the OpenGL spec with regard to certain aspects of texturing.</para>
+        <para>The image quality settings in your graphics driver provide control over this. They are
+            ways for you to tell graphics drivers to ignore whatever the application thinks it
+            should do and instead do things their way. That is fine for a game, but right now, we
+            are learning how things work. If the driver starts pretending that we set some parameter
+            that we clearly did not, it will taint our results and make it difficult to know what
+            parameters cause what effects.</para>
+        <para>Therefore, you will need to go into your graphics driver application and change all of
+            those settings to the value that means to do what the application says. Otherwise, the
+            visual results you get for the following code may be very different from the given
+            images. This includes settings for antialiasing.</para>
+    </sidebar>
+    <section>
+        <?dbhtml filename="Tut15 Playing Checkers.html" ?>
+        <title>Playing Checkers</title>
+        <para>We will start by drawing a single large, flat plane. The plane will have a texture of
+            a checkerboard drawn on it. The camera will hover above the plane, looking out at the
+            horizon as if the plane were the ground. This is implemented in the <phrase
+                role="propername">Many Images</phrase> tutorial project.</para>
+        <!--TODO: Picture of the Many Images tutorial, basic view.-->
+        <para>The camera is automatically controlled, though its motion can be paused with the
+                <keycap>P</keycap> key. The other functions of the tutorial will be explained as we
+            get to them.</para>
+        <para>If you look at the <filename>BigPlane.xml</filename> file, you will find that the
+            texture coordinates are well outside of the [0, 1] range we are used to. They now span
+            the range [-64, 64], but the texture itself is only valid within the [0, 1] range.</para>
+        <para>Recall from the last tutorial that the sampler object has a parameter that controls
+            what texture coordinates outside of the [0, 1] range mean. This tutorial uses many
+            samplers, but all of our samplers use the same S and T wrap modes:</para>
+        <programlisting>glSamplerParameteri(g_samplers[samplerIx], GL_TEXTURE_WRAP_S, GL_REPEAT);
+glSamplerParameteri(g_samplers[samplerIx], GL_TEXTURE_WRAP_T, GL_REPEAT);</programlisting>
+        <para>We set the S and T wrap modes to <literal>GL_REPEAT</literal>. This means that values
+            outside of the [0, 1] range wrap around to values within the range. So a texture
+            coordinate of 1.1 becomes 0.1, and a texture coordinate of -0.1 becomes 0.9. The idea is
+            to make it as though the texture were infinitely large, with infinitely many copies
+            repeating over and over.</para>
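Conceptually, `GL_REPEAT` amounts to keeping only the fractional part of the texture coordinate. A one-line model of that behavior:

```cpp
#include <cmath>

// Model of GL_REPEAT wrapping: only the fractional part of the coordinate
// matters, so every integer-sized span of coordinates maps onto the same
// [0, 1) range of the texture.
float WrapRepeat(float coord)
{
    return coord - std::floor(coord);
}
```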
+        <note>
+            <para>It is perfectly legitimate to set the texture coordinate wrapping modes
+                differently for different coordinates. Well, usually; this does not work for certain
+                texture types, but only because they take texture coordinates with special meanings.
+                For them, the wrap modes are ignored entirely.</para>
+        </note>
+        <para>You may toggle between two meshes with the <keycap>Y</keycap> key. The alternative
+            mesh is a long, square corridor.</para>
+        <para>The shaders used here are very simple. The vertex shader takes positions and texture
+            coordinates as inputs and outputs the texture coordinate directly. The fragment shader
+            takes the texture coordinate, fetches a texture with it, and writes that color value as
+            output. Not even gamma correction is used.</para>
+        <para>The texture in question is 128x128 in size, with 4 alternating black and white squares
+            on each side. Each of the black or white squares is 32 pixels across.</para>
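A checkerboard with these dimensions could be generated procedurally; the tutorial itself loads the image from a file, so this sketch only illustrates the pattern being described: 128x128 texels, 4 squares per side, 32 texels per square.

```cpp
// Sketch of the checkerboard pattern described above. Integer division by 32
// gives the square index along each axis; the parity of their sum picks
// white (255) or black (0).
unsigned char CheckerTexel(int x, int y)
{
    bool isWhite = ((x / 32) + (y / 32)) % 2 == 0;
    return isWhite ? 255 : 0;
}
```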
+    </section>
+    <section>
+        <?dbhtml filename="Tut15 Magnification.html" ?>
+        <title>Linear Filtering</title>
+        <para>While this example certainly draws a checkerboard, you can see that there are some
+            visual issues. We will start finding solutions to this with the least obvious glitches
+            first.</para>
+        <para>Take a look at one of the squares at the very bottom of the screen. Notice how the
+            line looks jagged as it moves to the left and right. You can see the pixels of it sort
+            of crawl up and down as it shifts around on the plane.</para>
+        <!--TODO: Picture of one of the checker squares at the bottom (zoomed), with nearest filtering.-->
+        <para>This is caused by the discrete nature of our texture accessing. The texture
+            coordinates are all in floating-point values. The GLSL <function>texture</function>
+            function internally converts these texture coordinates to specific texel values within
+            the texture. So what value do you get if the texture coordinate lands halfway between
+            two texels?</para>
+        <para>That is governed by a process called <glossterm>texture filtering</glossterm>.
+            Filtering can happen in two directions: magnification and minification. Magnification
+            happens when the texture mapping makes the texture appear bigger in screen space than
+            its actual resolution. If you get closer to the texture, relative to its mapping, then
+            the texture is magnified relative to its natural resolution. Minification is the
+            opposite: when the texture is being shrunken relative to its natural resolution.</para>
+        <para>In OpenGL, magnification and minification filtering are each set independently. That
+            is what the <literal>GL_TEXTURE_MAG_FILTER</literal> and
+                <literal>GL_TEXTURE_MIN_FILTER</literal> sampler parameters control. We are
+            currently using <literal>GL_NEAREST</literal> for both; this is called
+                <glossterm>nearest filtering</glossterm>. This mode means that each texture
+            coordinate picks the texel value that it is nearest to. For our checkerboard, that means
+            that we will get either black or white.</para>
+        <para>Now this may sound fine, since our texture is a checkerboard and only has two actual
+            colors. However, it is exactly this discrete sampling that gives rise to the pixel crawl
+            effect. A texture coordinate that is half-way between the white and the black is either
+            white or black; a small change in the camera causes an instant pop from black to white
+            or vice-versa.</para>
+        <para>Each fragment being rendered takes up a certain area of space on the screen: the area
+            of the destination pixel for that fragment. The texture mapping of the rendered surface
+            to the texture gives a texture coordinate for each point on the surface. But a pixel is
+            not a single, infinitely small point on the surface; it represents some finite area of
+            the surface.</para>
+        <para>Therefore, we can use the texture mapping in reverse. We can take the four corners of
+            a pixel area and find the texture coordinates from them. The area of this 4-sided
+            figure, in the space of the texture, is the area of the texture that is being mapped to
+            that location on the screen. With a perfect texture accessing system, the value we get
+            from the GLSL <function>texture</function> function would be the average value of the
+            colors in that area.</para>
+        <figure>
+            <title>Nearest Sampling</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="NearestSampleDiag.svg"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>The dot represents the texture coordinate's location on the texture. The square is the
+            area that the fragment covers. The problem happens because a fragment area mapped into
+            the texture's space may cover some white area and some black area. Since nearest only
+            picks a single texel, which is either black or white, it does not accurately represent
+            the mapped area of the fragment.</para>
+        <para>One obvious way to smooth out the differences is to do exactly that. Instead of
+            picking a single sample for each texture coordinate, pick the nearest 4 samples and then
+            interpolate the values based on how close they each are to the texture coordinate. To do
+            this, we set the magnification and minification filters to
+            <literal>GL_LINEAR</literal>.</para>
+        <programlisting>glSamplerParameteri(g_samplers[1], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
+glSamplerParameteri(g_samplers[1], GL_TEXTURE_MIN_FILTER, GL_LINEAR);</programlisting>
+        <para>This is called, surprisingly enough, <glossterm>linear filtering</glossterm>. In our
+            tutorial, press the <keycap>2</keycap> key to see what linear filtering looks like;
+            press <keycap>1</keycap> to go back to nearest sampling.</para>
+        <!--TODO: Picture of linear filtering.-->
+        <para>That looks much better for the squares close to the camera. It creates a bit of
+            fuzziness, but this is generally a lot easier for the viewer to tolerate than pixel
+            crawl. Human vision tends to be attracted to movement, and false movement like dot crawl
+            can be distracting.</para>
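The interpolation that `GL_LINEAR` performs in two dimensions (often called bilinear filtering) can be sketched as two blends: first along S within each row of the 2x2 texel cell, then along T between the rows. This is a model of the math, not the driver's actual code.

```cpp
// Bilinear filtering sketch. t00..t11 are the four texels surrounding the
// sample point; (s, t) is the fractional position within that 2x2 cell,
// each in [0, 1].
float Bilinear(float t00, float t10, float t01, float t11, float s, float t)
{
    float row0 = t00 + (t10 - t00) * s; // blend along S in the first row
    float row1 = t01 + (t11 - t01) * s; // blend along S in the second row
    return row0 + (row1 - row0) * t;    // blend the two rows along T
}
```

A coordinate landing exactly between a black texel (0) and a white texel (1) thus yields 0.5, which is why linear filtering trades pixel crawl for a bit of gray fuzziness at the edges of the squares.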
+    </section>
+    <section>
+        <?dbhtml filename="Tut15 Needs More Pictures.html" ?>
+        <title>Needs More Pictures</title>
+        <para>Speaking of distracting, let's talk about what is going on in the distance. When the
+            camera moves, the more distant parts of the texture look like a jumbled mess. Even when
+            the camera motion is paused, it still doesn't look like a checkerboard.</para>
+        <para>What is going on there is really simple. The way our filtering works is that, for a
+            given texture coordinate, we take either the nearest texel value, or the nearest 4
+            texels and interpolate. The problem is that, for distant areas of our surface, the
+            texture space area covered by our fragment is much larger than 4 texels across.</para>
+        <figure>
+            <title>Large Minification Sampling</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="LargeMinificDiag.svg"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>The inner square represents the nearest texels, while the outer square represents the
+            entire fragment mapped area. We can see that the value we get with nearest sampling will
+            be pure white, since the four nearest values are white. But the value we should get
+            based on the covered area is some shade of gray.</para>
+        <para>In order to accurately represent this area of the texture, we would need to sample
+            from more than just 4 texels. The GPU is certainly capable of detecting the fragment
+            area and sampling enough values from the texture to be representative. But this would be
+            exceedingly expensive, both in terms of texture bandwidth and computation.</para>
+        <para>What if, instead of having to sample more texels, we had a number of smaller versions
+            of our texture? The smaller versions effectively pre-compute groups of texels. That way,
+            we could just sample 4 texels from a texture that is close enough to the size of our
+            fragment area.</para>
+        <figure>
+            <title>Mipmapped Minification Sampling</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="MipmapDiagram.svg"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>These smaller versions of an image are called <glossterm>mipmaps</glossterm>; they are
+            also sometimes called mipmap levels. Previously, it was said that textures can store
+            multiple images. The additional images, for many texture types, are mipmaps. By
+            performing linear sampling against a lower mipmap level, we get a gray value that, while
+            not the exact color the coverage area suggests, is much closer to what we should get
+            than linear filtering on the large mipmap.</para>
+        <para>In OpenGL, mipmaps are numbered starting from 0. The 0 image is the largest mipmap,
+            what is usually considered the main texture image. When people speak of a texture having
+            a certain size, they mean the resolution of mipmap level 0. Each mipmap is half the
+            size of the previous one. So if our main image, mipmap level 0, has a size of 128x128, the
+            next mipmap, level 1, is 64x64. The next is 32x32. And so forth, down to 1x1 for the
+            smallest mipmap.</para>
+        <para>For textures that are not square (which as we saw in the previous tutorial, is
+            perfectly legitimate), the mipmap chain keeps going until all dimensions are 1. So a
+            texture whose size is 128x16 (remember: the texture's size is the size of the largest
+            mipmap) would have just as many mipmap levels as a 128x128 texture. The mipmap level 4
+            of the 128x16 texture would be 8x1; the next mipmap would be 4x1.</para>
+        <note>
+            <para>It is also perfectly legal to have texture sizes that are not powers of two. For
+                them, mipmap sizes are always rounded down. So a 129x129 texture's mipmap 1 will be 64x64.
+                A 131x131 texture's mipmap 1 will be 65x65, and mipmap 2 will be 32x32.</para>
+        </note>
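The sizing rules from the last few paragraphs (halve each dimension, round down, clamp at 1, stop when everything reaches 1) can be captured in two small helpers; these are illustrative, not part of the tutorial's code:

```cpp
#include <algorithm>

// Size of a given mipmap level: each level halves each dimension of the
// base level, rounding down and never going below 1.
void MipmapSize(int baseW, int baseH, int level, int &w, int &h)
{
    w = std::max(baseW >> level, 1);
    h = std::max(baseH >> level, 1);
}

// Number of levels in a full chain: keep halving until both dimensions are 1.
int MipmapCount(int baseW, int baseH)
{
    int count = 1;
    while (baseW > 1 || baseH > 1)
    {
        baseW = std::max(baseW / 2, 1);
        baseH = std::max(baseH / 2, 1);
        ++count;
    }
    return count;
}
```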
+        <para>The DDS image format is one of the few image formats that actually supports storing
+            all of the mipmaps for a texture in the same file. Most image formats only allow one
+            image in a single file. The texture loading code for our 128x128 texture with mipmaps is
+            as follows:</para>
+        <example>
+            <title>DDS Texture Loading with Mipmaps</title>
+            <programlisting language="cpp">std::string filename(LOCAL_FILE_DIR);
+filename += "checker.dds";
+
+std::auto_ptr&lt;glimg::ImageSet> pImageSet(glimg::loaders::dds::LoadFromFile(filename.c_str()));
+
+glGenTextures(1, &amp;g_checkerTexture);
+glBindTexture(GL_TEXTURE_2D, g_checkerTexture);
+
+for(int mipmapLevel = 0; mipmapLevel &lt; pImageSet->GetMipmapCount(); mipmapLevel++)
+{
+    std::auto_ptr&lt;glimg::SingleImage> pImage(pImageSet->GetImage(mipmapLevel, 0, 0));
+    glimg::Dimensions dims = pImage->GetDimensions();
+    
+    glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_RGB8, dims.width, dims.height, 0,
+        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pImage->GetImageData());
+}
+
+glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
+glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, pImageSet->GetMipmapCount() - 1);
+glBindTexture(GL_TEXTURE_2D, 0);</programlisting>
+        </example>
+        <para>Because the file contains multiple mipmaps, we must load each one in turn. The GL
+            Image library considers each mipmap to be its own image. The
+                <function>GetDimensions</function> member of
+                <classname>glimg::SingleImage</classname> returns the size of the particular
+            mipmap.</para>
+        <para>The <function>glTexImage2D</function> function takes a mipmap level as the second
+            parameter. The width and height parameters represent the size of the mipmap in question,
+            not the size of the base level.</para>
+        <para>Notice that the last statements have changed. The
+                <literal>GL_TEXTURE_BASE_LEVEL</literal> and <literal>GL_TEXTURE_MAX_LEVEL</literal>
+            parameters tell OpenGL what mipmaps in our texture can be used. This represents a closed
+            range. Since a 128x128 texture has 8 mipmaps, we use the range [0, 7]. The base level of
+            a texture is the largest usable mipmap level, while the max level is the smallest usable
+            level. It is possible to omit some of the smaller mipmap levels. Note that level 0 is
+            always the largest possible mipmap level.</para>
+        <para>Filtering based on mipmaps is unsurprisingly named <glossterm>mipmap
+                filtering</glossterm>. This tutorial does not load two checkerboard textures; it
+            only ever uses one checkerboard. The reason mipmaps have not been used until now is
+            because mipmap filtering was not activated. Setting the base and max level is not
+            enough; the sampler object must be told to use mipmap filtering. If it does not, then it
+            will simply use the base level.</para>
+        <para>Mipmap filtering only works for minification, since minification represents a fragment
+            area that covers more than a single texel of the texture. To activate this, we use a
+            special <literal>MIPMAP</literal> mode of minification filtering.</para>
+        <programlisting>glSamplerParameteri(g_samplers[2], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
+glSamplerParameteri(g_samplers[2], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);</programlisting>
+        <para>The <literal>GL_LINEAR_MIPMAP_NEAREST</literal> minification filter means the
+            following. For a particular call to the GLSL <function>texture</function> function, it
+            will detect which mipmap is the one that is closest to our fragment area. This detection
+            is based on the angle of the surface relative to the camera's view<footnote>
+                <para>This is a simplification; a more thorough discussion is forthcoming.</para>
+            </footnote>. Then, when it samples from that mipmap, it will use linear filtering of the
+            four nearest samples within that mipmap.</para>
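+        <para>To make that description concrete, here is a rough CPU-side sketch of
+                <literal>LINEAR_MIPMAP_NEAREST</literal> sampling. This is illustrative only, not
+            how the hardware is implemented, and it uses a 1D texture to keep the code short:
+            nearest selection between levels, linear filtering within the chosen level.</para>
+        <programlisting language="cpp">float sample1DLinearMipmapNearest(float **mipmaps, const int *levelSizes,
+    float lod, float s)
+{
+    //NEAREST between levels: round to the closest mipmap level.
+    int level = (int)(lod + 0.5f);
+    int size = levelSizes[level];
+
+    //LINEAR within the level: blend the two nearest texels.
+    float pos = s * (float)size - 0.5f;
+    int i0 = (int)floorf(pos);
+    float frac = pos - (float)i0;
+    int i1 = i0 + 1;
+
+    //Clamp indices to the edge of the texture.
+    if(0 > i0) i0 = 0;
+    if(0 > i1) i1 = 0;
+    if(i0 > size - 1) i0 = size - 1;
+    if(i1 > size - 1) i1 = size - 1;
+
+    return mipmaps[level][i0] * (1.0f - frac) + mipmaps[level][i1] * frac;
+}</programlisting>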
+        <para>If you press the <keycap>3</keycap> key in the tutorial, you can see the effects of
+            this filtering mode.</para>
+        <!--TODO: Picture of LINEAR_MIPMAP_NEAREST, #3. Hallway.-->
+        <para>That's a lot more reasonable. It isn't perfect, but it is much better than the random
+            motion in the distance that we have previously seen.</para>
+        <para>It can be difficult to truly understand the effects of mipmap filtering when using
+            normal textures and mipmaps. Therefore, if you press the <keycap>Spacebar</keycap>, the
+            tutorial will switch to a special texture. It is not loaded from a file; it is instead
+            constructed at runtime.</para>
+        <para>Normally, mipmaps are simply smaller versions of larger images, using linear filtering
+            or various other algorithms to compute a reasonable scaled down result. This special
+            texture's mipmaps are all flat colors, but each mipmap has a different color. This makes
+            it much more obvious where each mipmap is.</para>
+        <!--TODO: Picture of the special texture with hallway, linear mipmap nearest.-->
+        <para>Now we can really see where the different mipmaps are.</para>
+        <section>
+            <title>Special Texture Generation</title>
+            <para>The special mipmap viewing texture is interesting, as it shows an issue you may
+                need to work with when uploading certain textures: alignment.</para>
+            <para>The checkerboard texture, though it only stores black and white values, actually
+                has all three color channels, plus a fourth value. Since each channel is stored as
+                8-bit unsigned normalized integers, each pixel takes up 4 * 8 or 32 bits, which is 4
+                bytes.</para>
+            <para>OpenGL image uploading and downloading is based on horizontal rows of image data.
+                Each row is expected to have a certain byte alignment. The OpenGL default is 4
+                bytes; since our pixels are 4 bytes in length, every mipmap will have a line size in
+                bytes that is a multiple of 4 bytes. Even the 1x1 mipmap level is 4 bytes in
+                size.</para>
+            <para>Note that the internal format we provide is <literal>GL_RGB8</literal>, even
+                though the components we are transferring are <literal>GL_BGRA</literal> (the A
+                being the fourth component). This means that OpenGL will more or less discard the
+                fourth component we upload. That is fine.</para>
+            <para>The issue with the special texture's pixel data is that it is not 4 bytes in
+                length. The function used to generate a mipmap level of the special texture is as
+                follows:</para>
+            <example>
+                <title>Special Texture Data</title>
+                <programlisting>void FillWithColor(std::vector&lt;GLubyte> &amp;buffer,
+                   GLubyte red, GLubyte green, GLubyte blue,
+                   int width, int height)
+{
+    int numTexels = width * height;
+    buffer.resize(numTexels * 3);
+    
+    std::vector&lt;GLubyte>::iterator it = buffer.begin();
+    while(it != buffer.end())
+    {
+        *it++ = red;
+        *it++ = green;
+        *it++ = blue;
+    }
+}</programlisting>
+            </example>
+            <para>This creates a texture that has 24-bit pixels; each pixel contains 3 bytes.</para>
+            <para>That is fine for any width value that is a multiple of 4. However, if the width is
+                2, then each row of pixel data will be 6 bytes long. That is not a multiple of 4 and
+                therefore breaks alignment.</para>
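+            <para>The number of bytes a row must occupy, for a given alignment, can be computed by
+                rounding the raw row size up to the next multiple of that alignment. This small
+                helper is illustrative only; it is not part of the tutorial's code:</para>
+            <programlisting language="cpp">int alignedRowSize(int width, int bytesPerPixel, int alignment)
+{
+    //Round the raw row size up to the next multiple of the alignment.
+    int rowBytes = width * bytesPerPixel;
+    int remainder = rowBytes % alignment;
+    return (remainder == 0) ? rowBytes : rowBytes + (alignment - remainder);
+}</programlisting>
+            <para>With the default 4-byte alignment, a 2-pixel row of 3-byte pixels must occupy 8
+                bytes rather than 6; with an alignment of 1, it occupies exactly 6.</para>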
+            <para>Therefore, we must change the pixel alignment that OpenGL uses. The
+                    <function>LoadMipmapTexture</function> function is what generates the special
+                texture. One of the first lines is this:</para>
+            <programlisting>GLint oldAlign = 0;
+glGetIntegerv(GL_UNPACK_ALIGNMENT, &amp;oldAlign);
+glPixelStorei(GL_UNPACK_ALIGNMENT, 1);</programlisting>
+            <para>The first two lines get the old alignment, so that we can reset it once we are
+                finished. The last line uses <function>glPixelStorei</function> to set the unpack
+                alignment to 1, so that pixel rows can begin on any byte boundary.</para>
+            <para>Note that the GLImg library does provide an alignment value; it is part of the
+                    <classname>Dimensions</classname> structure of an image. We have simply not used
+                it yet. In the last tutorial, our row widths were aligned to 4 bytes, so there was
+                no chance of a problem. In this tutorial, our image data is 4 bytes in pixel size,
+                so it is always intrinsically aligned to 4 bytes.</para>
+            <para>That being said, you should always keep row alignment in mind, particularly when
+                dealing with mipmaps.</para>
+        </section>
+        <section>
+            <title>Filtering Between Mipmaps</title>
+            <para>Our mipmap filtering has been a dramatic improvement over previous efforts.
+                However, it does create artifacts. One of particular concern is the change between
+                mipmap levels. It is abrupt and somewhat easy to notice for a moving scene. Perhaps
+                there is a way to smooth that out.</para>
+            <para>Our current minification filtering picks a single mipmap level and selects a
+                sample from it. It would be better if we could pick the two nearest mipmap levels
+                and blend between the values fetched from the two textures. This would give us a
+                smoother transition from one mipmap level to the next.</para>
+            <para>This is done by using <literal>GL_LINEAR_MIPMAP_LINEAR</literal> minification
+                filtering. The first <literal>LINEAR</literal> represents the filtering done within
+                a single mipmap level, and the second <literal>LINEAR</literal> represents the
+                filtering done between mipmap levels.</para>
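+            <para>The <quote>between levels</quote> half of this filter amounts to a simple linear
+                blend, driven by the fractional part of the computed mipmap level. A sketch, again
+                illustrative rather than how the hardware actually works:</para>
+            <programlisting language="cpp">float blendMipmapLevels(float fromLargerMipmap, float fromSmallerMipmap, float lod)
+{
+    //The fractional part of the level-of-detail says how close we are
+    //to the smaller of the two nearest mipmap levels.
+    float frac = lod - floorf(lod);
+    return fromLargerMipmap * (1.0f - frac) + fromSmallerMipmap * frac;
+}</programlisting>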
+            <para>To see this in action, press the <keycap>4</keycap> key.</para>
+            <!--TODO: Picture of linear_mipmap_linear, on a plane, with both textures side-by-side.-->
+            <para>That is an improvement. There are still issues to work out, but it is much harder
+                to see where one mipmap ends and another begins.</para>
+            <para>OpenGL actually allows all combinations of <literal>NEAREST</literal> and
+                    <literal>LINEAR</literal> in minification filtering. Using nearest filtering
+                within levels while linearly filtering between levels
+                    (<literal>GL_NEAREST_MIPMAP_LINEAR</literal>) is not terribly useful,
+                however.</para>
+            <sidebar>
+                <title>Filtering Nomenclature</title>
+                <para>If you are familiar with texture filtering from other materials, you may have
+                    heard the terms <quote>bilinear filtering</quote> and <quote>trilinear
+                        filtering</quote> before. Indeed, you may know that linear filtering between
+                    mipmap levels is commonly called trilinear filtering.</para>
+                <para>This book does not use that terminology. And for good reason: <quote>trilinear
+                        filtering</quote> is a misnomer.</para>
+                <para>To understand the problem, it is important to understand what <quote>bilinear
+                        filtering</quote> means. The <quote>bi</quote> in bilinear comes from doing
+                    linear filtering along the two axes of a 2D texture. So there is linear
+                    filtering in the S and T directions (remember: proper OpenGL nomenclature calls
+                    the 2D texture coordinate axes S and T); since that is two directions, it is
+                    called <quote>bilinear filtering</quote>. Thus <quote>trilinear</quote> comes
+                    from adding a third direction of linear filtering: between mipmap levels.</para>
+                <para>Therefore, one could consider using <literal>GL_LINEAR</literal> mag and min
+                    filtering to be bilinear, and using <literal>GL_LINEAR_MIPMAP_LINEAR</literal>
+                    to be trilinear.</para>
+                <para>That's all well and good... for 2D textures. But what about for 1D textures?
+                    Since 1D textures are one dimensional, <literal>GL_LINEAR</literal> mag and min
+                    filtering only filters in one direction: S. Therefore, it would be reasonable to
+                    call 1D <literal>GL_LINEAR</literal> filtering simply <quote>linear
+                        filtering.</quote> Indeed, filtering between mipmap levels of 1D textures
+                    (yes, 1D textures can have mipmaps) would have to be called <quote>bilinear
+                        filtering.</quote></para>
+                <para>And then there are 3D textures. <literal>GL_LINEAR</literal> mag and min
+                    filtering filters in all 3 directions: S, T, and R. Therefore, that would have
+                    to be called <quote>trilinear filtering.</quote> And if you add linear mipmap
+                    filtering on top of that (yes, 3D textures can have mipmaps), it would be
+                        <quote>quadrilinear filtering.</quote></para>
+                <para>Therefore, the term <quote>trilinear filtering</quote> means absolutely
+                    nothing without knowing what the texture's type is. Whereas
+                        <literal>GL_LINEAR_MIPMAP_LINEAR</literal> always has a well-defined meaning
+                    regardless of the texture's type.</para>
+                <para>Unlike geometry shaders, which ought to have been called primitive shaders,
+                    OpenGL does not enshrine this misnomer into its API. There is no
+                        <literal>GL_TRILINEAR_FILTERING</literal> enum. Therefore, in this book, we
+                    can and will use the proper terms for these.</para>
+            </sidebar>
+        </section>
+    </section>
+    <section>
+        <?dbhtml filename="Tut15 Anisotropy.html" ?>
+        <title>Anisotropy</title>
+        <para>Linear mipmap filtering is good; it eliminates most of the fluttering and oddities in
+            the distance. The problem is that it replaces a lot of that fluttering with... grey.
+            Mipmap-based filtering works reasonably well, but it tends to over-compensate.</para>
+        <para>For example, take the diagonal chain of squares at the left or right of the screen.
+            Expand the window horizontally if you need to.</para>
+        <!--TODO: Picture of the linear_mipmap_linear with diagonal in view.-->
+        <para>Pixels that are along this diagonal should be mostly black. As they get farther and
+            farther away, the fragment area becomes more and more distorted length-wise, relative to
+            the texel area:</para>
+        <figure>
+            <title>Long Fragment Area</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="DiagonalDiagram.svg"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>With perfect filtering, we should get a value that is mostly black. But instead, we
+            get a much lighter shade of grey. The reason has to do with the specifics of mipmapping
+            and mipmap selection.</para>
+        <para>Mipmaps are pre-filtered versions of the main texture. The problem is that they are
+            filtered in both directions equally. This is fine if the fragment area is square, but
+            for oblong shapes, mipmap selection becomes more problematic. The particular algorithm
+            used is very conservative. It selects the smallest mipmap level possible for the
+            fragment area. So long, thin areas, in terms of the values fetched by the texture
+            function, will be no different from a square area.</para>
+        <figure>
+            <title>Long Fragment with Sample Area</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="MipmapDiagonalDiagram.svg"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>The large square represents the effective filtering box, while the smaller area is the
+            one that we are actually sampling from. Mipmap filtering can often combine texel values
+            from outside of the sample area, and in this particularly degenerate case, it pulls in
+            texel values from very far outside of the sample area.</para>
+        <para>This happens when the filter box is not a square. A square filter box is said to be
+            isotropic: uniform in all directions. Therefore, a non-square filter box is anisotropic.
+            Filtering that takes into account the anisotropic nature of a particular filter box is
+            naturally called <glossterm>anisotropic filtering.</glossterm></para>
+        <para>The OpenGL specification is usually very particular about most things. It explains the
+            details of which mipmap is selected as well as how closeness is defined for linear
+            interpolation between mipmaps. But for anisotropic filtering, the specification is very
+            loose as to exactly how it works.</para>
+        <para>The general idea is this. The implementation will take some number of samples that
+            approximates the shape of the filter box in the texture. It will select from mipmaps,
+            but only when those mipmaps represent a closer filtered version of the area being
+            sampled. Here is an example:</para>
+        <figure>
+            <title>Parallelogram Sample Area</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="ParallelogramDiag.svg"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>Some of the samples that are entirely within the sample area can use smaller mipmaps
+            to reduce the number of samples actually taken. The above image only needs four samples
+            to approximate the sample area: the three small boxes, and the larger box in the
+            center.</para>
+        <para>All of the sample values will be averaged together based on a weighting algorithm that
+            best represents that sample's contribution to the filter box. Again, this is all very
+            general; the specific algorithms are implementation dependent.</para>
+        <para>Run the tutorial again. The <keycap>5</keycap> key activates a form of anisotropic
+            filtering.</para>
+        <!--TODO: Picture of anisotropic with plane.-->
+        <para>That's an improvement.</para>
+        <section>
+            <title>Sample Control</title>
+            <para>Anisotropic filtering requires taking multiple samples from the various mipmaps.
+                The control on the quality of anisotropic filtering is in limiting the number of
+                samples used. Raising the maximum number of samples taken will generally make the
+                result look better, but it will also decrease performance.</para>
+            <para>This is done by setting the <literal>GL_TEXTURE_MAX_ANISOTROPY_EXT</literal>
+                sampler parameter:</para>
+            <programlisting language="cpp">glSamplerParameteri(g_samplers[4], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
+glSamplerParameteri(g_samplers[4], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
+glSamplerParameterf(g_samplers[4], GL_TEXTURE_MAX_ANISOTROPY_EXT, 4.0f);</programlisting>
+            <para>This represents the maximum number of samples that will be taken for any texture
+                accesses through this sampler. Note that we still use linear mipmap filtering in
+                combination with anisotropic filtering. While you could theoretically use
+                anisotropic filtering without mipmaps, you will get much better performance if you
+                use it in tandem with linear mipmap filtering.</para>
+            <para>The max anisotropy is a floating point value, in part because the specific nature
+                of anisotropic filtering is left up to the hardware. But in general, you can treat
+                it like an integer value.</para>
+            <para>There is a limit to the maximum anisotropy that we can provide. This limit is
+                implementation defined; it can be queried with <function>glGetFloatv</function>,
+                since the value is a float rather than an integer. To set the max anisotropy to the
+                maximum possible value, we do this.</para>
+            <programlisting language="cpp">GLfloat maxAniso = 0.0f;
+glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &amp;maxAniso);
+
+glSamplerParameteri(g_samplers[5], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
+glSamplerParameteri(g_samplers[5], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
+glSamplerParameterf(g_samplers[5], GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);</programlisting>
+            <para>To see the results of this, press the <keycap>6</keycap> key.</para>
+            <!--TODO: Picture of max anisotropic with the plane.-->
+            <para>That looks pretty good now. There are still some issues out in the distance.
+                Remember that your image may not look exactly like this one, since the details of
+                anisotropic filtering are implementation specific.</para>
+            <para>You may be concerned that none of the filtering techniques, even the max
+                anisotropic one, produces perfect results. In the distance, the texture still
+                becomes a featureless grey even along the diagonal. The reason is that rendering a
+                large checkerboard is perhaps one of the most difficult problems from a texture
+                filtering perspective. This becomes even worse when it is viewed edge-on, as we do
+                here.</para>
+            <para>Indeed, the repeating checkerboard texture was chosen specifically because it
+                highlights the issues in a very obvious way. A more traditional diffuse color
+                texture typically looks much better with reasonable filtering applied.</para>
+        </section>
+        <section>
+            <title>A Matter of EXT</title>
+            <para>You may have noticed the <quote>EXT</quote> suffix on
+                    <literal>GL_TEXTURE_MAX_ANISOTROPY_EXT</literal>. This suffix means that this
+                enumerator comes from an <glossterm>OpenGL extension</glossterm>. First and
+                foremost, this means that this enumerator is not part of the OpenGL
+                Specification.</para>
+            <para>An OpenGL extension is a modification of OpenGL exposed by a particular
+                implementation. Extensions have published documents that explain how they change the
+                standard GL specification; this allows users to be able to use them correctly.
+                Because different implementations of OpenGL will implement different sets of
+                extensions, there is a mechanism for querying whether an extension is implemented.
+                This allows user code to detect the availability of certain hardware features and
+                use them or not as needed.</para>
+            <para>There are several kinds of extensions. There are proprietary extensions; these are
+                created by a particular vendor and are rarely if ever implemented by another vendor.
+                In some cases, they are based on intellectual property owned by that vendor and thus
+                cannot be implemented without explicit permission. The enums and functions for these
+                extensions end with a suffix based on the proprietor of the extension. An
+                NVIDIA-only extension would end in <quote>NV,</quote> for example.</para>
+            <para>ARB extensions are a special class of extension that is blessed by the OpenGL ARB
+                (who governs the OpenGL specification). These are typically created as a
+                collaboration between multiple members of the ARB. Historically, they have
+                represented functionality that implementations were highly recommended to
+                implement.</para>
+            <para>EXT extensions are a class between the two. They are not proprietary extensions,
+                and in many cases were created through collaboration among ARB members. Yet at the
+                same time, they are not <quote>blessed</quote> by the ARB. Historically, EXT
+                extensions have been used as test beds for functionality and APIs, to ensure that
+                the API is reasonable before promoting the feature to OpenGL core or to an ARB
+                extension.</para>
+            <para>The <literal>GL_TEXTURE_MAX_ANISOTROPY_EXT</literal> enumerator is part of the
+                EXT_texture_filter_anisotropic extension. Since it is an extension rather than core
+                functionality, it is usually necessary for the user to detect if the extension is
+                available and only use it if it is. If you look through the tutorial code, you will
+                find no code that does this test.</para>
+            <para>The reason for that is simply a lack of need. The extension itself dates back to
+                the GeForce 256 (not the GeForce GTS 250; the original GeForce), way back in 1999.
+                Virtually all GPUs since then have implemented anisotropic filtering and exposed it
+                through this extension. That is why the tutorial does not bother to check for the
+                presence of this extension; if your hardware can run these tutorials, then it
+                exposes the extension.</para>
+            <para>If it is so ubiquitous, why has the ARB not adopted the functionality into core
+                OpenGL? Why must anisotropic filtering be an extension that is de facto guaranteed
+                but not technically part of OpenGL? This is because OpenGL must be Open.</para>
+            <para>The <quote>Open</quote> in OpenGL refers to the availability of the specification,
+                but also to the ability for anyone to implement it. As it turns out, anisotropic
+                filtering has intellectual property issues associated with it. If it were adopted
+                into the core, then core OpenGL would not be able to be implemented without
+                licensing the technology from the holder of the IP. It is not a proprietary
+                extension because none of the ARB members have the IP; it is held by a third
+                party.</para>
+            <para>Therefore, you may assume that anisotropic filtering is available through OpenGL.
+                But it is technically an extension.</para>
+        </section>
+
+    </section>
+    <section>
+        <title>How Mipmap Selection Works</title>
+        <?dbhtml filename="Tut15 How Mipmapping Works.html" ?>
+        <para>Previously, we discussed mipmap selection and interpolation in terms related to the
+            geometry of the object. That is true, but only when we are dealing with simple texture
+            mapping schemes, such as when the texture coordinates are attached directly to vertex
+            positions. But as we saw in our first tutorial on texturing, texture coordinates can be
+            entirely arbitrary. So how does mipmap selection and anisotropic filtering work
+            then?</para>
+        <para>Very carefully.</para>
+        <para>Imagine a 2x2 pixel area of the screen. Now imagine that four fragment shaders, all
+            from the same triangle, are executing for that screen area. Since the fragment shaders
+            are all guaranteed to have the same uniforms and the same code, the only thing that is
+            different is the fragment inputs. And because they are executing the same code, you can
+            conceive of them executing in lockstep. That is, each of them executes the same
+            instruction, on their individual dataset, at the same time.</para>
+        <para>Under that assumption, for any particular value in a fragment shader, you can pick the
+            corresponding 3 other values in the other fragment shaders executing alongside it. If
+            that value is based solely on uniform or constant data, then each shader will have the
+            same value. But if it is based in part on input values, then each shader may have a
+            different value, based on how it was computed and what those inputs were.</para>
+        <para>So, let's look at the texture coordinate value; the particular value used to access
+            the texture. Each shader has one. If that value is associated with the position of the
+            object, via perspective-correct interpolation and so forth, then the
+                <emphasis>difference</emphasis> between the shaders' values will represent the
+            window space geometry of the triangle. Window space has two dimensions, and therefore
+            there are two differences: the difference along the window space X axis, and along the
+            window space Y axis.</para>
+        <para>These two differences, sometimes called gradients or derivatives, are how mipmapping
+            actually works. If the texture coordinate used is just an interpolated input value,
+            which itself is directly associated with a position, then the gradients represent the
+            geometry of the triangle in window space. If the texture coordinate is computed in more
+            unconventional ways, it still works, as the gradients represent how the texture
+            coordinates are changing across the surface of the triangle.</para>
+        <para>Having two gradients allows for the detection of anisotropy. And therefore, it
+            provides enough information to reasonably apply anisotropic filtering algorithms.</para>
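+        <para>The usual way these gradients become a mipmap level is to take the length of each
+            gradient, measured in texels, and use the base-2 logarithm of the larger length. The
+            following is a sketch of the simple isotropic case only; anisotropic filtering
+            considers the two lengths separately, and implementations are free to vary the
+            details:</para>
+        <programlisting language="cpp">float computeLod(float dsdx, float dtdx, float dsdy, float dtdy)
+{
+    //Length of the texture-coordinate change along window X and along
+    //window Y, in texel units.
+    float lenX = sqrtf(dsdx * dsdx + dtdx * dtdx);
+    float lenY = sqrtf(dsdy * dsdy + dtdy * dtdy);
+
+    //The larger gradient drives the mipmap selection.
+    float rho = (lenX > lenY) ? lenX : lenY;
+    return log2f(rho);
+}</programlisting>
+        <para>A fragment whose texture coordinates move by one texel per pixel gets a
+            level-of-detail of 0; one that moves by four texels per pixel gets a level-of-detail
+            of 2.</para>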
+        <para>Now, you may notice that this process is very conditional. Specifically, it requires
+            that you have 4 fragment shaders all running in lock-step. There are two circumstances
+            where that might not happen.</para>
+        <para>The most obvious is on the edge of a triangle, where a 2x2 block of neighboring
+            fragments is not possible without being outside of the fragment. This case is actually
+            trivially covered by GPUs. No matter what, the GPU will rasterize each triangle in 2x2
+            blocks. Even if some of those blocks are not actually part of the triangle of interest,
+            they will still get fragment shader time. This may seem inefficient, but it is
+            reasonable enough so long as triangles are not incredibly tiny or thin, which is the
+            common case.
+            The results produced by fragment shaders outside of the triangle are discarded.</para>
+        <para>The other circumstance is through deliberate user intervention. Each fragment shader
+            running in lockstep has the same uniforms but different inputs. Since they have
+            different inputs, it is possible for them to execute a conditional branch based on these
+            inputs (an if-statement or other conditional). This could cause, for example, the
+            left-half of the 2x2 quad to execute certain code, while the other half executes
+            different code. The 4 fragment shaders are no longer in lock-step. How does the GPU
+            handle it?</para>
+        <para>Well... it doesn't. Dealing with this requires manual user intervention, and it is a
+            topic we will discuss later. Suffice it to say, it screws everything up.</para>
+    </section>
+    <section>
+        <?dbhtml filename="Tut15 Performace.html" ?>
+        <title>Performance</title>
+        <para>Mipmapping has some unexpected performance characteristics. A texture with a full
+            mipmap pyramid will take up ~33% more space than just the base level. So there is some
+            memory overhead. The unexpected part is that this is actually a memory vs. performance
+            tradeoff, as mipmapping usually improves performance.</para>
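+        <para>The one-third figure follows from summing the chain: each 2D mipmap level has roughly
+            one-quarter the texels of the level above it. A quick illustrative computation (not
+            part of the tutorial's code):</para>
+        <programlisting language="cpp">long totalTexels(int width, int height)
+{
+    //Sum the texel counts of every level in the mipmap chain.
+    long total = 0;
+    while(true)
+    {
+        total += (long)width * (long)height;
+        if(width * height == 1)
+            break;
+        width = (width / 2 > 0) ? width / 2 : 1;
+        height = (height / 2 > 0) ? height / 2 : 1;
+    }
+    return total;
+}</programlisting>
+        <para>For a 128x128 base level (16384 texels), the full chain totals 21845 texels, almost
+            exactly 4/3 of the base level alone.</para>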
+        <para>If a texture is going to be minified significantly, providing mipmaps is a performance
+            benefit. The reason is this: for a highly minified texture, the texture accesses for
+            adjacent fragment shaders will be very far apart. Texture sampling units like texture
+            access patterns where there is a high degree of locality, where adjacent fragment
+            shaders access texels that are very near one another. The farther apart they are, the
+            less useful the optimizations in the texture samplers are. Indeed, if they are far
+            enough apart, those optimizations start becoming performance penalties.</para>
+        <para>Textures that are used as lookup tables should generally not use mipmaps. But other
+            kinds of textures, like those that provide surface details, can and should where
+            reasonable.</para>
+        <para>While mipmapping itself is essentially free, linear mipmap filtering,
+                <literal>GL_LINEAR_MIPMAP_LINEAR</literal>, is generally not free. But the cost of
+            it is rather small these days. For those textures where mipmap interpolation makes
+            sense, it should be used.</para>
+        <para>Anisotropic filtering is even more costly, as one might expect. After all, it means
+            taking more texture samples to cover a particular texture area. However, anisotropic
+            filtering is almost always implemented adaptively. This means that it will only take
+            extra samples for fragments where it detects that this is necessary. And it will only
+            take enough samples to fill out the area, up to the maximum the user provides, of
+            course.
+            Therefore, turning on anisotropic filtering, even just 2x or 4x, only hurts for the
+            fragments that need it.</para>
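As a rough sketch of how these settings fit together using sampler objects (as the rest of these tutorials do), assuming the <literal>EXT_texture_filter_anisotropic</literal> extension is available; <literal>g_sampler</literal> is a placeholder for a sampler object created elsewhere:

```c
/* Sketch: trilinear plus modest anisotropic filtering on a sampler object.
 * Assumes an OpenGL 3.3 context, the EXT_texture_filter_anisotropic
 * extension, and a previously created sampler named g_sampler. */
glSamplerParameteri(g_sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(g_sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

GLfloat maxAniso = 0.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
/* 4.0 is a modest cap; since the filtering is adaptive, only fragments
 * that need the extra samples pay for them. */
glSamplerParameterf(g_sampler, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                    maxAniso < 4.0f ? maxAniso : 4.0f);
```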
+    </section>
+    
+    <section>
+        <?dbhtml filename="Tut15 In Review.html" ?>
+        <title>In Review</title>
+        <para>In this tutorial, you have learned the following:</para>
+        <itemizedlist>
+            <listitem>
+                <para>Visual artifacts can appear on objects that have textures mapped to them due
+                    to the discrete nature of textures. These artifacts are most pronounced when the
+                    texture's apparent size is larger or smaller than its actual size.</para>
+            </listitem>
+            <listitem>
+                <para>Filtering techniques can reduce these artifacts, transforming visual popping
+                    into something more visually palatable. This is most easily done for texture
+                    magnification.</para>
+            </listitem>
+            <listitem>
+                <para>Mipmaps are reduced size versions of images. The purpose behind them is to act
+                    as pre-filtered versions of images, so that texture sampling hardware can
+                    effectively sample and filter lots of texels all at once. The downside is that
+                    it can appear to over-filter textures, causing them to blend down to lower
+                    mipmaps in areas where detail could be retained.</para>
+            </listitem>
+            <listitem>
+                <para>Filtering can be applied between mipmap levels. Mipmap filtering can produce
+                    quite reasonable results with a relatively negligible performance
+                    penalty.</para>
+            </listitem>
+            <listitem>
+                <para>Anisotropic filtering attempts to rectify the over-filtering problems with
+                    mipmapping by filtering based on the coverage area of the texture access.
+                    Anisotropic filtering is controlled with a maximum value, which represents the
+                    maximum number of additional samples the texture access will use to compose the
+                    final color.</para>
+            </listitem>
+        </itemizedlist>
+        <section>
+            <title>Further Study</title>
+            <para>Try doing these things with the given programs.</para>
+            <itemizedlist>
+                <listitem>
+                    <para>Use non-mipmap filtering with anisotropic filtering and compare the
+                        results with the mipmap-based anisotropic version.</para>
+                </listitem>
+                <listitem>
+                    <para>Change the <literal>GL_TEXTURE_MAX_LEVEL</literal> of the checkerboard
+                        texture. Subtract 3 from the computed max level. This will prevent OpenGL
+                        from accessing the bottom 3 mipmaps: 1x1, 2x2, and 4x4. See what happens.
+                        Notice how there is less grey in the distance, but some of the shimmering
+                        from our non-mipmapped version has returned.</para>
+                </listitem>
+                <listitem>
+                    <para>Go back to <phrase role="propername">Basic Texture</phrase> in the
+                        previous tutorial and modify the sampler to use linear mag and min filtering
+                        on the 1D texture. See if the linear filtering makes some of the lower
+                        resolution versions of the table more palatable. If you were to try this
+                        with the 2D lookup texture in the <phrase role="propername">Material
+                            Texture</phrase> tutorial, it would cause filtering in both the S and T
+                        coordinates. This would mean that it would filter across the shininess of
+                        the table as well. Try this and see how this affects the results. Also try
+                        using linear filtering on the shininess texture.</para>
+                </listitem>
+            </itemizedlist>
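For the <literal>GL_TEXTURE_MAX_LEVEL</literal> exercise, the relevant call might look like this sketch; <literal>texWidth</literal> is a placeholder for the checkerboard texture's base level width, and the texture is assumed to be bound to <literal>GL_TEXTURE_2D</literal>:

```c
/* Sketch: restrict mipmap access for the bound 2D texture.
 * Needs <math.h>; texWidth is the base level's larger dimension. */
GLint maxMipmapLevel = (GLint)floorf(log2f((float)texWidth));
/* Cut off the bottom three mipmaps: 1x1, 2x2, and 4x4. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, maxMipmapLevel - 3);
```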
+        </section>
+        
+    </section>
+    <section>
+        <?dbhtml filename="Tut15 Glossary.html" ?>
+        <title>Glossary</title>
+        <glosslist>
+            <glossentry>
+                <glossterm>texture filtering</glossterm>
+                <glossdef>
+                    <para>The process of fetching the value of a texture at a particular texture
+                        coordinate, potentially involving combining multiple texel values
+                        together.</para>
+                    <para>Filtering can happen in two directions: magnification and minification.
+                        Magnification happens when the fragment area projected into a texture is
+                        smaller than the texel itself. Minification happens when the fragment area
+                        projection is larger than a texel.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>nearest filtering</glossterm>
+                <glossdef>
+                    <para>Texture filtering where the texel closest to the texture coordinate is the
+                        value returned.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>linear filtering</glossterm>
+                <glossdef>
+                    <para>Texture filtering where the closest texel values in each dimension of the
+                        texture are accessed and linearly interpolated, based on how close the
+                        texture coordinate was to those values. For 1D textures, this picks two
+                        values and interpolates. For 2D textures, it picks four; for 3D textures, it
+                        selects eight.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>mipmap, mipmap level</glossterm>
+                <glossdef>
+                    <para>Subimages of a texture. Each subsequent mipmap of a texture is half the
+                        size, rounded down, of the previous image. The largest mipmap is the base
+                        level. Many texture types can have mipmaps, but some cannot.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>mipmap filtering</glossterm>
+                <glossdef>
+                    <para>Texture filtering that uses mipmaps. The mipmap chosen when mipmap
+                        filtering is used is based on how large an area of the texture the fragment
+                        covers, relative to the size of a texel.</para>
+                    <para>Mipmap filtering can be nearest or linear. Nearest mipmap filtering picks
+                        a single mipmap and returns the value pulled from that mipmap. Linear mipmap
+                        filtering picks samples from the two nearest mipmaps and linearly
+                        interpolates between them. The sample returned in either case can have
+                        linear or nearest filtering applied within that mipmap.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>anisotropic filtering</glossterm>
+                <glossdef>
+                    <para>Texture filtering that takes into account the anisotropy of the texture
+                        access. This requires taking multiple samples across the irregular area of
+                        the texture that the fragment covers. This works better with mipmap
+                        filtering.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>OpenGL extension</glossterm>
+                <glossdef>
+                    <para>Functionality that is not part of OpenGL proper, but can be conditionally
+                        exposed by different implementations of OpenGL.</para>
+                </glossdef>
+            </glossentry>
+        </glosslist>
+        
+    </section>
+</chapter>

Documents/chunked.css

 body
 {
     background-color: #fff6e7;
-    padding: 0 5%;
+    padding: 0 5% 0 20%;
     font-family: calibri, helvetica, serif;
     font-size: 12pt;
 }
 
+div.toc
+{
+	position: absolute;
+	left: 0;
+	margin-left: 5%;
+	max-width: 15%;
+}
+
+div.book div.toc
+{
+	position: inherit;
+	left: inherit;
+	margin-left: inherit;
+	max-width: inherit;
+}
+
 br.example-break { display: none; }