Commits

Jason McKesson committed 19dcc55

Copyediting.

Files changed (5)

Documents/Illumination/Tutorial 09.xml

                 surface really is curved, we need to do something else.</para>
             <para>Instead of using the triangle's normal, we can assign to each vertex the normal
                 that it <emphasis>would</emphasis> have had on the surface it is approximating. That
-                is, while the mesh is an approximating, the normal for a vertex is the actual normal
+                is, while the mesh is an approximation, the normal for a vertex is the actual normal
                 for that surface. This actually works out surprisingly well.</para>
             <para>This means that we must add to the vertex's information. In past tutorials, we
                have had a position and sometimes a color. To that information, we add a normal. When
                transforming positions, the fourth component was 1.0; this was used so that the
                 translation component of the matrix transformation would be added to each
                 position.</para>
-            <para>Vectors represent directions, not absolute positions. And while rotating or
+            <para>Normals represent directions, not absolute positions. And while rotating or
                 scaling a direction is a reasonable operation, translating it is not. Now, we could
                 just adjust the matrix to remove all translations before transforming our light into
                 camera space. But that's highly unnecessary; we can simply put 0.0 in the fourth
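            <para>For illustration, the difference might look like this minimal sketch (using
                glm; <literal>modelToCameraMatrix</literal>, <literal>position</literal>, and
                    <literal>normal</literal> are hypothetical names, not this tutorial's actual
                code):</para>
            <programlisting>//Positions get w = 1.0, so the matrix's translation applies.
glm::vec4 cameraPosition = modelToCameraMatrix * glm::vec4(position, 1.0f);
//Directions get w = 0.0, so the translation column contributes nothing.
glm::vec4 cameraNormal = modelToCameraMatrix * glm::vec4(normal, 0.0f);</programlisting>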
         <para>In the pure-diffuse case, the light intensity is full white. But in the ambient case,
             we deliberately set the diffuse intensity to less than full white. This is very
             intentional.</para>
-        <para>We will talk more about this issue in the next future, but it is very critical that
+        <para>We will talk more about this issue in the near future, but it is critical that
             light intensity values not exceed 1.0. This includes <emphasis>combined</emphasis>
             lighting intensity values. OpenGL clamps colors that it writes to the output image to
             the range [0, 1]. So any light intensity that exceeds 1.0, whether alone or combined
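        <para>A quick sketch of the problem (the intensity values here are made up for
            illustration):</para>
        <programlisting>glm::vec4 diffuseIntensity(0.8f, 0.8f, 0.8f, 1.0f);
glm::vec4 ambientIntensity(0.4f, 0.4f, 0.4f, 1.0f);
//The sum is (1.2, 1.2, 1.2, 2.0); every component above 1.0 is
//clamped to 1.0 on output, so the extra brightness is simply lost.
glm::vec4 combinedIntensity = diffuseIntensity + ambientIntensity;</programlisting>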

Documents/Illumination/Tutorial 10.xml

             </mediaobject>
         </informalequation>
         <para>This equation computes physically realistic light attenuation for point-lights. But it
-            often does not look very good. Lights seem to have a much sharper intensity falloff than
+            often does not look very good. The equation tends to create a sharper intensity falloff than
             one would expect.</para>
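        <para>In code, inverse-square attenuation might be sketched like this (the camera-space
            positions and the attenuation constant <literal>k</literal> are hypothetical
            names):</para>
        <programlisting>glm::vec3 lightDiff = lightPos - fragPos;
//Squared distance, via the dot product of the difference with itself.
float distSqr = glm::dot(lightDiff, lightDiff);
float attenIntensity = 1.0f / (1.0f + k * distSqr);</programlisting>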
         <para>There is a reason for this, but it is not one we are ready to get into quite yet. What
             is often done is to simply use the inverse rather than the inverse-square of the
                     physically accurate, but it can look reasonably good.</para>
                 <para>We simply do linear interpolation based on the distance. When the distance is
                     0, the light has full intensity. When the distance is beyond a given distance,
-                    the maximum light range (which varies per-light), the intensity is 1.</para>
+                    the maximum light range (which varies per-light), the intensity is 0.</para>
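                <para>A sketch of this linear falloff, with a hypothetical per-light
                        <literal>lightRange</literal>:</para>
                <programlisting>float distToLight = glm::length(lightPos - fragPos);
//1.0 at distance 0, falling to 0.0 at the maximum range,
//and clamped to that range outside it.
float atten = glm::clamp(1.0f - distToLight / lightRange, 0.0f, 1.0f);</programlisting>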
                 <para>Note that <quote>reasonably good</quote> depends on your needs. The closer you
                     get in other ways to providing physically accurate lighting, the closer you get
                     to photorealism, the less you can rely on less accurate phenomena. It does no

Documents/Illumination/Tutorial 12.xml

         <para>The biggest thing here is that we want the scene to dynamically change lighting
             levels. Specifically, we want a full day/night cycle. The sun will sink, gradually
             losing intensity until it has none. There, it will remain until the dawn of the next
-            day, where it will gain strength until full and rise again. The other lights should be
+            day, where it will gain strength and rise again. The other lights should be
             much weaker in overall intensity than the sun.</para>
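        <para>One possible sketch of such a cycle (not the code this tutorial will actually use;
            the normalized <literal>dayTime</literal> parameter is hypothetical):</para>
        <programlisting>//dayTime runs from 0.0 to 1.0 over a full day/night cycle,
//with the sun up during the first half.
float SunIntensity(float dayTime)
{
    float height = sinf(dayTime * 6.28318f);
    //The night half of the cycle goes negative and clamps to zero.
    return glm::clamp(height, 0.0f, 1.0f);
}</programlisting>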
         <para>One thing that this requires is a dynamic ambient lighting range. Remember that the
             ambient light is an attempt to resolve the global illumination problem: that light

Documents/Texturing/Tutorial 14.xml

                 represents a texture being accessed by our shader. How do we associate a texture
                 object with a sampler in the shader?</para>
             <para>Although the API is slightly more obfuscated due to legacy issues, this
-                association is made essentially the same was as for UBOs.</para>
+                association is made essentially the same way as UBOs.</para>
             <para>The OpenGL context has an array of slots called <glossterm>texture image
                     units</glossterm>, also known as <glossterm>image units</glossterm> or
                     <glossterm>texture units</glossterm>. Each image unit represents a single
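            <para>In rough outline, the association might look like this sketch
                    (<literal>g_texture</literal>, <literal>program</literal>, and the sampler
                uniform name <literal>diffuseTex</literal> are hypothetical):</para>
            <programlisting>GLuint texUnit = 0;
//Bind the texture object to image unit 0.
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, g_texture);
//Tell the shader's sampler to read from image unit 0.
GLint samplerLoc = glGetUniformLocation(program, "diffuseTex");
glUseProgram(program);
glUniform1i(samplerLoc, texUnit);</programlisting>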

Documents/Texturing/Tutorial 15.xml

             takes the texture coordinate, fetches a texture with it, and writes that color value as
             output. Not even gamma correction is used.</para>
         <para>The texture in question is 128x128 in size, with 4 alternating black and white squares
-            on each side.</para>
+            on each side. Each of the black or white squares is 32 pixels across.</para>
     </section>
     <section>
         <?dbhtml filename="Tut15 Magnification.html" ?>
         <title>Linear Filtering</title>
         <para>While this example certainly draws a checkerboard, you can see that there are some
-            visual issues. We will start finding solutions to the with the least obvious glitches
+            visual issues. We will start finding solutions to these, with the least obvious glitches
             first.</para>
         <para>Take a look at one of the squares at the very bottom of the screen. Notice how the
             line looks jagged as it moves to the left and right. You can see the pixels of it sort
             of crawl up and down as it shifts around on the plane.</para>
+        <!--TODO: Picture of one of the checker squares at the bottom (zoomed), with nearest filtering.-->
         <para>This is caused by the discrete nature of our texture accessing. The texture
             coordinates are all in floating-point values. The GLSL <function>texture</function>
             function internally converts these texture coordinates to specific texel values within
             two texels?</para>
         <para>That is governed by a process called <glossterm>texture filtering</glossterm>.
             Filtering can happen in two directions: magnification and minification. Magnification
-            happens when the texture mapping makes the texture appear bigger than it's actual
-            resolution. If you get closer to the texture, relative to its mapping, then the texture
-            is magnified relative to its natural resolution. Minification is the opposite: when the
-            texture is being shrunken relative to its natural resolution.</para>
+            happens when the texture mapping makes the texture appear bigger in screen space than
+            its actual resolution. If you get closer to the texture, relative to its mapping, then
+            the texture is magnified relative to its natural resolution. Minification is the
+            opposite: when the texture is being shrunken relative to its natural resolution.</para>
         <para>In OpenGL, magnification and minification filtering are each set independently. That
             is what the <literal>GL_TEXTURE_MAG_FILTER</literal> and
                 <literal>GL_TEXTURE_MIN_FILTER</literal> sampler parameters control. We are
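        <para>Setting both filters to nearest-neighbor sampling on a sampler object might look
            like this sketch (the <literal>g_sampler</literal> name is hypothetical):</para>
        <programlisting>glSamplerParameteri(g_sampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glSamplerParameteri(g_sampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);</programlisting>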
         <para>Therefore, we can use the texture mapping in reverse. We can take the four corners of
             a pixel area and find the texture coordinates from them. The area of this 4-sided
             figure, in the space of the texture, is the area of the texture that is being mapped to
-            the screen pixel. With a perfect texture accessing system, the total color of that area
-            would be the value we get from the GLSL <function>texture</function> function.</para>
+            that location on the screen. With a perfect texture accessing system, the value we get
+            from the GLSL <function>texture</function> function would be the average value of the
+            colors in that area.</para>
         <!--TODO: Diagram of the checkerboard texture, at the pixel level with a grid.
 This should also have a region that represents the pixel area mapped from the surface.
 And it should have a point representing the texture coordinate.-->
         <!--TODO: Picture of linear filtering.-->
         <para>That looks much better for the squares close to the camera. It creates a bit of
             fuzziness, but this is generally a lot easier for the viewer to tolerate than pixel
-            crawl. Human vision tends to be attracted to movement, and false movement, like dot
-            crawl, can be distracting.</para>
+            crawl. Human vision tends to be attracted to movement, and false movement like dot crawl
+            can be distracting.</para>
     </section>
     <section>
         <?dbhtml filename="Tut15 Needs More Pictures.html" ?>
             texture space area covered by our fragment is much larger than 4 texels across.</para>
         <!--TODO: Diagram of the fragment area in texture space. There should be a texture coordinate location.-->
         <para>In order to accurately represent this area of the texture, we would need to sample
-            from more than just 4 texels. The GPU would be capable of detecting the fragment area
-            and sampling enough values from the texture to be representative. But this would be
+            from more than just 4 texels. The GPU is certainly capable of detecting the fragment
+            area and sampling enough values from the texture to be representative. But this would be
             exceedingly expensive, both in terms of texture bandwidth and computation.</para>
         <para>What if, instead of having to sample more texels, we had a number of smaller versions
             of our texture? The smaller versions effectively pre-compute groups of texels. That way,
             of the 128x16 texture would be 8x1; the next mipmap would be 4x1.</para>
         <note>
             <para>It is perfectly legal to have texture sizes that are not powers of two. For them,
-                mipmap sizes are rounded down. So a 129x129 texture's mipmap 1 will be 64x64.</para>
+                mipmap sizes are always rounded down. So a 129x129 texture's mipmap 1 will be 64x64.
+                A 131x131 texture's mipmap 1 will be 65x65, and mipmap 2 will be 32x32.</para>
         </note>
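        <para>The size computation can be sketched as a simple halving loop:</para>
        <programlisting>int MipmapSize(int baseSize, int level)
{
    int size = baseSize;
    for (int i = 0; i != level; ++i)
        size = (size > 1) ? size / 2 : 1; //halve, round down, clamp at 1
    return size;
}
//MipmapSize(129, 1) == 64; MipmapSize(131, 1) == 65; MipmapSize(131, 2) == 32</programlisting>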
         <para>The DDS image format is one of the few image formats that actually supports storing
             all of the mipmaps for a texture in the same file. Most image formats only allow one
             parameters tell OpenGL what mipmaps in our texture can be used. This represents a closed
             range. Since a 128x128 texture has 8 mipmaps, we use the range [0, 7]. The base level of
             a texture is the largest usable mipmap level, while the max level is the smallest usable
-            level. It is possible to omit some of the smaller mipmap levels.</para>
+            level. It is possible to omit some of the smaller mipmap levels. Note that level 0 is
+            always the largest possible mipmap level.</para>
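        <para>For a 128x128 texture with a full mipmap chain, that range might be set like so
            (the texture object name is hypothetical):</para>
        <programlisting>glBindTexture(GL_TEXTURE_2D, g_checkerTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 7);</programlisting>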
         <para>Filtering based on mipmaps is unsurprisingly named <glossterm>mipmap
                 filtering</glossterm>. This tutorial does not load two checkerboard textures; it
             only ever uses one checkerboard. The reason mipmaps have not been used until now is
         <para>If you press the <keycap>3</keycap> key in the tutorial, you can see the effects of
             this filtering mode.</para>
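        <para>A sketch of one such mipmap filtering setup, linear within a level and nearest
            between levels (<literal>g_sampler</literal> is hypothetical):</para>
        <programlisting>glSamplerParameteri(g_sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(g_sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);</programlisting>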
         <!--TODO: Picture of LINEAR_MIPMAP_NEAREST, #3. Hallway.-->
-        <para>That's a lot more reasonable. It isn't perfect, but it's much better than the random
+        <para>That's a lot more reasonable. It isn't perfect, but it is much better than the random
             motion in the distance that we have previously seen.</para>
         <para>It can be difficult to truly understand the effects of mipmap filtering when using
             normal textures and mipmaps. Therefore, if you press the <keycap>Spacebar</keycap>, the
                 size.</para>
             <para>Note that the internal format we provide is <literal>GL_RGB8</literal>, even
                 though the components we are transferring are <literal>GL_BGRA</literal> (the A
-                being the fourth component). This means that OpenGL will, more or less, discard the
+                being the fourth component). This means that OpenGL will more or less discard the
                 fourth component we upload. That is fine.</para>
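            <para>The upload being described might look roughly like this (the size and data
                variables are hypothetical):</para>
            <programlisting>//RGB8 storage, fed from 4-component BGRA source data; the A is dropped.
glTexImage2D(GL_TEXTURE_2D, mipmapLevel, GL_RGB8, width, height, 0,
    GL_BGRA, GL_UNSIGNED_BYTE, pixelData);</programlisting>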
             <para>The issue with the special texture's pixel data is that it is not 4 bytes in
                 length. The function used to generate a mipmap level of the special texture is as
                         filtering</quote> means. The <quote>bi</quote> in bilinear comes from doing
                     linear filtering along the two axes of a 2D texture. So there is linear
                     filtering in the S and T directions (remember: proper OpenGL nomenclature calls
-                    the texture coordinate axes S and T); since that is two directions, it is called
-                        <quote>bilinear filtering</quote>. Thus <quote>trilinear</quote> comes from
-                    adding a third direction of linear filtering: between mipmap levels.</para>
+                    the 2D texture coordinate axes S and T); since that is two directions, it is
+                    called <quote>bilinear filtering</quote>. Thus <quote>trilinear</quote> comes
+                    from adding a third direction of linear filtering: between mipmap levels.</para>
                 <para>Therefore, one could consider using <literal>GL_LINEAR</literal> mag and min
                     filtering to be bilinear, and using <literal>GL_LINEAR_MIPMAP_LINEAR</literal>
                     to be trilinear.</para>
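                <para>In sampler-parameter terms, with a hypothetical
                        <literal>g_sampler</literal>:</para>
                <programlisting>//Bilinear: linear in S and T only.
glSamplerParameteri(g_sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(g_sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//Trilinear: also linear between mipmap levels.
glSamplerParameteri(g_sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);</programlisting>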
             memory overhead. The unexpected part is that this is actually a memory vs. performance
             tradeoff, as mipmapping improves performance.</para>
         <para>If a texture is going to be minified significantly, providing mipmaps is a performance
-            benefit. The reason is this: for a minified texture, the texture accesses for adjacent
-            fragment shaders will be very far apart. Texture sampling units like texture access
-            patterns where there is a high degree of locality, where adjacent fragment shaders
-            access texels that are very near one another. The farther apart they are, the less
-            useful the optimizations in the texture samplers are. Indeed, if they are far enough
-            apart, those optimizations start becoming performance penalties.</para>
+            benefit. The reason is this: for a highly minified texture, the texture accesses for
+            adjacent fragment shaders will be very far apart. Texture sampling units like texture
+            access patterns where there is a high degree of locality, where adjacent fragment
+            shaders access texels that are very near one another. The farther apart they are, the
+            less useful the optimizations in the texture samplers are. Indeed, if they are far
+            enough apart, those optimizations start becoming performance penalties.</para>
        <para>Textures that are used as lookup tables should generally not use mipmaps. But other
            kinds of textures, like those that provide surface details, can and should use them
            where reasonable.</para>
-        <para>While mipmapping is free, mipmap filtering,
-            <literal>GL_LINEAR_MIPMAP_LINEAR</literal>, is generally not free. But the cost of it is
-            rather small these days. For those textures where mipmap interpolation makes sense, it
-            should be used.</para>
+        <para>While mipmapping is free, linear mipmap filtering,
+                <literal>GL_LINEAR_MIPMAP_LINEAR</literal>, is generally not free. But the cost of
+            it is rather small these days. For those textures where mipmap interpolation makes
+            sense, it should be used.</para>
         <para>Anisotropic filtering is even more costly, as one might expect. After all, it means
             taking more texture samples to cover a particular texture area. However, anisotropic
             filtering is almost always implemented adaptively. This means that it will only take
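        <para>Enabling it might look like this sketch, via the widely supported
            EXT_texture_filter_anisotropic extension (<literal>g_sampler</literal> is
            hypothetical):</para>
        <programlisting>GLfloat maxAniso = 1.0f;
//Ask for the largest sample count the implementation allows.
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &amp;maxAniso);
glSamplerParameterf(g_sampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);</programlisting>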