Commits

Jason McKesson committed b43a669

Issue #65: fixed.
Copyediting.

Comments (0)

Files changed (6)

Documents/Illumination/Tutorial 13.xml

             <para>Our initial pipeline discussion ignored this shader stage, because it is an
                 entirely optional part of the pipeline. If a program object does not contain a
                 geometry shader, then OpenGL just does its normal stuff.</para>
-            <para>The most confusing thing about geometry shaders is that they do not shade geometry.
-                Vertex shaders take a vertex as input and write a vertex as output. Fragment shader
-                take a fragment as input and write a fragment as output.</para>
-            <para>Geometry shaders take a <emphasis>primitive</emphasis> as input and write one or
-                more primitives as output. By all rights, they should be called <quote>primitive
-                    shaders.</quote></para>
+            <para>The most confusing thing about geometry shaders is that they do not shade
+                geometry. Vertex shaders take a vertex as input and write a vertex as output.
+                Fragment shaders take a fragment as input and potentially write a fragment as
+                output. Geometry shaders take a <emphasis>primitive</emphasis> as input and write
+                zero or more primitives as output. By all rights, they should be called
+                    <quote>primitive shaders.</quote></para>
             <para>In any case, geometry shaders are invoked just after the hardware that collects
                 vertex shader outputs into a primitive, but before any clipping, transforming or
                 rasterization happens. Geometry shaders get the values output from multiple vertex

Documents/Positioning/Tutorial 03.xml

         <programlisting language="cpp">glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);</programlisting>
         <para>with this:</para>
         <programlisting language="cpp">glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STREAM_DRAW);</programlisting>
-        <para>GL_STATIC_DRAW tells OpenGL that you intend to only set the data in this buffer object
-            once. GL_STREAM_DRAW tells OpenGL that you intend to set this data constantly, generally
-            once per frame. These parameters do not mean <emphasis>anything</emphasis> with regard to
-            the API; they are simply hints to the OpenGL implementation. Proper use of these hints
-            can be crucial for getting good buffer object performance when making frequent changes.
-            We will see more of these hints later.</para>
+        <para><literal>GL_STATIC_DRAW</literal> tells OpenGL that you intend to only set the data in
+            this buffer object once. <literal>GL_STREAM_DRAW</literal> tells OpenGL that you intend
+            to set this data constantly, generally once per frame. These parameters do not mean
+                <emphasis>anything</emphasis> with regard to the API; they are simply hints to the
+            OpenGL implementation. Proper use of these hints can be crucial for getting good buffer
+            object performance when making frequent changes. We will see more of these hints
+            later.</para>
         <para>The rendering function now has become this:</para>
         <example>
             <title>Updating and Drawing the Vertex Data</title>

Documents/Texturing/Tutorial 14.xml

             linear-interpolation.</para>
         <para>What happens is that the <keycap>S</keycap> key switches meshes. The
                 <quote>fake</quote> mesh is not really a hallway; it is perfectly flat. It is more
-            or less a mesh who's vertex positions are in clip-space, after multiplying the original
-            hallway by the perspective matrix. The difference is that the clip-space W is not
-            present. It's just a flat object, an optical illusion. There is no perspective
-            information for the perspective-correction logic to key on, so it looks just like
-            window-space linear interpolation.</para>
+            or less a mesh whose vertex positions are in NDC-space, after multiplying the original
+            hallway by the perspective matrix. The difference is that there is no W coordinate; it's
+            just a flat object, an optical illusion. There is no perspective information for the
+            perspective-correction logic to key on, so it looks just like window-space linear
+            interpolation.</para>
         <para>The switch used to turn on or off perspective-correct interpolation is the
             interpolation qualifier. Previously, we said that there were three qualifiers:
                 <literal>flat</literal>, <literal>smooth</literal>, and
             rare, far more rare than <literal>flat</literal>. The important thing to understand from
             this section is that interpolation style matters. And <literal>smooth</literal> will be
             our default interpolation; fortunately, it is OpenGL's default too.</para>
-        <note>
-            <para>If you are interested in how the </para>
-        </note>
     </section>
     <section>
         <?dbhtml filename="Tut14 Texture Mapping.html" ?>
         <section>
             <title>Image From a File</title>
             <para>Our Gaussian texture comes from data we compute, but the specular shininess
-                texture is defined by a file. For this, we use the GLImg library. While the GLImg
-                library has functions that will directly create textures for us, it is instructive
-                to see a more manual process.</para>
+                texture is defined by a file. For this, we use the GL Image library that is part of
+                the OpenGL SDK. While the GL Image library has functions that will directly create
+                textures for us, it is instructive to see a more manual process.</para>
             <example>
                 <title>CreateShininessTexture function</title>
                 <programlisting language="cpp">void CreateShininessTexture()
     }
 }</programlisting>
             </example>
-            <para>The GLImg library has a number of loaders for different image formats; the one we
-                use in the first line of the try-block is the DDS loader. DDS stands for
+            <para>The GL Image library has a number of loaders for different image formats; the one
+                we use in the first line of the try-block is the DDS loader. DDS stands for
                     <quote>Direct Draw Surface,</quote> but it really has nothing to do with
                 Direct3D or DirectX. It is unique among image file formats </para>
             <para>The <classname>glimg::ImageSet</classname> object also supports all of the unique
                 vertices; those vertices are just in different positions.</para>
             <para>Mapping an object onto a 2D plane generally means finding a way to slice the
                 object into pieces that fit onto that plane. However, a torus is, topologically
-                speaking, equivalent to a plane. It is rolled into a tube, and bent around, so that
-                each side connects to its opposing side directly. Therefore, mapping a texture onto
-                this means reversing the process: cutting the torus lengthwise down an arbitrary
-                line, much like a car tire. Then, it is cut again along the side, so that it becomes
+                speaking, equivalent to a plane. This plane is rolled into a tube, and bent around,
+                so that each side connects to its opposing side directly. Therefore, mapping a
+                texture onto this means reversing the process. The tube is cut at one end, creating
+                a cylinder. Then, it is cut lengthwise, much like a car tire, and flattened out into
                 a plane.</para>
             <para>Exactly where those cuts need to be made is arbitrary. And because the specular
                 texture mirrors perfectly in the S and T directions, it is not possible to tell
-                exactly where the seams in the topology are.</para>
+                exactly where the seams in the topology are. But they do need to be there.</para>
             <para>What this does mean is that the vertices along the seam have duplicate positions
                 and normals. Because they have different texture coordinates, their shared positions
                 and normals must be duplicated to match what OpenGL needs.</para>

Documents/Texturing/Tutorial 15.xml

             <para>The first two lines get the old alignment, so that we can reset it once we are
                 finished. The last line uses <function>glPixelStorei</function>
             </para>
-            <para>Note that the GLImg library does provide an alignment value; it is part of the
+            <para>Note that the GL Image library does provide an alignment value; it is part of the
                     <classname>Dimensions</classname> structure of an image. We have simply not used
                 it yet. In the last tutorial, our row widths were aligned to 4 bytes, so there was
                 no chance of a problem. In this tutorial, our image data is 4-bytes in pixel size,
                 however.</para>
             <sidebar>
                 <title>Filtering Nomenclature</title>
-                <para>If you are familiar with texture filtering from other materials, you may have
+                <para>If you are familiar with texture filtering from other sources, you may have
                     heard the terms <quote>bilinear filtering</quote> and <quote>trilinear
                         filtering</quote> before. Indeed, you may know that linear filtering between
                     mipmap levels is commonly called trilinear filtering.</para>
                 <para>To understand the problem, it is important to understand what <quote>bilinear
                         filtering</quote> means. The <quote>bi</quote> in bilinear comes from doing
                     linear filtering along the two axes of a 2D texture. So there is linear
-                    filtering in the S and T directions (remember: proper OpenGL nomenclature calls
-                    the 2D texture coordinate axes S and T); since that is two directions, it is
-                    called <quote>bilinear filtering</quote>. Thus <quote>trilinear</quote> comes
+                    filtering in the S and T directions (remember: standard OpenGL nomenclature
+                    calls the 2D texture coordinate axes S and T); since that is two directions, it
+                    is called <quote>bilinear filtering</quote>. Thus <quote>trilinear</quote> comes
                     from adding a third direction of linear filtering: between mipmap levels.</para>
                 <para>Therefore, one could consider using <literal>GL_LINEAR</literal> mag and min
                     filtering to be bilinear, and using <literal>GL_LINEAR_MIPMAP_LINEAR</literal>

Documents/Texturing/Tutorial 16.xml

                 default. We were drawing vertices directly in clip-space. And since the W of those
                 vertices was 1, clip-space is identical to NDC space, and we therefore had an
                 orthographic projection.</para>
-            <para>It is often useful to want to draw something certain objects using window-space
-                pixel coordinates. This is commonly used for drawing text, but it can also be used
-                for displaying images exactly as they appear in a texture, as we do here. Since a
-                vertex shader must output clip-space values, the key is to develop a matrix that
-                transforms window-space coordinates into clip-space. OpenGL will handle the
-                conversion back to window-space internally.</para>
+            <para>It is often useful to want to draw certain objects using window-space pixel
+                coordinates. This is commonly used for drawing text, but it can also be used for
+                displaying images exactly as they appear in a texture, as we do here. Since a vertex
+                shader must output clip-space values, the key is to develop a matrix that transforms
+                window-space coordinates into clip-space. OpenGL will handle the conversion back to
+                window-space internally.</para>
             <para>This is done via the <function>reshape</function> function, as with most of our
                 projection matrix functions. The computation is actually quite simple.</para>
             <example>
                 working in pixel coordinates, we want to specify vertex positions with integer pixel
                 coordinates. This is what the vertex data for the two rectangles look like:</para>
             <programlisting language="cpp">const GLushort vertexData[] = {
-     90, 80,	0,		0,
-     90, 16,	0,		65535,
-    410, 80,	65535,	0,
-    410, 16,	65535,	65535,
+     90, 80,	0,     0,
+     90, 16,	0,     65535,
+    410, 80,	65535, 0,
+    410, 16,	65535, 65535,
     
-     90, 176,	0,		0,
-     90, 112,	0,		65535,
-    410, 176,	65535,	0,
-    410, 112,	65535,	65535,
+     90, 176,   0,     0,
+     90, 112,   0,     65535,
+    410, 176,   65535, 0,
+    410, 112,   65535, 65535,
 };</programlisting>
             <para>Our vertex data has two attributes: position and texture coordinates. Our
                 positions are 2D, as are our texture coordinates. These attributes are interleaved,
         <para>One important linear operation performed on texel values is filtering. Whether
             magnification or minification, non-nearest filtering does some kind of linear
             arithmetic. Since this is all handled by OpenGL, the question is this: if a texture is
-            in an sRGB format, does OpenGL's texture filtering <emphasis>before</emphasis>
+            in an sRGB format, does OpenGL's texture filtering occur <emphasis>before</emphasis>
             converting the texel values to linear RGB or after?</para>
         <para>The answer is quite simple: filtering comes after linearizing. So it does the right
             thing.</para>
                 </imageobject>
             </mediaobject>
         </figure>
-        <para>This works like the filtering tutorials. The <keycap>1</keycap> and<keycap>2</keycap>
+        <para>This works like the filtering tutorials. The <keycap>1</keycap> and <keycap>2</keycap>
             keys respectively select linear mipmap filtering and anisotropic filtering (using the
             maximum possible anisotropy).</para>
-        <para>We can see that this looks a bit different from the last time we saw it. The distanct
+        <para>We can see that this looks a bit different from the last time we saw it. The distant
             grey field is much darker than it was. This is because we are using sRGB colorspace
             textures. While the white and black are the same in sRGB (1.0 and 0.0 respectively), a
             50% blend of them (0.5) is not. The sRGB texture assumes the 0.5 color is the sRGB 0.5,
             </mediaobject>
         </figure>
         <para>It uses the standard mouse-based controls to move around. As before, the
-                <keycap>1</keycap> and<keycap>2</keycap> keys respectively select linear mipmap
+                <keycap>1</keycap> and <keycap>2</keycap> keys respectively select linear mipmap
             filtering and anisotropic filtering. The main feature is the non-shader-based gamma
             correction. This is enabled by default and can be toggled by pressing the
                 <keycap>SpaceBar</keycap>.</para>

Tut 03 OpenGLs Moving Triangle/cpuPositionOffset.cpp

 	glGenBuffers(1, &positionBufferObject);
 
 	glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
-	glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);
+	glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STREAM_DRAW);
 	glBindBuffer(GL_ARRAY_BUFFER, 0);
 }