Commits

Jason McKesson committed 5c0f458

Removed a lot of contractions. Not "it's", but all the n't ones.

  • Parent commits 7c5064d

Files changed (19)

File Documents/Basics/Tutorial 00.xml

             <title>Vector Multiplication</title>
             <para>Vector multiplication is one of the few vector operations that has no real
                 geometric equivalent. To multiply a direction by another, or to multiply a position
-                by another position, doesn't really make sense. That doesn't mean that the numerical
-                equivalent isn't useful, though.</para>
+                by another position, does not really make sense. That does not mean that the numerical
+                equivalent is not useful, though.</para>
         </formalpara>
         <para>Multiplying two vectors numerically is simply component-wise multiplication, much like
             vector addition.</para>
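        <para>As a concrete sketch in GLSL terms (the names and values here are arbitrary), the
            <literal>*</literal> operator between two vectors is already component-wise:</para>
        <programlisting language="glsl">vec3 a = vec3(1.0, 2.0, 3.0);
vec3 b = vec3(4.0, 5.0, 6.0);
vec3 c = a * b;    //c is (4.0, 10.0, 18.0); each pair of components is multiplied</programlisting>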
     <section>
         <?dbhtml filename="Intro Graphics and Rendering.html"?>
         <title>Graphics and Rendering</title>
-        <para>This is an overview of the process of rendering. Do not worry if you don't understand
+        <para>This is an overview of the process of rendering. Do not worry if you do not understand
             everything right away; every step will be covered in lavish detail in later
             tutorials.</para>
         <para>Everything you see on your computer's screen, even the text you are reading right now
                     First, a pixel's final color can be composed of the results of multiple
                     fragments generated by multiple <emphasis>samples</emphasis> within a single
                     pixel. This is a common technique to remove jagged edges of triangles. Also, the
-                    fragment data has not been written to the image, so it isn't a pixel yet.
+                    fragment data has not been written to the image, so it is not a pixel yet.
                     Indeed, the fragment processing step can conditionally prevent rendering of a
                     fragment based on arbitrary computations. Thus a <quote>pixel</quote> in D3D
                     parlance may never actually become a pixel at all.</para>

File Documents/Basics/Tutorial 01.xml

             </example>
             <para>The first line creates the buffer object, storing the handle to the object in the
                 global variable <varname>positionBufferObject</varname>. Though the object now
-                exists, it doesn't own any memory yet. That is because we have not allocated any
+                exists, it does not own any memory yet. That is because we have not allocated any
                 with this object.</para>
             <para>The <function>glBindBuffer</function> function binds the newly-created buffer
                 object to the <literal>GL_ARRAY_BUFFER</literal> binding target. As mentioned in
                     <varname>position</varname> was assigned directly in the shader itself. There
                 are other ways to assign attribute indices to attributes besides
                     <literal>layout(location = #)</literal>. OpenGL will even assign an attribute
-                index if you don't use any of them. Therefore, it is possible that you may not know
+                index if you do not use any of them. Therefore, it is possible that you may not know
                 the attribute index of an attribute. If you need to query the attribute index, you
                 may call <function>glGetAttribLocation</function> with the program object and a
                 string containing the attribute's name.</para>
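             <para>For reference, the explicit form mentioned above is a one-line sketch (the
                 index 0 here is an arbitrary choice):</para>
             <programlisting language="glsl">layout(location = 0) in vec4 position;</programlisting>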
             assets when OpenGL is shut down as part of window deactivation.</para>
         <para>It is generally good form to delete objects that you create before shutting down
             OpenGL. And you certainly should do it if you encapsulate objects in C++ objects, such
-            that destructors will delete the OpenGL objects. But it isn't strictly necessary.</para>
+            that destructors will delete the OpenGL objects. But it is not strictly necessary.</para>
     </section>
     <section>
         <?dbhtml filename="Tut01 In Review.html" ?>

File Documents/Basics/Tutorial 02.xml

             <para>The third parameter to <function>mix</function> must be on the range [0, 1].
                 However, GLSL will not check this or do the clamping for you. If it is not on this
                 range, the result of the <function>mix</function> function will be undefined.
-                    <quote>Undefined</quote> is the OpenGL shorthand for, <quote>I don't know, but
-                    it's probably not what you want.</quote></para>
+                    <quote>Undefined</quote> is the OpenGL shorthand for, <quote>I do not know, but
+                    it is probably not what you want.</quote></para>
         </note>
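             <para>If the third parameter might fall outside that range, one defensive sketch is to
                 clamp it before calling <function>mix</function> (the variable names here are
                 illustrative):</para>
             <programlisting language="glsl">float lerpValue = clamp(rawLerpValue, 0.0, 1.0);
vec4 color = mix(firstColor, secondColor, lerpValue);</programlisting>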
         <para>We get the following image:</para>
         <figure>

File Documents/History of Graphics Hardware.xml

         <para>It's interesting to note that the simplicity of the fragment processing stage owes as
             much to the lack of inputs as anything else. When the only values you have to work with
             are the color from a texture lookup and the per-vertex interpolated color, there really
-            isn't all that much you could do with them. Indeed, as we will see in the next phases of
+            is not all that much you could do with them. Indeed, as we will see in the next phases of
             hardware, increases in the complexity of the fragment processor were a reaction to
             increasing the number of inputs <emphasis>to</emphasis> the fragment processor. When you
             have more data to work with, you need more complex operations to make that data
             point.</para>
         <para>This becomes less useful if you want to add a specular term. The specular absorption
             and diffuse absorption are not necessarily the same, after all. And while you may not
-            need to have a specular texture, you don't want to add the specular component to the
+            need to have a specular texture, you do not want to add the specular component to the
             diffuse component <emphasis>before</emphasis> you multiply by their respective colors.
             You want to do the addition afterwards.</para>
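         <para>Expressed in modern shader terms, the desired math is a sketch along these lines
             (all of the names here are illustrative):</para>
         <programlisting language="glsl">vec3 color = (diffuseLight * diffuseColor) + (specularLight * specularColor);</programlisting>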
         <para>This is simply not possible if you have only one per-vertex color. But it becomes
             After all, until TNT, that was the only way to apply multiple textures to a single
             surface. And even with TNT, it had a pretty confining limit of two textures and two
             opcodes.</para>
-        <para>This was powerful, but quite limited. Two opcodes really wasn't enough.</para>
+        <para>This was powerful, but quite limited. Two opcodes really was not enough.</para>
         <para>The TNT cards also provided something else: 32-bit framebuffers and depth buffers.
             While the Voodoo cards used high-precision math internally, they still wrote to 16-bit
             framebuffers, using a technique called dithering to make them look like higher
             <para>Embedded platforms tend to play to their tile-based renderer's strengths. Memory,
                 particularly high-bandwidth memory, eats up power; having less memory means
                 longer-lasting mobile devices. Embedded devices tend to use smaller resolutions,
-                which their platform excels at. And with low resolutions, you aren't trying to push
+                which their platform excels at. And with low resolutions, you are not trying to push
                 nearly as much geometry.</para>
             <para>Thanks to these facts, PowerVR graphics chips power the vast majority of mobile
                 platforms that have any 3D rendering in them. Just about every iPhone, Droid, iPad,
             Compressed texture formats also appeared on the scene.</para>
         <para>What we see thus far as we take steps towards true programmability is that increased
             complexity in fragment processing starts pushing for other needs. The addition of a dot
-            product allows lighting computations to take place per-fragment. But you can't have full
+            product allows lighting computations to take place per-fragment. But you cannot have full
             texture-space bump mapping because of the lack of a normal/binormal/tangent matrix to
             transform vectors to texture space. Cubemaps allow you to do arbitrary reflections, but
             computing reflection directions per-vertex requires interpolating reflection normals,
-            which doesn't work very well over large polygons.</para>
+            which does not work very well over large polygons.</para>
         <para>This also saw the introduction of something called a rectangle texture. This was
             something of an odd duck, one that still remains to the current day. It was a way of creating a
             texture of arbitrary size; until then, textures were limited to powers of two in size
         <para>At this point, the holy grail of programmability at the fragment level was dependent
             texture access. That is, being able to access a texture, do some arbitrary computations
             on it, and then access another texture with the result. The GeForce 3 had some
-            facilities for that, but they weren't very good ones.</para>
+            facilities for that, but they were not very good ones.</para>
         <para>The GeForce 3 used 8 register combiner stages instead of the 2 that the earlier cards
             used. Their register files were extended to support two extra texture colors and a few
             more tricks. But the main change was something that, in OpenGL terminology, would be
             coordinate.</para>
         <para>But using that required spending a total of 3 texture shader stages, which meant you
             got a bump map and a normalization cubemap only; there was no room for a diffuse map in
-            that pass. It also didn't perform very well; the texture shader functions were quite
+            that pass. It also did not perform very well; the texture shader functions were quite
             expensive.</para>
         <para>True programmability came to the fragment shader from ATI, with the Radeon 8500,
             released in late 2001.</para>
             <para>NVIDIA and ATI released entirely separate proprietary extensions for specifying
                 fragment shaders. NVIDIA's extensions built on the register combiner extension they
                 released with the GeForce 256. They were completely incompatible. And worse, they
-                weren't even string-based.</para>
+                were not even string-based.</para>
             <para>Imagine having to call a C++ function to write every opcode of a shader. Now
                 imagine having to call <emphasis>three</emphasis> functions to write each opcode.
                 That's what using those APIs was like.</para>
             all D3D 9 applications, making them run much slower than their ATI counterparts (the
             9700 always used 24-bit precision math).</para>
         <para>Things were no better in OpenGL land. The two competing unified fragment processing
-            APIs, GLSL and an assembly-like fragment shader, didn't have precision specifications
+            APIs, GLSL and an assembly-like fragment shader, did not have precision specifications
             either. Only NVIDIA's proprietary extension for fragment shaders provided that, and
             developers were less likely to use it. Especially with the head start that the 9700
             gained in the market by the FX being released late.</para>
         <para>This level of hardware gained a number of different features. sRGB
             textures and framebuffers appeared, as did floating-point textures. Blending support for
             floating-point framebuffers was somewhat spotty; some hardware could do it only for
-            16-bit floating-point, some couldn't do it at all. The restrictions of power-of-two
+            16-bit floating-point, some could not do it at all. The restrictions of power-of-two
             texture sizes were also lifted, to varying degrees. None of ATI's hardware of this era
             fully supported this when used with mipmapping, but NVIDIA's hardware from the GeForce 6
             and above did.</para>
                 primitive, based on the values of the primitive being tessellated. The geometry
                 shader still exists; it is executed after the final tessellation shader
                 stage.</para>
-            <para>Tessellation is not covered in this book for a few reasons. First, there isn't as
+            <para>Tessellation is not covered in this book for a few reasons. First, there is not as
                 much hardware out there that supports it. Sticking to OpenGL 3.3 meant casting a
                 wider net; requiring OpenGL 4.1 (which includes tessellation) would have meant fewer
                 people could run those tutorials.</para>
-            <para>Second, tessellation isn't that important. That's not to say that it isn't
-                important or a worthwhile feature. But it really isn't something that matters a
+            <para>Second, tessellation is not that important. That's not to say that it is not
+                important or a worthwhile feature. But it really is not something that matters a
                 great deal.</para>
         </sidebar>
     </section>

File Documents/Illumination/Tutorial 09.xml

         <para>A surface looks blue under white light because the surface absorbs all non-blue parts
             of the light and only reflects the blue parts. If one were to shine a red light on the
             surface, the surface would appear very dark, as the surface absorbs non-blue light, and
-            the red light doesn't have much blue light in it.</para>
+            the red light does not have much blue light in it.</para>
         <figure>
             <title>Surface Light Absorption</title>
             <mediaobject>
         <section>
             <title>Gouraud Shading</title>
             <para>So each vertex has a normal. That is useful, but it is not sufficient, for one
-                simple reason. We don't draw the vertices of triangles; we draw the interior of a
+                simple reason. We do not draw the vertices of triangles; we draw the interior of a
                 triangle through rasterization.</para>
             <para>There are several ways to go about computing lighting across the surface of a
                 triangle. The simplest to code, and most efficient for rendering, is to perform the
                 fast process, and not having to compute lighting at every fragment generated from
                 the triangle raises the performance substantially.</para>
             <para>That being said, modern games have essentially abandoned this technique. Part of
-                that is because the per-fragment computation isn't as slow and limited as it used to
+                that is because the per-fragment computation is not as slow and limited as it used to
                 be. And part of it is simply that games tend to not use just diffuse lighting
                 anymore, so the Gouraud approximation is more noticeably inaccurate.</para>
         </section>
                     direction of the camera.</para>
                 <para>The code for these is contained in the framework objects
                         <type>MousePole</type> and <type>ObjectPole</type>. The source code in them
-                    is, outside of how FreeGLUT handles mouse input, nothing that hasn't been seen
+                    is, outside of how FreeGLUT handles mouse input, nothing that has not been seen
                     previously.</para>
             </sidebar>
             <para>Pressing the <keycap>Spacebar</keycap> will switch between a cylinder that has a
                 scaling a direction is a reasonable operation, translating it is not. Now, we could
                 just adjust the matrix to remove all translations before transforming our light into
                 camera space. But that's highly unnecessary; we can simply put 0.0 in the fourth
-                component of the direction. This will do the same job, only we don't have to mess
+                component of the direction. This will do the same job, only we do not have to mess
                 with the matrix to do so.</para>
             <para>This also allows us to use the same transformation matrix for vectors as for
                 positions.</para>
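             <para>A sketch of the idea in GLSL syntax, with illustrative names; the 0.0 in the
                 fourth component discards the matrix's translation:</para>
             <programlisting language="glsl">vec3 lightDirCameraSpace = vec3(worldToCameraMatrix * vec4(lightDirWorldSpace, 0.0));</programlisting>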
                 inputs.</para>
             <para>We actually draw two different kinds of cylinders, based on user input. The
                 colored cylinder is tinted red and is the initial cylinder. The white cylinder uses
-                a vertex program that doesn't use per-vertex colors for the diffuse color; instead,
+                a vertex program that does not use per-vertex colors for the diffuse color; instead,
                 it uses a hard-coded color of full white. These both come from the same mesh file,
                 but have special names to differentiate between them.</para>
-            <para>What changes is that the <quote>flat</quote> mesh doesn't pass the color vertex
+            <para>What changes is that the <quote>flat</quote> mesh does not pass the color vertex
                 attribute and the <quote>tint</quote> mesh does.</para>
             <para>Other than which program is used to render them and what mesh name they use, they
                 are both rendered similarly.</para>
                 update all programs that use the projection matrix just by changing the buffer
                 object.</para>
             <para>The first line of <function>main</function> simply does the position transforms we
-                need to position our vertices, as we have seen before. We don't need to store the
+                need to position our vertices, as we have seen before. We do not need to store the
                 camera-space position, so we can do the entire transformation in a single
                 step.</para>
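             <para>That single step looks roughly like the following sketch (the matrix and
                 attribute names may differ from the actual shader):</para>
             <programlisting language="glsl">gl_Position = cameraToClipMatrix * (modelToCameraMatrix * vec4(position, 1.0));</programlisting>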
             <para>The next line takes our normal and transforms it by the model-to-camera matrix
             <para>This is important, because the cosine of the angle of incidence can be negative.
                 This is for values which are pointed directly away from the light, such as the
                 underside of the ground plane, or any part of the cylinder that is facing away from
-                the light. The lighting computations don't make sense with this value being
+                the light. The lighting computations do not make sense with this value being
                 negative, so the clamping is necessary.</para>
             <para>After computing that value, we multiply it by the light intensity and diffuse
                 color. This result is then passed to the interpolated output color. The fragment
                     </imageobject>
                 </mediaobject>
             </equation>
-            <para>This doesn't require any messy cosine transcendental math computations. This
-                doesn't require using trigonometry to compute the angle between the two vectors.
+            <para>This does not require any messy cosine transcendental math computations. This
+                does not require using trigonometry to compute the angle between the two vectors.
                 Simple multiplications and additions; most graphics hardware can do billions of
                 these a second.</para>
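             <para>In shader code, the whole angle-of-incidence computation reduces to a couple of
                 lines; a sketch with illustrative names:</para>
             <programlisting language="glsl">float cosAngIncidence = dot(surfaceNormal, dirToLight);
cosAngIncidence = clamp(cosAngIncidence, 0.0, 1.0);</programlisting>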
             <para>Obviously, the GLSL function <function>dot</function> computes the vector dot
                 <emphasis>is</emphasis> its transpose. Also, taking the inverse of a matrix twice
             results in the original matrix. Therefore, you can express any pure-rotation matrix as
             the inverse transpose of itself, without affecting the matrix. Since the inverse is its
-            transpose, and doing a transpose twice on a matrix doesn't change its value, the
+            transpose, and doing a transpose twice on a matrix does not change its value, the
             inverse-transpose of a rotation matrix is a no-op.</para>
         <para>Also, since the values in pure-scale matrices are along the diagonal, a transpose
             operation on scale matrices does nothing. With these two facts in hand, we can
             are not used. At least, not in the output mesh. It's better to just get the modeller to
             adjust the model as needed in their modelling application.</para>
         <para>Uniform scales are more commonly used. So you still need to normalize the normal after
-            transforming it with the model-to-camera matrix, even if you aren't using the
+            transforming it with the model-to-camera matrix, even if you are not using the
             inverse-transpose.</para>
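         <para>That renormalization is a single call; a sketch, assuming a <type>mat3</type>
             normal matrix uniform with an illustrative name:</para>
         <programlisting language="glsl">vec3 normCamSpace = normalize(normalModelToCameraMatrix * normal);</programlisting>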
     </section>
     <section>
             technique: <glossterm>ambient lighting.</glossterm></para>
         <para>The ambient lighting <quote>model</quote><footnote>
                 <para>I put the word model in quotations because ambient lighting is so divorced
-                    from anything in reality that it doesn't really deserve to be called a model.
+                    from anything in reality that it does not really deserve to be called a model.
                     That being said, just because it does not actually model global illumination in
-                    any real way doesn't mean that it isn't <emphasis>useful</emphasis>.</para>
+                    any real way does not mean that it is not <emphasis>useful</emphasis>.</para>
             </footnote> is quite simple. It assumes that, on every object in the scene, there is a
             light of a certain intensity that emanates from everywhere. It comes from all directions
             equally, so there is no angle of incidence in our diffuse calculation. It is simply the
         <para>Thus far, we have used light intensity like a color. We clamp it to the range [0, 1].
             We even make sure that combined intensities from different lighting models always are
             within that range.</para>
-        <para>What this effectively means is that the light doesn't brighten the scene. The scene
+        <para>What this effectively means is that the light does not brighten the scene. The scene
             without the light is fully lit; our lights simply <emphasis>darken</emphasis> parts of
             the scene. They take the diffuse color and make it smaller, because multiplying with a
             number on the range [0, 1] can only ever make a number smaller (or the same).</para>
         <para>The concept of lighting darkening a scene was common for a long time in real-time
             applications. It is another part of the reason why, for so many years, 3D games tended
             to avoid the bright outdoors, preferring corridors with darker lighting. The sun is a
-            powerful light source; binding lighting intensity to the [0, 1] range doesn't lead to a
+            powerful light source; binding lighting intensity to the [0, 1] range does not lead to a
             realistic vision of the outdoors.</para>
         <para>One obvious way to correct this is to take all of the diffuse colors and divide them by
             a value like 2. Then increase your light intensity range from [0, 1] to [0, 2]. This is
             a workable solution, to some extent. Lights can be brighter than 1.0, and lighting can
             serve to increase the brightness of the diffuse color as well as decrease it. Of course,
-            2 is just as far from infinity as 1 is, so it isn't technically any closer to proper
+            2 is just as far from infinity as 1 is, so it is not technically any closer to proper
             lighting. But it is an improvement.</para>
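         <para>In shader terms, that scheme is a sketch like the following, with
             <varname>maxIntensity</varname> standing in for the chosen divisor:</para>
         <programlisting language="glsl">vec4 diffuse = diffuseColor / maxIntensity;    //pre-darken the surface color
vec4 litColor = diffuse * lightIntensity;      //lightIntensity may now exceed 1.0</programlisting>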
         <para>This technique does have its flaws. As we will see in later tutorials, you often will
             want to render the same object multiple times and combine the results. This method
-            doesn't work when adding contributions from multiple light sources this way, unless you
+            does not work when adding contributions from multiple light sources this way, unless you
             limit the sum total of all lights to the same value. Just as we did for diffuse when
             combining it with an ambient term.</para>
         <para>It is certainly possible on modern hardware to model light intensity correctly. And we
             will eventually do that. But these more primitive lighting models do still have their
             uses in some cases. And it is illustrative to see what an improvement having proper
-            lighting intensity can make on the result. The lighting computations themselves don't
+            lighting intensity can make on the result. The lighting computations themselves do not
             change; what changes are the numbers fed into them.</para>
     </section>
     <section>

File Documents/Illumination/Tutorial 10.xml

                 <keycap>k</keycap> keys move the light up and down respectively, while the
                 <keycap>j</keycap> and <keycap>l</keycap> keys will decrease and increase the light's
             radius. Holding shift with these keys will move in smaller increments.</para>
-        <para>Most of the code is nothing we haven't seen elsewhere. The main changes are at the top
+        <para>Most of the code is nothing we have not seen elsewhere. The main changes are at the top
             of the rendering function.</para>
         <example>
             <title>Per-Vertex Point Light Rendering</title>
             bright. But if you move the light to the middle of the cylinder, far from the top or bottom
             vertices, then the illumination will be much dimmer.</para>
         <para>This is not the only problem with doing per-vertex lighting. For example, run the
-            tutorial again and don't move the light. Just watch how the light behaves on the
+            tutorial again and do not move the light. Just watch how the light behaves on the
             cylinder's surface as it animates around. Unlike with directional lighting, you can very
             easily see the triangles on the cylinder's surface. Though the per-vertex computations
-            aren't helping matters, the main problem here has to do with interpolating the
+            are not helping matters, the main problem here has to do with interpolating the
             values.</para>
         <para>If you move the light source farther away, you can see that the triangles smooth out
             and become indistinct from one another. But this is simply because, if the light source
         </informalequation>
         <para>This means that if L is constant, linearly interpolating N is exactly equivalent to
             linearly interpolating the results of the lighting equation. And the addition of the
-            ambient term doesn't change this, since it is a constant and would not be affected by
+            ambient term does not change this, since it is a constant and would not be affected by
             linear interpolation.</para>
         <para>When doing point lighting, you would have to interpolate both N and L. And that does
             not yield the same results as linearly interpolating the two colors you get from the
-            lighting equation. This is a big part of the reason why the cylinder doesn't look
+            lighting equation. This is a big part of the reason why the cylinder does not look
             correct.</para>
         <para>The more physically correct method of lighting is to perform lighting at every rendered
             pixel. To do that, we would have to interpolate the lighting parameters across the
             another neighboring triangle.</para>
         <para>In our case, this means that for points along the main diagonal, the light direction
             will only be composed of the direction values from the two vertices on that diagonal.
-            This is not good. This wouldn't be much of a problem if the light direction did not
+            This is not good. This would not be much of a problem if the light direction did not
             change much along the surface, but with large triangles (relative to how close the light
             is to them), that is simply not the case.</para>
         <para>Since we cannot interpolate the light direction very well, we need to interpolate
     diffuseColor = inDiffuseColor;
 }</programlisting>
         </example>
-        <para>Since our lighting is done in the fragment shader, there isn't much to do except pass
+        <para>Since our lighting is done in the fragment shader, there is not much to do except pass
             variables through and set the output clip-space position. The version that takes no
             diffuse color just passes a <type>vec4</type> containing just 1.0.</para>
         <para>The fragment shader is much more interesting:</para>
                 </mediaobject>
             </figure>
             <para>Notice the vertical bands on the cylinder. These are reminiscent of the same
-                interpolation problem we had before. Wasn't doing lighting at the fragment level
+                interpolation problem we had before. Was not doing lighting at the fragment level
                 supposed to fix this?</para>
             <para>Actually, this is a completely different problem. And it is one that is
                 essentially impossible to solve. Well, not without changing our rendered
             <para>The lighting computations are correct for the vertices near the edges of a
                 triangle. But the farther from the edges you get, the more incorrect it
                 becomes.</para>
-            <para>The key point is that there isn't much we can do about this problem. The only
+            <para>The key point is that there is not much we can do about this problem. The only
                 solution is to add more vertices to the approximation of the cylinder. It should
                 also be noted that adding more triangles would also make per-vertex lighting look
                 more correct. If you add infinitely many vertices to the cylinder, then the results
             </mediaobject>
         </informalequation>
         <para>This equation computes physically realistic light attenuation for point-lights. But it
-            often doesn't look very good. Lights seem to have a much sharper intensity falloff than
+            often does not look very good. Lights seem to have a much sharper intensity falloff than
             one would expect.</para>
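         <para>As a sketch, with <varname>lightAttenuation</varname> and
             <varname>lightDistanceSqr</varname> as assumed values, the two falloff curves look
             like this; the second, non-physical version is the one often used in practice:</para>
         <programlisting language="glsl">float attenInvSqr = 1.0 / (1.0 + lightAttenuation * lightDistanceSqr);
float attenInv = 1.0 / (1.0 + lightAttenuation * sqrt(lightDistanceSqr));</programlisting>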
         <para>There is a reason for this, but it is not one we are ready to get into quite yet. What
             is often done is to simply use the inverse rather than the inverse-square of the
                 space. And while this is a perfectly useful space to do lighting in, model space is
                 not world space.</para>
             <para>We want to specify the attenuation constant factor in terms of world space
-                distances. But we aren't dealing in world space; we are in model space. And model
+                distances. But we are not dealing in world space; we are in model space. And model
                 space distances are, naturally, in model space, which may well be scaled relative to
                 world space. Here, any kind of scale in the model-to-world transform is a problem,
                 not just non-uniform scales. Although if there was a uniform scale, we could apply
                 can do our lighting in that space.</para>
             <para>Doing it in camera space requires computing a camera space position and passing it
                 to the fragment shader to be interpolated. And while we could do this, that's not
-                clever enough. Isn't there some way to get around that?</para>
+                clever enough. Is not there some way to get around that?</para>
             <para>Yes, there is. Recall <varname>gl_FragCoord</varname>, an intrinsic value given to
                 every fragment shader. It represents the location of the fragment in window space.
                 So instead of transforming from model space to camera space, we will transform from
             <para>GLSL has the <type>bool</type> type just like C++ does. The
                     <literal>true</literal> and <literal>false</literal> values work just like C++'s
                 equivalents. Where they differ is that GLSL also has vectors of bools, called
-                    <type>bvec#</type>, where the # can be 2, 3, or 4. We don't use that here, but
+                    <type>bvec#</type>, where the # can be 2, 3, or 4. We do not use that here, but
                 it is important to note.</para>
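             <para>A small sketch of the vector form (the names are illustrative):</para>
             <programlisting language="glsl">bvec3 isBright = greaterThan(someColor.rgb, vec3(0.5));
bool anyBright = any(isBright);</programlisting>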
             <para>OpenGL's API, however, is still a C API. And C (at least, pre-C99) has no
                     <type>bool</type> type. Uploading a boolean value to a shader looks like

File Documents/Illumination/Tutorial 11.xml

                 Phong specular lighting model. Because of this, the term <quote>Phong
                     shading</quote> has fallen out of common usage.</para>
         </sidebar>
-        <para>The Phong model is not really based on anything real. It doesn't deal in microfacet
+        <para>The Phong model is not really based on anything real. It does not deal in microfacet
             distributions at all. What the Phong model is is something that looks decent enough and
             is cheap to compute. It approximates a statistical distribution of microfacets, but it
             is not really based on anything real.</para>

File Documents/Illumination/Tutorial 12.xml

             <para>These properties are all stored in a uniform buffer object. We have seen these
                 before for data that is shared among several programs; here, we use it to quickly
                 change sets of values. These material properties do not change with time; we set
-                them once and don't change them ever again. This is primarily for demonstration
+                them once and do not change them ever again. This is primarily for demonstration
                 purposes, but it could have a practical effect.</para>
             <para>Each object's material data is defined as the following struct:</para>
             <example>
                 the appropriate section in the OpenGL specification to see why). Note the global
                 definition of <quote>std140</quote> layout; this sets all uniform blocks to use
                     <quote>std140</quote> layout unless they specifically override it. That way, we
-                don't have to write <quote>layout(std140)</quote> for each of the three uniform
+                do not have to write <quote>layout(std140)</quote> for each of the three uniform
                 blocks we use in each shader file.</para>
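             <para>That global default is a one-line declaration with no block attached; a
                 sketch:</para>
             <programlisting language="glsl">layout(std140) uniform;</programlisting>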
             <para>Also, note the use of <literal>Mtl</literal> at the foot of the uniform block
                 definition. This is called the <glossterm>instance name</glossterm> of an interface
                 uniform block's data is within the buffer object, be aligned to a specific value.
                 That is, the beginning of a uniform block within a uniform buffer must be a multiple
                 of a specific value. 0 works, of course, but since we have more than one block
-                within a uniform buffer, they can't all start at the buffer's beginning.</para>
+                within a uniform buffer, they cannot all start at the buffer's beginning.</para>
             <para>What is this value, you may ask? Welcome to the world of implementation-dependent
                 values. This means that it can (and most certainly will) change depending on what
                 platform you're running on. This code was tested on two different hardware
             <para>Again, there is the need for a bit of padding in the C++ version of the struct.
                 Also, you might notice that we have both arrays and structs in GLSL for the first
                 time. They work pretty much like C/C++ structs and arrays (outside of pointer logic,
-                since GLSL doesn't have pointers), though arrays have certain caveats.</para>
+                since GLSL does not have pointers), though arrays have certain caveats.</para>
         </section>
         <section>
             <title>Many Lights Shader</title>
                 values passed by the vertex shader. The other two use the material's diffuse color.
                 The other difference is that two do specular reflection computations, and the others
                 do not. This represents the variety of our materials.</para>
-            <para>Overall, the code is nothing you haven't seen before. We use Gaussian specular and
+            <para>Overall, the code is nothing you have not seen before. We use Gaussian specular and
                 an inverse-squared attenuation, in order to be as physically correct as we currently
                 can be. One of the big differences is in the <function>main</function>
                 function.</para>
                 each light and compute the lighting for it, adding it into our accumulated value.
                 Loops and arrays are generally fine.</para>
             <para>The other trick is how we deal with positional and directional lights. The
-                    <classname>PerLight</classname> structure doesn't explicitly say whether a light
+                    <classname>PerLight</classname> structure does not explicitly say whether a light
                 is positional or directional. However, the W component of the
                     <varname>cameraSpaceLightPos</varname> is what we use to differentiate them;
                 this is a time-honored technique. If the W component is 0.0, then it is a
                 directional light; otherwise, it is a point light.</para>
             <para>The only differences between directional and point lights in the lighting function
-                are attenuation (directional lights don't use attenuation) and how the light
+                are attenuation (directional lights do not use attenuation) and how the light
                 direction is computed. So we simply compute these based on the W component:</para>
             <programlisting language="glsl">vec3 lightDir;
 vec4 lightIntensity;
             <para>There are a few problems with our current lighting setup. It looks (mostly) fine
                 in daylight. The moving point lights have a small visual effect, but mostly they're
                 not very visible. And this is what one would expect in broad daylight; flashlights
-                don't make much impact in the day.</para>
+                do not make much impact in the day.</para>
             <para>But at night, everything is exceedingly dark. The point lights, the only active
                 source of illumination, are all too dim to be very visible. The terrain almost
                 completely blends into the black background.</para>
             <para>Notice the patch of iridescent green. This is <glossterm>light
                     clipping</glossterm> or light clamping, and it is usually a very undesirable
                 outcome. It happens when the computed light intensity falls outside of the [0, 1]
-                range, usually in the positive direction (like in this case). The object can't be
+                range, usually in the positive direction (like in this case). The object cannot be
                 shown to be brighter, so it becomes a solid color that loses all detail.</para>
             <para>The obvious solution to our lighting problem is to simply change the point light
                 intensity based on the time of day. However, this is not realistic; flashlights
-                don't actually get brighter at night. So if we have to do something that
-                antithetical to reality, then there's probably some aspect of reality that we aren't
+                do not actually get brighter at night. So if we have to do something that
+                antithetical to reality, then there's probably some aspect of reality that we are not
                 properly modelling.</para>
         </section>
     </section>
             we multiplied the sun and ambient intensities by 3 in the last section, we were
             increasing the brightness by 3x. Multiplying the maximum intensity by 3 had the effect
             of reducing the overall brightness by 3x.</para>
-        <para>There's just one problem: your screen doesn't work that way. Time for a short history
+        <para>There's just one problem: your screen does not work that way. Time for a short history
             of television/monitors.</para>
         <para>The original televisions used an electron gun fired at a phosphor surface to generate
             light and images; this is called a <acronym>CRT</acronym> display (cathode ray tube).
             displayed linearly, as it was originally captured by the camera.</para>
         <para>This process, de-linearizing an image to compensate for a non-linear
             display, is called <glossterm>gamma correction</glossterm>.</para>
-        <para>You may be wondering why this matters. After all, odds are, you don't have a CRT-based
+        <para>You may be wondering why this matters. After all, odds are, you do not have a CRT-based
             monitor; you probably have some form of LCD, plasma, LED, or similar technology. So what
             do the vagaries of CRT monitors matter to you?</para>
         <para>Because gamma correction is everywhere. It's in DVDs, video-tapes, and Blu-Ray discs.
             Every digital camera does it. And this is how it has been for a long time. Because of
-            that, you couldn't sell an LCD monitor that tried to do linear color reproduction;
+            that, you could not sell an LCD monitor that tried to do linear color reproduction;
             nobody would buy it because all media for it (including your OS) was designed and
             written expecting CRT-style non-linear displays.</para>
         <para>This means that every non-CRT display must mimic the CRT's non-linearity; this is
             built into the basic video processing logic of every display device.</para>
         <para>So for twelve tutorials now, we have been outputting linear RGB values to a display
             device that expects gamma-corrected non-linear RGB values. But before we started doing
-            lighting, we were just picking nice-looking colors, so it didn't matter. Now that we're
+            lighting, we were just picking nice-looking colors, so it did not matter. Now that we're
             doing something vaguely realistic, we need to perform gamma-correction. This will let us
             see what we've <emphasis>actually</emphasis> been rendering, instead of what our
             monitor's gamma-correction circuitry has been mangling.</para>
                 previous tutorials. Let's look at some code.</para>
             <para>The gamma value is an odd kind of value. Conceptually, it has nothing to do with
                 lighting, per-se. It is a global value across many shaders, so it should be in a UBO
-                somewhere. But it isn't a material parameter; it doesn't change from object to
+                somewhere. But it is not a material parameter; it does not change from object to
                 object. In this tutorial, we stick it in the <classname>Light</classname> uniform
                 block and the <classname>LightBlockGamma</classname> struct. Again, we steal a float
                 from the padding:</para>
 };</programlisting>
             </example>
             <para>For the sake of clarity in this tutorial, we send the actual gamma value. For
-                performance's sake, we ought send 1/gamma, so that we don't have to needlessly do a
+                performance's sake, we ought send 1/gamma, so that we do not have to needlessly do a
                 division in every fragment.</para>
             <para>The gamma is applied in the fragment shader as follows:</para>
             <example>
             <para>This is more like it.</para>
             <para>If there is one point you should learn from this exercise, it is this: make sure
                 that you implement gamma correction and HDR <emphasis>before</emphasis> trying to
-                light your scenes. If you don't, then you may have to adjust all of the lighting
+                light your scenes. If you do not, then you may have to adjust all of the lighting
                 parameters again, and you may need to change materials as well. In this case, it
-                wasn't even possible to use simple corrective math on the lighting environment to
+                was not even possible to use simple corrective math on the lighting environment to
                 make it work right. This lighting environment was developed essentially from
                 scratch.</para>
             <para>One thing we can notice when looking at the gamma correct lighting is that proper
                     </imageobject>
                 </mediaobject>
             </figure>
-            <para>These two images use the HDR lighting; the one on the left doesn't have gamma
+            <para>These two images use the HDR lighting; the one on the left does not have gamma
                 correction, and the one on the right does. Notice how easy it is to make out the
                 details in the hills near the triangle on the right.</para>
             <para>Looking at the gamma function, this makes sense. Without proper gamma correction,
                 fully half of the linearRGB range is shoved into the bottom one-fifth of the
-                available light intensity. That doesn't leave much room for areas that are dark but
+                available light intensity. That does not leave much room for areas that are dark but
                 not too dark to see anything.</para>
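             <para>A quick numeric check of that claim, assuming a display gamma of 2.2:</para>
             <programlisting language="glsl">float displayed = pow(0.5, 2.2);    //roughly 0.22: half the linear range maps to about a fifth of the output</programlisting>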
             <para>As such, gamma correction is a key process for producing color-accurate rendered
                 images.</para>
             <title>Further Research</title>
             <para>HDR is a pretty large field. This tutorial covered perhaps the simplest form of
                 tone mapping, but there are many equations one can use. There are tone mapping
-                functions that map the full [0, ∞) range to [0, 1]. This wouldn't be useful for a
+                functions that map the full [0, ∞) range to [0, 1]. This would not be useful for a
                 scene that needs a dynamic aperture size, but if the aperture is static, it does
                 allow the use of a large range of lighting values.</para>
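             <para>One frequently cited example of such a function is the Reinhard operator; a
                 sketch, applied to a linear HDR color (<varname>hdrColor</varname> here is
                 illustrative):</para>
             <programlisting language="glsl">vec3 mapped = hdrColor / (hdrColor + vec3(1.0));</programlisting>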
             <para>When doing tone mapping with some form of variable aperture setting, computing the

File Documents/Illumination/Tutorial 13.xml

         accurate one, but it is deception all the same. We are not rendering round objects; we
         simply use lighting and interpolation of surface characteristics to make an object appear
         round. Sometimes we have artifacts or optical illusions that show the lie for what it is.
-        Even when the lie is near-perfect, the geometry of a model still doesn't correspond to what
+        Even when the lie is near-perfect, the geometry of a model still does not correspond to what
         the lighting makes the geometry appear to be.</para>
     <para>In this tutorial, we will be looking at the ultimate expression of this lie. We will use
         lighting computations to make an object appear to be something entirely different from its
             <title>Racketeering Rasterization</title>
             <para>Our lighting equations in the past needed only a position and normal in
                 camera-space (as well as other material and lighting parameters) in order to work.
-                So the job of the fragment shader is to provide them. Even though they don't
+                So the job of the fragment shader is to provide them. Even though they do not
                 correspond to those of the actual triangles in any way.</para>
             <para>Here are the salient new parts of the fragment shader for impostors:</para>
             <example>
                 distance check. Since the size of the square is equal to the radius of the sphere,
                 if the distance of the <varname>mapping</varname> variable from its (0, 0) point is
                 greater than 1, then we know that this point is off of the sphere.</para>
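             <para>A sketch of that rejection test, written against the squared length rather than
                 the length itself:</para>
             <programlisting language="glsl">float lensqr = dot(mapping, mapping);
if(lensqr > 1.0)
    discard;</programlisting>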
-            <para>Here, we use a clever way of computing the length; we don't. Instead, we compute
+            <para>Here, we use a clever way of computing the length; we do not. Instead, we compute
                 the square of the length. We know that if <inlineequation>
                     <mathphrase>X<superscript>2</superscript> >
                         Y<superscript>2</superscript></mathphrase>
                     data that may be different between each shader. This is also why branches in
                     shaders will often execute both sides rather than actually branching; it keeps
                     the shader logic simpler.</para>
-                <para>However, that doesn't mean <literal>discard</literal> is without use for
+                <para>However, that does not mean <literal>discard</literal> is without use for
                     stopping unwanted processing. If all of the shaders that are running together
                     hit a <literal>discard</literal>, then they can all be aborted with no problems.
                     And hardware often does this where possible.</para>
             </mediaobject>
         </figure>
         <para>What's going on here? The mesh sphere seems to be wider than the impostor sphere. This
-            must mean that the mesh sphere is doing something our impostor isn't. Does this have to
+            must mean that the mesh sphere is doing something our impostor is not. Does this have to
             do with the inaccuracy of the mesh sphere?</para>
         <para>Quite the opposite, in fact. The mesh sphere is correct. The problem is that our
             impostor is too simple.</para>
         <para>Look back at how we did our computations. We map a sphere down to a flat surface. The
             problem is that <quote>down</quote> in this case is in the camera-space Z direction. The
-            mapping between the surface and the sphere is static; it doesn't change based on the
+            mapping between the surface and the sphere is static; it does not change based on the
             viewing angle.</para>
         <para>Consider this 2D case:</para>
         <figure>
             </mediaobject>
         </figure>
         <para>The dark line through the circle represents the square we drew. When viewing the
-            sphere off to the side like this, we shouldn't be able to see the left-edge of the
+            sphere off to the side like this, we should not be able to see the left-edge of the
             sphere facing perpendicular to the camera. And we should see some of the sphere on the
             right that is behind the plane.</para>
         <para>So how do we solve this?</para>
             </informalfigure>
             <para>The black line represents the square we used originally. There is a portion to the
                 left of the projection that we should be able to see. However, with proper ray
-                tracing, it wouldn't fit onto the area of the radius-sized square.</para>
+                tracing, it would not fit onto the area of the radius-sized square.</para>
             <para>This means that we need to expand the size of the square. Rather than finding a
                 clever way to compute the exact extent of the sphere's area projected onto a square,
                 it's much easier to just make the square bigger. This is even more so considering
                 </imageobject>
             </mediaobject>
         </figure>
-        <para>Hmm. Even though we've made it look like a mathematically perfect sphere, it doesn't
+        <para>Hmm. Even though we've made it look like a mathematically perfect sphere, it does not
             act like one to the depth buffer. As far as it is concerned, it's just a circle
             (remember: <literal>discard</literal> prevents depth writes and tests as well).</para>
         <para>Is that the end for our impostors? Hardly.</para>
                         could add the following statement to any fragment shader that uses the
                         default depth value:</para>
                     <programlisting language="glsl">gl_FragDepth = gl_FragCoord.z</programlisting>
-                    <para>This is, in terms of behavior a noop; it does nothing OpenGL wouldn't have
+                    <para>This is, in terms of behavior a noop; it does nothing OpenGL would not have
                         done itself. However, in terms of <emphasis>performance</emphasis>, this is
                         a drastic change.</para>
-                    <para>The reason fragment shaders aren't required to have this line in all of
+                    <para>The reason fragment shaders are not required to have this line in all of
                         them is to allow for certain optimizations. If the OpenGL driver can see
                         that you do not set <varname>gl_FragDepth</varname> anywhere in the fragment
                         shader, then it can dramatically improve performance in certain
                         what conditional branches your shader uses. The value is not initialized to
                         a default; you either always write to it or never mention
                                 <quote><varname>gl_FragDepth</varname></quote> in your fragment
-                        shader at all. Obviously, you don't always have to write the same value; you
+                        shader at all. Obviously, you do not always have to write the same value; you
                         can conditionally write different values. But you cannot write something in
                         one path and not write something in another. Initialize it explicitly with
                             <varname>gl_FragCoord.z</varname> if you want to do something like
                 thing is an entirely new programmatic shader stage: <glossterm>geometry
                     shaders</glossterm>.</para>
             <para>Our initial pipeline discussion ignored this shader stage, because it is an
-                entirely optional part of the pipeline. If a program object doesn't contain a
+                entirely optional part of the pipeline. If a program object does not contain a
                 geometry shader, then OpenGL just does its normal stuff.</para>
-            <para>The most confusing thing about geometry shaders is that they don't shade geometry.
+            <para>The most confusing thing about geometry shaders is that they do not shade geometry.
                 Vertex shaders take a vertex as input and write a vertex as output. Fragment shaders
                 take a fragment as input and write a fragment as output.</para>
             <para>Geometry shaders take a <emphasis>primitive</emphasis> as input and write one or
                     <literal>GL_POINTS.</literal> Recall that multiple primitives can have the same
                 base type. <literal>GL_TRIANGLE_STRIP</literal> and <literal>GL_TRIANGLES</literal>
                 are both separate primitives, but both generate triangles.
-                    <literal>GL_POINTS</literal> doesn't generate triangle primitives; it generates
+                    <literal>GL_POINTS</literal> does not generate triangle primitives; it generates
                 point primitives.</para>
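             <para>In the shader text itself, the input and output primitive types are declared
                 with layout qualifiers; an illustrative sketch for expanding each point into a
                 small triangle strip:</para>
             <programlisting language="glsl">layout(points) in;
layout(triangle_strip, max_vertices = 4) out;</programlisting>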
             <para><literal>GL_POINTS</literal> interprets each individual vertex as a separate point
                 primitive. There are no other forms of point primitives, because points only contain
                 outputs using the interface name (<varname>outData</varname>, in this case). This
                 allows us to use the same names for inputs as we do for their corresponding outputs.
                 They do have other virtues, as we will soon see.</para>
-            <para>Do note that this vertex shader doesn't write to <varname>gl_Position.</varname>
+            <para>Do note that this vertex shader does not write to <varname>gl_Position.</varname>
                 That is not necessary when a vertex shader is paired with a geometry shader.</para>
             <para>Speaking of which, let's look at the global definitions of our geometry
                 shader.</para>
                 length 1, since point primitives have only one vertex. But this is still necessary
                 even in that case.</para>
             <para>We also have another output fragment block. This one matches the definition from
-                the fragment shader, as we will see a bit later. It doesn't have an instance name.
+                the fragment shader, as we will see a bit later. It does not have an instance name.
                 Also, note that several of the values use the <literal>flat</literal> qualifier. We
                 could have just used <literal>smooth</literal>, since we're passing the same values
                 for all of the triangles. However, it's more descriptive to use the
             <para>Do note that uniform blocks have a maximum size that is hardware-dependent. If we
                 wanted to have a large palette of materials, on the order of several thousand, then
                 we may exceed this limit. At that point, we would need an entirely new way to handle
-                this data. Once that we haven't learned about yet.</para>
+                this data. One that we have not learned about yet.</para>
             <para>Or we could just split it up into multiple draw calls instead of one.</para>
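            <para>Either way, the limit itself can be queried at runtime. A minimal sketch:</para>
            <programlisting language="cpp"><![CDATA[GLint maxUniformBlockSize = 0;
glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxUniformBlockSize);
//The specification guarantees at least 16KB; many implementations offer far more.]]></programlisting>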
         </section>
     </section>
                             of type <type>int</type>. If there is no geometry shader, then this
                            value will be the current count of primitives that were previously
                             rendered in this draw call. If there is a geometry shader, but it
-                            doesn't write to this value, then the value will be undefined.</para>
+                            does not write to this value, then the value will be undefined.</para>
                     </glossdef>
                 </glossentry>
                 <glossentry>

File Documents/Optimization.xml

                 framebuffers with a guarantee of not causing half of the displayed image to be from
                 one buffer and half from another. The latter eventuality is called
                     <glossterm>tearing</glossterm>, and having vsync enabled avoids that. However,
-                you don't care about tearing; you want to know about performance. So you need to
+                you do not care about tearing; you want to know about performance. So you need to
                 turn off any form of vsync.</para>
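            <para>A minimal sketch of doing so, assuming your extension loader has already provided
                the appropriate swap-control entry point (the extensions involved are named in the
                next paragraph):</para>
            <programlisting language="cpp">//Windows, via WGL_EXT_swap_control; a swap interval of 0 disables vsync.
wglSwapIntervalEXT(0);

//X11, via GLX_EXT_swap_control, would instead be:
//glXSwapIntervalEXT(display, glXGetCurrentDrawable(), 0);</programlisting>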
             <para>Vsync is controlled by the window-system specific extensions
                     <literal>GLX_EXT_swap_control</literal> and
             <section>
                 <title>CPU</title>
                 <para>A CPU bottleneck means that the GPU is being starved; it is consuming data
-                    faster than the CPU is providing it. You don't really test for CPU bottlenecks
+                    faster than the CPU is providing it. You do not really test for CPU bottlenecks
                     per-se; they are discovered by process of elimination. If nothing else is
                     bottlenecking the GPU, then the CPU clearly is not giving it enough stuff to
                     do.</para>
             <para>If there is some bottleneck that cannot be optimized away, then turn it to your
                 advantage. If you have a CPU bottleneck, then render more detailed models. If you
                 have a vertex-shader bottleneck, improve your lighting by adding some
-                fragment-shader complexity. And so forth. Just make sure that you don't increase
+                fragment-shader complexity. And so forth. Just make sure that you do not increase
                 complexity to the point where you move the bottleneck.</para>
         </section>
     </section>
         <section>
             <title>Object Culling</title>
             <para>The fastest object is one not drawn. And there's no point in drawing something
-                that isn't seen.</para>
+                that is not seen.</para>
             <para>The simplest form of object culling is frustum culling: choosing not to render
                 objects that are entirely outside of the view frustum. Determining that an object is
                 off screen is a CPU task. You generally have to represent each object as a sphere or
             <programlisting language="cpp">glVertexAttribPointer(index, 3, GLushort, GLtrue, 0, offset);</programlisting>
             <para>There are also a few specialized formats. <literal>GL_HALF_FLOAT</literal> can be
                 used for 16-bit floating-point types. This is useful for when you need values
-                outside of [-1, 1], but don't need the full </para>
+                outside of [-1, 1], but do not need the full precision of a 32-bit float.</para>
             <para>Non-normalized integers can be used as well. These map in GLSL directly to
                 floating-point values, so a non-normalized value of 16 maps to a GLSL value of
                 16.0.</para>
             <para>There are certain gotchas when deciding how data gets packed like this. First, it
                 is a good idea to keep every attribute on a 4-byte alignment. This may mean
                 introducing explicit padding (empty space) into your structures. Some hardware will
-                have massive slowdowns if things aren't aligned to four bytes.</para>
+                have massive slowdowns if things are not aligned to four bytes.</para>
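            <para>As an illustration of the alignment point, a packed vertex might be padded
                explicitly along these lines (the attribute indices, types, and layout are purely
                illustrative, and <function>offsetof</function> comes from the standard C/C++
                headers):</para>
            <programlisting language="cpp">struct PackedVertex
{
    GLshort position[3];    //6 bytes.
    GLshort padding;        //2 bytes, so the next attribute starts on a 4-byte boundary.
    GLubyte color[4];       //4 bytes.
};

glVertexAttribPointer(0, 3, GL_SHORT, GL_TRUE, sizeof(PackedVertex),
    (void*)offsetof(PackedVertex, position));
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(PackedVertex),
    (void*)offsetof(PackedVertex, color));</programlisting>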
             <para>Next, it is a good idea to keep the size of any interleaved vertex data restricted
                 to multiples of 32 bytes in size. Violating this is not as bad as violating the
                 4-byte alignment rule, but one can sometimes get sub-optimal performance if the

File Documents/Outline.xml

                     <para>glDrawElementsBaseVertex, as an optimization.</para>
                 </listitem>
                 <listitem>
-                    <para>Depth buffers. How to use them to hide surfaces. Don't forget about
+                    <para>Depth buffers. How to use them to hide surfaces. Do not forget about
                         glDepthRange and the depth portion of the viewport transform.</para>
                 </listitem>
                 <listitem>
         <section>
             <title>Selecting the Masses</title>
             <para>This tutorial creates a number of entities that all move around, on pre-defined
-                paths, over a surface of bumpy terrain. They don't interact with the terrain. We
+                paths, over a surface of bumpy terrain. They do not interact with the terrain. We
                 then render projected selection circles onto the ground beneath them.</para>
             <para>Concepts:</para>
             <itemizedlist>
                         as much room/performance.</para>
                 </listitem>
                 <listitem>
-                    <para>Floating-point render targets. Don't forget the hardware
+                    <para>Floating-point render targets. Do not forget the hardware
                         limitations.</para>
                 </listitem>
                 <listitem>

File Documents/Positioning/Tutorial 03.xml

             parameter. Thus, it returns a value on the range [0, <varname>fLoopDuration</varname>),
             which is what we need to create a periodically repeating pattern.</para>
         <para>The <function>cosf</function> and <function>sinf</function> functions compute the
-            cosine and sine respectively. It isn't important to know exactly how these functions
+            cosine and sine respectively. It is not important to know exactly how these functions
             work, but they effectively compute a circle of diameter 2. By multiplying by 0.5f, it
             shrinks the circle down to a circle with a diameter of 1.</para>
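        <para>Putting those pieces together, the offset computation might look roughly like the
            following sketch (the 5-second loop duration is just an example, and
            <function>fmodf</function>, <function>cosf</function>, and <function>sinf</function>
            come from the standard C math header):</para>
        <programlisting language="cpp"><![CDATA[void ComputePositionOffsets(float &fXOffset, float &fYOffset)
{
    const float fLoopDuration = 5.0f;
    const float fScale = 3.14159f * 2.0f / fLoopDuration;

    //Elapsed time in seconds, wrapped onto the range [0, fLoopDuration).
    float fElapsedTime = glutGet(GLUT_ELAPSED_TIME) / 1000.0f;
    float fCurrTimeThroughLoop = fmodf(fElapsedTime, fLoopDuration);

    fXOffset = cosf(fCurrTimeThroughLoop * fScale) * 0.5f;
    fYOffset = sinf(fCurrTimeThroughLoop * fScale) * 0.5f;
}]]></programlisting>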
         <para>Once the offsets are computed, the offsets have to be added to the vertex data. This
         <programlisting language="cpp">glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STREAM_DRAW);</programlisting>
         <para>GL_STATIC_DRAW tells OpenGL that you intend to only set the data in this buffer object
             once. GL_STREAM_DRAW tells OpenGL that you intend to set this data constantly, generally
-            once per frame. These parameters don't mean <emphasis>anything</emphasis> with regard to
+            once per frame. These parameters do not mean <emphasis>anything</emphasis> with regard to
             the API; they are simply hints to the OpenGL implementation. Proper use of these hints
             can be crucial for getting good buffer object performance when making frequent changes.
             We will see more of these hints later.</para>
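        <para>A sketch of the usage pattern that GL_STREAM_DRAW is hinting at: allocate the storage
            once, then respecify its contents every frame (the variable names here are
            illustrative):</para>
        <programlisting language="cpp">//At initialization time:
glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STREAM_DRAW);

//Once per frame, with freshly computed positions:
glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(newPositions), newPositions);
glBindBuffer(GL_ARRAY_BUFFER, 0);</programlisting>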
             this way means having to copy millions of vertices from the original vertex data, add an
             offset to each of them, and then upload that data to an OpenGL buffer object. And all of
             that is <emphasis>before</emphasis> rendering. Clearly there must be a better way; games
-            can't possibly do this every frame and still hold decent framerates.</para>
+            cannot possibly do this every frame and still hold decent framerates.</para>
        <para>Actually, for quite some time, they did. In the pre-GeForce 256 days, that was how all
             games worked. Graphics hardware just took a list of vertices in clip space and rasterized them into fragments and pixels. Granted, in those days,
             we were talking about maybe 10,000 triangles per frame. And while CPUs have come a long
-            way since then, they haven't scaled with the complexity of graphics scenes.</para>
+            way since then, they have not scaled with the complexity of graphics scenes.</para>
         <para>The GeForce 256 (note: not a GT 2xx card, but the very first GeForce card) was the
            first graphics card that actually did some form of vertex processing. It could store
             vertices in GPU memory, read them, do some kind of transformation on them, and then send
         <para>It's all well and good that we are no longer having to transform vertices manually.
             But perhaps we can move more things to the vertex shader. Could it be possible to move
             all of <function>ComputePositionOffsets</function> to the vertex shader?</para>
-        <para>Well, no. The call to <function>glutGet(GL_ELAPSED_TIME)</function> can't be moved
+        <para>Well, no. The call to <function>glutGet(GLUT_ELAPSED_TIME)</function> cannot be moved
             there, since GLSL code cannot directly call C/C++ functions. But everything else can be
             moved. This is what <filename>vertCalcOffset.cpp</filename> does.</para>
         <para>The vertex program is found in <filename>data\calcOffset.vert</filename>.</para>
     glutPostRedisplay();
 }</programlisting>
         </example>
-        <para>This time, we don't need any code to use the elapsed time; we simply pass it
+        <para>This time, we do not need any code to use the elapsed time; we simply pass it
             unmodified to the shader.</para>
         <para>You may be wondering exactly how it is that the <varname>loopDuration</varname>
             uniform gets set. This is done in our shader initialization routine, and it is done only
             <para>Variables at global scope in GLSL can be defined with certain storage qualifiers:
                     <literal>const</literal>, <literal>uniform</literal>, <literal>in</literal>, and
                     <literal>out</literal>. A <literal>const</literal> value works like it does in
-                C99 and C++: the value doesn't change, period. It must have an initializer. An
+                C99 and C++: the value does not change, period. It must have an initializer. An
                 unqualified variable works like one would expect in C/C++; it is a global value that
                 can be changed. GLSL shaders can call functions, and globals can be shared between
                 functions. However, unlike <literal>in</literal>, <literal>out</literal>, and
             do the transform, and put as much as possible in the vertex shader and only have the CPU
             provide the most basic parameters. Which is the best to use?</para>
         <para>This is not an easy question to answer. However, it is almost always the case that CPU
-            transformations will be slower than doing it on the GPU. The only time it won't be is if
+            transformations will be slower than doing it on the GPU. The only time it will not be is if
             you need to do the exact same transformations many times within the same frame. And even
             then, it is better to do the transformations once on the GPU and save the result of that
             in a buffer object that you will pull from later. This is called transform feedback, and
             a time parameter, and for every vertex, the GPU must compute the <emphasis>exact
                 same</emphasis> offset. This means that the vertex shader is doing a lot of work
             that all comes out to the same number.</para>
-        <para>Even so, that doesn't mean it's always slower. What matters is the overhead of
+        <para>Even so, that does not mean it's always slower. What matters is the overhead of
             changing data. Changing a uniform takes time; changing a vector uniform typically takes
             no more time than changing a single float, due to the way that many cards handle
             floating-point math. The question is this: what is the cost of doing more complex
             operations in a vertex shader vs. how often those operations need to be done.</para>
         <para>The second vertex shader we use, the one that computes the offset itself, does a lot
             of complex math. Sine and cosine values are not particularly fast to compute. They
-            require quite a few computations to calculate. And since the offset itself doesn't
+            require quite a few computations to calculate. And since the offset itself does not
             change for each vertex in a single rendering call, performance-wise it would be best to
             compute the offset on the CPU and pass the offset as a uniform value.</para>
         <para>And typically, that is how rendering is done much of the time. Vertex shaders are
                     <glossterm>glGetUniformLocation</glossterm>
                     <glossdef>
                         <para>This function retrieves the location of a uniform of the given name
-                            from the given program object. If that uniform does not exist or wasn't
+                            from the given program object. If that uniform does not exist or was not
                             considered in use by GLSL, then this function returns -1, which is not a
                             valid uniform location.</para>
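                        <para>A typical guard against that sentinel value (the names here are
                            illustrative):</para>
                        <programlisting language="cpp">GLint offsetLocation = glGetUniformLocation(theProgram, "offset");
if(offsetLocation == -1)
{
    //The uniform is missing or was optimized away; handle accordingly.
}</programlisting>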
                     </glossdef>

File Documents/Positioning/Tutorial 04.xml

             <para>Now, move it to the right and up, similar to where the square is. You should be
                 able to see the bottom and left side of the remote.</para>
             <para>So we should be able to see the bottom and left side of our rectangular prism. But
-                we can't. Why not?</para>
+                we cannot. Why not?</para>
             <para>Think back to how rendering happens. In clip-space, the vertices of the back end
                 of the rectangular prism are directly behind the front end. And when we transform
                 these into window coordinates, the back vertices are still directly behind the front
                 the following rules:</para>
             <itemizedlist>
                 <listitem>
-                    <para>You cannot select components that aren't in the source vector. So if you
+                    <para>You cannot select components that are not in the source vector. So if you
                         have:</para>
                     <programlisting language="glsl">vec2 theVec;</programlisting>
                     <para>You cannot do <literal>theVec.zz</literal> because it has no Z component.</para>
                 follows:</para>
             <programlisting language="glsl">clipPos.x = cameraPos.x * frustumScale;
 clipPos.y = cameraPos.y * frustumScale;</programlisting>
-            <para>But it probably wouldn't be as fast as the swizzle and vector math version.</para>
+            <para>But it probably would not be as fast as the swizzle and vector math version.</para>
         </section>
     </section>
     <section xml:id="Tut04_matrix">
             </mediaobject>
         </equation>
         <para>The odd spacing is intentional. For laughs, let's add a bunch of meaningless terms
-            that don't change the equation, but starts to develop an interesting pattern:</para>
+            that do not change the equation, but start to develop an interesting pattern:</para>
         <equation>
             <title>Camera to Clip Expanded Equations</title>
             <mediaobject>
            Because each of the multiplications is independent of the others, they could all be
             done simultaneously, which is exactly the kind of thing graphics hardware does fast.
             Similarly, the addition operations are partially independent; each row's summation
-            doesn't depend on the values from any other row. Ultimately, vector-matrix
+            does not depend on the values from any other row. Ultimately, vector-matrix
             multiplication usually generates only 4 instructions in the GPU's machine
             language.</para>
         <para>We can re-implement the above perspective projection using matrix math rather than
                        matrices together can only be performed if the number of columns in the matrix
                        on the left is equal to the number of rows in the matrix on the right.
                         For this reason, among others, matrix multiplication is not commutative (A*B is not B*A;
-                        sometimes B*A isn't even possible).</para>
+                        sometimes B*A is not even possible).</para>
                     <para>4x4 matrices are used in computer graphics to transform 3 or 4-dimensional
                         vectors from one space to another. Most kinds of linear transforms can be
                         represented with 4x4 matrices.</para>

File Documents/Positioning/Tutorial 05.xml

                 you can create multiple objects with one call. As before, the objects are
                     <type>GLuint</type>s.</para>
             <para>VAOs are bound to the context with <function>glBindVertexArray</function>; this
-                function doesn't take a target the way that <function>glBindBuffer</function> does.
+                function does not take a target the way that <function>glBindBuffer</function> does.
                 It only takes the VAO to bind to the context.</para>
             <para>Once the VAO is bound, calls to certain functions change the data in the bound
                 VAO. Technically, they <emphasis>always</emphasis> have changed the VAO's state; all
 glBindVertexArray(vao);]]></programlisting>
             <para>This creates a single VAO, which contains the vertex array state that we have been
                 setting. This means that we have been changing the state of a VAO in all of the
-                tutorials. We just didn't talk about it at the time.</para>
+                tutorials. We just did not talk about it at the time.</para>
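            <para>To make that explicit, here is a sketch of capturing attribute state in a VAO and
                replaying it at draw time (the attribute index and vertex count are
                illustrative):</para>
            <programlisting language="cpp">//Record the vertex array state into the VAO.
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindVertexArray(0);

//Later, rendering only needs the single bind.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);</programlisting>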
             <para>The following functions change VAO state. Therefore, if no VAO is bound to the
                 context (if you call <function>glBindVertexArray(0)</function> or you do not bind a
                 VAO at all), all of these functions, except as noted, will fail.</para>
             <itemizedlist>
                 <listitem>
                     <para><function>glVertexAttribPointer</function>. Also
-                            <function>glVertexAttribIPointer</function>, but we haven't talked about
+                            <function>glVertexAttribIPointer</function>, but we have not talked about
                         that one yet.</para>
                 </listitem>
                 <listitem>
                     call; it will affect <emphasis>nothing</emphasis> in the final rendering. So
                     VAOs do store which buffer objects are associated with which attributes; but
                     they do not store the <literal>GL_ARRAY_BUFFER</literal> binding itself.</para>
-                <para>If you want to know why <function>glVertexAttribPointer</function> doesn't
+                <para>If you want to know why <function>glVertexAttribPointer</function> does not
                     simply take a buffer object rather than requiring this bind+call mechanism, it
                     is again because of legacy API cruft. When buffer objects were first introduced,
                     they were designed to impact the API as little as possible. So the old
                     corner positions that are shared between two triangles that have different
                     colors will still have to be duplicated in different vertices.</para>
                 <para>It turns out that, for most meshes, duplication of this sort is fairly rare.
-                    Most meshes are smooth across their surface, so different attributes don't
+                    Most meshes are smooth across their surface, so different attributes do not
                     generally pop from location to location. Shared edges typically use the same
                     attributes for both triangles along the edges. The simple cubes and the like
                     that we use are one of the few cases where a per-attribute index would have a
             <note>
                 <para>If you look at the vertex position attribute and the shader, you will notice
                     that we now use a 3-component position vector rather than a 4-component one.
-                    This saves on data, yet our matrix math shouldn't work, since you cannot
+                    This saves on data, yet our matrix math should not work, since you cannot
                     multiply a 4x4 matrix with a 3-component vector.</para>
                 <para>This is a subtle feature of OpenGL. If you attempt to transform a matrix by a
                     vector that is one size smaller than the matrix, it will assume that the last
             most distant objects first. This is called <glossterm>depth sorting.</glossterm> As you
             might imagine, this <quote>solution</quote> scales incredibly poorly. Doing it for each
             triangle is prohibitive, particularly with scenes with millions of triangles.</para>
-        <para>And the worst part is that even if you put in all the effort, it doesn't actually
+        <para>And the worst part is that even if you put in all the effort, it does not actually
             work. Not all the time at any rate. Many trivial cases can be solved via depth sorting,
             but non-trivial cases have real problems. You can have an arrangement of 3 triangles
             where each overlaps the other, such that there simply is no order you can render them in
         <para>Even worse, it does nothing for inter-penetrating triangles; that is, triangles that
             pass through each other in 3D space (as opposed to just from the perspective of the
             camera).</para>
-        <para>Depth sorting isn't going to cut it; clearly, we need something better.</para>
+        <para>Depth sorting is not going to cut it; clearly, we need something better.</para>
         <para>One solution might be to tag fragments with the distance from the viewer. Then, if a
             fragment that is about to be written has a farther distance (ie: the fragment is behind
             what was already drawn), we simply do not write that fragment to the output image. That
                 and zFar is 1.0, NDC values of -1 will map to 0.5 and values of 1 will result in
                 1.0.</para>
             <note>
-                <para>Don't confuse the range zNear/zFar with the <emphasis>camera</emphasis>
+                <para>Do not confuse the range zNear/zFar with the <emphasis>camera</emphasis>
                     zNear/zFar used in the perspective projection matrix computation.</para>
             </note>
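            <para>The mapping described above would be established with a single call. For
                example:</para>
            <programlisting language="cpp">glDepthRange(0.5, 1.0); //NDC -1 maps to 0.5, NDC +1 maps to 1.0.</programlisting>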
             <para>The range zNear can be greater than the range zFar; if it is, then the
                 is not sufficient to actually write depth values. This allows us to have depth
                 testing for objects where their <emphasis>own</emphasis> depth (the incoming
                 fragment's depth) is not written to the depth buffer, even when their color outputs
-                are written. We don't use this here, but a special algorithm might need this
+                are written. We do not use this here, but a special algorithm might need this
                 feature.</para>
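            <para>As a sketch, such an algorithm would render with the depth test enabled but depth
                writes masked off:</para>
            <programlisting language="cpp">glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);    //Test against the depth buffer, but do not update it.
//...render the object...
glDepthMask(GL_TRUE);</programlisting>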
             <note>
                 <para>Due to an odd quirk of OpenGL, writing to the depth buffer is always inactive
             ever exactly 0, then we have a problem.</para>
         <para>This is where clip-space comes in to save the day. See, until we actually
                 <emphasis>do</emphasis> the divide, everything is fine. A 4-dimensional vector that
-            will be divided by the fourth component but hasn't <emphasis>yet</emphasis> is still
+            will be divided by the fourth component but has not <emphasis>yet</emphasis> is still
             valid, even if the fourth component is zero. This kind of coordinate system is called a
                 <glossterm>homogeneous coordinate system</glossterm>. It is a way of talking about
             things that you could not talk about in a normal, 3D coordinate system. Like dividing by
         <para>Since the vertex coordinate is not going to be visible anyway, why bother drawing it
             and dividing by that pesky 0? Well, because that vertex happens to be part of a
             triangle, and if part of the triangle is visible, we have to draw it.</para>
-        <para>But we don't have to draw <emphasis>all</emphasis> of it.</para>
+        <para>But we do not have to draw <emphasis>all</emphasis> of it.</para>
         <para><glossterm>Clipping</glossterm> is the process of taking a triangle and breaking it up
             into smaller triangles, such that only the part of the original triangle that is within
             the viewable region remains. This may generate only one triangle, or it may generate
             vertex shader's interpolation qualifiers) to determine the relative value of the
             post-clipping vertex.</para>
         <para>As you might have guessed, clipping happens in <emphasis>clip space</emphasis>, not
-            NDC space. Hence the name. Since clip-space is a homogeneous coordinate system, we don't
+            NDC space. Hence the name. Since clip-space is a homogeneous coordinate system, we do not
             have to worry about those pesky zeros. Unfortunately, because homogeneous spaces are not
-            easy to draw, we can't show you what it would look like. But we can show you what it
+            easy to draw, we cannot show you what it would look like. But we can show you what it
             would look like if you clipped in camera space, in 2D:</para>
         <figure>
             <title>Triangle Clipping</title>
             <para>We have phrased the discussion of clipping as a way to avoid dividing by zero for
                 good reason. The OpenGL specification states that clipping must be done against all
                 sides of the viewable region. And it certainly appears that way; if you move objects
-                far enough away that they overlap with zFar, then you won't see the objects.</para>
+                far enough away that they overlap with zFar, then you will not see the objects.</para>
             <para>You can also see apparent clipping with objects against the four sides of the view
                 frustum. To see this, you would need to modify the viewport with
                     <function>glViewport</function>, so that only part of the window is being
                 The simple act of turning one triangle into several is hard and time
                 consuming.</para>
             <para>So, if OpenGL states that this must happen, but supposedly OpenGL-compliant
-                hardware doesn't do it, then what's going on?</para>
-            <para>Consider this: if we hadn't told you just now that the hardware doesn't do
+                hardware does not do it, then what's going on?</para>
+            <para>Consider this: if we had not told you just now that the hardware does not do
                 clipping most of the time, could you tell? No. And that's the point: OpenGL
-                specifies <emphasis>apparent</emphasis> behavior; the spec doesn't care if you
-                actually do vertex clipping or not. All the spec cares about is that the user can't
+                specifies <emphasis>apparent</emphasis> behavior; the spec does not care if you
+                actually do vertex clipping or not. All the spec cares about is that the user cannot
                 tell the difference in terms of the output.</para>
             <para>That's how hardware can get away with the early-z optimization mentioned before;
                 the OpenGL spec says that the depth test must happen after the fragment program
-                executes. But if the fragment shader doesn't modify the depth, then would you be
+                executes. But if the fragment shader does not modify the depth, then would you be
                 able to tell the difference if it did the depth test before the fragment shader? No;
                 if it passes, it would have passed either way, and the same goes for failing.</para>
             <para>Instead of clipping, the hardware usually just lets the triangles go through if
                 part of the triangle is within the visible region. It generates fragments from those
                 triangles, and if a fragment is outside of the visible window, it is discarded
                 before any fragment processing takes place.</para>
-            <para>Hardware usually can't do this however, if any vertex of the triangle has a
+            <para>Hardware usually cannot do this, however, if any vertex of the triangle has a
                 clip-space W &lt;= zero. In terms of a perspective projection, this means that part
                 of the triangle is fully behind the eye, rather than just behind the camera zNear
                 plane. In these cases, clipping is much more likely to happen.</para>
             <para>Even so, clipping only happens if the triangle is partially visible; a triangle
                 that is entirely in front of the zNear plane is dropped entirely.</para>
             <para>In general, you should try to avoid rendering things that will clip against the
-                eye plane (clip-space W &lt;= 0, or camera-space Z >= 0). You don't need to be
+                eye plane (clip-space W &lt;= 0, or camera-space Z >= 0). You do not need to be
                 pedantic about it; long walls and the like are fine. But, particularly for low-end
                 hardware, a lot of clipping can really kill performance.</para>
         </sidebar>
                     <quote><emphasis>this is fake!</emphasis></quote> as loud as possible to the
             user. What can we do about this?</para>
         <para>The most common technique is to simply not allow it. That is, know how close objects
-            are getting to the near clipping plane (ie: the camera) and don't let them get close
+            are getting to the near clipping plane (ie: the camera) and do not let them get close
             enough to clip.</para>
-        <para>And while this can <quote>function</quote> as a solution, it isn't exactly good. It
+        <para>And while this can <quote>function</quote> as a solution, it is not exactly good. It
             limits what you can do with objects and so forth.</para>
         <para>A more reasonable mechanism is <glossterm>depth clamping</glossterm>. What this does
             is turn off camera near/far plane clipping altogether. Instead, the depth for these
                 near and far planes. If you're wondering what happens when you have depth clamping,
                 which turns off clipping, and a clip-space W &lt;= 0, it's simple. In camera space,
                 near and far clipping is represented as turning a pyramid into a frustum: cutting
-                off the top and bottom. If near/far clipping isn't active, then the frustum becomes
+                off the top and bottom. If near/far clipping is not active, then the frustum becomes
                 a pyramid. The other 4 clipping planes are still fully in effect. Clip-space
                 vertices with a W of less than 0 are all outside of the boundary of any of the other
                 four clipping planes.</para>
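            <para>Depth clamping itself is just a piece of enable/disable state. A minimal
                sketch:</para>
            <programlisting language="cpp">glEnable(GL_DEPTH_CLAMP);
//...render geometry that may cross the zNear plane...
glDisable(GL_DEPTH_CLAMP);</programlisting>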
                 again: 45%. 10 inches may seem like a lot, but that's still less than a foot away
                 from the eye. Depending on what you are rendering, this may be a perfectly
                 legitimate trade-off.</para>
-            <para>What this teaches us is that the absolute numbers don't matter: it is the ratio of
+            <para>What this teaches us is that the absolute numbers do not matter: it is the ratio of
                 zNear to zFar that dictates where you lose precision. 0.1:1000 is just as bad as
                 1:10,000. So push the zNear distance forward as far as you can. What happens if you
                 push it too far? That's the next section.</para>
                     object, imagine how bad it will look if it's up in your face. Even if you could
                     make the z-values linear, it could cause problems in near objects.</para>
                 <para>Fifth, if you really need a camera range this large, you can play some tricks
-                    with the depth range. But only do this if you actually do get z-fighting; don't
+                    with the depth range. But only do this if you actually do get z-fighting; do not
                     simply do it because you have a large camera range.</para>
                 <para>The camera range defines how the perspective matrix transforms the Z to
                     clip-space and therefore NDC space. The <emphasis>depth</emphasis> range defines
                     So you can draw the front half of your scene into the [0, 0.5] depth range with
                     a camera range like [-1, -2,000,000]. Then, you can draw the back half of the
                     scene in the [0.5, 1] depth range, with a camera range of [-2,000,000,
-                    -4,000,000]. Dividing it in half like this isn't very fair to your front
+                    -4,000,000]. Dividing it in half like this is not very fair to your front
                     objects, so it's more likely that you would want to use something like [-1,
                     -10,000] for the front half and [-10,000, -4,000,000] for the second. Each of
                     these would still map to half of the depth range.</para>
             </listitem>
             <listitem>
                 <para>Clipping holes can be repaired to a degree by activating depth clamping, so long as there
-                    is no overlap. And as long as the triangles don't extend beyond 0 in camera space.</para>
+                    is no overlap. And as long as the triangles do not extend beyond 0 in camera space.</para>
             </listitem>
 			<!--TODO: Reinstate this.-->
 			<!--
                 <glossterm>z-fighting</glossterm>
                 <glossdef>
                     <para>Happens when the window-space Z values for two surfaces are sufficiently
-                        close together that part of one shows through a surface that it shouldn't.
+                        close together that part of one shows through a surface that it should not.
                         This is usually due to a lack of depth buffer precision. The common remedy
                         is to try to move the camera zNear further from 0.</para>
                 </glossdef>

File Documents/Positioning/Tutorial 06.xml

             multiplication causes the value in the W column to be multiplied by the W coordinate of
             the vector (which is 1) and added to the sum of the other terms.</para>
         <para>But how do we keep the matrix from doing something to the other terms? We only want
-            this matrix to apply an offset to the position. We don't want to have it modify the
+            this matrix to apply an offset to the position. We do not want to have it modify the
             position in some other way.</para>
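        <para>As a concrete sketch of where this is going (using GLM, and anticipating the identity
            matrix described next), the offset simply becomes the fourth column of an otherwise
            untouched matrix:</para>
        <programlisting language="cpp"><![CDATA[glm::mat4 ConstructTranslationMatrix(const glm::vec3 &offset)
{
    glm::mat4 theMat(1.0f);                 //Starts as an identity matrix.
    theMat[3] = glm::vec4(offset, 1.0f);    //The fourth (W) column holds the offset.
    return theMat;
}]]></programlisting>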
         <para>This is done by modifying an <glossterm>identity matrix.</glossterm> An identity
             matrix is a matrix that, when performing matrix multiplication, will return the matrix
             </mediaobject>
         </equation>
         <para>You may start to see a pattern emerging here, something that begins to suggest why
-            matrices are very, very useful. I won't spoil it for you yet though.</para>
+            matrices are very, very useful. I will not spoil it for you yet though.</para>
         <para>The tutorial project <phrase role="propername">Scale</phrase> will display 5 objects
             at various scales. The objects are all at the same Z distance from the camera, so the
             only size difference between the objects is the scale effects applied to them. The
                 </imageobject>
             </mediaobject>
         </informalequation>
-        <para>Doesn't this look a bit familiar? No? Maybe this look at vector-matrix multiplication
+        <para>Does this not look a bit familiar? No? Maybe this look at vector-matrix multiplication
             will jog your memory:</para>
         <informalequation>
             <mediaobject>
                 that the full transformation matrix is relative to the space of its parent, not
                 camera space.</para>
            <para>So if you have a node whose translation is (3, 0, 4), then it will be 3 X-units
-                and 4 Z-units from the origin of its parent transform. The node itself doesn't know
+                and 4 Z-units from the origin of its parent transform. The node itself does not know
                 or care what the parent transform actually is; it simply stores a transform relative
                 to that.</para>
             <para>Technically, a node does not have to have a mesh. It is sometimes useful in a
                 more buffer object room and more vertex attribute changes when these were simply
                 unnecessary. The vertex shader runs no slower this way; it's still just multiplying
                 by matrices. And the minor CPU computation time is exactly that: minor.</para>
-            <para>Mesh space is very useful, even though it isn't commonly talked about to the point
+            <para>Mesh space is very useful, even though it is not commonly talked about to the point
                 where it gets a special name. As we have seen, it allows easy model reusage, but it
                 has other properties as well. For example, it can be good for data compression. As
                 we will see in later tutorials, there are ways to store values on the range [0, 1]
                     should feel free to do whatever they want to the current matrix, as well as
                     push/pop as much as they want. However, the callee <emphasis>must</emphasis> not
                     pop more than they push, though this is a general requirement with any function
-                    taking a matrix stack. After all, a stack doesn't report how many elements it
-                    has, so you can't know whether someone pushed anything at all.</para>
+                    taking a matrix stack. After all, a stack does not report how many elements it
+                    has, so you cannot know whether someone pushed anything at all.</para>
                 <para>In callee-save, what the convention is saying is that a function must be
                     responsible for any changes it wants to make to the matrix stack. If it wants to
                     change the matrix stack, then it must push first and pop after using those

File Documents/Positioning/Tutorial 07.xml

                 regardless of any other circumstance. You can declare the uniforms with the same
                 name, with the same types, in the same order, but OpenGL will not
                     <emphasis>guarantee</emphasis> that you get the same uniform locations. It
-                doesn't even guarantee that you get the same uniform locations on different
+                does not even guarantee that you get the same uniform locations on different
                 run-through of the same executable.</para>
             <para>This means that uniform locations are local to a program object. Uniform data is
                 also local to an object. For example:</para>
                 <para>If a vertex shader takes attributes that the VAO does not provide, then the
                     value the vertex shader gets will be a vector of (0, 0, 0, 1). If the vertex
                     shader input vector has fewer than 4 elements, then it fills them in in that
-                    order. A vec3 input that isn't provided by the VAO will be (0, 0, 0).</para>
+                    order. A vec3 input that is not provided by the VAO will be (0, 0, 0).</para>
                 <para>Speaking of which, if a VAO provides more components of an attribute vector
                     than the vertex shader expects (the VAO provides 4 elements, but the vertex
                     shader input is a vec2), then the vertex shader input will be filled in as much
-                    as it can be. If the reverse is true, if the VAO doesn't provide enough
+                    as it can be. If the reverse is true, if the VAO does not provide enough
                     components of the vector, then the unfilled values are always filled in from the
                     (0, 0, 0, 1) vector.</para>
             </sidebar>
                 world-space that should be considered <quote>up</quote> based on where the camera is
                 looking.</para>
             <para>The implementation of this function is less simple. It does a lot of complex
-                geometric computations. I won't go into detail explaining how it work, but there is
+                geometric computations. I will not go into detail explaining how it works, but there is
                 one thing you need to know.</para>
             <para>It is very important that the <quote>up</quote> direction is not along the same
                 line as the direction from the camera position to the look at target. If up is very
                 close to that direction then the generated matrix is no longer valid, and unpleasant
                 things will happen.</para>
-            <para>Since it doesn't make physical sense for <quote>up</quote> to be directly behind
+            <para>Since it does not make physical sense for <quote>up</quote> to be directly behind
                 or in front of the viewer, it makes a degree of sense that this would likewise
                 produce a nonsensical matrix. This problem usually crops up in camera systems like
                 the one devised here, where the camera is facing a certain point and is rotating
                 elements in the uniform block. This allows different hardware to position elements
                 where it is most efficient for them. Some shader hardware can place 2
                     <type>vec3</type>'s directly adjacent to one another, so that they only take up
-                6 floats. Other hardware can't handle that, and must pad each <type>vec3</type> out
+                6 floats. Other hardware cannot handle that, and must pad each <type>vec3</type> out
                 to 4 floats.</para>
             <para>Normally, this would mean that, in order to set any values into the buffer object,
                 you would have to query the program object for the byte offsets for each element in
                 set a value to a location that is -1, but no data will actually be set.</para>
             <para>If a uniform block is marked with the <quote>std140</quote> layout, then the
                 ability to disable uniforms in a block is entirely removed. All uniforms must have
-                storage, even if this particular program doesn't use them. This means that, as long
+                storage, even if this particular program does not use them. This means that, as long
                 as you declare the same uniforms in the same order within a block, the storage for
                 that uniform block will have the same layout in <emphasis>any</emphasis> program.
                 This means that multiple different programs can use the same uniform buffer.</para>
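            <para>For example, a C++ struct meant to mirror a hypothetical std140 block could bake
                those fixed offsets in directly. Under std140, the <type>vec4</type> below must
                start on a 16-byte boundary, hence the explicit padding:</para>
            <programlisting language="cpp">//GLSL side (hypothetical): layout(std140) uniform PerLight
//{ vec3 lightDir; vec4 lightIntensity; };
struct PerLightBlock
{
    glm::vec3 lightDir;
    float padding;              //Brings lightIntensity to a 16-byte offset.
    glm::vec4 lightIntensity;
};</programlisting>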
         <section>
             <title>Uniform Block Indices</title>
             <para>Uniforms inside a uniform block do not have individual uniform locations. After
-                all, they don't have storage within a program object; their data comes from a buffer
+                all, they do not have storage within a program object; their data comes from a buffer
                 object.</para>
             <para>So instead of calling glGetUniformLocation, we have a new function.</para>
             <programlisting language="cpp">data.globalUniformBlockIndex =
             <para><literal>GL_UNIFORM_BUFFER</literal> does not really have an intrinsic meaning
                 like these other two. Having something bound to this binding means nothing as far as
                 any other function of OpenGL is concerned. Oh, you can call buffer object functions
-                on it, like glBufferData as above. But it doesn't have any other role to play in
+                on it, like glBufferData as above. But it does not have any other role to play in
                 rendering. The main reason to use it is to preserve the contents of more useful
                 binding points. It also communicates to someone reading your code that this buffer
                 object is going to be used to store uniform data.</para>
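            <para>A sketch of that usage, with illustrative names (<function>glm::value_ptr</function>
                comes from GLM's type_ptr header). The buffer is bound to
                <literal>GL_UNIFORM_BUFFER</literal> only long enough to modify it, leaving the
                other binding points untouched:</para>
            <programlisting language="cpp">glBindBuffer(GL_UNIFORM_BUFFER, globalUniformBuffer);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4),
    glm::value_ptr(worldToCameraMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);</programlisting>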
             <para>In the <phrase role="propername">World Space</phrase> example, we drew the
                 camera's look-at target directly in camera space, bypassing the world-to-camera
                 matrix. Doing that with uniform buffers would be harder, since we would have to set
-                the uniform buffer value twice in the same draw call. This isn't particularly
+                the uniform buffer value twice in the same draw call. This is not particularly
                 difficult, but it could be a drain on performance.</para>
             <para>Instead, we just use the camera's target position to compute a model-to-world
                 matrix that always positions the object at the target point.</para>
 glUseProgram(0);
 glEnable(GL_DEPTH_TEST);</programlisting>
             </example>
-            <para>We don't get the neat effect of having the object always face the camera though.
+            <para>We do not get the neat effect of having the object always face the camera though.
                 We still shut off the depth test, so that we can always see the object.</para>
         </section>
     </section>
             origin.</para>
         <para>This means that, if you <emphasis>combined</emphasis> the two matrices into one, you
             would have one matrix with a relatively small translation component. Therefore, you
-            wouldn't have a precision problem.</para>
+            would not have a precision problem.</para>
         <para>Now, 32-bit floats on the CPU are no more precise than on the GPU. However, on the CPU
             you are guaranteed to be able to do double-precision math. And while it is slower than
-            single-precision math, the CPU isn't doing as many computations. You aren't doing
+            single-precision math, the CPU is not doing as many computations. You are not doing
             vector/matrix multiplies per vertex; you're doing them per <emphasis>object</emphasis>.
             And since the final result would actually fit within 32-bit precision limitations, the
             solution is obvious.</para>

File Documents/Positioning/Tutorial 08.xml

     <section>
         <?dbhtml filename="Tut08 Gimbal Lock.html"?>
         <title>Gimbal Lock</title>
-        <para>Remember a few tutorials back, when we said that a rotation matrix isn't a rotation
+        <para>Remember a few tutorials back, when we said that a rotation matrix is not a rotation
             matrix at all, that it is an orientation matrix? We also said that forgetting this can
             come back to bite you. Well, here's likely the most common way.</para>
         <para>Normally, when dealing with orienting an object like a plane or spaceship in 3D space,
             at this picture.</para>
         <para>Given the controls of these gimbals, can you cause the object to pitch up and down?
             That is, move its nose up and down from where it is currently? Only slightly; you can
-            use the middle gimbal, which has a bit of pitch rotation. But that isn't much.</para>
-        <para>The reason we don't have as much freedom to orient the object is because the outer and
+            use the middle gimbal, which has a bit of pitch rotation. But that is not much.</para>
+        <para>The reason we do not have as much freedom to orient the object is because the outer and
             inner gimbals are now rotating about the <emphasis>same axis</emphasis>. Which means you
             really only have two gimbals to manipulate in order to orient the red gimbal. And 3D
             orientation cannot be fully controlled with only 2 axial rotations, with only 2
         <section>
             <title>Rendering</title>
             <para>Before we find a solution to the problem, let's review the code. Most of it is
-                nothing you haven't seen elsewhere, so this will be quick.</para>
+                nothing you have not seen elsewhere, so this will be quick.</para>
             <para>There is no explicit camera matrix set up for this example; it is too simple for
                 that. The three gimbals are loaded from mesh files as we saw in our last tutorial.
                 They are built to fit into the above array. The ship is also from a mesh
     <section>
         <?dbhtml filename="Tut08 Quaternions.html"?>
         <title>Quaternions</title>
-        <para>So gimbals, 3 accumulated axial rotations, don't really work very well for orienting
+        <para>So gimbals, 3 accumulated axial rotations, do not really work very well for orienting
             an object. How do we fix this problem?</para>
         <para>Part of the problem is that we are trying to store an orientation as a series of 3
             accumulated axial rotations. Orientations are <emphasis>orientations,</emphasis> not
             scale) and each axis is perpendicular to all of the others.</para>
         <para>Unfortunately, re-orthonormalizing a matrix is not a simple operation. You could try
             to normalize each of the axis vectors with typical vector normalization, but that
-            wouldn't ensure that the matrix was orthonormal. It would remove scaling, but the axes
-            wouldn't be guaranteed to be perpendicular.</para>
+            would not ensure that the matrix was orthonormal. It would remove scaling, but the axes
+            would not be guaranteed to be perpendicular.</para>
         <para>Orthonormalization is certainly possible. But there are better solutions. Such as using
             something called a <glossterm>quaternion.</glossterm></para>
         <para>A quaternion is (for the purposes of this conversation) a 4-dimensional vector that is
         <section>
             <title>Yaw Pitch Roll</title>
             <para>We implement this in the <phrase role="propername">Quaternion YPR</phrase>
-                tutorial. This tutorial doesn't show gimbals, but the same controls exist for yaw,
+                tutorial. This tutorial does not show gimbals, but the same controls exist for yaw,
                 pitch, and roll transformations. Here, pressing the <keycap>SpaceBar</keycap> will
                 switch between right-multiplying the YPR values to the current orientation and
                 left-multiplying them. Post-multiplication will apply the YPR transforms from
             </figure>
             <para>The <function>display</function> function only changed where needed to deal with
                 drawing the ground plane and to handle the camera. Either way, it's nothing that
-                hasn't been seen elsewhere.</para>
+                has not been seen elsewhere.</para>
             <para>The substantive changes were in the <function>OffsetOrientation</function>
                 function:</para>
             <example>
             interpolation between two orientations, so that we can watch an object move from one
             orientation to another. This would allow us to see an object smoothly moving from one
             orientation to another.</para>
-        <para>This is one more trick we can play with quaternions that we can't with matrices.
+        <para>This is one more trick we can play with quaternions that we cannot with matrices.
             Linearly-interpolating the components of matrices does not create anything that
             resembles an inbetween transformation. However, linearly interpolating a pair of
             quaternions does. As long as you normalize the results.</para>
             and the <keycap>Q</keycap> key is the initial orientation.</para>
         <para>We can see that there are some pretty reasonable looking transitions. The transition
             from <keycap>Q</keycap> to <keycap>W</keycap>, for example. However, there are some
-            other transitions that don't look so good; the <keycap>Q</keycap> to <keycap>E</keycap>
+            other transitions that do not look so good; the <keycap>Q</keycap> to <keycap>E</keycap>
             transition. What exactly is going on?</para>
         <section>
             <title>The Long Path</title>
                 vector part of the quaternion. If you negate all four components however, you get
                 something quite different: the same orientation as before. Negating a quaternion
                 does not affect its orientation.</para>
-            <para>While the two quaternions represent the same orientation, they aren't the same as
+            <para>While the two quaternions represent the same orientation, they are not the same as
                 far as interpolation is concerned. Consider a two-dimensional case:</para>
             <figure>
                 <title>Interpolation Directions</title>
                     through the code.</para>
             </note>
             <para>The <function>Vectorize</function> function simply takes a quaternion and returns
-                a <type>vec4</type>; this is necessary because GLM <type>fquat</type> don't support
+                a <type>vec4</type>; this is necessary because GLM's <type>fquat</type> type does not
                 support many of the operations that its <type>vec4</type> type does. In this case, we
                 need the <function>glm::mix</function> function, which performs component-wise linear
                 interpolation.</para>
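             <para>Putting these pieces together, a linear orientation interpolation might look
                 something like the following sketch. It is written with GLM, but it is not
                 necessarily identical to the tutorial's code:</para>
             <programlisting>// Copy a quaternion's components into a vec4, so that vec4 operations
// like glm::mix and glm::normalize can be used on it.
glm::vec4 Vectorize(glm::fquat q)
{
    return glm::vec4(q.x, q.y, q.z, q.w);
}

// Component-wise linear interpolation between two orientations. The result
// is renormalized so that it is still a unit quaternion.
glm::fquat Lerp(glm::fquat q0, glm::fquat q1, float alpha)
{
    glm::vec4 interp = glm::mix(Vectorize(q0), Vectorize(q1), alpha);
    interp = glm::normalize(interp);
    return glm::fquat(interp.w, interp.x, interp.y, interp.z);
}</programlisting>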
                 order for the first component of Q to get to E's first component, it will have to go
                 through zero.</para>
             <para>When the alpha is around 0.5, half-way through the movement, the resultant vector
-                before normalization is very small. But the vector itself isn't what provides the
+                before normalization is very small. But the vector itself is not what provides the
                 orientation; the <emphasis>direction</emphasis> of the 4D vector is. Which is why it
                 moves very fast in the middle: the direction is changing rapidly.</para>
             <para>In order to get smooth interpolation, we need to interpolate based on the
                 <para>It's important to know what kind of problems slerp is intended to solve and
                     what kind it is not. Slerp becomes increasingly more important the more
                     disparate the two quaternions being interpolated are. If you know that two
-                    quaternions are always quite close to one another, then slerp isn't worth the
+                    quaternions are always quite close to one another, then slerp is not worth the
                     expense.</para>
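                 <para>For reference, the core of a typical slerp looks something like the following
                     sketch. This is a generic version of the standard formula, not necessarily the
                     tutorial's code; it reuses the <function>Vectorize</function> helper from the
                     earlier listing:</para>
                 <programlisting>// Spherical linear interpolation between two unit quaternions, using the
// standard formula: (sin((1-a)*theta)*q0 + sin(a*theta)*q1) / sin(theta),
// where theta is the angle between q0 and q1 as 4D vectors.
glm::fquat Slerp(glm::fquat q0, glm::fquat q1, float alpha)
{
    float cosTheta = glm::dot(q0, q1);
    float theta = acosf(cosTheta);        // the expensive part
    float sinTheta = sinf(theta);

    float w0 = sinf((1.0f - alpha) * theta) / sinTheta;
    float w1 = sinf(alpha * theta) / sinTheta;

    glm::vec4 interp = w0 * Vectorize(q0) + w1 * Vectorize(q1);
    return glm::fquat(interp.w, interp.x, interp.y, interp.z);
}
// A full implementation would also negate one quaternion when the dot product
// is negative (to take the short path) and guard against sinTheta near zero.</programlisting>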
                 <para>The <function>acos</function> call in the slerp code alone is pretty
                     substantial in terms of performance. Whereas lerp is typically just a

File Documents/Texturing.xml

             render a mathematically perfect representation of a sphere simply by rendering two
             triangles.</para>
         <para>All of this has been done without textures. Thus, the first lesson this book has to
-            teach you about textures is that they aren't <emphasis>that</emphasis> important. What
+            teach you about textures is that they are not <emphasis>that</emphasis> important. What
             you have learned is how to think about solving graphics problems without
             textures.</para>
         <para>Many graphics texts overemphasize the importance of textures; most of them introduce

File Documents/Texturing/Tutorial 14.xml

                         variables.</para>
                 </listitem>
                 <listitem>
-                    <para>Variables of sampler type can only be passed to user-defined functions
-                        that take samplers of the equivalent type, and to be passed to special
-                        built-in functions that take samplers of those types.</para>
+                    <para>Variables of sampler type can only be used as parameters to functions.
+                        User-defined functions can take them as parameters, and there are a number
+                        of built-in functions that take samplers.</para>
                 </listitem>
             </itemizedlist>
             <para>In the shader <filename>TextureGaussian.frag</filename>, we have an example of
                 values. A normalized texture coordinate is a texture coordinate where the coordinate
                 values range over [0, 1], as opposed to texel coordinates (the coordinates of the
                 pixels within the texture), which range over [0, texture-size].</para>
-            <para>What this means is that our texture coordinates don't have to care how big the
+            <para>What this means is that our texture coordinates do not have to care how big the
                 texture is. We can change the texture's size without changing anything about how we
                 compute the texture coordinate. A coordinate of 0.5 will always mean the middle of
                 the texture, regardless of the size of that texture.</para>
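            <para>As a quick illustration (not code from the tutorial), converting a texel index
                into a normalized coordinate only requires dividing by the texture's size; the 0.5
                offset selects the center of the texel rather than its edge:</para>
            <programlisting>// Convert a texel index into a normalized texture coordinate.
float NormalizedCoord(int texelIndex, int textureSize)
{
    return (texelIndex + 0.5f) / (float)textureSize;
}

// A normalized coordinate of 0.5 refers to the middle of the texture,
// no matter whether textureSize is 64, 128, or 512.</programlisting>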
                 traditional <quote>xyzw</quote>.</para>
             <note>
                 <para>The reason for the odd naming is that OpenGL tries to keep suffixes from
-                    conflicting. <quote>uvw</quote> doesn't work because <quote>w</quote> is already
+                    conflicting. <quote>uvw</quote> does not work because <quote>w</quote> is already
                     part of the <quote>xyzw</quote> suffix. In GLSL, <quote>strq</quote> conflicts
                     with <quote>rgba</quote>, so they had to go with <quote>stpq</quote>
                     instead.</para>
             what it means to interpolate a value across a triangle.</para>
         <para>Thus far, we have more or less glossed over the details of interpolation. We expanded
             on this earlier when we explained why per-vertex lighting would not work for certain
-            kinds of functions, as well as when explaining why normals don't interpolate well. But
+            kinds of functions, as well as when explaining why normals do not interpolate well. But
             now that we want to associate vertices of a triangle with locations on a texture, we
             need to fully explain what interpolation means.</para>
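        <para>As a numerical point of reference for what follows (a simple sketch, not code from
            any tutorial), here are the two kinds of interpolation that will come up: interpolation
            that is linear in window space, and perspective-correct interpolation, which is linear
            in pre-projection space. Here, t is the fraction of the distance along an edge measured
            in window space, and w0 and w1 are the clip-space W values of the two endpoints:</para>
        <programlisting>// Interpolation that is linear in window space.
float InterpWindowLinear(float v0, float v1, float t)
{
    return (1.0f - t) * v0 + t * v1;
}

// Perspective-correct interpolation: linear in pre-projection space.
// Each value is divided by its vertex's clip-space W, interpolated,
// and then the division is undone.
float InterpPerspectiveCorrect(float v0, float w0, float v1, float w1, float t)
{
    float num   = (1.0f - t) * (v0 / w0) + t * (v1 / w1);
    float denom = (1.0f - t) * (1.0f / w0) + t * (1.0f / w1);
    return num / denom;
}</programlisting>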
         <para>The main topic is linearity. In the earlier discussions, it was stressed that
             </listitem>
             <listitem>
                 <para>Vertex or geometry shader outputs interpolated across polygons can be
-                    interpolated linearly in window space or linearly in camera space. The GLSL
-                    interpolation qualifiers control which kind of interpolation happens.</para>
+                    interpolated linearly in window space or linearly in pre-projection space. The
+                    GLSL interpolation qualifiers control which kind of interpolation
+                    happens.</para>
             </listitem>
             <listitem>
                 <para>Textures can be associated with points on a surface by giving those vertex
                 <listitem>
                     <para>If you look at the look-up table for our specular function, you will see
                         that much of it is very dark, if not actually at 0.0. Even when the dot
-                        product is close to 1.0, it doesn't take very far before the specular value
+                        product is close to 1.0, it does not have to fall very far before the specular value
                         becomes negligible. One way to improve our look-up table without having to
                         use larger textures is to change how we index the texture. If we index the
                         texture by the square-root of the dot-product, then there will be more room