# Commits

committed 5976f71

Copyediting.

# Documents/Positioning/Tutorial 04.xml

`             orthographic.</para>`
`         <para>Human eyes do not see the world via orthographic projection. If they did, you would`
`             only be able to see an area of the world the size of your pupils. Because we do not use`
`-            orthographic projections (among other reasons) orthographic projections do not look`
`+            orthographic projections to see (among other reasons), orthographic projections do not look`
`             particularly real to us.</para>`
`         <para>Instead, we use a pinhole camera model for our eyesight. This model performs a`
`                 <glossterm>perspective projection</glossterm>. A perspective projection is a`
`         </figure>`
`         <para>As you can see, the projection is radial, based on the location of a particular point.`
`             That point is the eye of the projection.</para>`
`+			<!--TODO: Remove next paragraph-->`
`         <para>From this point forward, we are going to make a simplifying assumption. The position`
`             of the eye will be centered relative to the size of the surface of projection. This need`
`-            not always be the case, but it function well enough for most of our needs.</para>`
`+            not always be the case, but it functions well enough for most of our needs.</para>`
`         <para>Just from the shape of the projection, we can see that the perspective projection`
`             causes a larger field of geometry to be projected onto the surface. An orthographic`
`             projection only captures the rectangular prism directly in front of the surface of`
`         <section>`
`             <title>Mathematical Perspective</title>`
`             <para>Now that we know what we want to do, we just need to know how to do it.</para>`
`+			<!--TODO: Reformat next paragraph into a list of assumptions.-->`
`             <para>We will be making a few simplifying assumptions. In addition to the assumption`
`                 that the eye point is centered relative to the projection surface, we will also`
`                 assume that the plane of projection is axis aligned and is facing down the -Z`
`                 direction.</para>`
`             <para>The problem is really just a simple geometry problem. Here is the equivalent form`
`                 in a 2D to 1D perspective projection.</para>`
`+				<!--TODO: This section is wrong. Pz needs to extend from E to P, not from P to the projection plane.-->`
`             <figure>`
`                 <title>2D to 1D Perspective Projection Diagram</title>`
`                 <mediaobject>`
`                 <para>You may be wondering why this arbitrary division-by-W step exists. You may`
`                     also be wondering, in these modern days of vertex shaders that can do vector`
`                     divisions very quickly, why we should bother to use the hardware division-by-W`
`-                    step at all. There are two reasons. One we will cover in just a bit when we deal`
`-                    with matrices; the main one will be covered in the next tutorial. Suffice it to`
`+                    step at all. There are several reasons. One we will cover in just a bit when we deal`
`+                    with matrices. More important ones will be covered in future tutorials. Suffice it to`
`                     say that there are very good reasons to put the perspective term in the W`
`                     coordinate of clip space vertices.</para>`
`             </note>`
`                 is always -1 in the Z. This means that our perspective term, when phrased as`
`                 division rather than multiplication, is simply P<subscript>z</subscript>/-1: the`
`                 negation of the camera-space Z coordinate.</para>`
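`+            <para>As a minimal sketch of this idea (the name <varname>cameraPos</varname> is`
`+                illustrative, and the remapping of Z for the depth range is omitted here), a`
`+                vertex shader can place the perspective term in W and let the hardware's`
`+                division-by-W step do the work:</para>`
`+            <programlisting language="glsl">// cameraPos: an assumed camera-space vertex position (vec4, W = 1).`
`+// Storing -Z in W makes the hardware divide X, Y and Z by -Z for us.`
`+vec4 clipPos = vec4(cameraPos.xyz, -cameraPos.z);`
`+gl_Position = clipPos;</programlisting>`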
`+				<!--TODO: Add an image showing the location of the projection plane in camera space. -->`
`             <para>Having a fixed eye position and projection plane makes it difficult to have`
`                 zoom-in/zoom-out style effects. This would normally be done by moving the plane`
`                 relative to the fixed eye point. There is a way to do this, however. All you need to`
`                     <para>You cannot select components that aren't in the source vector. So if you`
`                         have:</para>`
`                     <programlisting language="glsl">vec2 theVec;</programlisting>`
`-                    <para>You cannot do <literal>theVec.zz</literal>.</para>`
`+                    <para>You cannot do <literal>theVec.zz</literal> because it has no Z component.</para>`
`                 </listitem>`
`                 <listitem>`
`                     <para>You cannot select more than 4 components.</para>`
`                 </listitem>`
`             </itemizedlist>`
`             <para>These are the only rules. So you can have a <type>vec2</type> that you swizzle to`
`-                create a <type>vec4</type> (<literal>vec.yyyx</literal>), you can repeat components,`
`+                create a <type>vec4</type> (<literal>theVec.yyyx</literal>); you can repeat components;`
`                 etc. Anything goes so long as you stick to those rules.</para>`
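`+            <para>For example, here is a small illustrative fragment, reusing the`
`+                <literal>theVec</literal> declaration from above:</para>`
`+            <programlisting language="glsl">vec2 theVec = vec2(1.0, 2.0);`
`+vec4 bigVec = theVec.yyyx;   // (2.0, 2.0, 2.0, 1.0): repeating components is fine`
`+vec3 alsoFine = theVec.xxy;  // producing fewer than 4 components is fine too`
`+// theVec.zz would be an error: a vec2 has no Z component.</programlisting>`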
`             <para>You should also assume that swizzling is fast. This is not true of most CPU-based`
`                 vector hardware, but since the earliest days of programmable GPUs, swizzle selection`
`                 Z<subscript>clip</subscript>'s value by the camera space W. Well, our input camera`
`             space position's W coordinate is always 1. So performing the multiplication is valid, so`
`             long as this continues to be the case. Being able to do what we are about to do is part`
`-            of the reason why the W coordinate exists (the perspective divide is the other).</para>`
`+            of the reason why the W coordinate exists in our camera-space position values (the perspective divide is the other).</para>`
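`+        <para>In GLSL terms, this is why the vertex shader will eventually be able to do its`
`+            whole job with one matrix multiply. A sketch, assuming a uniform named`
`+            <varname>perspectiveMatrix</varname> that holds the coefficients:</para>`
`+        <programlisting language="glsl">uniform mat4 perspectiveMatrix;`
`+// cameraPos.w must be 1.0 for the perspective terms in the matrix to work.`
`+gl_Position = perspectiveMatrix * cameraPos;</programlisting>`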
`+			<!--TODO: Talk about a linear system of equations. -->`
`         <para>Let us now re-express this again, using the coefficients of the equation above. You`
`             may recognize this reformulation, depending on your knowledge of linear algebra:</para>`
`         <equation>`
`                         at (0, 0, 1). However, this was not strictly necessary; we could have`
`                         altered our perspective transform algorithm to use a variable eye point.`
`                         Adjust the <phrase role="propername">ShaderPerspective</phrase> to implement`
`-                        an arbitrary perspective plane location (the size remains fixed at [-1, 1].`
`+                        an arbitrary perspective plane location (the size remains fixed at [-1, 1]).`
`                         You will need to offset the X, Y camera-space positions of the vertices by`
`                             E<subscript>x</subscript> and E<subscript>y</subscript> respectively,`
`                         but only <emphasis>after</emphasis> the scaling (for aspect ratio). And you`
`                         matrices involved, depending on the arithmetic operation. Multiplying two`
`                         matrices together can only be performed if the number of rows in the matrix`
`                         on the left is equal to the number of columns in the matrix on the right.`
`-                        Because of this, matrix multiplication is not commutative (A*B is not B*A;`
`+                        For this reason, among others, matrix multiplication is not commutative (A*B is not B*A;`
`                         sometimes B*A isn't even possible).</para>`
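`+                    <para>A quick GLSL illustration, where <varname>R</varname> and`
`+                        <varname>T</varname> are assumed to be arbitrary rotation and`
`+                        translation matrices:</para>`
`+                    <programlisting language="glsl">uniform mat4 R;  // an assumed rotation`
`+uniform mat4 T;  // an assumed translation`
`+void example()`
`+{`
`+    mat4 rotateFirst = T * R;     // R applies to a vertex first`
`+    mat4 translateFirst = R * T;  // T applies first; generally a different matrix`
`+}</programlisting>`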
`                     <para>4x4 matrices are used in computer graphics to transform 3 or 4-dimensional`
`                         vectors from one space to another. Most kinds of linear transforms can be`

# Documents/Positioning/Tutorial 05.xml

`             is.</para>`
`         <para>The OpenGL specification defines a rasterization-based renderer. Rasterizers offer`
`             great opportunities for optimizations and hardware implementation, and using them`
`-            provides great power to the programmer. However, they're very stupid. When you strip out`
`-            all of the shaders and other logic, a rasterizer is basically just a triangle drawer.`
`-            That's all they know how to do. And they're very good at it.</para>`
`+            provides great power to the programmer. However, they're very stupid. A rasterizer is basically just a triangle drawer. Vertex shaders tell it where the vertices are, and fragment shaders tell it what colors to put within each triangle. But no matter how fancy, a rasterization-based renderer is just drawing triangles.</para>`
`+			<para>That's fine in general because rasterizers are very fast. They are very good at drawing triangles.</para>`
`         <para>But rasterizers do exactly and only what the user says. They draw each triangle`
`                 <emphasis>in the order given</emphasis>. This means that, if there is overlap`
`             between multiple triangles in window space, the triangle that is rendered last will`
`-            win.</para>`
`+            be the one that is seen.</para>`
`         <para>Solving this problem is called <glossterm>hidden surface elimination.</glossterm></para>`
`         <para>The first thing you might think of when solving this problem is to simply render the`
`-            farther objects first. This is called <glossterm>depth sorting.</glossterm> As you might`
`+            most distant objects first. This is called <glossterm>depth sorting.</glossterm> As you might`
`             imagine, this <quote>solution</quote> scales incredibly poorly. Doing it for each`
`             triangle is prohibitive, particularly in scenes with millions of triangles.</para>`
`         <para>And the worst part is that even if you put in all the effort, <emphasis>it doesn't`
`             sorting, but non-trivial cases have real problems. You can have an arrangement of 3`
`             triangles where each overlaps the other, such that there simply is no order you can`
`             render them in to achieve the right effect.</para>`
`+			<!--TODO: Show the 3 triangle arrangement.-->`
`         <para>Even worse, it does nothing for interpenetrating triangles; that is, triangles that`
`             pass through each other in 3D space (as opposed to just from the perspective of the`
`             camera).</para>`
`         <para>Depth sorting isn't going to cut it; clearly, we need something better.</para>`
`         <para>One solution might be to tag fragments with the distance from the viewer. Then, if a`
`-            fragment is about to be written is going to write a farther distance (ie: the fragment`
`-            is behind what was already written), we simply do not write that fragment. That way, if`
`+            fragment that is about to be written has a farther distance (ie: the fragment`
`+            is behind what was already drawn), we simply do not write that fragment. That way, if`
`             you draw something behind something else, the fragments that were written by the closer`
`             objects will prevent you from writing the farther away ones.</para>`
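`+        <para>Expressed as code (purely illustrative; the real test is fixed-function`
`+            hardware, not something written in a shader), the test for`
`+            <literal>GL_LESS</literal> amounts to this:</para>`
`+        <programlisting language="glsl">// incomingDepth: window-space Z of the fragment being written.`
`+// storedDepth: the value already in the depth buffer at that pixel.`
`+bool depthTestPasses(float incomingDepth, float storedDepth)`
`+{`
`+    return incomingDepth &lt; storedDepth;  // keep only closer fragments`
`+}</programlisting>`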
`         <para>The <quote>tag</quote> is the window-space Z value. You may recall from <link`
`                 linkend="tut00_window_space">the introduction,</link> the window-space Z position of`
`             a fragment ranges from 0 to 1, where 0 is the closest and 1 is the farthest.</para>`
`-        <para>Colors output from the fragment shader are output into the color buffer. Therefore it`
`+        <para>Colors output from the fragment shader are written into the color image buffer. Therefore it`
`             naturally follows that depth values would be stored in a <glossterm>depth`
`                 buffer</glossterm> (also called a <glossterm>z buffer</glossterm>, because it stores`
`-            the Z value). The depth buffer is an image, the same size as the main color buffer, that`
`+            Z values). The depth buffer is an image, the same size as the main color buffer, that`
`             stores depth values as pixels rather than colors. Where a color is a 4-component vector,`
`             a depth is just a single floating-point value.</para>`
`         <para>Like the color buffer, the depth buffer for the main window is created automatically`
`                 coordinates. But <function>glViewport</function> only defines the transform for the`
`                 X and Y coordinates of the NDC-space vertex positions.</para>`
`             <para>The window-space Z coordinate ranges from [0, 1]; the transformation from NDC's`
`-                [-1, 1] is defined with the <function>glDepthRange</function> function. This`
`+                [-1, 1] range is defined with the <function>glDepthRange</function> function. This`
`                 function takes 2 floating-point parameters: the <glossterm>range zNear</glossterm>`
`                 and the <glossterm>range zFar</glossterm>. These values are in window-space; they`
`                 define a simple linear mapping from NDC space to window space. So if zNear is 0.5`
`                 </mediaobject>`
`             </figure>`
`             <para>No amount of depth sorting will help with <emphasis>that</emphasis>.</para>`
`-            <tip>`
`+            <sidebar>`
`                 <title>Fragments and Depth</title>`
`                 <para>Way back in the <link linkend="tut_00">introduction</link>, we said that part`
`                     of the fragment's data was the window-space position of the fragment. This is a`
`                     potentially complex fragment shaders. Indeed, most hardware nowadays has`
`                     complicated early z culling hardware that can discard multiple fragments with`
`                     one test.</para>`
`-                <para>The moment your fragment shader has to write anything to`
`-                        <varname>gl_FragDepth</varname>, all of those optimizations go away. So`
`-                    generally, you should only write a depth value if you`
`-                        <emphasis>really</emphasis> need it.</para>`
`-            </tip>`
`+                <para>The moment your fragment shader writes anything to`
`+                        <varname>gl_FragDepth</varname>, all of those optimizations have to go away. So`
`+                    generally, you should only write a depth value yourself if you`
`+                        <emphasis>really</emphasis> need to.</para>`
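`+                <para>For illustration, this is all it takes to lose those optimizations (the`
`+                    depth offset here is arbitrary, purely for the example):</para>`
`+                <programlisting language="glsl">out vec4 outputColor;`
`+void main()`
`+{`
`+    outputColor = vec4(1.0);`
`+    gl_FragDepth = gl_FragCoord.z + 0.01;  // any write to gl_FragDepth disables early-z`
`+}</programlisting>`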
`+            </sidebar>`
`         </section>`
`     </section>`
`     <section>`
`         <para>If you recall back to the <link linkend="ShaderPerspective">Perspective projection`
`                 tutorial,</link> we chose to use some special hardware in the graphics chip to do`
`             the final division of the W coordinate, rather than doing the entire perspective`
`-            projection ourselves. At the time, it was promised that we would see why this is`
`+            projection ourselves in the vertex shader. At the time, it was promised that we would see why this is`
`             hardware functionality rather than something the shader does.</para>`
`         <para>Let us review the full math operation we are computing here:</para>`
`         <equation>`
`             some of the values from the depth buffer also have a depth of 0. Since our depth test is`
`                 <literal>GL_LESS</literal>, the incoming 0 is not less than the depth buffer's 0, so`
`             the part of the second object does not get rendered. This is pretty much the opposite of`
`-            where we started. We could change it to <literal>GL_LEQUAL</literal>, but that only gets`
`+            where we started: previously rendered objects are in front of newer ones. We could change it to <literal>GL_LEQUAL</literal>, but that only gets`
`             us to <emphasis>exactly</emphasis> where we started.</para>`
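`+        <para>In the same illustrative terms as the earlier depth test sketch,`
`+            <literal>GL_LEQUAL</literal> merely relaxes the comparison:</para>`
`+        <programlisting language="glsl">// GL_LEQUAL: ties now go to the incoming fragment rather than the stored one.`
`+bool depthTestPasses(float incomingDepth, float storedDepth)`
`+{`
`+    return incomingDepth &lt;= storedDepth;`
`+}</programlisting>`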
`         <para>So a word of warning: be careful with depth clamping when you have overlapping objects`
`             near the planes. Similar problems happen with the far plane, though backface culling can`
`                 off the top and bottom. If near/far clipping isn't active, then the frustum becomes`
`                 a pyramid. The other 4 clipping planes are still fully in effect. Clip-space`
`                 vertices with a W of less than 0 are all outside of the boundary of any of the other`
`-                four clipping planes. And the only clip-space point with a W of 0 that is within`
`+                four clipping planes.</para>`
`+			<para>The only clip-space point with a W of 0 that is within`
`                 this volume is the homogeneous origin point: (0, 0, 0, 0); everything else will be`
`-                clipped.</para>`
`+                clipped. And a triangle whose three vertices all sit at the same position would have no area; it would therefore generate no fragments anyway. It can be safely eliminated before the perspective divide.</para>`
`         </note>`
`     </section>`
`     <!--`
`                     the camera.</para>`
`             </listitem>`
`             <listitem>`
`-                <para>Clipping holes can be repaired by activating depth clamping, so long as there`
`-                    is no overlap.</para>`
`+                <para>Clipping holes can be repaired to a degree by activating depth clamping, so long as there`
`+                    is no overlap, and so long as the triangles do not extend beyond a camera-space Z of 0.</para>`
`             </listitem>`
`+			<!--TODO: Reinstate this.-->`
`+			<!--`
`             <listitem>`
`                 <para>Depth buffers have a finite precision, and this can cause z-fighting.`
`                     Z-fighting can be repaired by moving the Camera zNear forward, or moving objects`
`                     farther apart.</para>`
`             </listitem>`
`+			-->`
`         </itemizedlist>`
`         <section>`
`             <title>OpenGL Functions of Note</title>`
`                         depth, then the early depth test optimization will not be active.</para>`
`                 </glossdef>`
`             </glossentry>`
`+			<!--TODO: Reinstate this.-->`
`+			<!--`
`             <glossentry>`
`                 <glossterm>z-fighting</glossterm>`
`                 <glossdef>`
`                         is to try to move the camera zNear further from 0.</para>`
`                 </glossdef>`
`             </glossentry>`
`+			-->`
`             <glossentry>`
`                 <glossterm>homogeneous coordinate system</glossterm>`
`                 <glossdef>`

# Documents/Positioning/Tutorial 06.xml

`         <para>Throughout this series of tutorials, we have discussed a number of different spaces.`
`             We have seen OpenGL-defined spaces like normalized device coordinate (NDC) space,`
`             clip-space, and window space. And we have seen user-defined spaces like camera space.`
`-            But we have yet to talk about what a space actually is.</para>`
`+            But we have yet to really discuss what a space actually is.</para>`
`         <para>A <glossterm>space</glossterm> is a shorthand term for a <glossterm>coordinate`
`                 system.</glossterm> For the purposes of this conversation, a coordinate system or`
`             space consists of the following:</para>`
`                 clip-space, and this was done using a matrix. Perspective projection (and`
`                 orthographic, for that matter) are simply a special kind of transformation.</para>`
`             <para>This tutorial will cover a large number of different kinds of transform`
`-                operations, and how to implement them in OpenGL.</para>`
`+                operations and how to implement them in OpenGL.</para>`
`         </section>`
`         <section>`
`             <title>Model Space</title>`
`             <para>Before we begin, we must define a new kind of space: <glossterm>model`
`-                    space.</glossterm> This is a user-defined space; but unlike camera space, model`
`+                    space.</glossterm> This is a user-defined space, but unlike camera space, model`
`                 space does not have a single definition. It is instead a catch-all term for the`
`                 space that a particular object begins in. Coordinates in vertex buffers, passed to`
`                 the vertex shaders as vertex attributes, are <foreignphrase>de facto</foreignphrase>`
`             the origin point of the initial space relative to the destination space. Since all of`
`             the coordinates in a space are relative to the origin point of that space, all a`
`             translation needs to do is add a vector to all of the coordinates in that space. The`
`-            vector added tothese values is the location of where the user wants the origin point`
`+            vector added to these values is where the user wants the origin point to be,`
`             relative to the destination coordinate system.</para>`
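`+        <para>A sketch of this in GLSL, using column-major <type>mat4</type> construction`
`+            (the function name is just for illustration):</para>`
`+        <programlisting language="glsl">mat4 translationMatrix(vec3 offset)`
`+{`
`+    mat4 m = mat4(1.0);        // start with the identity matrix`
`+    m[3] = vec4(offset, 1.0);  // the fourth column holds the translation`
`+    return m;`
`+}</programlisting>`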
`         <para>Here is a more concrete example. Let us say that an object, in its model space, is`
`             near its origin. This means that, if we want to see that object in front of the camera,`
`         <para>For any two spaces, the orientation transformation between them can be expressed as`
`             rotating the source space by some angle around a particular axis (also in the initial`
`             space). This is true for any change of orientation.</para>`
`-        <para>A common rotation question is to compute a rotation around an arbitrary axis. Or more`
`+        <para>A common rotation question is to compute a rotation around an arbitrary axis. Or to put it more`
`             correctly, to determine the orientation of a space if it is rotated around an arbitrary`
`-            axis relative to the initial axis. The axis of rotation is expressed in terms of the`
`+            axis. The axis of rotation is expressed in terms of the`
`             initial space. In 2D, there is only one axis that can be rotated around and still remain`
`             within that 2D plane: the Z-axis.</para>`
`         <para>In 3D, there are many possible axes of rotation. It does not have to be one of the`
`         </equation>`
`         <para>All of these matrices are such that, from the point of view of an observer looking`
`             down the axis of rotation (the direction of the axis is pointed into the eye of the`
`-            observer), the object rotates counter-clockwise.</para>`
`+            observer), the object rotates counter-clockwise with positive angles.</para>`
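`+        <para>For instance, here is the Z-axis rotation written out as a GLSL sketch (the`
`+            angle is in radians; the matrix is built by columns):</para>`
`+        <programlisting language="glsl">mat3 rotateZ(float angle)`
`+{`
`+    float c = cos(angle);`
`+    float s = sin(angle);`
`+    // Counter-clockwise for positive angles, with the axis pointing at the viewer.`
`+    return mat3( c,   s,   0.0,   // first column`
`+                -s,   c,   0.0,   // second column`
`+                0.0,  0.0, 1.0);  // third column`
`+}</programlisting>`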
`         <para>The <phrase role="propername">Rotations</phrase> tutorial shows off each of these`
`             rotation matrix functions. Similar to how the others work, there are multiple instances`
`             rendered, each based on one of these functions.</para>`
`                 <mathphrase>S*R</mathphrase>`
`             </inlineequation> is not the same as <inlineequation>`
`                 <mathphrase>R*S</mathphrase>`
`-            </inlineequation>. However, it <emphasis>is</emphasis> associative: <inlineequation>`
`+            </inlineequation>. However, it is <emphasis>associative</emphasis>: <inlineequation>`
`                 <mathphrase>(S*R)*T</mathphrase>`
`             </inlineequation> is the same as <inlineequation>`
`                 <mathphrase>S*(R*T)</mathphrase>`
`                 of the transforms of its parent transform, plus its own model space transform.`
`                 Models in this hierarchy have a parent-child relationship to other objects.</para>`
`             <para>For the purposes of this discussion, each complete transform for a model in the`
`-                hierarchy call be called a <glossterm>node.</glossterm> Each node is defined by a`
`+                hierarchy will be called a <glossterm>node.</glossterm> Each node is defined by a`
`                 specific series of transformations, which when combined yield the complete`
`                 transformation matrix for that node. Usually, each node has a translation, rotation,`
`                 and scale, though the specific transform can be entirely arbitrary. What matters is`
`             matrix. Each transform defines a new coordinate system, and the next transform is based`
`             on an object in the <emphasis>new</emphasis> space. For example, if we apply the roll`
`             first, we have now changed what the axis for the subsequent yaw is.</para>`
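`+        <para>In matrix terms, a sketch, where <varname>yaw</varname>,`
`+            <varname>pitch</varname> and <varname>roll</varname> are assumed`
`+            <type>mat3</type> rotations about the Y, X and Z axes:</para>`
`+        <programlisting language="glsl">// yaw, pitch, roll: assumed mat3 rotations (see text).`
`+// With column vectors, the rightmost matrix applies to a vertex first.`
`+mat3 rollFirst = yaw * pitch * roll;  // roll happens in the original space`
`+mat3 yawFirst = roll * pitch * yaw;   // yaw happens first; generally a different orientation</programlisting>`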
`-        <para>You can use any order that you like, so long as you understand that what these angles`
`+        <para>You can use any order that you like, so long as you understand what these angles`
`             mean. If you apply the roll first, your yaw and pitch must be in terms of the new roll`
`             coordinate system, and not the original coordinate system. That is, a change that is`
`             intended to only affect the roll of the final space may need yaw or pitch changes to`
`                         function that will render this node, given the matrix stack. It would render`
`                         itself, then recursively render its children. The node would also have a way`
`                         to define the size (in world-space) and origin point of the rectangle to be`
`-                        drawn. The scene would be rendered by passing the identity matrix to the`
`-                        root.</para>`
`+                        drawn.</para>`
`                 </listitem>`
`                 <listitem>`
`                     <para>Given the generalized Hierarchy code, remove the matrix stack. Use objects`