Jason McKesson committed cfd6657

Tutorial 5 mostly complete. It still needs an image or two, and filling out the last few sections.


Files changed (15)

Documents/Basics/Tutorial 00.xml

                     from a single triangle are processed in is irrelevant; since a single triangle
                     lies in a single plane, fragments generated from it cannot possibly overlap.
                     However, the fragments from another triangle can possibly overlap. Since order
-                    is important in a rasterizer, the fragments from one triangle must be processed
-                    before the fragments from another triangle.</para>
+                    is important in a rasterizer, the fragments from one triangle must all be
+                    processed before the fragments from another triangle.</para>
             </formalpara>
             <para>This phase is quite arbitrary. The user of OpenGL has a lot of options of how to
                 decide what color to assign a fragment. We will cover this step in detail throughout

Documents/Building the Tutorials.xml

                 <filename>glloader</filename> directory. Type <literal>premake4
                     <replaceable>plat</replaceable></literal>, where <replaceable>plat</replaceable>
             is the name of the platform of choice. For Visual Studio 2008, this would be
-                <quote>vs2008;</quote> for VS2010, this would be <quote>vs2010.</quote> The Premake
+                <quote>vs2008</quote>; for VS2010, this would be <quote>vs2010</quote>. The Premake
             documentation has a full list of output platforms.</para>
         <para>This will create the appropriate build file. Use this build file as is appropriate for
             the platform, compiling both debug and release.</para>
                 <filename>framework/framework.cpp</filename>; it is shared by every tutorial. It does
             the basic boilerplate work: creating a FreeGLUT window, etc. This allows the tutorial
             source files to contain OpenGL-specific code.</para>
-        <para/>
     </simplesect>
 </article>

Documents/Outline.xml

                 <listitem>
                     <para>UBOs for shared uniform data (common matrices).</para>
                 </listitem>
+                <listitem>
+                    <para>The dangers of having an explicit world space (precision problems with
+                        large numbers).</para>
+                </listitem>
             </itemizedlist>
         </section>
     </section>
                 </listitem>
                 <listitem>
                     <para>Implementing lighting in a vertex shader for both directional and point
-                        lights. Combining results from </para>
+                        lights. Combining results from both kinds of lighting into a single
+                        value.</para>
                 </listitem>
             </itemizedlist>
         </section>

Documents/Positioning/Depth Buffering Major Overlap.png

Added

Documents/Positioning/Depth Buffering Mild Overlap.png

Added

Documents/Positioning/Depth Buffering.png

Added

Documents/Positioning/Depth Clamping.png

Added

Documents/Positioning/Double Depth Clamping.png

Added

Documents/Positioning/MatrixPerspective.png

Added

Documents/Positioning/MatrixPerspectiveSkew.png

Added

Documents/Positioning/Overlap No Depth.png

Added

Documents/Positioning/Tutorial 04.xml

                 called <quote>perspective.</quote></para>
         </section>
     </section>
-    <section>
+    <section xml:id="ShaderPerspective">
         <?dbhtml filename="Tut04 Perspective Projection.html" ?>
         <title>Perspective Projection</title>
         <para>A <glossterm>projection</glossterm>, for the purposes of rendering, is a way to
             data is column-major, we set it to <literal>GL_FALSE</literal>. The last parameter is
             the matrix data itself.</para>
         <para>Running this program will give us:</para>
-        <!--TODO: Create an image of this tutorial's execution-->
+        <figure>
+            <title>Perspective Matrix</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="MatrixPerspective.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
         <para>The same thing we had before. Only now done with matrices.</para>
     </section>
     <section>
         <para>If you run the last program, and resize the window, the viewport resizes with it.
             Unfortunately, this also means that what was once a rectangular prism with a square
             front becomes elongated.</para>
-        <!--TODO: Show a picture of the elongated prism.-->
+        <figure>
+            <title>Bad Aspect Ratio</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="MatrixPerspectiveSkew.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
         <para>This is a problem of <glossterm>aspect ratio</glossterm>, the ratio of an image's
             width to its height. Currently, when you change the window's dimensions, the code calls
                 <function>glViewport</function> to tell OpenGL the new size. This changes OpenGL's

Documents/Positioning/Tutorial 05.xml

                 properly), and then we draw it with a call to <function>glDrawElements</function>.
                 This step is repeated for the second object.</para>
             <para>Running this tutorial will show the following image:</para>
-            <!--TODO: Show image of this tutorial.-->
-            <para>The smaller object is actually behind the larger one, in terms of their Z distance
-                to the camera. We're using a perspective transform, so it make sense that more
-                distant objects appear smaller. However, if the smaller object is behind the larger
-                one, why is it rendered on top of the one in front?</para>
+            <figure>
+                <title>Overlapping Objects</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Overlap%20No%20Depth.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>The two objects are essentially flipped versions of the same one. One object
+                appears smaller than the other because it is farther away, in terms of its Z
+                distance to the camera. We are using a perspective transform, so it makes sense that
+                more distant objects appear smaller. However, if the smaller object is behind the
+                larger one, why is it rendered on top of the one in front?</para>
             <para>Before we solve this mystery, there is one minor issue we should cover
                 first.</para>
         </section>
         <title>Overlap and Depth Buffering</title>
         <para>Regardless of how we render the objects, there is a strange visual problem with what
             we're rendering:</para>
-        <!--TODO: Show the image of the tutorial again.-->
+        <informalfigure>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="Overlap%20No%20Depth.png"/>
+                </imageobject>
+            </mediaobject>
+        </informalfigure>
         <para>If the smaller object is truly behind the larger one, why is it being rendered on top
             of the larger one? Well, to answer that question, we need to remember what OpenGL
             is.</para>
             provides great power to the programmer. However, they're very stupid. When you strip out
             all of the shaders and other logic, a rasterizer is basically just a triangle drawer.
             That's all they know how to do. And they're very good at it.</para>
-        <para>But rasterizers do exactly and only what the user says. They draw triangles in a given
-            sequence. This means that, if there is overlap between multiple triangles in window
-            space, the triangle that is rendered last will win.</para>
+        <para>But rasterizers do exactly and only what the user says. They draw each triangle
+                <emphasis>in the order given</emphasis>. This means that, if there is overlap
+            between multiple triangles in window space, the triangle that is rendered last will
+            win.</para>
         <para>The first thing you might think of when solving this problem is to simply render the
             farther objects first. This is called <glossterm>depth sorting.</glossterm> As you might
             imagine, this <quote>solution</quote> scales incredibly poorly. Doing it for each
                 work</emphasis>. Well, not always. Many trivial cases can be solved via depth
             sorting, but non-trivial cases have real problems. You can have an arrangement of 3
             triangles where each overlaps the other, such that there simply is no order you can
-            render them in to achieve the right effect. Clearly, we need something better.</para>
-        <para>One solution would be to tag fragments with the distance from the viewer. Then, if a
+            render them in to achieve the right effect.</para>
+        <para>Even worse, it does nothing for interpenetrating triangles; that is, triangles that
+            pass through each other in 3D space (as opposed to just from the perspective of the
+            camera).</para>
+        <para>Depth sorting isn't going to cut it; clearly, we need something better.</para>
+        <para>One solution might be to tag fragments with the distance from the viewer. Then, if a
            fragment that is about to be written has a farther distance than what is already there
            (ie: the fragment is behind what was already written), we simply do not write that
            fragment. That way, if
             you draw something behind something else, the fragments that were written by the higher
                 <literal>GL_LEQUAL</literal> (&lt;=), <literal>GL_GEQUAL</literal> (>=),
                 <literal>GL_EQUAL</literal>, or <literal>GL_NOTEQUAL</literal>. The test function
             puts the incoming fragment's depth on the left of the equation and on the right is the
-            depth from the depth buffer.</para>
+            depth from the depth buffer. So GL_LESS means that, when the incoming fragment's depth
+            is less than the depth from the depth buffer, the incoming fragment is
+            written.</para>
         <para>With the fragment depth being something that is part of a fragment's output, you might
             imagine that this is something you have to compute in a fragment shader. You certainly
             can, but the fragment's depth is normally just the window-space Z coordinate of the
                     for your platform to create a depth buffer if you need to do depth
                     buffering.</para>
             </note>
+            <para>Our new image looks like this:</para>
+            <figure>
+                <title>Depth Buffering</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Depth%20Buffering.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>Which makes a lot more sense. No matter what order we draw the objects in, we get
+                a reasonable result.</para>
+            <para>Let's test our depth buffering a bit more. Let's create a little overlap between
+                the two objects. Change the first offset uniform statement in
+                    <function>display</function> to be this:</para>
+            <programlisting>glUniform3f(offsetUniform, 0.0f, 0.0f, -0.75f);</programlisting>
+            <para>We now get some overlap, but the result is still reasonable:</para>
+            <figure>
+                <title>Mild Overlap</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Depth%20Buffering%20Mild%20Overlap.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>We can even change the line to cause major overlap without incident:</para>
+            <programlisting>glUniform3f(offsetUniform, 0.0f, 0.0f, -1.0f);</programlisting>
+            <para>Which gives us:</para>
+            <figure>
+                <title>Major Overlap</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Depth%20Buffering%20Major%20Overlap.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>No amount of depth sorting will help with <emphasis>that</emphasis>.</para>
+        </section>
+        <section>
+            <title>Fragments and Depth</title>
+            <para>Way back in the <link linkend="tut_00">introduction</link>, we said that part of
+                the fragment's data was the window-space position of the fragment. This is a 3D
+                coordinate; the Z value is naturally what would be written to the depth buffer. We
+                saw <link linkend="FragPosition">later</link> that the built-in input variable
+                    <varname>gl_FragCoord</varname> holds this position;
+                    <literal>gl_FragCoord.z</literal> is the window-space depth of the fragment, as
+                generated by OpenGL.</para>
+            <para>Part of the job of the fragment shader is to generate output colors for the output
+                color images. Another part of the job of the fragment shader is to generate the
+                output <emphasis>depth</emphasis> of the fragment.</para>
+            <para>If that's true, then how can we use the same fragment shader as we did before
+                turning on depth buffering? The default behavior of OpenGL is that, if a fragment
+                shader does <emphasis>not</emphasis> write to the output depth, the generated
+                window-space depth is simply taken as the final depth of the fragment.</para>
+            <para>Oh, you could do this manually. We could add the following statement to the
+                    <function>main</function> function of our fragment shader:</para>
+            <programlisting>gl_FragDepth = gl_FragCoord.z;</programlisting>
+            <para>This is, in terms of behavior, a no-op; it does nothing OpenGL wouldn't have done
+                itself. However, in terms of <emphasis>performance</emphasis>, this is a drastic
+                change.</para>
+            <para>The reason fragment shaders aren't required to have this line in all of them is to
+                allow for certain optimizations. If the OpenGL driver can see that you do not set
+                    <varname>gl_FragDepth</varname> anywhere in the fragment shader, then it can
+                dramatically improve performance in certain cases.</para>
+            <para>If the driver knows that the output fragment depth is the same as the generated
+                one, it can do the whole depth test <emphasis>before</emphasis> executing the
+                fragment shader. This is called <glossterm>early depth test</glossterm> or
+                    <glossterm>early-z</glossterm>. This means that it can discard fragments
+                    <emphasis>before</emphasis> wasting precious time executing potentially complex
+                fragment shaders. Indeed, most hardware nowadays has complicated early z culling
+                hardware that can discard multiple fragments with one test.</para>
+            <para>The moment your fragment shader has to write anything to
+                    <varname>gl_FragDepth</varname>, all of those optimizations go away. So
+                generally, you should only write a depth value if you <emphasis>really</emphasis>
+                need it.</para>
         </section>
         <section>
             <title>Depth Precision</title>
             <para>Let us take just the front half of NDC space as an example. In NDC space, this is
                 the range [-1, 0]. In camera space, the exact range depends on the camera zNear and
                 zFar values. In the above example where the camera range is [-1, -3], the range that
-                maps to the front half of NDC space is [-1, -1.5], only a quarter of the range. The
-                equation to compute this for any camera range is pretty simple:</para>
-            <!--TODO: Add an image of the equation 2NF / F+N-->
-            <para>Where F and N are both positive values.</para>
-            <para>We can see from this equation that, the larger the difference between N and F, the
-                    <emphasis>smaller</emphasis> the half-space. If the camera range goes from
-                [-500, -1000], then half of NDC space represents the range from [-500, -666.67].
-                This is 33.3% of the camera space range mapping to 50% of the NDC range. However, if
-                the camera range goes from [-1, -1000], fully <emphasis>half</emphasis> of NDC space
-                will represent only [-1, -1.998]; less than 0.1% of the range.</para>
+                maps to the front half of NDC space is [-1, -1.5], only a quarter of the
+                range.</para>
+            <para>The larger the difference between N and F, the <emphasis>smaller</emphasis> the
+                half-space. If the camera range goes from [-500, -1000], then half of NDC space
+                represents the range from [-500, -666.67]. This is 33.3% of the camera space range
+                mapping to 50% of the NDC range. However, if the camera range goes from [-1, -1000],
+                fully <emphasis>half</emphasis> of NDC space will represent only [-1, -1.998]; less
+                than 0.1% of the range.</para>
             <para>This has real consequences for the precision of your depth buffer. Earlier, we
                 said that the depth buffer stores floating-point values. While this is conceptually
                 true, most depth buffers actually use fixed-point values and convert them into
                 triangle will start showing through triangles that are supposed to be farther away.
                 If the camera or these objects are in motion, horrible flickering artifacts can be
                 seen. This is called <glossterm>z-fighting,</glossterm> as multiple objects appear
-                to be fighting each other.</para>
+                to be fighting each other when animated.</para>
             <!--TODO: Show an image of z-fighting.-->
-            <para>Most depth buffers are not 16-bit; these days, the default is 24-bit.
-                Half-precision of a 24-bit is 12-bit, which is not too far from a 16-bit depth
-                buffer in and of itself. If you use a 24-bit depth buffer, it turns out that you
-                lose half precision on a [-1, -1000] camera range at [-1, -891], which is 89% of the
-                range. At a 1:10,000 ratio, you have 45% of the camera range in most of the
-                precision. At 1:100,000 this drops to ~7%, and at 1:1,000,000 it is down to
-                0.8%.</para>
+            <para>Fortunately, the days of 16-bit depth buffers are long over; the modern standard
+                is (and has been for years now) 24-bits of precision. Half-precision of 24-bits is
+                12-bits, which is not too far from a 16-bit depth buffer in and of itself. If you
+                use a 24-bit depth buffer, it turns out that you lose half precision on a [-1,
+                -1000] camera range at [-1, -891], which is 89% of the range. At a 1:10,000 ratio,
+                you have 45% of the camera range in most of the precision. At 1:100,000 this drops
+                to ~7%, and at 1:1,000,000 it is down to 0.8%.</para>
             <para>The most important question to be asked is this: is this bad? Not really.</para>
             <para>Let's take the 1:100,000 example. 7% may not sound like a lot, but this is still a
                 range of [-1, -7573]. If these units are conceptually in inches, then you've got
     <section>
         <?dbhtml filename="Tut05 Boundaries and Clipping.html" ?>
         <title>Boundaries and Clipping</title>
-        <para/>
+        <para>If you recall back to the <link linkend="ShaderPerspective">Perspective projection
+                tutorial</link>, we chose to use some special hardware in the graphics chip to do
+            the final division of the W coordinate, rather than doing the entire perspective
+            projection ourselves. At the time, it was promised that we would see why this is
+            hardware functionality rather than something the shader does.</para>
+        <para>Let us review the full math operation we are computing here:</para>
+        <equation>
+            <title>Perspective Computation</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="PerspectiveFunc.svg" width="300" format="SVG"/>
+                </imageobject>
+            </mediaobject>
+        </equation>
+        <para><literal>R</literal> is the perspective projected position, P is the camera-space
+            position, E<subscript>z</subscript> is the Z-position of the eye relative to the plane
+            (assumed to be -1), and P<subscript>z</subscript> is the camera-space Z position.</para>
+        <para>One question you should always ask when dealing with equations is this: can it divide
+            by zero? And this equation certainly can; if the camera-space Z position of any vertex
+            is ever exactly 0, then we have a problem.</para>
+        <para>This is where clip-space comes in to save the day. See, until we actually
+                <emphasis>do</emphasis> the divide, everything is fine. A 4-dimensional vector that
+            will be divided by the fourth component but hasn't <emphasis>yet</emphasis> is still
+            valid, even if the fourth component is zero. This kind of coordinate system is called a
+                <glossterm>homogeneous coordinate system</glossterm>. It is a way of talking about
+            things that you could not talk about in a normal, 3D coordinate system. Like dividing by
+            zero, which in visual terms refers to coordinates at infinity.</para>
+        <para>This is all nice theory, but we still know that the clip-space positions need to be
+            divided by their W coordinate. So how do we get around this problem?</para>
+        <para>First, we know that a W of zero means that the camera-space Z position of the point
+            was zero as well. We also know that this point <emphasis>must</emphasis> lie outside of
+            the viable region for camera space. That is because of the camera Z range: camera zNear
+                <emphasis>must</emphasis> be strictly greater than zero. Thus any point with a
+            camera Z value of 0 must be in front of the zNear, and therefore outside of the visible
+            world.</para>
+        <para>Since the vertex coordinate is not going to be visible anyway, why bother drawing it
+            and dividing by that pesky 0? Well, because that vertex happens to be part of a
+            triangle, and if part of the triangle is visible, we have to draw it.</para>
+        <para>But we don't have to draw <emphasis>all</emphasis> of it.</para>
+        <para><glossterm>Clipping</glossterm> is the process of taking a triangle and breaking it up
+            into smaller triangles, such that only the part of the original triangle that is within
+            the viewable region remains. This may generate only one triangle, or it may generate
+            multiple triangles.</para>
+        <para>Any vertex attributes associated with that vertex are interpolated (based on the
+            vertex shader's interpolation qualifiers) to determine the relative value of the
+            post-clipping vertex.</para>
+        <para>As you might have guessed, clipping happens in <emphasis>clip space</emphasis>, not
+            NDC space. Hence the name. Since clip-space is a homogeneous coordinate system, we don't
+            have to worry about those pesky zeros. Unfortunately, because homogeneous spaces are not
+            easy to draw, we can't show you what it would look like. But we can show you what it
+            would look like if you clipped in camera space, in 2D:</para>
+        <!--TODO: Create images of clipping, one that creates a single triangle, and one that creates more than one-->
+        <para>To see the results of clipping in action, run the <phrase role="propername">Vertex
+                Clipping</phrase> tutorial. It is the same as the one for depth buffering, except
+            one object has been moved very close to the zNear plane. Close enough that part of it is
+            beyond the zNear and therefore is not part of the viewable area:</para>
+        <figure>
+            <title>Vertex Clipping</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="Vertex%20Clipping.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <section>
+            <title>A Word on Clipping Performance</title>
+            <para>We have phrased the discussion of clipping as a way to avoid dividing by zero for
+                good reason. The OpenGL specification states that clipping must be done against all
+                sides of the viewable region. And it certainly appears that way; if you move objects
+                far enough away that they overlap with zFar, then you won't see the objects.</para>
+            <para>You can also see apparent clipping with objects against the four sides of the view
+                frustum. To see this, you would need to modify the viewport with
+                    <function>glViewport</function>, so that only part of the window is being
+                rendered to. If you move objects to the edge of the viewport, you will find that
+                part of them does not get rendered outside this region.</para>
+            <para>So clipping is happening all the time?</para>
+            <para>Of course not. Clipping takes triangles and breaks them into pieces using
+                4-dimensional homogeneous mathematics. One triangle can be broken up into several;
+                depending on the location of the triangle, you can get quite a few different pieces.
+                The simple act of turning one triangle into several is hard and time
+                consuming.</para>
+            <para>So, if OpenGL states that this must happen, and hardware doesn't do it, then
+                what's going on?</para>
+            <para>If we hadn't told you just now that the hardware doesn't do clipping most of the
+                time, could you tell? No. And that's the point: OpenGL specifies
+                    <emphasis>apparent</emphasis> behavior; the spec doesn't care if you actually do
+                clipping or not. All the spec cares about is that the user can't tell the difference
+                in terms of the output.</para>
+            <para>That's how hardware can get away with the early-z optimization mentioned before;
+                the OpenGL spec says that the depth test must happen after the fragment program
+                executes. But if the fragment shader doesn't modify the depth, then would you be
+                able to tell the difference if it did the depth test before the fragment shader? No;
+                if it passes, it would have passed either way, and the same goes for failing.</para>
+            <para>Instead of clipping, the hardware usually just lets the triangles go through if
+                part of the triangle is within the visible region. It generates fragments from those
+                triangles, and if a fragment is outside of the visible window, it is discarded
+                before any fragment processing takes place.</para>
+            <para>Hardware usually can't do this, however, if any vertex of the triangle has a
+                clip-space W &lt;= zero. In terms of a perspective projection, this means that part
+                of the triangle is fully behind the eye, rather than just behind the camera zNear
+                plane. In these cases, clipping is much more likely to happen.</para>
+            <para>Even so, clipping only happens if the triangle is partially visible; a triangle
+                that is entirely in front of the zNear plane is dropped entirely.</para>
+            <para>In general, you should try to avoid rendering things that will clip against the
+                eye plane (clip-space W &lt;= 0, or camera-space Z >= 0). You don't need to be
+                pedantic about it; long walls and the like are fine. But, particularly for low-end
+                hardware, a lot of clipping can really kill performance.</para>
+        </section>
     </section>
     <section>
         <title>Depth Clamping</title>
         <para>That's all well and good, but this:</para>
-        <!--TODO: Same image from clipped tutorial as before-->
-        <para>This is never a good thing. Sure, it keeps the hardware from dividing by zero, but it
-            looks really bad. It's showing the inside of an object that has no insides. Plus, you
-            can also see that it has no backside (since we're doing face culling); you can see right
-            through to the object behind it.</para>
+        <informalfigure>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="Vertex%20Clipping.png"/>
+                </imageobject>
+            </mediaobject>
+        </informalfigure>
+        <para>This is never a good thing. Sure, it keeps the hardware from dividing by zero, which I
+            guess is important, but it looks really bad. It's showing the inside of an object that
+            has no insides. Plus, you can also see that it has no backside (since we're doing face
+            culling); you can see right through to the object behind it.</para>
         <para>If computer graphics is an elaborate illusion, then clipping utterly
-                <emphasis>shatters</emphasis> this illusion. What can we do about this?</para>
+                <emphasis>shatters</emphasis> this illusion. It's a big, giant hole that screams,
+                    <quote><emphasis>this is fake!</emphasis></quote> as loud as possible to the
+            user. What can we do about this?</para>
         <para>The most common technique is to simply not allow it. That is, know how close objects
             are getting to the near clipping plane (ie: the camera) and don't let them get close
             enough to clip.</para>
-        <para>A more reasonable mechanism is <glossterm>depth clamping</glossterm>.</para>
+        <para>And while this can <quote>function</quote> as a solution, it isn't exactly good. It
+            limits what you can do with objects and so forth.</para>
+        <para>A more reasonable mechanism is <glossterm>depth clamping</glossterm>. What this does
+            is turn off near/far plane clipping altogether. Instead, the depth for these fragments
+            is simply clamped to the minimum or maximum depth value, depending on which plane they
+            clip against.</para>
+        <para>We can see this in the <phrase role="propername">Depth Clamping</phrase> tutorial.
+            This tutorial is identical to the vertex clipping one, except that the
+                <function>keyboard</function> function has changed as follows:</para>
+        <example>
+            <title>Depth Clamping On/Off</title>
+            <programlisting>void keyboard(unsigned char key, int x, int y)
+{
+    static bool bDepthClampingActive = false;
+    switch (key)
+    {
+    case 27:
+        glutLeaveMainLoop();
+        break;
+    case 32:
+        if(bDepthClampingActive)
+            glDisable(GL_DEPTH_CLAMP);
+        else
+            glEnable(GL_DEPTH_CLAMP);
+        
+        bDepthClampingActive = !bDepthClampingActive;
+        break;
+    }
+}</programlisting>
+        </example>
+        <para>When you press the space bar (ASCII code 32), the code will toggle depth clamping,
+            with the
+                <function>glEnable</function>/<function>glDisable</function>(<literal>GL_DEPTH_CLAMP</literal>)
+            calls. It will start with depth clamping off, since that is the OpenGL default.</para>
+        <para>When you run the tutorial, you will see what we saw in the last one; pressing the
+            space bar shows this:</para>
+        <figure>
+            <title>Depth Clamping</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="Depth%20Clamping.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>This looks correct; it appears as if all of our problems are solved.</para>
+        <para>Appearances can be deceiving. Let's see what happens if you move the other object
+            forward, so that the two intersect like in the earlier part of the tutorial:</para>
+        <figure>
+            <title>Depth Clamp With Overlap</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="Double%20Depth%20Clamping.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>Oops. Part of it looks right, just not the part where the depth is being clamped.
+            What's going on?</para>
+        <para>Well, recall what depth clamping does: it clamps fragment depth values that fall
+            outside of the depth range. So depth values smaller than the near depth value become
+            the near depth value, and depth values larger than the far depth value become the far
+            depth value.</para>
+        <para>Therefore, when you go to render the second object, some of the clamped fragments
+            from the first object are already in the depth buffer, at a depth of 0. So an incoming
+            fragment from the new object has a depth of 0, and the corresponding value in the depth
+            buffer is also 0. Since our depth test is <literal>GL_LESS</literal>, the incoming 0 is
+            not less than the depth buffer's 0, so that part of the second object does not get
+            rendered. This is pretty much the opposite of where we started. We could change the
+            test to <literal>GL_LEQUAL</literal>, but that only gets us to
+                <emphasis>exactly</emphasis> where we started.</para>
+        <para>So a word of warning: be careful with depth clamping when you have overlapping objects
+            near the planes. Similar problems happen with the far plane, though backface culling can
+            be a help in some cases.</para>
+        <note>
+            <para>If you're wondering what happens when you have depth clamping and a clip-space W
+                &lt;= 0 and you don't do clipping, then... well, OpenGL doesn't say. At least, it
+                doesn't say specifically. All it says is that clipping against the near and far
+                planes stops and that fragment depth values generated outside of the expected depth
+                range are clamped to that range. No, really, that's <emphasis>all</emphasis> it
+                says.</para>
+        </note>
     </section>
     <section>
         <?dbhtml filename="Tut05 In Review.html" ?>
                 </glossdef>
             </glossentry>
             <glossentry>
+                <glossterm>early depth test, early-z</glossterm>
+                <glossdef>
+                    <para/>
+                </glossdef>
+            </glossentry>
+            <glossentry>
                 <glossterm>z-fighting</glossterm>
                 <glossdef>
                     <para/>
                 </glossdef>
             </glossentry>
             <glossentry>
+                <glossterm>homogeneous coordinate system</glossterm>
+                <glossdef>
+                    <para/>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>clipping</glossterm>
+                <glossdef>
+                    <para/>
+                </glossdef>
+            </glossentry>
+            <glossentry>
                 <glossterm>depth clamping</glossterm>
                 <glossdef>
                     <para/>

Documents/Positioning/Vertex Clipping.png

Added
New image

Documents/Tutorial Documents.xpr

             <file name="Basics/Tutorial%2001.xml"/>
             <file name="Basics/Tutorial%2002.xml"/>
         </folder>
+        <folder name="css">
+            <file name="chunked.css"/>
+            <file name="standard.css"/>
+        </folder>
         <folder name="Positioning">
             <file name="Positioning/Tutorial%2003.xml"/>
             <file name="Positioning/Tutorial%2004.xml"/>
             <file name="Positioning/Tutorial%2005.xml"/>
         </folder>
         <file name="Building%20the%20Tutorials.xml"/>
-        <file name="chunked.css"/>
         <file name="cssDoc.txt"/>
         <file name="Outline.xml"/>
-        <file name="standard.css"/>
         <file name="Tutorials.xml"/>
     </projectTree>
 </project>