Commits

Jason McKesson committed 9e0f8fd

Started adjusting Tutorial 5.


Files changed (3)

Documents/Positioning/Tutorial 05.xml

                 <title>Mild Overlap</title>
                 <mediaobject>
                     <imageobject>
-                        <imagedata fileref="Depth%20Buffering%20Mild%20Overlap.png" contentwidth="3in"/>
+                        <imagedata fileref="Depth%20Buffering%20Mild%20Overlap.png"
+                            contentwidth="3in"/>
                     </imageobject>
                 </mediaobject>
             </figure>
                 <title>Major Overlap</title>
                 <mediaobject>
                     <imageobject>
-                        <imagedata fileref="Depth%20Buffering%20Major%20Overlap.png" contentwidth="3in"/>
+                        <imagedata fileref="Depth%20Buffering%20Major%20Overlap.png"
+                            contentwidth="3in"/>
                     </imageobject>
                 </mediaobject>
             </figure>
             <para>No amount of depth sorting will help with <emphasis>that</emphasis>.</para>
-        </section>
-        <section>
-            <title>Fragments and Depth</title>
-            <para>Way back in the <link linkend="tut_00">introduction</link>, we said that part of
-                the fragment's data was the window-space position of the fragment. This is a 3D
-                coordinate; the Z value is naturally what would be written to the depth buffer. We
-                saw <link linkend="FragPosition">later</link> that the built-in input variable
-                    <varname>gl_FragCoord</varname> holds this position;
-                    <literal>gl_FragCoord.z</literal> is the window-space depth of the fragment, as
-                generated by OpenGL.</para>
-            <para>Part of the job of the fragment shader is to generate output colors for the output
-                color images. Another part of the job of the fragment shader is to generate the
-                output <emphasis>depth</emphasis> of the fragment.</para>
-            <para>If that's true, then how can we use the same fragment shader as we did before
-                turning on depth buffering? The default behavior of OpenGL is, if a fragment shader
-                does <emphasis>not</emphasis> write to the output depth, then simply take the
-                generated window-space depth as the final depth of the fragment.</para>
-            <para>Oh, you could do this manually. We could add the following statement to the
-                    <function>main</function> function of our fragment shader:</para>
-            <programlisting>gl_FragDepth = gl_FragCoord.z</programlisting>
-            <para>This is, in terms of behavior a noop; it does nothing OpenGL wouldn't have done
-                itself. However, in terms of <emphasis>performance</emphasis>, this is a drastic
-                change.</para>
-            <para>The reason fragment shaders aren't required to have this line in all of them is to
-                allow for certain optimizations. If the OpenGL driver can see that you do not set
-                    <varname>gl_FragDepth</varname> anywhere in the fragment shader, then it can
-                dramatically improve performance in certain cases.</para>
-            <para>If the driver knows that the output fragment depth is the same as the generated
-                one, it can do the whole depth test <emphasis>before</emphasis> executing the
-                fragment shader. This is called <glossterm>early depth test</glossterm> or
-                    <glossterm>early-z</glossterm>. This means that it can discard fragments
-                    <emphasis>before</emphasis> wasting precious time executing potentially complex
-                fragment shaders. Indeed, most hardware nowadays has complicated early z culling
-                hardware that can discard multiple fragments with one test.</para>
-            <para>The moment your fragment shader has to write anything to
-                    <varname>gl_FragDepth</varname>, all of those optimizations go away. So
-                generally, you should only write a depth value if you <emphasis>really</emphasis>
-                need it.</para>
-        </section>
-        <section>
-            <title>Depth Precision</title>
-            <para>There is one other thing that needs to be discussed with regard to depth buffers:
-                precision.</para>
-            <para>In the previous tutorial, we saw that the transform from camera space to
-                normalized device coordinate (<acronym>NDC</acronym>), in 2D, looked like
-                this:</para>
-            <figure>
-                <title>2D Camera to NDC Space</title>
-                <mediaobject>
-                    <imageobject>
-                        <imagedata fileref="CameraToPerspective.svg" format="SVG" contentwidth="6in"/>
-                    </imageobject>
-                </mediaobject>
-            </figure>
-            <para>This transformation used a special function to calculate the depth, one designed
-                to keep lines linear after performing the perspective divide. While it does do this,
-                it has a number of other effects. In particular, it changes the Z spacing between
-                points.</para>
-            <para>We can see that there is a lot of spacing between the points in NDC space at the
-                bottom (close to the view) and much less at the top (far from the view). The
-                third-nearest point to the viewer in camera space (Z = -1.75) maps to a point well
-                past halfway to the camera in NDC space.</para>
-            <para>Let us take just the front half of NDC space as an example. In NDC space, this is
-                the range [-1, 0]. In camera space, the exact range depends on the camera zNear and
-                zFar values. In the above example where the camera range is [-1, -3], the range that
-                maps to the front half of NDC space is [-1, -1.5], only a quarter of the
-                range.</para>
-            <para>The larger the difference between N and F, the <emphasis>smaller</emphasis> the
-                half-space. If the camera range goes from [-500, -1000], then half of NDC space
-                represents the range from [-500, -666.67]. This is 33.3% of the camera space range
-                mapping to 50% of the NDC range. However, if the camera range goes from [-1, -1000],
-                fully <emphasis>half</emphasis> of NDC space will represent only [-1, -1.998] in
-                camera space; less than 0.1% of the range.</para>
-            <para>This has real consequences for the precision of your depth buffer. Earlier, we
-                said that the depth buffer stores floating-point values. While this is conceptually
-                true, most depth buffers actually use fixed-point values and convert them into
-                floating-point values automatically. If you have a 16-bit depth buffer, you have
-                65536 possible depth values. Half of this is 32768 depth values, equivalent to a
-                15-bit depth buffer.</para>
-            <para>Even so, the difference between 16 bits and 15 bits is not that great. Instead of
-                looking at half of NDC space, let's look at half of the
-                    <emphasis>precision.</emphasis> So, what is the camera-space range at which you
-                lose half of your precision?</para>
-            <para>For a 16-bit depth buffer, half-precision is 8 bits. In fixed-point, if the near
-                value is 0 and the far is 65535 (representing 1.0), then half-precision happens when
-                the first 8 bits are all ones. This value is 65280 (65535 - 255). As a
-                floating-point value, this represents a value of ~0.996. In NDC space, this is a Z
-                value of ~0.992.</para>
-            <para>So what is the camera-space range at which you lose half precision? If the camera
-                depth range is [-500, -1000], then you get the half precision range of [-500, -996],
-                which is over 99% of the camera-space range. What about [-1, -1000]? This comes out
-                to [-1, -200], which is 20% of the range.</para>
-            <para>Before we can assess the consequences of this, we must first discuss what the
-                consequences are for low depth precision. Remember that the depth buffer exists to
-                allow each fragment to have a depth value, such that if an incoming fragment is
-                behind the already existing value, it is not written to the image.</para>
-            <para>If the available precision is too small, then what happens is that part of one
-                triangle will start showing through triangles that are supposed to be farther away.
-                If the camera or these objects are in motion, horrible flickering artifacts can be
-                seen. This is called <glossterm>z-fighting,</glossterm> as multiple objects appear
-                to be fighting each other when animated.</para>
-            <!--TODO: Show an image of z-fighting.-->
-            <para>Fortunately, the days of 16-bit depth buffers are long over; the modern standard
-                is (and has been for years now) 24-bits of precision. Half-precision of 24-bits is
-                12-bits, which is not too far from a 16-bit depth buffer in and of itself. If you
-                use a 24-bit depth buffer, it turns out that you lose half precision on a [-1,
-                -1000] camera range at [-1, -891], which is 89% of the range. At a 1:10,000 ratio,
-                you have 45% of the camera range in most of the precision. At 1:100,000 this drops
-                to ~7%, and at 1:1,000,000 it is down to 0.8%.</para>
-            <para>The most important question to be asked is this: is this bad? Not really.</para>
-            <para>Let's take the 1:100,000 example. 7% may not sound like a lot, but this is still a
-                range of [-1, -7573]. If these units are conceptually in inches, then you've got
-                most of the precision sitting in the first 600+ feet.</para>
-            <para>And let's see what happens if we move the zNear plane forward just
-                    <emphasis>four</emphasis> inches, to 5:100,000. The percentage jumps to almost
-                30%, with half-precision happening at over 29,000 inches; that's a good half-mile.
-                Increase the zNear to a mere 10 inches, and you have the equivalent of 1:10,000
-                again: 45%. 10 inches may seem like a lot, but that's still less than a foot away
-                from the eye. Depending on what you are rendering, this may be a perfectly
-                legitimate trade-off.</para>
-            <para>What this teaches us is that the absolute numbers don't matter: it is the ratio of
-                zNear to zFar that dictates where you lose precision. 0.1:1000 is just as bad as
-                1:10,000. So push the zNear distance forward as far as you can. What happens if you
-                push it too far? That's the next section.</para>
-            <section>
-                <title>Large Camera Depth Ranges</title>
-                <para>You may ask what to do if you really need a wide camera depth range, like
-                    1:4,000,000 or something, where each unit represents an inch or something
-                    equally small.</para>
-                <para>First, it needs to be pointed out that a 24-bit depth buffer only goes from 0
-                    to 16,777,215. Even if the depth values were evenly distributed, you would only
-                    get a resolution of 1/4th of an inch.</para>
-                <para>Second, this range is starting to come perilously close to the issues with
-                        <emphasis>floating-point</emphasis> precision. Yes, this still provides a
-                    lot of precision, but remember: the depth range is for the current view. This
-                    means that your world is probably much larger than this. If you're getting
-                    numbers that large, you may need to start worrying about floating-point
-                    precision error in computing these positions. There are certainly ways around it
-                    (and we will discuss some later), but if you need a camera-space range that
-                    large, you may run into other problems at the same time.</para>
-                <para>Third, most applications render lower-quality models when objects are far
-                    away. This is mainly for the purpose of focusing performance where the user
-                    needs it: the things closest to him. If some of the z-fighting comes from
-                    overlap within a model, a lower-detail model without those overlapping parts can
-                    help reduce z-fighting as well.</para>
-                <para>Fourth, you usually really, <emphasis>really</emphasis> need that precision
-                    up-close. If you think z-fighting looks bad when it happens with a distant
-                    object, imagine how bad it will look if it's up in your face. Even if you could
-                    make the z-values linear, it could cause problems in near objects.</para>
-                <para>Fifth, if you really need a camera range this large, you can play some tricks
-                    with the depth range. But only do this if you actually do get z-fighting; don't
-                    simply do it because you have a large camera range.</para>
-                <para>The camera range defines how the perspective matrix transforms the Z to
-                    clip-space and therefore NDC space. The <emphasis>depth</emphasis> range defines
-                    what part of the [0, 1] range of window coordinates that the NDC depth maps to.
-                    So you can draw the front half of your scene into the [0, 0.5] depth range with
-                    a camera range like [-1, -2,000,000]. Then, you can draw the back half of the
-                    scene in the [0.5, 1] depth range, with a camera range of [-2,000,000,
-                    -4,000,000]. Dividing it in half like this isn't very fair to your front
-                    objects, so it's more likely that you would want to use something like [-1,
-                    -10,000] for the front half and [-10,000, -4,000,000] for the second. Each of
-                    these would still map to half of the depth range.</para>
-                <para>Objects that lie on the border between the split would have to be rendered
-                    into both, just to make sure their depth values show up properly.</para>
-            </section>
+            <tip>
+                <title>Fragments and Depth</title>
+                <para>Way back in the <link linkend="tut_00">introduction</link>, we said that part
+                    of the fragment's data was the window-space position of the fragment. This is a
+                    3D coordinate; the Z value is naturally what would be written to the depth
+                    buffer. We saw <link linkend="FragPosition">later</link> that the built-in input
+                    variable <varname>gl_FragCoord</varname> holds this position;
+                        <literal>gl_FragCoord.z</literal> is the window-space depth of the fragment,
+                    as generated by OpenGL.</para>
+                <para>Part of the job of the fragment shader is to generate output colors for the
+                    output color images. Another part of the job of the fragment shader is to
+                    generate the output <emphasis>depth</emphasis> of the fragment.</para>
+                <para>If that's true, then how can we use the same fragment shader as we did before
+                    turning on depth buffering? The default behavior of OpenGL is that, if a
+                    fragment shader does <emphasis>not</emphasis> write to the output depth, it
+                    simply takes the generated window-space depth as the final depth of the
+                    fragment.</para>
+                <para>Oh, you could do this manually. We could add the following statement to the
+                        <function>main</function> function of our fragment shader:</para>
+                <programlisting>gl_FragDepth = gl_FragCoord.z;</programlisting>
+                <para>This is, in terms of behavior, a no-op; it does nothing OpenGL wouldn't have
+                    done itself. However, in terms of <emphasis>performance</emphasis>, this is a
+                    drastic change.</para>
+                <para>The reason fragment shaders are not required to include this line is to
+                    allow for certain optimizations. If the OpenGL driver can see that you do
+                    not set <varname>gl_FragDepth</varname> anywhere in the fragment shader, then it
+                    can dramatically improve performance in certain cases.</para>
+                <para>If the driver knows that the output fragment depth is the same as the
+                    generated one, it can do the whole depth test <emphasis>before</emphasis>
+                    executing the fragment shader. This is called <glossterm>early depth
+                        test</glossterm> or <glossterm>early-z</glossterm>. This means that it can
+                    discard fragments <emphasis>before</emphasis> wasting precious time executing
+                    potentially complex fragment shaders. Indeed, most GPUs nowadays have
+                    dedicated early-z culling hardware that can discard multiple fragments with a
+                    single test.</para>
+                <para>The moment your fragment shader has to write anything to
+                        <varname>gl_FragDepth</varname>, all of those optimizations go away. So
+                    generally, you should only write a depth value if you
+                        <emphasis>really</emphasis> need it.</para>
+            </tip>
         </section>
     </section>
     <section>
         </note>
     </section>
     <section>
+        <section>
+            <title>Depth Precision</title>
+            <para>There is one other thing that needs to be discussed with regard to depth buffers:
+                precision.</para>
+            <para>In the previous tutorial, we saw that the transform from camera space to
+                normalized device coordinate (<acronym>NDC</acronym>) space, in 2D, looked like
+                this:</para>
+            <figure>
+                <title>2D Camera to NDC Space</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="CameraToPerspective.svg" format="SVG" contentwidth="6in"
+                        />
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>This transformation used a special function to calculate the depth, one designed
+                to keep lines linear after performing the perspective divide. While it does do this,
+                it has a number of other effects. In particular, it changes the Z spacing between
+                points.</para>
+            <para>We can see that there is a lot of spacing between the points in NDC space at the
+                bottom (close to the view) and much less at the top (far from the view). The
+                third-nearest point to the viewer in camera space (Z = -1.75) maps to a point
+                well past the halfway point of NDC space, far from the viewer.</para>
+            <para>Let us take just the front half of NDC space as an example. In NDC space, this is
+                the range [-1, 0]. In camera space, the exact range depends on the camera zNear and
+                zFar values. In the above example where the camera range is [-1, -3], the range that
+                maps to the front half of NDC space is [-1, -1.5], only a quarter of the
+                range.</para>
+            <para>The larger the difference between zNear and zFar, the
+                <emphasis>smaller</emphasis> this front half-space becomes in camera space. If the
+                camera range goes from [-500, -1000], then half of NDC space
+                represents the range from [-500, -666.67]. This is 33.3% of the camera space range
+                mapping to 50% of the NDC range. However, if the camera range goes from [-1, -1000],
+                fully <emphasis>half</emphasis> of NDC space will represent only [-1, -1.998] in
+                camera space; less than 0.1% of the range.</para>
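+            <para>To make this concrete, here is a minimal sketch of that depth function, written
+                in the style of this tutorial's C++ code. The function and parameter names are
+                hypothetical, not part of the tutorial's source:</para>
+            <programlisting>//Maps a camera-space Z (negative) to NDC space [-1, 1], given the
+//positive zNear and zFar distances. Hypothetical helper, for illustration only.
+float CameraZToNdcZ(float fCameraZ, float fzNear, float fzFar)
+{
+	//The same function the perspective matrix applies to Z,
+	//after the perspective divide.
+	return (fzFar + fzNear) / (fzFar - fzNear) +
+		(2.0f * fzFar * fzNear) / ((fzFar - fzNear) * fCameraZ);
+}
+
+//CameraZToNdcZ(-1.5f, 1.0f, 3.0f) is 0.0: camera -1.5 maps to the NDC midpoint.
+//CameraZToNdcZ(-1.998f, 1.0f, 1000.0f) is ~0.0: with a [-1, -1000] range, less
+//than 0.1% of camera space fills the front half of NDC space.</programlisting>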
+            <para>This has real consequences for the precision of your depth buffer. Earlier, we
+                said that the depth buffer stores floating-point values. While this is conceptually
+                true, most depth buffers actually use fixed-point values and convert them into
+                floating-point values automatically. If you have a 16-bit depth buffer, you have
+                65536 possible depth values. Half of this is 32768 depth values, equivalent to a
+                15-bit depth buffer.</para>
+            <para>Even so, the difference between 16 bits and 15 bits is not that great. Instead of
+                looking at half of NDC space, let's look at half of the
+                    <emphasis>precision.</emphasis> So, what is the camera-space range at which you
+                lose half of your precision?</para>
+            <para>For a 16-bit depth buffer, half-precision is 8 bits. In fixed-point, if the near
+                value is 0 and the far is 65535 (representing 1.0), then half-precision happens when
+                the first 8 bits are all ones. This value is 65280 (65535 - 255). As a
+                floating-point value, this represents a value of ~0.996. In NDC space, this is a Z
+                value of ~0.992.</para>
+            <para>So what is the camera-space range at which you lose half precision? If the camera
+                depth range is [-500, -1000], then you get the half precision range of [-500, -996],
+                which is over 99% of the camera-space range. What about [-1, -1000]? This comes out
+                to [-1, -200], which is 20% of the range.</para>
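+            <para>These ranges follow from inverting that mapping. As a sketch (again with
+                hypothetical names), the camera-space Z at a given window-space depth is:</para>
+            <programlisting>//Maps a window-space depth in [0, 1] back to a (negative) camera-space Z,
+//given the positive zNear and zFar distances. Hypothetical helper.
+float WindowZToCameraZ(float fWindowZ, float fzNear, float fzFar)
+{
+	float fNdcZ = (2.0f * fWindowZ) - 1.0f;	//Window [0, 1] back to NDC [-1, 1].
+	return (2.0f * fzFar * fzNear) /
+		(((fzFar - fzNear) * fNdcZ) - (fzFar + fzNear));
+}
+
+//WindowZToCameraZ(0.996f, 500.0f, 1000.0f) is ~-996: over 99% of the range.
+//WindowZToCameraZ(0.996f, 1.0f, 1000.0f) is ~-200: only 20% of the range.</programlisting>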
+            <para>Before we can assess how bad this is, we must first discuss the consequences of
+                low depth precision. Remember that the depth buffer exists to
+                allow each fragment to have a depth value, such that if an incoming fragment is
+                behind the already existing value, it is not written to the image.</para>
+            <para>If the available precision is too small, then what happens is that part of one
+                triangle will start showing through triangles that are supposed to be farther away.
+                If the camera or these objects are in motion, horrible flickering artifacts can be
+                seen. This is called <glossterm>z-fighting</glossterm>, as multiple objects appear
+                to be fighting each other when animated.</para>
+            <!--TODO: Show an image of z-fighting.-->
+            <para>Fortunately, the days of 16-bit depth buffers are long over; the modern standard
+                is (and has been for years now) 24 bits of precision. Half-precision of 24 bits is
+                12 bits, which is not too far from a 16-bit depth buffer in and of itself. If you
+                use a 24-bit depth buffer, it turns out that you lose half precision on a [-1,
+                -1000] camera range at [-1, -891], which is 89% of the range. At a 1:10,000 ratio,
+                you have 45% of the camera range in most of the precision. At 1:100,000 this drops
+                to ~7%, and at 1:1,000,000 it is down to 0.8%.</para>
+            <para>The most important question to be asked is this: is this bad? Not really.</para>
+            <para>Let's take the 1:100,000 example. 7% may not sound like a lot, but this is still a
+                range of [-1, -7573]. If these units are conceptually in inches, then you've got
+                most of the precision sitting in the first 600+ feet.</para>
+            <para>And let's see what happens if we move the zNear plane forward just
+                    <emphasis>four</emphasis> inches, to 5:100,000. The percentage jumps to almost
+                30%, with half-precision happening at over 29,000 inches; that's a good half-mile.
+                Increase the zNear to a mere 10 inches, and you have the equivalent of 1:10,000
+                again: 45%. 10 inches may seem like a lot, but that's still less than a foot away
+                from the eye. Depending on what you are rendering, this may be a perfectly
+                legitimate trade-off.</para>
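+            <para>These trade-offs depend only on the ratio of zNear to zFar, as a quick check
+                with the hypothetical <function>WindowZToCameraZ</function> sketch from earlier
+                shows. Scaling both distances by the same factor scales every depth boundary by
+                that factor, so the percentages never change:</para>
+            <programlisting>//Same ratio, same fraction of the range at the 16-bit half-precision depth:
+float fA = WindowZToCameraZ(0.996f, 0.1f, 1000.0f);	//~-24.4, ~2.4% of the range
+float fB = WindowZToCameraZ(0.996f, 1.0f, 10000.0f);	//~-244, the same ~2.4%</programlisting>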
+            <para>What this teaches us is that the absolute numbers don't matter: it is the ratio of
+                zNear to zFar that dictates where you lose precision. 0.1:1000 is just as bad as
+                1:10,000. So push the zNear distance forward as far as you can. What happens if you
+                push it too far? That's the next section.</para>
+            <section>
+                <title>Large Camera Depth Ranges</title>
+                <para>You may ask what to do if you really need a wide camera depth range, like
+                    1:4,000,000, where each unit represents an inch or something
+                    equally small.</para>
+                <para>First, it needs to be pointed out that a 24-bit depth buffer only goes from 0
+                    to 16,777,215. Even if the depth values were evenly distributed, you would only
+                    get a resolution of 1/4th of an inch.</para>
+                <para>Second, this range is starting to come perilously close to the issues with
+                        <emphasis>floating-point</emphasis> precision. Yes, this still provides a
+                    lot of precision, but remember: the depth range is for the current view. This
+                    means that your world is probably much larger than this. If you're getting
+                    numbers that large, you may need to start worrying about floating-point
+                    precision error in computing these positions. There are certainly ways around it
+                    (and we will discuss some later), but if you need a camera-space range that
+                    large, you may run into other problems at the same time.</para>
+                <para>Third, most applications render lower-quality models when objects are far
+                    away. This is mainly for the purpose of focusing performance where the user
+                    needs it: the things closest to him. If some of the z-fighting comes from
+                    overlap within a model, a lower-detail model without those overlapping parts can
+                    help reduce z-fighting as well.</para>
+                <para>Fourth, you usually really, <emphasis>really</emphasis> need that precision
+                    up-close. If you think z-fighting looks bad when it happens with a distant
+                    object, imagine how bad it will look right up in your face. Even if you could
+                    make the Z values linear, that would simply sacrifice near precision, causing
+                    problems for nearby objects.</para>
+                <para>Fifth, if you really need a camera range this large, you can play some tricks
+                    with the depth range. But only do this if you actually do get z-fighting; don't
+                    simply do it because you have a large camera range.</para>
+                <para>The camera range defines how the perspective matrix transforms the Z to
+                    clip-space and therefore NDC space. The <emphasis>depth</emphasis> range defines
+                    what part of the [0, 1] range of window coordinates the NDC depth maps to.
+                    So you can draw the front half of your scene into the [0, 0.5] depth range with
+                    a camera range like [-1, -2,000,000]. Then, you can draw the back half of the
+                    scene in the [0.5, 1] depth range, with a camera range of [-2,000,000,
+                    -4,000,000]. Dividing it in half like this isn't very fair to your front
+                    objects, so it's more likely that you would want to use something like [-1,
+                    -10,000] for the front half and [-10,000, -4,000,000] for the second. Each of
+                    these would still map to half of the depth range.</para>
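+                <para>A minimal sketch of this two-pass trick, assuming hypothetical
+                    <function>SetPerspectiveMatrix</function> and scene-drawing helpers that stand
+                    in for your own code:</para>
+                <programlisting>glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
+
+//Back half of the scene: camera range [-10,000, -4,000,000],
+//mapped to the [0.5, 1] window depth range.
+glDepthRange(0.5, 1.0);
+SetPerspectiveMatrix(10000.0f, 4000000.0f);
+DrawDistantObjects();
+
+//Front half of the scene: camera range [-1, -10,000],
+//mapped to the [0, 0.5] window depth range.
+glDepthRange(0.0, 0.5);
+SetPerspectiveMatrix(1.0f, 10000.0f);
+DrawNearbyObjects();</programlisting>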
+                <para>Objects that lie on the border between the two halves would have to be
+                    rendered into both, just to make sure their depth values show up
+                    properly.</para>
+            </section>
+        </section>
+    </section>
+    <section>
         <?dbhtml filename="Tut05 In Review.html" ?>
         <title>In Review</title>
         <para>In this tutorial, you have learned about the following:</para>
                     the camera.</para>
             </listitem>
             <listitem>
-                <para>Clipping holes can be repaired, so long as there is no overlap, by activating
-                    depth clamping.</para>
+                <para>Clipping holes can be repaired by activating depth clamping, so long as there
+                    is no overlap.</para>
+            </listitem>
+            <listitem>
+                <para>Depth buffers have a finite precision, and this can cause z-fighting.
+                    Z-fighting can be repaired by moving the camera zNear forward, or by moving
+                    objects farther apart.</para>
             </listitem>
         </itemizedlist>
         <section>

Documents/cssDoc.txt

     div.example-contents: Stores everything but the title of the example.
     br.example-break
     
+    div.note: A short note.
+    div.tip: A longer note.
+    

Tut 05 Objects in Depth/DepthFighting.cpp

 const int numberOfVertices = 8;
 
 #define GREEN_COLOR 0.0f, 1.0f, 0.0f, 1.0f
-#define BLUE_COLOR 	0.0f, 0.5f, 0.0f, 1.0f
+#define BLUE_COLOR 	0.0f, 0.0f, 1.0f, 1.0f
 #define RED_COLOR 1.0f, 0.0f, 0.0f, 1.0f
 
 const float Z_OFFSET = 0.5f;
 const float vertexData[] = {
 	//Front face positions
 	-400.0f,		 400.0f,			0.0f,
-	 400.0f,		 400.0f,			1.0f,
-	 400.0f,		-400.0f,			1.0f,
+	 400.0f,		 400.0f,			0.0f,
+	 400.0f,		-400.0f,			0.0f,
 	-400.0f,		-400.0f,			0.0f,
 
 	//Rear face positions
 	-200.0f,		 600.0f,			-Z_OFFSET,
-	 600.0f,		 600.0f,			1.0f - Z_OFFSET,
-	 600.0f,		-200.0f,			1.0f - Z_OFFSET,
+	 600.0f,		 600.0f,			0.0f - Z_OFFSET,
+	 600.0f,		-200.0f,			0.0f - Z_OFFSET,
 	-200.0f,		-200.0f,			-Z_OFFSET,
 
 	//Front face colors.
 
 	float fCurrTimeThroughLoop = fmodf(fElapsedTime, fLoopDuration);
 
-	return cosf(fCurrTimeThroughLoop * fScale) * 300.0f - 2900.0f;
+	float fRet = cosf(fCurrTimeThroughLoop * fScale) * 500.0f - 2700.0f;
+
+	return fRet;
 }
 
 
 	case 27:
 		glutLeaveMainLoop();
 		break;
+	case 32: //spacebar: print the current Z offset, for debugging.
+		{
+			float fValue = CalcZOffset();
+			printf("%f\n", fValue);
+		}
+		break;
 	}
 }