<?xml version="1.0" encoding="UTF-8"?>
<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
<appendix xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <?dbhtml filename="Further Study.html" ?>
    <title>Further Study</title>
    <para>This book provides a firm foundation for you to get started in your adventures as a
        graphics programmer. However, it ultimately cannot cover everything. What follows will be a
        general overview of other topics that you should investigate now that you have a general
        understanding of how graphics work.</para>
    <section>
        <?dbhtml filename="Further Study Debugging.html" ?>
        <title>Debugging</title>
        <para>This book provides functioning code to solve various problems and implement a variety
            of effects. However, it does not talk about how to get code from a non-working state
            into a working one. That is, debugging.</para>
        <para>Debugging OpenGL code is very difficult. Frequently when there is a bug in graphics
            code, the result is a massively unhelpful blank screen. If the problem is localized to a
            single shader or state used to render an object, the result is a black object or general
            garbage. Compounding this problem is the fact that OpenGL has a lot of global state. One
            of the reasons this book will often bind objects, do something with them, and then
            unbind them, is to reduce the number of state dependencies. It ensures that every object
            is rendered with a specific program, a set of textures, a certain VAO, etc. It may be
            slightly slower to do this, but for a simple application, getting it working is more
            important.</para>
        <para>Debugging shaders is even more problematic; there are no breakpoints or watches you
            can put on GLSL shaders. Fragment shaders offer the possibility of
                <function>printf</function>-style debugging: one can always write some values to the
            framebuffer and see something. Vertex or other shader stages require passing their data
            through another stage before the outcome can be seen. And even then, it is an
            interpolated version.</para>
        <para>Because of the difficulty in debugging, the general tactics for doing so revolve
            around bug prevention rather than bug finding. Therefore, the most important tactic for
            debugging is this: always start with working code. Begin development with something that
            actually functions, even if it is not drawing what you ultimately intend for it to. Once
            you have working code, you can change it to render what you need it to.</para>
        <para>The second tip is to minimize the amount of code/data that could be causing any
            problem you encounter. This means making small changes in a working application and
            immediately testing to see if they work or not. If they do not, then it must be because
            of those small changes. If you make big changes, then the size of the code/data you have
            to look through is much larger. The more code you have to debug, the harder it is to do
            so effectively.</para>
        <para>Along with the last tip, it helps to use a distributed version control system and check in
            your code often, preferably after each small change. This will allow you to revert any
            changes that do not work, as well as see the actual differences between the last working
            version and the now non-functional version. This will save you from inadvertent
            keystrokes and the like.</para>
        <para>The next step is to avail yourself of debugging tools, where they exist for your
            platform(s) of interest. OpenGL itself can help here. The OpenGL specification defines
            what functions should do in the case of malformed input. Specifically, they will place
            errors into the OpenGL error queue. After every OpenGL function call, you may want to
            check to see if an error has been added to the queue. This is done with code as
            follows:</para>
        <programlisting language="cpp">for(GLenum currError = glGetError(); currError != GL_NO_ERROR; currError = glGetError())
{
  //Do something with `currError`.
}</programlisting>
        <para>It would be very tedious to put this after every function. But since the errors are
            all stored in a queue, they are not associated with the actual function that caused the
            error. If <function>glGetError</function> is not called frequently, it becomes very
            difficult to know where any particular error came from. Therefore, there is a special
            OpenGL extension specifically for aiding in debugging: ARB_debug_output. This extension
            is only available when creating an OpenGL context with a special debug flag. The
            framework used here automatically uses this extension in debug builds.</para>
        <para>The debug output extension allows the user to register a function that will be called
            whenever errors happen. It can also be called in less erroneous circumstances, such as
            for warnings; this depends on the OpenGL implementation. When set up for synchronous
            error messages, the callback will be called before the function that created the error
            returns. So it is possible to set a breakpoint inside the callback function and see
            exactly which function caused the error.</para>
        <para>Setting up the callback is somewhat complex. The Unofficial OpenGL SDK offers, as part
            of its GL Utilities sub-library, a registration function for setting up the debug output
            properly. Also, the framework for this book's code offers similar functionality. It is
            located in <filename>framework/framework.cpp</filename>.</para>
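        <para>For illustration only, a minimal registration might look something like the following
            sketch. It assumes an extension loader such as GLEW that exposes the ARB_debug_output
            entry points and a context created with the debug flag; the exact callback typedef can
            differ slightly between loaders.</para>
        <programlisting language="cpp"><![CDATA[#include <cstdio>
#include <GL/glew.h>

//Called by OpenGL whenever it generates a message.
static void APIENTRY DebugCallback(GLenum source, GLenum type, GLuint id,
    GLenum severity, GLsizei length, const GLchar *message,
    const GLvoid *userParam)
{
  //A breakpoint placed here will stop on the offending call, so long as
  //synchronous output is enabled.
  std::printf("GL debug message: %s\n", message);
}

void RegisterDebugOutput()
{
  if(glewIsSupported("GL_ARB_debug_output"))
  {
    glDebugMessageCallbackARB(DebugCallback, NULL);
    //Make the callback fire before the erroneous function returns.
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
  }
}]]></programlisting>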
        <para>There are alternatives for catching OpenGL errors. The utility glIntercept (it works
            on Windows and Linux, though the Linux port is somewhat unreliable) is capable of hooking into
            OpenGL without any code modifications and tracking every OpenGL function call. It can
            dump every OpenGL call to a file, along with the parameters passed to it. This is very useful
            for debugging, as it tells you more about what may have caused a particular error or visual
            glitch. It also automatically checks for errors after every function call, alleviating
            the need to do so manually. Any errors are logged with the function that produced
            them.</para>
        <para>The gDebugger tool, Windows only, offers similar features, though unlike glIntercept,
            it is not open-source.</para>
    </section>
    <section>
        <?dbhtml filename="Further Study Topics.html" ?>
        <title>Topics of Interest</title>
        <para>This book should provide a firm foundation for understanding graphics development. But
            there are many subjects that are not covered in this book which are also important in
            rendering. Here is a list of topics that you should investigate, with a quick
            introduction to the basic concepts.</para>
        <para>This list is not intended to be a comprehensive tour of all interesting graphical
            effects. It is simply an introduction to a few concepts that you should spend some time
            investigating. There may be others not on this list that are worthy of your time.</para>
        <formalpara>
            <title>Vertex Weighting</title>
            <para>All of our meshes have had fairly simple linear transformations applied to them
                (outside of the perspective projection). However, the mesh for a human or human-like
                character needs to be able to deform based on animations. The transformations for the
                upper arm and the lower arm are different, but they both affect vertices at the
                elbow in some way.</para>
        </formalpara>
        <para>The system for dealing with this is called vertex weighting or skinning (note:
                <quote>skinning</quote>, as a term, has also been applied to mapping a texture on an
            object. Be aware of that when doing Internet searches). A character is made of a
            hierarchy of transformations; each transform is called a bone. Vertices are weighted to
            particular bones. Where it gets interesting is that a vertex can be weighted to
            multiple bones. This means that the vertex's final position is determined by a weighted
            combination of two (or more) transforms.</para>
        <para>Vertex shaders generally do this by taking an array of matrices as a uniform block.
            Each matrix is a bone. Each vertex contains a <type>vec4</type> which holds up to 4
            indices into the bone matrix array, and another <type>vec4</type> that holds the weights
            to use with the corresponding bones. The vertex's position is transformed by each of the
            four matrices, and the results are blended together according to those weights.</para>
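        <para>As a rough illustration of the blend itself (not code from this book), here is a
            CPU-side sketch of what such a vertex shader computes. All of the names and the data
            layout are assumptions made for the example.</para>
        <programlisting language="cpp"><![CDATA[#include <vector>
#include <glm/glm.hpp>

//Blends a position by up to four bones. boneMatrices holds one matrix per
//bone, boneIndices selects four of them, and boneWeights should sum to 1.0.
glm::vec4 SkinPosition(const glm::vec4 &position,
    const std::vector<glm::mat4> &boneMatrices,
    const glm::ivec4 &boneIndices,
    const glm::vec4 &boneWeights)
{
  glm::vec4 result(0.0f);
  for(int i = 0; i < 4; ++i)
    result += boneWeights[i] * (boneMatrices[boneIndices[i]] * position);
  return result;
}]]></programlisting>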
        <para>This process is made more complicated by normals and the tangent-space basis necessary
            for bump mapping. And it is complicated even further by a technique called dual
            quaternion skinning. This is done primarily to avoid issues with certain bones rotating
            relative to one another. It prevents vertices from pinching inwards when the wrist bone
            is rotated 180 degrees from the forearm.</para>
        <formalpara>
            <title>BRDFs</title>
            <para>The term Bidirectional Reflectance Distribution Function (<acronym>BRDF</acronym>)
                refers to a special kind of function. It is a function of two directions: the
                direction towards the incident light and the direction towards the viewer, both of
                which are specified relative to the surface normal. This last part makes the BRDF
                independent of the surface normal, as it is an implicit parameter in the equation.
                The output of the BRDF is the percentage of light from the light source that is
                reflected along the view direction. Thus, the output of the BRDF, when multiplied by
                the incident light intensity, produces the reflected light intensity towards the
                viewer.</para>
        </formalpara>
        <para>By all rights, this sounds like a lighting equation. And it is. Indeed, every lighting
            equation in this book can be expressed in the form of a BRDF. One of the things that
            make BRDFs as a class of equations interesting is that you can actually take a physical
            object into a lab, perform a series of tests on it, and produce a BRDF table out of
            them. This BRDF table, typically expressed as a texture, can then be directly used by a
            shader to show how a surface in the real world actually behaves under lighting
            conditions. This can provide much more accurate results than using lighting models as we
            have done here.</para>
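        <para>As a deliberately simple example, a perfectly diffuse surface has a constant BRDF:
            its albedo divided by pi. The following sketch, with all names assumed for
            illustration, shows how such a BRDF would be used to compute the light reflected
            towards the viewer.</para>
        <programlisting language="cpp"><![CDATA[#include <glm/glm.hpp>

//The simplest BRDF: a Lambertian diffuse surface reflects incoming light
//equally in all directions, so the function is the constant albedo / pi.
glm::vec3 LambertianBRDF(const glm::vec3 &albedo)
{
  return albedo / 3.14159265f;
}

//The reflected intensity towards the viewer is the BRDF value, times the
//incident intensity, times the cosine of the angle of incidence.
glm::vec3 ReflectedLight(const glm::vec3 &albedo,
    const glm::vec3 &lightIntensity,
    const glm::vec3 &surfaceNormal,
    const glm::vec3 &dirToLight)
{
  float cosIncidence = glm::clamp(glm::dot(surfaceNormal, dirToLight), 0.0f, 1.0f);
  return LambertianBRDF(albedo) * lightIntensity * cosIncidence;
}]]></programlisting>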
        <formalpara>
            <title>Scalable Alpha Testing</title>
            <para>We have seen how alpha-test works via <literal>discard</literal>: a fragment is
                culled if its alpha is beneath a certain threshold. However, when the texture
                providing that alpha is magnified, an unfortunate stair-step effect can appear along
                the border between the culled and unculled parts. It is possible to avoid these
                artifacts if one preprocesses the texture correctly.</para>
        </formalpara>
        <para>Valve Software's Chris Green wrote a paper entitled <citetitle>Improved Alpha-Tested
                Magnification for Vector Textures and Special Effects</citetitle>. This paper
            describes a way to take a high-resolution version of the alpha and convert it into a
            distance field. Since distances interpolate much better across a spatial domain like an image,
            using distance-based culling instead of edge-based culling produces a much smoother
            result even with a heavily magnified image.</para>
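        <para>For illustration only, a brute-force version of that preprocessing step might look
            like the following. All of the names are assumptions, and real tools use far faster
            algorithms than this exhaustive search.</para>
        <programlisting language="cpp"><![CDATA[#include <algorithm>
#include <cfloat>
#include <cmath>
#include <vector>

//Builds a signed distance field from a high-resolution binary coverage mask.
//For each texel, find the distance to the nearest texel of the opposite
//state: positive inside the shape, negative outside.
std::vector<float> BuildDistanceField(const std::vector<bool> &inside,
    int width, int height)
{
  std::vector<float> field(width * height);
  for(int y = 0; y < height; ++y)
  {
    for(int x = 0; x < width; ++x)
    {
      const bool self = inside[y * width + x];
      float nearest = FLT_MAX;
      for(int v = 0; v < height; ++v)
      {
        for(int u = 0; u < width; ++u)
        {
          if(inside[v * width + u] != self)
          {
            const float dx = float(u - x), dy = float(v - y);
            nearest = std::min(nearest, std::sqrt(dx*dx + dy*dy));
          }
        }
      }
      field[y * width + x] = self ? nearest : -nearest;
    }
  }
  return field;
}]]></programlisting>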
        <para>The distance field can also be used for other effects, like drawing outlines around
            objects or adding drop shadows. And the best part is that it is a very inexpensive technique to
            implement. It requires some up-front preprocessing, but what you get in the end is quite
            powerful and very performance-friendly.</para>
        <formalpara>
            <title>Screen-Space Ambient Occlusion</title>
            <para>One of the many difficult issues when doing rasterization-based rendering is
                dealing with interreflection. That is, light reflected from one object that reflects
                off of another. We covered this by providing a single ambient light as something of
                a hack. A useful one, but a hack nonetheless.</para>
        </formalpara>
        <para>Screen-space ambient occlusion (<acronym>SSAO</acronym>) is the term given to a hacky
            modification of this already hacky concept. The idea works like this. If an object has
            an interior corner, then the amount of interreflected light for the pixels around that
            interior corner will be less than the general level of interreflection. This is a
            generally true statement. What SSAO does is find all of those corners, in screen-space,
            and decrease the ambient light intensity for them proportionately.</para>
        <para>Doing this in screen space requires access to the screen space depth for each pixel.
            So it combines very nicely with deferred rendering techniques. Indeed, it can simply be
            folded directly into the ambient lighting pass of deferred rendering, though getting it
            to perform reasonably fast is the biggest challenge. But the results can look good
            enough to be worth the effort.</para>
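        <para>To make the idea more concrete, here is a very crude sketch of one way an occlusion
            factor could be estimated from the depth buffer. All of the names are assumptions, and
            production implementations use far more sophisticated sampling than this.</para>
        <programlisting language="cpp"><![CDATA[#include <vector>

//Counts how many of a pixel's neighbors in the depth buffer are closer to the
//viewer than the pixel itself. Many closer neighbors suggest an interior
//corner, so the ambient term should be darkened. Returns 1.0 for a fully lit
//ambient term, falling towards 0.0 with heavier occlusion.
float AmbientFactor(const std::vector<float> &depth, int width, int height,
    int x, int y, int radius)
{
  const float center = depth[y * width + x];
  const float bias = 0.001f;  //avoids counting the surface itself
  int occluders = 0;
  int samples = 0;
  for(int dy = -radius; dy <= radius; ++dy)
  {
    for(int dx = -radius; dx <= radius; ++dx)
    {
      const int sx = x + dx, sy = y + dy;
      if(sx < 0 || sy < 0 || sx >= width || sy >= height)
        continue;
      ++samples;
      if(depth[sy * width + sx] < center - bias)
        ++occluders;
    }
  }
  return 1.0f - float(occluders) / float(samples);
}]]></programlisting>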
        <formalpara>
            <title>Light Scattering</title>
            <para>When light passes through the atmosphere, it can be absorbed and reflected by the
                atmosphere itself. After all, this is why the sky is blue: the atmosphere scatters
                the shorter, bluer wavelengths of sunlight much more strongly than the longer
                ones. Clouds are also a form
                of this: light that hits the water vapor that comprises clouds is reflected around
                and scattered. Thin clouds appear white because much of the light still makes it
                through. Thick clouds appear dark because they scatter and absorb so much light that
                not much passes through them.</para>
        </formalpara>
        <para>All of these are atmospheric light scattering effects. The most common in real-time
            scenarios is fog, which, meteorologically speaking, is simply a low-lying cloud. Ground
            fog is commonly approximated in graphics by applying a change to the intensity of the
            light reflected from a surface towards the viewer. The farther the light travels, the
            more of it is absorbed and reflected, converting it into the fog's color. So objects
            that are extremely distant from the viewer would be indistinguishable from the fog
            itself. The thickness of the fog is based on the distance light has to travel before it
            becomes just more fog.</para>
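        <para>One common distance-based approximation, sketched below with assumed names, simply
            blends the surface's reflected color towards the fog color based on how far that light
            has to travel to reach the viewer.</para>
        <programlisting language="cpp"><![CDATA[#include <cmath>
#include <glm/glm.hpp>

//Exponential fog: the farther the surface is from the viewer, the more of its
//color is replaced by the fog's color. fogDensity controls the thickness.
glm::vec3 ApplyFog(const glm::vec3 &surfaceColor, const glm::vec3 &fogColor,
    float distanceToViewer, float fogDensity)
{
  const float visibility = std::exp(-fogDensity * distanceToViewer);
  return glm::mix(fogColor, surfaceColor, visibility);
}]]></programlisting>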
        <para>Fog can also be volumetric, localized in a specific region in space. This is often
            done to create the effect of a room full of steam, smoke, or other particulate aerosols.
            Volumetric fog is much more complex to implement than distance-based fog. This is
            complicated even more by objects that have to move through the fog region.</para>
        <para>Fog systems deal with the light reflected from a surface to the viewer. Generalized
            light scattering systems deal with light from a light source that is scattered through
            fog. Think about car headlights in a fog: you can see the beam reflecting off of the fog
            itself. That is an entirely different can of worms and a general implementation is very
            difficult to pull off. Specific implementations, sometimes called <quote>God
                rays</quote> for the effect of strong sunlight on dust particles in a dark room, can
            provide some form of this. But they generally have to be special-cased for every
            occurrence, rather than being a generalized technique that can be applied broadly.</para>
        <formalpara>
            <title>Non-Photorealistic Rendering</title>
            <para>Talking about non-photorealistic rendering (<acronym>NPR</acronym>) as one thing
                is like talking about non-elephant biology as one thing. Photorealism may have the
                majority of the research effort in it, but the depth of non-photorealistic
                possibilities with modern hardware is extensive.</para>
        </formalpara>
        <para>These techniques often extend beyond mere rendering, from how textures are created and
            what they store, to exaggerated models, to various other things. Once you leave the
            comfort of approximately realistic lighting models, all bets are off.</para>
        <para>In terms of just the rendering part, the most well-known NPR technique is probably
            cartoon rendering, also known as cel shading. The idea with realistic lighting is to
            light a curved object so that it appears curved. With cel shading, the idea is often to
            light a curved object so that it appears <emphasis>flat</emphasis>. Or at least, so that
            it approximates one of the many different styles of cel animation, some of which are
            more flat than others. This generally means that light has only a few intensities: on,
            perhaps a slightly dimmer on, and off. This creates a sharp highlight edge on the model,
            which can give the appearance of curvature without a full gradient of intensity.</para>
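        <para>A sketch of that quantization, with thresholds and names chosen purely for
            illustration, could look like this.</para>
        <programlisting language="cpp"><![CDATA[#include <glm/glm.hpp>

//Quantizes the diffuse term into a few discrete bands instead of a smooth
//gradient, producing the flat, cel-shaded look.
float CelIntensity(const glm::vec3 &surfaceNormal, const glm::vec3 &dirToLight)
{
  const float cosIncidence = glm::clamp(glm::dot(surfaceNormal, dirToLight), 0.0f, 1.0f);
  if(cosIncidence > 0.8f)
    return 1.0f;    //fully lit
  if(cosIncidence > 0.3f)
    return 0.6f;    //slightly dimmer
  return 0.15f;     //"off", with a touch of ambient
}]]></programlisting>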
        <para>Coupled with cartoon rendering is some form of outline rendering. This is a bit more
            difficult to pull off in an aesthetically pleasing way. When an artist is drawing cel
            animation, they have the ability to fudge things in arbitrary ways to achieve the best
            result. Computers have to use an algorithm, which is more likely to be a compromise than
            a perfect solution for every case. What looks good for outlines in one case may not work
            in another. So testing the various outlining techniques is vital for pulling off a
            convincing effect.</para>
        <para>Other NPR techniques include drawing objects that look like pencil sketches, which
            requires more texture work than rendering-system work. Other techniques find ways to make what could
            have been a photorealistic rendering look like an oil painting of some form, or in some
            cases, the glossy colors of a comic book. And so on. NPR is limited only by the graphics
            programmer's imagination. And the cleverness of said programmer to find a way to make it
            work, of course.</para>
    </section>
</appendix>