Jason McKesson avatar Jason McKesson committed 5851bf4

Reorganizing document files. Part labels are now first-class.

Comments (0)

Files changed (12)

Documents/Basics/Core Graphics.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<article xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="core_graphics">
+</article>

Documents/Basics/Tutorial 00.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<chapter xml:id="tut_00" xmlns="http://docbook.org/ns/docbook"
+    xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"
+    version="5.0">
+    <title>Introduction</title>
+    <para>Unlike most of the tutorials, this tutorial is purely text. There is no source code or
+        project associated with this tutorial.</para>
+    <para>Here, we will be discussing graphical rendering theory and OpenGL. This serves as a primer
+        to the rest of the tutorials.</para>
+    <section>
+        <title>Graphics and Rendering</title>
+        <para>These tutorials are intended for users with any level of graphics
+            knowledge, including none at all. As such, there is some basic background
+            information you need to know before we can start looking at actual OpenGL
+            code.</para>
+        <para>Everything you see on your screen, even the text you are reading right now (assuming
+            you are reading this on an electronic display device, rather than a printout) is simply
+            a two-dimensional array of pixels. If you take a screenshot of something on your screen,
+            and blow it up, it will look very blocky.</para>
+        <!--TODO: Add an image of blocky pixels here.-->
+        <para>Each of these blocks is a <glossterm>pixel</glossterm>. The word <quote>pixel</quote>
+            is derived from the term <quote><acronym>Pic</acronym>ture
+                <acronym>El</acronym>ement</quote>. Every pixel on your screen has a particular
+            color. A two-dimensional array of pixels is called an
+            <glossterm>image</glossterm>.</para>
+        <para>The purpose of graphics of any kind is therefore to determine what color to put in
+            what pixels. This determination is what makes text look like text, windows look like
+            windows, and so forth.</para>
+        <para>Since all graphics are just a two-dimensional array of pixels, how does 3D
+            work? 3D graphics is a system of producing colors for pixels that convince
+            you that the scene you are looking at is a 3D world rather than a 2D image.
+            The process of converting a 3D world into a 2D image of that world is called
+            <glossterm>rendering.</glossterm></para>
+        <para>There are several methods of rendering a 3D world. The process used by real-time
+            graphics hardware, such as that found in your computer, involves a very great deal of
+            fakery. This process is called <glossterm>rasterization,</glossterm> and a rendering
+            system that uses rasterization is called a <glossterm>rasterizer.</glossterm></para>
+        <para>In rasterizers, all objects that you see are empty shells. There are techniques that
+            are used to allow you to cut open these empty shells, but this simply replaces part of
+            the shell with another shell that shows what the inside looks like. Everything is a
+            shell.</para>
+        <para>All of these shells are made of triangles. Even surfaces that appear to be round are
+            merely triangles if you look closely enough. There are techniques that generate more
+            triangles for objects that appear closer or larger, so that the viewer can almost never
+            see the faceted silhouette of the object. But they are always made of triangles.</para>
+        <note>
+            <para>Some rasterizers use planar quadrilaterals: four-sided objects, where
+                all of the points lie in the same plane. One of the reasons that graphics
+                hardware always uses triangles is that the three points of a triangle are
+                guaranteed to lie in the same plane.</para>
+        </note>
+        <para>Objects made from triangles are often called <glossterm>geometry</glossterm>, a
+                <glossterm>model</glossterm> or a <glossterm>mesh</glossterm>; these terms are used
+            interchangeably.</para>
+        <para>The process of rasterization has several phases. These phases are ordered into a
+            pipeline, where triangles enter from the top and a 2D image is filled in at the bottom.
+            This is one of the reasons why rasterization is so amenable to hardware acceleration: it
+            operates on each triangle one at a time, in a specific order.</para>
+        <para>This also means that the order in which meshes are submitted to the rasterizer can
+            affect its output.</para>
+        <para>OpenGL is an API for accessing a hardware-based rasterizer. As such, it conforms to
+            the model for rasterizers. A rasterizer receives a sequence of triangles from the user,
+            performs operations on them, and writes pixels based on this triangle data. This is a
+            simplification of how rasterization works in OpenGL, but it is useful for our
+            purposes.</para>
+        <formalpara>
+            <title>Triangles and Vertices</title>
+            <para>Triangles consist of 3 vertices. A vertex consists of a collection of
+                data. For the sake of simplicity (we will expand upon this later), let us
+                say that this data must contain a point in three-dimensional space. Any 3
+                points that are not on the same line create a triangle, so the minimum
+                information needed for a triangle is 3 three-dimensional points.</para>
+        </formalpara>
+        <para>A point in 3D space is defined by 3 numbers or coordinates: an X
+            coordinate, a Y coordinate, and a Z coordinate. These are commonly written in
+            parentheses, as in (X, Y, Z).</para>
+        <section>
+            <title>Rasterization Overview</title>
+            <para>The rasterization pipeline, particularly for modern hardware, is very complex.
+                This is a very simplified overview of this pipeline. It is necessary to have a
+                simple understanding of the pipeline before we look at the details of rendering
+                things with OpenGL. Those details can be overwhelming without a high level
+                overview.</para>
+            <formalpara>
+                <title>Clip Space Transformation</title>
+                <para>The first phase of rasterization is to transform the vertices of each triangle
+                    into a certain volume of space. Everything within this volume will be rendered
+                    to the output image, and everything that falls outside of this region will not
+                    be. This region corresponds to the view of the world that the user wants to
+                    render, to some degree.</para>
+            </formalpara>
+            <para>The volume that the triangle is transformed into is called, in OpenGL
+                parlance, <glossterm>clip space</glossterm>. The position of a vertex of
+                a triangle in clip space is called its <glossterm>clip
+                coordinates.</glossterm></para>
+            <para>Clip coordinates are a little different from regular positions. A
+                position in 3D space has 3 coordinates. A position in clip space has
+                <emphasis>four</emphasis> coordinates. The first three are the usual X,
+                Y, and Z positions; the fourth is called W. This last coordinate defines
+                what the extent of clip space is for this vertex.</para>
+            <para>Clip space can actually be different for different vertices. It is a region of 3D
+                space on the range [-W, W] in each of the X, Y, and Z directions. So vertices with a
+                different W coordinate are in a different clip space cube from other vertices. Since
+                each vertex can have an independent W component, each vertex of a triangle exists in
+                its own clip space.</para>
+            <para>In clip space, the positive X direction is to the right, the positive Y direction
+                is up, and the positive Z direction is away from the viewer.</para>
+            <!--TODO: Add an image of clip space here.-->
+            <para>The process of transforming vertices into clip space is quite arbitrary. OpenGL
+                provides a lot of flexibility in this step. We will cover this step in detail
+                throughout the tutorials.</para>
+            <para>Because clip space is the visible transformed version of the world, any triangles
+                that fall outside of this region are discarded. Any triangles that are partially
+                outside of this region undergo a process called <glossterm>clipping.</glossterm>
+                This breaks the triangle apart into a number of smaller triangles, such that the
+                smaller triangles cover the area within clip space. Hence the name <quote>clip
+                    space.</quote></para>
+            <formalpara>
+                <title>Normalized Coordinates</title>
+                <para>Clip space is interesting, but inconvenient. The extent of this space is
+                    different for each vertex, which makes visualizing a triangle rather difficult.
+                    Therefore, clip space is transformed into a more reasonable coordinate space:
+                        <glossterm>normalized device coordinates</glossterm>.</para>
+            </formalpara>
+            <para>This process is very simple. The X, Y, and Z of each vertex's position
+                are divided by W to get normalized device coordinates. That is all.</para>
+            <para>Therefore, the space of normalized device coordinates is essentially just clip
+                space, except that the range of X, Y and Z are [-1, 1]. The directions are all the
+                same. The division by W is an important part of projecting 3D triangles onto 2D
+                images, but we will cover that in a future tutorial.</para>
+            <formalpara>
+                <title>Window Transformation</title>
+                <para>The next phase of rasterization is to transform the vertices of each triangle
+                    again. This time, they are converted from normalized device coordinates to
+                        <glossterm>window coordinates</glossterm>. As the name suggests, window
+                    coordinates are relative to the window that OpenGL is running within.</para>
+            </formalpara>
+            <para>Even though they refer to the window, they are still three-dimensional
+                coordinates. X goes to the right, Y goes up, and Z goes away, just as in
+                clip space. The only difference is that the bounds of these coordinates
+                depend on the viewable window. It should also be noted that while these
+                are window coordinates, none of the precision is lost. These are not
+                integer coordinates; they are still floating-point values, and thus they
+                have precision beyond that of a single pixel.</para>
+            <para>The bounds for Z are [0, 1], with 0 being the closest and 1 being the farthest.
+                Vertex positions outside of this range are not visible.</para>
+            <para>Note that window coordinates have the bottom-left position as the
+                (0, 0) origin point. This runs counter to what most users expect of
+                window coordinates, where the top-left position is the origin. There are
+                transform tricks you can play to allow you to work in a top-left
+                coordinate space.</para>
+            <para>The full details of this process will be discussed at length as the tutorials
+                progress.</para>
+            <formalpara>
+                <title>Scan Conversion</title>
+                <para>After converting the coordinates of a triangle to window coordinates, the
+                    triangle undergoes a process called <glossterm>scan conversion.</glossterm> This
+                    process takes the triangle and breaks it up based on the arrangement of window
+                    pixels over the output image that the triangle covers.</para>
+            </formalpara>
+            <!--TODO: Show a series of images, starting with a triangle, then overlying it with a pixel grid, followed by one showing
+which pixels get filled in.-->
+            <para>The specifics of which pixels get used and which do not for a triangle
+                are not important. What matters is that if two triangles are perfectly
+                adjacent, such that they share the same input vertex positions along an
+                edge, the output rasterization will have neither holes nor double
+                coverage along that shared edge.</para>
+            <para>The result of scan converting a triangle is a sequence of boxes along the area of
+                the triangle. These boxes are called <glossterm>fragments.</glossterm></para>
+            <para>Each fragment has certain data associated with it. This data contains the 2D
+                location of the fragment in window coordinates, and the Z value of the fragment.
+                This Z value is known as the depth of the fragment. There may be other information
+                that is part of a fragment, and we will expand on that in later tutorials.</para>
+            <formalpara>
+                <title>Fragment Processing</title>
+                <para>This phase takes a fragment from a scan converted triangle and
+                    transforms it into one or more color values and a single depth value.
+                    The order in which fragments from a single triangle are processed is
+                    irrelevant; since a single triangle lies in a single plane, fragments
+                    generated from it cannot possibly overlap. However, the fragments
+                    from another triangle can. Since order is important in a rasterizer,
+                    the fragments from one triangle must all be processed before the
+                    fragments from another triangle.</para>
+            </formalpara>
+            <para>This phase is quite arbitrary. The user of OpenGL has a lot of options of how to
+                decide what color to assign a fragment. We will cover this step in detail throughout
+                the tutorials.</para>
+            <note>
+                <title>Direct3D Note</title>
+                <para>Direct3D prefers to call this stage <quote>pixel processing</quote> or
+                        <quote>pixel shading</quote>. This is a misnomer, because as we will see in
+                    tutorials on antialiasing, multiple fragments from a single triangle can be
+                    combined together to form an output pixel. Also, the fragment has not been
+                    written to the image as of yet. Indeed, this step can conditionally prevent
+                    rendering of a fragment based on arbitrary computations. Thus a
+                        <quote>pixel</quote> in D3D parlance may never actually become a pixel at
+                    all.</para>
+            </note>
+            <formalpara>
+                <title>Fragment Writing</title>
+                <para>After generating one or more colors and a depth value, the fragment is written
+                    to the destination image. This step involves more than simply writing to the
+                    destination image. Combining the color and depth with the colors that are
+                    currently in the image can involve a number of computations. These will be
+                    covered in detail in various tutorials.</para>
+            </formalpara>
+        </section>
+        <section>
+            <title>Colors</title>
+            <para>Previously, a pixel was stated to be an element in a 2D image that has a
+                particular color. A color can be described in many ways.</para>
+            <para>In computer graphics, the usual description of a color is as a series of numbers
+                on the range [0, 1]. Each of the numbers corresponds to the intensity of a
+                particular reference color; thus the final color represented by the series of
+                numbers is a mix of these reference colors.</para>
+            <para>The set of reference colors is called a
+                <glossterm>colorspace</glossterm>. The most common colorspace for a
+                screen is RGB, where the reference colors are Red, Green and Blue.
+                Printed works tend to use CMYK (Cyan, Magenta, Yellow, Black). Since we
+                are dealing with rendering to a screen, and because OpenGL requires it,
+                we will use the RGB colorspace.</para>
+            <para>So a pixel in OpenGL is defined as 3 values on the range [0, 1] that represent a
+                color in the RGB colorspace. This will get extended slightly, as we deal with
+                transparency later.</para>
+        </section>
+        <section>
+            <title>Shader</title>
+            <para>A <glossterm>shader</glossterm> is a program designed to be run on a renderer as
+                part of the rendering operation. Regardless of the kind of rendering system in use,
+                shaders can only be executed at certain points in that rendering process. These
+                    <glossterm>shader stages</glossterm> represent hooks where a user can add
+                arbitrary algorithms to create a specific visual effect.</para>
+            <para>In terms of the rasterization pipeline outlined above, there are
+                several shader stages where arbitrary processing is both economical in
+                performance and offers high utility to the user. For example, the
+                transformation of an incoming vertex to clip space is a useful hook for
+                user-defined code, as is the processing of a fragment into final colors
+                and depth.</para>
+            <para>Shaders for OpenGL are run on the actual rendering hardware. This can
+                often free up valuable CPU time for other tasks, or simply perform
+                operations that would be difficult if not impossible without the
+                flexibility of executing arbitrary code. A downside of this is that
+                shaders must live within certain limits, some of them quite ill-defined,
+                that CPU code does not.</para>
+            <para>There are a number of shading languages available to various APIs. The
+                one used in this tutorial is the primary shading language of OpenGL. It
+                is called, unimaginatively, the OpenGL Shading Language, or
+                <acronym>GLSL</acronym> for short. It looks deceptively like C, but it is
+                very much <emphasis>not</emphasis> C.</para>
+        </section>
+    </section>
+    <section>
+        <title>What is OpenGL</title>
+        <para>Before we can begin looking into writing an OpenGL application, we must first know
+            what it is that we are writing. What exactly is OpenGL?</para>
+        <section>
+            <title>OpenGL as an API</title>
+            <para>OpenGL is usually looked at as an Application Programming Interface
+                    (<acronym>API</acronym>). The OpenGL API has been exposed to a number of
+                languages. But the one that they all ultimately use at their lowest level is the C
+                API.</para>
+            <para>The API, in C, is defined by a number of typedefs, #defined enumerator values, and
+                functions. The typedefs define basic GL types like <type>GLint</type>,
+                    <type>GLfloat</type> and so forth.</para>
+            <para>Complex aggregates like structs are never directly exposed in OpenGL. Any such
+                constructs are hidden behind the API. This makes it easier to expose the OpenGL API
+                to non-C languages without having a complex conversion layer.</para>
+            <para>In C++, if you wanted an object that contained an integer, a float, and a string,
+                you would create it and access it like this:</para>
+            <programlisting>struct Object
+{
+    int anInteger;
+    float aFloat;
+    const char *aString;
+};
+
+//Create the storage for the object.
+Object newObject;
+
+//Put data into the object.
+newObject.anInteger = 5;
+newObject.aFloat = 0.4f;
+newObject.aString = "Some String";
+</programlisting>
+            <para>In OpenGL, you would use an API that looks more like this:</para>
+            <programlisting>//Create the storage for the object
+GLuint objectName;
+glGenObject(1, &amp;objectName);
+
+//Put data into the object.
+glBindObject(GL_MODIFY, objectName);
+glObjectParameteri(GL_MODIFY, GL_OBJECT_AN_INTEGER, 5);
+glObjectParameterf(GL_MODIFY, GL_OBJECT_A_FLOAT, 0.4f);
+glObjectParameters(GL_MODIFY, GL_OBJECT_A_STRING, "Some String");</programlisting>
+            <para>None of these are actual OpenGL commands, of course. This is simply an example of
+                what the interface to such an object would look like.</para>
+            <para>OpenGL owns the storage for all OpenGL objects. Because of this, the user can only
+                access an object by reference. Almost all OpenGL objects are referred to by an
+                unsigned integer (the <type>GLuint</type>). Objects are created by a function of the
+                form <function>glGen*</function>, where * is the type of the object. The first
+                parameter is the number of objects to create, and the second is a
+                    <type>GLuint*</type> array that receives the newly created object names.</para>
+            <para>To modify most objects, they must first be bound to the context. Many objects can
+                be bound to different locations in the context; this allows the same object to be
+                used in different ways. These different locations are called
+                    <glossterm>targets</glossterm>; all objects have a list of valid targets, and
+                some have only one. In the above example, the fictitious target
+                    <quote>GL_MODIFY</quote> is the location where <varname>objectName</varname> is
+                bound.</para>
+            <para>The functions that actually change values within the object are given
+                a target parameter, so that they can modify objects bound to different
+                targets.</para>
+            <para>Note that not all OpenGL objects are as simple as this example, and the
+                functions that change object state do not all follow these naming
+                conventions. Also, exactly what it means to bind an object to the context
+                is explained below.</para>
+        </section>
+        <section>
+            <title>The Structure of OpenGL</title>
+            <para>The OpenGL API is defined as a state machine. Almost all of the OpenGL functions
+                set or retrieve some state in OpenGL. The only functions that do not change state
+                are functions that use the currently set state to cause rendering to happen.</para>
+            <para>You can think of the state machine as a very large struct with a great many
+                different fields. This struct is called the OpenGL <glossterm>context</glossterm>,
+                and each field in the context represents some information necessary for
+                rendering.</para>
+            <para>Objects in OpenGL are thus defined as a list of fields in this struct that can be
+                saved and restored. <glossterm>Binding</glossterm> an object to a target within the
+                context causes the data in this object to replace some of the context's state. Thus
+                after the binding, future function calls that read from or modify this context state
+                will read or modify the state within the object.</para>
+            <para>Objects are usually represented as <type>GLuint</type> integers; these are handles
+                to the actual OpenGL objects. The integer value 0 is special; it acts as the object
+                equivalent of a NULL pointer. Binding object 0 means to unbind the currently bound
+                object. This means that the original context state, the state that was in place
+                before the binding took place, now becomes the context state.</para>
+            <para>Do note that this is simply a model of OpenGL's
+                <emphasis>behavior.</emphasis> This is most certainly
+                <emphasis>not</emphasis> how it is actually implemented.</para>
+        </section>
+        <section>
+            <title>The OpenGL Specification</title>
+            <para>To be technical about it, OpenGL is not an API; it is a specification. A document.
+                The C API is merely one way to implement the spec. The specification defines the
+                initial OpenGL state, what each function does, and what is supposed to happen when
+                you call a rendering function.</para>
+            <para>The specification is written by the OpenGL <glossterm>Architecture
+                    Review Board</glossterm> (<acronym>ARB</acronym>), a group of
+                representatives from companies like Apple, NVIDIA, and AMD (the ATi
+                part), among others. The ARB is part of the <link
+                    xlink:href="http://www.khronos.org/">Khronos Group</link>.</para>
+            <para>The specification is a very complicated and technical document. I do not suggest
+                that the novice graphics programmer read it. If you do however, the most important
+                thing to understand about it is this: it describes <emphasis>results</emphasis>, not
+                implementation. For example, the spec says that clipping of triangles happens before
+                transforming them from clip-space to normalized device coordinate space. Hardware
+                almost certainly does clipping in normalized device coordinate space, simply because
+                all the vertices are in the same space. It doesn't matter to the results, so it is
+                still a valid OpenGL implementation.</para>
+        </section>
+    </section>
+    <glossary>
+        <title>Glossary</title>
+        <glossentry>
+            <glossterm>pixel</glossterm>
+            <glossdef>
+                <para>The smallest division of a digital image. A pixel has a particular color in a
+                    particular colorspace.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>image</glossterm>
+            <glossdef>
+                <para>A two-dimensional array of pixels.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>colorspace</glossterm>
+            <glossdef>
+                <para>The set of reference colors that define a way of representing a color in
+                    computer graphics. All colors are defined relative to a particular
+                    colorspace.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>rendering</glossterm>
+            <glossdef>
+                <para>The process of taking the source 3D world and converting it into a 2D image
+                    that represents a view of that world from a particular angle.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>rasterization</glossterm>
+            <glossdef>
+                <para>A particular rendering method, used to convert a series of 3D triangles into a
+                    2D image.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>geometry, model, mesh</glossterm>
+            <glossdef>
+                <para>A single object in 3D space made of triangles.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>vertex</glossterm>
+            <glossdef>
+                <para>One of the 3 elements that make up a triangle. Vertices can contain
+                    arbitrary data, but among that data is a 3-dimensional position
+                    representing the location of the vertex in 3D space.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>clip space, clip coordinates</glossterm>
+            <glossdef>
+                <para>A region of three-dimensional space into which vertex positions are
+                    transformed. These vertex positions are 4 dimensional quantities. The fourth
+                    component (W) of clip coordinates represents the visible range of clip space for
+                    that vertex. So the X, Y, and Z component of clip coordinates must be between
+                    [-W, W] to be a visible part of the world.</para>
+                <para>In clip space, positive X goes right, positive Y up, and positive Z
+                    away.</para>
+                <para>Clip-space vertices are output by the vertex processing stage of the rendering
+                    pipeline.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>normalized device coordinates</glossterm>
+            <glossdef>
+                <para>These are clip coordinates that have been divided by their fourth component.
+                    This makes this range of space the same for all components. Vertices with
+                    positions on the range [-1, 1] are visible, and other vertices are not.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>window space, window coordinates</glossterm>
+            <glossdef>
+                <para>A region of three-dimensional space that normalized device coordinates are
+                    mapped to. The X and Y positions of vertices in this space are relative to the
+                    destination image. The origin is in the bottom-left, with positive X going right
+                    and positive Y going up. The Z value is a number on the range [0, 1], where 0 is
+                    the closest value and 1 is the farthest. Vertex positions outside of this range
+                    are not visible.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>scan conversion</glossterm>
+            <glossdef>
+                <para>The process of taking a triangle in window space and converting it into a
+                    number of fragments based on projecting it onto the pixels of the output
+                    image.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>fragment</glossterm>
+            <glossdef>
+                <para>A single element of a scan converted triangle. A fragment can contain
+                    arbitrary data, but among that data is a 3-dimensional position, identifying the
+                    location on the triangle in window space where this fragment originates
+                    from.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>shader</glossterm>
+            <glossdef>
+                <para>A program designed to be executed by a renderer, in order to perform some
+                    user-defined operations.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>shader stage</glossterm>
+            <glossdef>
+                <para>A particular place in a rendering pipeline where a shader can be executed to
+                    perform a computation.</para>
+            </glossdef>
+        </glossentry>
+    </glossary>
+</chapter>

Documents/Basics/Tutorial 01.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+    <title>Hello, Triangle!</title>
+    <para>It is traditional for tutorials and introductory books on programming languages to start with
+        a program called <quote>Hello, World!</quote> This program is the simplest code necessary to
+        print the text <quote>Hello, World!</quote> It serves as a good test to see that one's build
+        system is functioning and that one can compile and execute code.</para>
+    <para>Using OpenGL to write actual text is rather involved. In lieu of text, our first tutorial
+        will be drawing a single triangle to the screen.</para>
+    <section>
+        <title>Framework and FreeGLUT</title>
+        <para>The source to this tutorial, found in <filename>Tut1 Hello
+                Triangle/tut1.cpp</filename>, is fairly simple. The project file that builds the
+            final executable actually uses two source files: the tutorial file and a common
+            framework file found in <filename>framework/framework.cpp</filename>. The framework file
+            is where the actual initialization of FreeGLUT is done; it is also where
+            <function>main</function> is. This
+            file simply uses functions defined in the main tutorial file.</para>
+        <para>FreeGLUT is a fairly simple OpenGL initialization system. It creates and manages a
+            single window; all OpenGL commands refer to this window. Because windows in various GUI
+            systems need certain book-keeping done, the way the user interacts with this window is
+            rigidly controlled.</para>
+        <para>The framework file expects 4 functions to be defined: <function>init</function>,
+                <function>display</function>, <function>reshape</function>, and
+                <function>keyboard</function>. The <function>init</function> function is called
+            after OpenGL is initialized. This gives the tutorial file the opportunity to load what
+            it needs into OpenGL before actual rendering takes place. The
+                <function>reshape</function> function is called by FreeGLUT whenever the window is
+            resized. This allows the tutorial to make whatever OpenGL calls are necessary to keep
+            the window's size in sync with OpenGL. The <function>keyboard</function> function is
+            called by FreeGLUT whenever the user presses a key. This gives the tutorial the chance
+            to process some basic user input.</para>
+        <para>The <function>display</function> function is where the most important work happens.
+            FreeGLUT will call this function when it detects that the screen needs to be rendered
+            to.</para>
+    </section>
+    <section>
+        <title>Dissecting Display</title>
+        <para>The <function>display</function> function seems on the surface to be fairly simple.
+            However, the functioning of it is fairly complicated and intertwined with the
+            initialization done in the <function>init</function> function.</para>
+        <example>
+            <title>The <function>display</function> Function</title>
+            <programlisting>glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
+glClear(GL_COLOR_BUFFER_BIT);
+
+glUseProgram(theProgram);
+
+glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+glEnableVertexAttribArray(positionAttrib);
+glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, 0);
+
+glDrawArrays(GL_TRIANGLES, 0, 3);
+
+glDisableVertexAttribArray(positionAttrib);
+glUseProgram(0);
+
+glutSwapBuffers();</programlisting>
+        </example>
+        <para>Let us examine this code in detail.</para>
+        <para>The first two lines clear the screen. <function>glClearColor</function> is one of
+            those state setting functions; it sets the color to use when clearing the screen. It
+            sets the clearing color to black. <function>glClear</function> does not set OpenGL
+            state; it causes the screen to be cleared. The <literal>GL_COLOR_BUFFER_BIT</literal>
+            parameter means that the clear call will affect the color buffer, causing it to be
+            cleared to the current clearing color.</para>
+        <para>The next line sets the current shader program to be used by all subsequent rendering
+            commands. We will go into detail as to how this works later.</para>
+        <para>The next three commands all set state. These commands set up the coordinates of the
+            triangle to be rendered. They tell OpenGL the location in memory that the positions of
+            the triangle will come from. The specifics of how these work will be detailed
+            later.</para>
+        <para>The <function>glDrawArrays</function> function is, as the name suggests, a rendering
+            function. It uses the current state to generate a stream of vertices that will form
+            triangles.</para>
+        <para>The next two lines are simply cleanup work, undoing some of the setup that was done
+            for the purposes of rendering.</para>
+        <para>The last line, <function>glutSwapBuffers</function>, is a FreeGLUT command, not an
+            OpenGL command. The OpenGL framebuffer, as we set up in
+                <filename>framework.cpp</filename>, is double-buffered. This means that the image
+            that is currently being shown to the user is <emphasis>not</emphasis> the same image we
+            are rendering to. Thus, all of our rendering is hidden from view until it is shown to
+            the user. This way, the user never sees a half-rendered image.
+                <function>glutSwapBuffers</function> is the function that causes the image we are
+            rendering to be displayed to the user.</para>
+    </section>
+    <section>
+        <title>Following the Data</title>
+        <para>In the <link linkend="tut_00">basic background section</link>, we described the
+            functioning of the OpenGL pipeline. We will now revisit this pipeline in the context of
+            the code in tutorial 1. This will give us an understanding about the specifics of how
+            OpenGL goes about rendering data.</para>
+        <section>
+            <title>Vertex Transfer</title>
+            <para>The first stage in the rasterization pipeline is transforming vertices to clip
+                space. Before OpenGL can do this however, it must receive a list of vertices. So the
+                very first stage of the pipeline is sending triangle data to OpenGL.</para>
+            <para>This is the data that we wish to transfer:</para>
+            <programlisting>const float vertexPositions[] = {
+    0.75f, 0.75f, 0.0f, 1.0f,
+    0.75f, -0.75f, 0.0f, 1.0f,
+    -0.75f, -0.75f, 0.0f, 1.0f,
+};</programlisting>
+            <para>Each line of 4 values represents a 4D position of a vertex. These are four
+                dimensional because, as you may recall, clip-space is 4D as well. These vertex
+                positions are already in clip space. What we want OpenGL to do is render a triangle
+                based on this vertex data. Since every 4 floats represents a vertex's position, we
+                have 3 vertices: the minimum number for a triangle.</para>
+            <para>Even though we have this data, OpenGL cannot use it directly. OpenGL has some
+                limitations on what memory it can read from. You can allocate vertex data all you
+                want yourself; OpenGL cannot directly see any of your memory. Therefore, the first
+                step is to allocate some memory that OpenGL <emphasis>can</emphasis> see, and fill
+                that memory with our data. This is done with something called a <glossterm>buffer
+                    object.</glossterm></para>
+            <para>A buffer object is a linear array of memory allocated by OpenGL at the behest of
+                the user. This memory is controlled by the user, but the user has only indirect
+                control over it. Think of a buffer object as an array of GPU memory. The GPU can
+                read this memory quickly, so storing data in it has performance advantages.</para>
+            <para>The buffer object in the tutorial was created during initialization. Here is the
+                code responsible for creating the buffer object:</para>
+            <example>
+                <title>Buffer Object Initialization</title>
+                <programlisting>void InitializeVertexBuffer()
+{
+    glGenBuffers(1, &amp;positionBufferObject);
+    
+    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);
+    glBindBuffer(GL_ARRAY_BUFFER, 0);
+}</programlisting>
+            </example>
+            <para>The first line creates the buffer object, storing the handle to the object in the
+                global variable <varname>positionBufferObject</varname>. Though the object now
+                exists, it doesn't own any memory yet. That is because we have not allocated any
+                with this object.</para>
+            <para>The <function>glBindBuffer</function> function makes the buffer object the
+                currently bound buffer to the <literal>GL_ARRAY_BUFFER</literal> binding target. As
+                mentioned in Tutorial 0, objects in OpenGL usually have to be bound to the context
+                in order for them to do anything, and buffer objects are no exception.</para>
+            <para>The <function>glBufferData</function> function allocates memory for the buffer
+                currently bound to <literal>GL_ARRAY_BUFFER</literal>, which is the one we just
+                created. We already have some vertex data; the problem is that it is in our memory
+                rather than OpenGL's memory. This function allocates enough GPU memory to store our
+                vertex data. The third parameter is a pointer to the data to initialize the buffer
+                with; we give it our vertex data. The fourth parameter is something we will look at
+                in future tutorials.</para>
+            <para>The second bind buffer call is simply cleanup. By binding the buffer object 0 to
+                    <literal>GL_ARRAY_BUFFER</literal>, we cause the buffer object previously bound
+                to that target to become unbound from it. This was not strictly necessary, as any
+                later binds to this target will simply unbind what is already there. But unless you
+                have very strict control over your rendering, it is usually a good idea to unbind
+                the objects you bind.</para>
+            <para>This is all just to get the vertex data in the GPU's memory. But buffer objects
+                are not formatted; as far as OpenGL is concerned, all we did was fill a buffer
+                object with random binary data. We now need to do something that tells OpenGL that
+                there is vertex data in this buffer object.</para>
+            <para>We do this in the rendering code. That is the purpose of these lines:</para>
+            <programlisting>glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+glEnableVertexAttribArray(positionAttrib);
+glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, 0);</programlisting>
+            <para>The first function we have seen before. It simply says that we are going to use
+                this buffer object.</para>
+            <para>The second function, <function>glEnableVertexAttribArray</function>, is something
+                we will explain in the next section, when we talk about where
+                    <varname>positionAttrib</varname> comes from. Suffice it to say that this
+                function tells OpenGL that the vertex data called <varname>positionAttrib</varname>
+                will be provided in rendering calls. Without this function, the next one is
+                unimportant.</para>
+            <para>The third function is the real key. <function>glVertexAttribPointer</function>,
+                despite having the word <quote>Pointer</quote> in it, does not deal with pointers.
+                Instead, it deals with buffer objects.</para>
+            <para>This function tells OpenGL where a particular piece of vertex data is coming from.
+                The buffer that is bound to <literal>GL_ARRAY_BUFFER</literal> at the time that this function is called
+                is the buffer object that will be associated with this piece of data.</para>
+            <para>What this particular function call is saying is this. <quote>The piece of vertex
+                    data called <varname>positionAttrib</varname> comes from the buffer object
+                        <varname>positionBufferObject</varname>. This piece of vertex data contains
+                    32-bit float values, and each piece is a sequence of 4 of them. The data starts
+                    at the 0th byte of the buffer object, and each set of 4 32-bit floats is tightly
+                    packed together.</quote> This means that our data of 12 floats represents enough
+                information for the 3 vertices of a single triangle; this is exactly what we
+                want.</para>
+            <para>The specifics of this function call will be discussed in later tutorials.</para>
+            <para>Once OpenGL knows where to get its vertex data from, it can now use that vertex
+                data to render.</para>
+            <programlisting>glDrawArrays(GL_TRIANGLES, 0, 3);</programlisting>
+            <para>This function seems very simple on the surface, but it does a great deal. The
+                second and third parameters represent the start index and the number of indices to
+                read from our vertex data. We start at the 0th index, and read 3 vertices from it.
+                The first parameter tells OpenGL that it is to take every 3 vertices that it gets as
+                an independent triangle. Thus, it will read 3 vertices and connect them to form a
+                triangle.</para>
+            <para>Again, we will go into details in another tutorial.</para>
+        </section>
+        <section>
+            <title>Vertex Processing and Shaders</title>
+            <para>Now that we can tell OpenGL what the vertex data is, we come to the next stage of
+                the pipeline: vertex processing. This is one of two programmable stages that we will
+                cover in this tutorial, so this involves the use of a
+                <glossterm>shader.</glossterm></para>
+            <para>A shader is simply a program that runs on the GPU. There are several possible
+                shader stages in the pipeline, and each has its own inputs and outputs. The purpose
+                of a shader is to take its inputs, as well as potentially various other data, and
+                convert them into a set of outputs.</para>
+            <para>Each shader is executed over a set of inputs. It is important to note that a
+                shader, of any stage, operates <emphasis>completely independently</emphasis> of any
+                other shader of that stage. There can be no crosstalk between separate executions of
+                a shader. Execution for each set of inputs starts from the beginning of the shader
+                and continues to the end. A shader defines what its inputs and outputs are, and it
+                is illegal for a shader to complete without writing to all of its outputs.</para>
+            <para>Vertex shaders, as the name implies, operate on vertices. Specifically, each
+                invocation of a vertex shader operates on a <emphasis>single</emphasis> vertex.
+                These shaders must output, among any other user-defined outputs, a clip-space
+                position for that vertex. Where this comes from is up to the shader.</para>
+            <para>Shaders in OpenGL are written in the OpenGL Shading Language
+                    (<acronym>GLSL</acronym>). This language looks suspiciously like C, but it is
+                very much not C. It has far too many limitations to be C (for example, recursion is
+                forbidden). This is what our simple vertex shader looks like:</para>
+            <example>
+                <title>Vertex Shader</title>
+                <programlisting>#version 150
+
+in vec4 position;
+void main()
+{
+    gl_Position = position;
+}</programlisting>
+            </example>
+            <para>This looks fairly simple. The first line states that the version of GLSL used by
+                this shader is version 1.50. A version declaration is required for all GLSL
+                shaders.</para>
+            <para>The next line defines an input to the vertex shader. The input is called
+                    <varname>position</varname> and is of type <type>vec4</type>: a 4-dimensional
+                vector of floating-point values.</para>
+            <para>As with C, a shader's execution starts with the <function>main</function>
+                function. This shader is very simple, copying the input <varname>position</varname>
+                into something called <varname>gl_Position</varname>. This is a variable that is
+                    <emphasis>not</emphasis> defined in the shader; that is because it is a standard
+                variable defined by every vertex shader. If you see an identifier in GLSL that
+                starts with <quote>gl_,</quote> then it must be a built-in identifier.</para>
+            <para><varname>gl_Position</varname> is defined as:</para>
+            <programlisting>out vec4 gl_Position;</programlisting>
+            <para>Recall that the minimum a vertex shader must do is generate a clip-space position
+                for the vertex. That is what <varname>gl_Position</varname> is: it is the output
+                that represents a clip-space position.</para>
+            <formalpara>
+                <title>Vertex Attributes</title>
+                <para>Inputs to and outputs from a shader stage come from somewhere and go to
+                    somewhere. Thus, the input <varname>position</varname> must be filled in with
+                    data somewhere. So where does that data come from? Inputs to a vertex shader are
+                    called <glossterm>vertex attributes</glossterm>.</para>
+            </formalpara>
+            <para>You might recognize something similar to the term <quote>vertex attribute.</quote>
+                For example, <quote>glEnable<emphasis>VertexAttrib</emphasis>Array</quote> or
+                        <quote>gl<emphasis>VertexAttrib</emphasis>Pointer.</quote></para>
+            <para>This is how data flows down the pipeline in OpenGL. When rendering starts, vertex
+                data in a buffer object is read based on setup work done by
+                    <function>glVertexAttribPointer</function>. This function describes where the data
+                for an attribute comes from. The connection between a particular call to
+                    <function>glVertexAttribPointer</function> and the string name of an input value
+                to a vertex shader is somewhat complicated. This is where that mysterious variable,
+                    <varname>positionAttrib,</varname> comes into play.</para>
+            <para>The details of compiling a shader will be gone over a bit later, but the
+                connection is made with this call:</para>
+            <programlisting>positionAttrib = glGetAttribLocation(theProgram, "position");</programlisting>
+            <para>The variable <varname>theProgram</varname> represents the program object built
+                from the vertex shader (and the fragment shader, but that's for later). The function
+                    <function>glGetAttribLocation</function> takes the given string that specifies a
+                vertex input, and it returns a number that represents that particular input. This
+                number is then used in subsequent <function>glVertexAttribPointer</function> calls
+                and the like to represent the attribute for <quote>position.</quote></para>
+        </section>
+        <section>
+            <title>Rasterization</title>
+            <para>All that has happened thus far is that 3 vertices have been given to OpenGL and it
+                has transformed them with a vertex shader into 3 positions in clip-space. Next, the
+                vertex positions are transformed into normalized-device coordinates by dividing the
+                3 XYZ components of the position by the W component. In our case, W is always 1.0,
+                so the positions are already effectively in normalized-device coordinates.</para>
+            <para>After this, the vertex positions are transformed into window coordinates. This is
+                done with something called the <glossterm>viewport transform</glossterm>. This is so
+                named because of the function used to set it up, <function>glViewport</function>.
+                The tutorial calls this function every time the window's size changes. Remember that
+                the framework calls <function>reshape</function> whenever the window's size changes.
+                So the tutorial's implementation of reshape is this:</para>
+            <example>
+                <title>Reshaping Window</title>
+                <programlisting>void reshape (int w, int h)
+{
+    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
+}</programlisting>
+            </example>
+            <para>This tells OpenGL what region of the window we are rendering to. In this
+                case, we change it to cover the full window; without this function, resizing
+                the window would have no effect on the rendering. Also, make note of the fact that
+                we make no effort to keep the aspect ratio constant.</para>
+            <para>Recall that window coordinates are in a lower-left coordinate system. So the point
+                (0, 0) is the bottom left of the window. This function takes the bottom left
+                position as the first two coordinates, and the width and height of the viewport
+                rectangle as the other two coordinates.</para>
+            <para>Once in window coordinates, OpenGL can now take these 3 vertices and scan-convert
+                them into a series of fragments. In order to do this, however, OpenGL must decide what
+                the list of vertices represents.</para>
+            <para>OpenGL can interpret a list of vertices in a variety of different ways. The way
+                OpenGL interprets vertex lists is given by the draw command:</para>
+            <programlisting>glDrawArrays(GL_TRIANGLES, 0, 3);</programlisting>
+            <para>The enum <literal>GL_TRIANGLES</literal> tells OpenGL that every 3 vertices of the
+                list should be taken to be a triangle. Since we passed only 3 vertices, we get 1
+                triangle.</para>
+        </section>
+        <section>
+            <title>Fragment Processing</title>
+            <para>A fragment shader is used to compute the output color(s) of a fragment. The inputs
+                of a fragment shader include the window-space XYZ position of the fragment. It can
+                also include user-defined data, but we will get to that.</para>
+            <para>Our fragment shader looks like this:</para>
+            <example>
+                <title>Fragment Shader</title>
+                <programlisting>#version 150
+
+out vec4 outputColor;
+void main()
+{
+   outputColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
+}</programlisting>
+            </example>
+            <para>As with the vertex shader, the first line states that the shader uses GLSL version
+                1.50.</para>
+            <para>The next line specifies an output for the fragment shader. The output variable is
+                of type <type>vec4</type>.</para>
+            <para>The main function simply sets the output color to a 4-dimensional vector, with all
+                of the components as 1.0f. This sets the Red, Green, and Blue components of the
+                color to full intensity, which is 1.0; this creates the white color of the triangle.
+                The fourth component is something we will see in later tutorials.</para>
+            <para>This fragment shader does not do anything with the position input.</para>
+            <para>After the fragment shader executes, the fragment output color is written to the
+                output image.</para>
+        </section>
+    </section>
+    <section>
+        <title>Making Shaders</title>
+        <para>We glossed over exactly how these text strings called shaders actually get used. We
+            will go into some detail on that now.</para>
+        <note>
+            <para>If you are familiar with how shaders work in other APIs, that will not help you
+                here. OpenGL shaders work very differently from the way they work in other
+                APIs.</para>
+        </note>
+        <para>Shaders are written in a C-like language. So OpenGL uses a very C-like compilation
+            model. In C, each individual .c file is compiled into an object file. Then, one or more
+            object files are linked together into a single program (or static/shared library).
+            OpenGL does something very similar.</para>
+        <para>A shader string is compiled into a <glossterm>shader object</glossterm>; this is
+            analogous to an object file. One or more shader objects are linked into a
+                <glossterm>program object</glossterm>.</para>
+        <para>A program object in OpenGL contains code for <emphasis>all</emphasis> of the shaders
+            to be used for rendering. In the tutorial, we have a vertex and a fragment shader; both
+            of these are linked together into a single program object. Building that program object
+            is the responsibility of this code:</para>
+        <example>
+            <title>Program Initialization</title>
+            <programlisting>void InitializeProgram()
+{
+    std::vector&lt;GLuint> shaderList;
+    
+    shaderList.push_back(CreateShader(GL_VERTEX_SHADER, strVertexShader));
+    shaderList.push_back(CreateShader(GL_FRAGMENT_SHADER, strFragmentShader));
+    
+    theProgram = CreateProgram(shaderList);
+    
+    positionAttrib = glGetAttribLocation(theProgram, "position");
+}</programlisting>
+        </example>
+        <para>The first statement simply creates a list of the shader objects we intend to link
+            together. The next two statements compile our two shader strings. The
+                <function>CreateShader</function> function is a utility function defined by the
+            tutorial that compiles a shader.</para>
+        <para>Compiling a shader into a shader object is a lot like compiling source code. Most
+            important of all, it involves error checking. This is the implementation of
+                <function>CreateShader</function>:</para>
+        <example>
+            <title>Shader Creation</title>
+            <programlisting>GLuint CreateShader(GLenum eShaderType, const std::string &amp;strShaderFile)
+{
+    GLuint shader = glCreateShader(eShaderType);
+    const char *strFileData = strShaderFile.c_str();
+    glShaderSource(shader, 1, &amp;strFileData, NULL);
+    
+    glCompileShader(shader);
+    
+    GLint status;
+    glGetShaderiv(shader, GL_COMPILE_STATUS, &amp;status);
+    if (status == GL_FALSE)
+    {
+        GLint infoLogLength;
+        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &amp;infoLogLength);
+        
+        GLchar *strInfoLog = new GLchar[infoLogLength + 1];
+        glGetShaderInfoLog(shader, infoLogLength, NULL, strInfoLog);
+        
+        const char *strShaderType = NULL;
+        switch(eShaderType)
+        {
+        case GL_VERTEX_SHADER: strShaderType = "vertex"; break;
+        case GL_GEOMETRY_SHADER: strShaderType = "geometry"; break;
+        case GL_FRAGMENT_SHADER: strShaderType = "fragment"; break;
+        }
+        
+        fprintf(stderr, "Compile failure in %s shader:\n%s\n", strShaderType, strInfoLog);
+        delete[] strInfoLog;
+    }
+
+    return shader;
+}</programlisting>
+        </example>
+        <para>An OpenGL shader object is, as the name suggests, an object. So the first step is to
+            create the object with <function>glCreateShader</function>. This function creates a
+            shader of a particular type (vertex or fragment), so it takes a parameter that tells
+            what kind of object it creates. Since each shader stage has its own syntax rules,
+            pre-defined variables, and constants, the compiler needs to know which stage the
+            shader is intended for.</para>
+        <note>
+            <para>Shader and program objects are objects in OpenGL. But they work rather differently
+                from other kinds of OpenGL objects. For example, creating buffer objects, as shown
+                above, uses a function of the form <quote>glGen*</quote> where * is
+                    <quote>Buffer</quote>. It takes the number of objects to create and a list to put
+                those object handles in.</para>
+            <para>There are many other differences between shader/program objects and other kinds of
+                OpenGL objects.</para>
+        </note>
+        <para>The next step is to actually compile the text shader into the object. The C-style
+            string is retrieved from the C++ <classname>std::string</classname> object, and it is
+            fed into the shader object with the <function>glShaderSource</function> function. The
+            first parameter is the shader object to put the string into. The next parameter is the
+            number of strings to put into the shader. Compiling multiple strings into a single
+            shader object works analogously to including header files in C, except that a .c file
+            explicitly lists the files it includes, while here you must pass each string to
+            <function>glShaderSource</function> yourself.</para>
+        <para>The next parameter is an array of const char* strings. The last parameter is normally
+            an array of lengths of the strings. We pass in <literal>NULL</literal>, which tells
+            OpenGL to assume that the string is null-terminated. In general, unless you need to use
+            the null character in a string, there is no need to use the last parameter.</para>
+        <para>Once the strings are in the object, they are compiled with
+                <function>glCompileShader</function>, which simply takes the shader object to
+            compile.</para>
+        <para>After compiling, we need to see if the compilation was successful. We do this by
+            calling <function>glGetShaderiv</function> to retrieve the
+                <literal>GL_COMPILE_STATUS</literal>. If this is GL_FALSE, then the shader failed to
+            compile; otherwise compiling was successful.</para>
+        <para>If compilation fails, we do some error reporting. It prints a message to stderr that
+            explains what failed to compile. It also prints an info log from OpenGL that describes
+            the error; think of this log as the compiler output from a regular C compilation.</para>
+        <para>After creating both shader objects, we then pass them on to the
+                <function>CreateProgram</function> function:</para>
+        <example>
+            <title>Program Creation</title>
+            <programlisting>GLuint CreateProgram(const std::vector&lt;GLuint> &amp;shaderList)
+{
+    GLuint program = glCreateProgram();
+    
+    for(size_t iLoop = 0; iLoop &lt; shaderList.size(); iLoop++)
+    	glAttachShader(program, shaderList[iLoop]);
+    
+    glLinkProgram(program);
+    
+    GLint status;
+    glGetProgramiv (program, GL_LINK_STATUS, &amp;status);
+    if (status == GL_FALSE)
+    {
+        GLint infoLogLength;
+        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &amp;infoLogLength);
+        
+        GLchar *strInfoLog = new GLchar[infoLogLength + 1];
+        glGetProgramInfoLog(program, infoLogLength, NULL, strInfoLog);
+        fprintf(stderr, "Linker failure: %s\n", strInfoLog);
+        delete[] strInfoLog;
+    }
+    
+    return program;
+}</programlisting>
+        </example>
+        <para>This function is fairly simple. It first creates an empty program object with
+                <function>glCreateProgram</function>. This function takes no parameters; remember
+            that program objects are a combination of <emphasis>all</emphasis> shader stages.</para>
+        <para>Next, it attaches each of the previously created shader objects to the program by
+            calling the function <function>glAttachShader</function> in a loop over the
+                <classname>std::vector</classname> of shader objects. The program does not need to
+            be told what stage each shader object is for; the shader object itself remembers
+            this.</para>
+        <para>Once all of the shader objects are attached, the code links the program with
+                <function>glLinkProgram</function>. Similar to before, we must then fetch the
+            linking status by calling <function>glGetProgramiv</function> with
+                <literal>GL_LINK_STATUS</literal>. If it is GL_FALSE, then the linking failed and we
+            print the linking log. Otherwise, we return the created program.</para>
+        <formalpara>
+            <title>Vertex Attribute Indexes</title>
+            <para>The last line of <function>InitializeProgram</function> is the key to how
+                attributes are linked between the vertex array data and the vertex program's
+                input.</para>
+        </formalpara>
+        <programlisting>positionAttrib = glGetAttribLocation(theProgram, "position");</programlisting>
+        <para>The function <function>glGetAttribLocation</function> takes a successfully linked
+            program and a string naming one of the inputs of the vertex shader in that program. It
+            returns the attribute index of that input. Therefore, when we use this program, if we
+            want to send some vertex data to the <varname>position</varname> input variable, we
+            simply use the <varname>positionAttrib</varname> value we retrieved from
+                <function>glGetAttribLocation</function> in our call to
+                <function>glVertexAttribPointer</function>.</para>
+        <formalpara>
+            <title>Using Programs</title>
+            <para>To tell OpenGL that rendering commands should use a particular program object, the
+                    <function>glUseProgram</function> function is called. In the tutorial this is
+                called twice in the <function>display</function> function. It is called with the
+                global <varname>theProgram</varname>, which tells OpenGL that we want to use that
+                program for rendering until further notice. It is later called with 0, which tells
+                OpenGL that no programs will be used for rendering.</para>
+        </formalpara>
+        <note>
+            <para>For the purposes of these tutorials, using program objects when rendering is
+                    <emphasis>not</emphasis> optional. OpenGL does have, in its compatibility
+                profile, default rendering state that takes over when a program is not being used.
+                We will not be using this, and you are encouraged to avoid its use as well.</para>
+        </note>
+    </section>
+    <section>
+        <title>Cleanup</title>
+        <para>The tutorial allocates a lot of system resources. It allocates a buffer object, which
+            represents memory on the GPU. It creates two shader objects and a program object. But it
+            never explicitly deletes any of this.</para>
+        <para>Part of this is due to the nature of FreeGLUT, which does not provide hooks for a
+            cleanup function. But part of it is also due to the nature of OpenGL itself. In a simple
+            example such as this, there is no need to delete anything. OpenGL will clean up its own
+            assets when OpenGL is shut down as part of window deactivation.</para>
+        <para>It is generally good form to delete objects that you create before shutting down
+            OpenGL. And you certainly should do it if you encapsulate objects in C++ objects, such
+            that destructors will delete the OpenGL objects. But it isn't strictly necessary.</para>
+    </section>
+    <section>
+        <title>In Review</title>
+        <para>At this point, you have a good general overview of how things work in OpenGL. You know
+            how to compile and link shaders, how to pass some basic vertex data to OpenGL, and how
+            to render a triangle.</para>
+        <section>
+            <title>Further Study</title>
+            <para>Even with a simple tutorial like this, there are many things to play around with
+                and investigate.</para>
+            <itemizedlist>
+                <listitem>
+                    <para>Change the color value set by the fragment shader to different values. Use
+                        values in the range [0, 1], and see what happens when you go outside that
+                        range.</para>
+                </listitem>
+                <listitem>
+                    <para>Change the positions of the vertex data. Keep position values in the [-1,
+                        1] range, then see what happens when triangles go outside this range. Notice
+                        what happens when you change the Z value of the positions (note: nothing
+                        should happen). Keep W at 1.0 for now.</para>
+                </listitem>
+                <listitem>
+                    <para>Change the values that <function>reshape</function> gives to
+                            <function>glViewport</function>. Make them bigger or smaller than the
+                        window and see what happens. Shift them around to different quadrants within
+                        the window.</para>
+                </listitem>
+                <listitem>
+                    <para>Change the <function>reshape</function> function so that it respects
+                        aspect ratio. This means that the area rendered to may be smaller than the
+                        window area. Also, try to make it so that it always centers it within the
+                        window.</para>
+                </listitem>
+                <listitem>
+                    <para>Change the clear color, using values in the range [0, 1]. Notice how this
+                        interacts with changes to the viewport.</para>
+                </listitem>
+                <listitem>
+                    <para>Add another 3 vertices to the list, and change the number of vertices sent
+                        in the <function>glDrawArrays</function> call from 3 to 6. Add more and play
+                        with them.</para>
+                </listitem>
+            </itemizedlist>
+        </section>
+        <section>
+            <title>OpenGL Functions of Note</title>
+            <glosslist>
+                <glossentry>
+                    <glossterm>glClearColor, glClear</glossterm>
+                    <glossdef>
+                        <para>These functions clear the current viewable area of the screen.
+                                <function>glClearColor</function> sets the color to clear, while
+                                <function>glClear</function> with the
+                                <literal>GL_COLOR_BUFFER_BIT</literal> value causes the image to be
+                            cleared with that color.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glGenBuffers, glBindBuffer, glBufferData</glossterm>
+                    <glossdef>
+                        <para>These functions are used to create and manipulate buffer objects.
+                                <function>glGenBuffers</function> creates one or more buffer
+                            objects, <function>glBindBuffer</function> binds one to a location in
+                            the context, and <function>glBufferData</function> allocates the bound
+                            buffer object's memory and fills it with data from the user.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glEnableVertexAttribArray, glDisableVertexAttribArray,
+                        glVertexAttribPointer</glossterm>
+                    <glossdef>
+                        <para>These functions control vertex attribute arrays.
+                                <function>glEnableVertexAttribArray</function> activates the given
+                            attribute index, <function>glDisableVertexAttribArray</function>
+                            deactivates the given attribute index, and
+                                <function>glVertexAttribPointer</function> defines the format and
+                            source location (buffer object) of the vertex data.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glDrawArrays</glossterm>
+                    <glossdef>
+                        <para>This function initiates rendering, using the currently active vertex
+                            attributes and the current program object (among other state). It causes
+                            a number of vertices to be pulled from the attribute arrays in
+                            order.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glViewport</glossterm>
+                    <glossdef>
+                        <para>This function defines the current viewport transform. The viewport is
+                            a region of the window, specified by its bottom-left position and a
+                            width/height.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glCreateShader, glShaderSource, glCompileShader</glossterm>
+                    <glossdef>
+                        <para>These functions create a working shader object.
+                                <function>glCreateShader</function> simply creates an empty shader
+                            object of a particular shader stage. <function>glShaderSource</function>
+                            sets strings into that object; multiple calls to this function simply
+                            overwrite the previously set strings.
+                                <function>glCompileShader</function> causes the shader object to be
+                            compiled with the previously set strings.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glCreateProgram, glAttachShader, glLinkProgram</glossterm>
+                    <glossdef>
+                        <para>These functions create a working program object.
+                                <function>glCreateProgram</function> creates an empty program
+                            object. <function>glAttachShader</function> attaches a shader object to
+                            that program. Multiple calls attach multiple shader objects.
+                                <function>glLinkProgram</function> links all of the previously
+                            attached shaders into a complete program.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glUseProgram</glossterm>
+                    <glossdef>
+                        <para>This function causes the given program to become the current program.
+                            All rendering taking place after this call will use this program for the
+                            various shader stages. If program 0 is given, then no program is
+                            current.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glGetAttribLocation</glossterm>
+                    <glossdef>
+                        <para>This function retrieves the attribute index of a named attribute. It
+                            takes the program to find the attribute in, and the name of the vertex
+                            shader input variable whose attribute index the user wants to
+                            retrieve.</para>
+                    </glossdef>
+                </glossentry>
+            </glosslist>
+        </section>
+    </section>
+    <glossary>
+        <title>Glossary</title>
+        <glossentry>
+            <glossterm>Buffer Object</glossterm>
+            <glossdef>
+                <para>An OpenGL object that represents a linear array of memory, containing
+                    arbitrary data. The contents of the buffer are defined by the user, but the
+                    memory is allocated by OpenGL. Data in buffer objects can be used for many
+                    purposes, including storing vertex data to be used when rendering.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>Vertex Attribute</glossterm>
+            <glossdef>
+                <para>A single input to a vertex shader. Each vertex attribute is a vector of up to
+                    4 elements in length. Vertex attributes are drawn from buffer objects; the
+                    connection between buffer object data and vertex inputs is made with the
+                        <function>glVertexAttribPointer</function> and
+                        <function>glEnableVertexAttribArray</function> functions. Each vertex
+                    attribute in a particular program object has an index; this index can be queried
+                    with <function>glGetAttribLocation</function>. The index is used by the various
+                    other vertex attribute functions to refer to that specific attribute.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>Viewport Transform</glossterm>
+            <glossdef>
+                <para>The process of transforming vertex data from normalized device coordinate
+                    space to window space. It specifies the viewable region of a window.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>Shader Object</glossterm>
+            <glossdef>
+                <para>An object in the OpenGL API that is used to compile shaders and represent the
+                    compiled shader's information. Each shader object is typed based on the shader
+                    stage that it contains data for.</para>
+            </glossdef>
+        </glossentry>
+        <glossentry>
+            <glossterm>Program Object</glossterm>
+            <glossdef>
+                <para>An object in the OpenGL API that represents the full sequence of all shader
+                    processing to be used when rendering. Program objects can be queried for
+                    attribute locations and various other information about the program. They also
+                    contain some state that will be seen in later tutorials.</para>
+            </glossdef>
+        </glossentry>
+    </glossary>
+</chapter>

Documents/Basics/Tutorial 02.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+    <title>Playing with Colors</title>
+    <para>G</para>
+</chapter>

Documents/Positioning/Tutorial 03.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+    <title>OpenGL's Moving Triangle</title>
+    <para>This tutorial will be building off of the previous tutorial. In that tutorial, we had a
+        single, static triangle. Here, we will move it around.</para>
+    <section>
+        <title>Moving the Vertices</title>
+        <para>The simplest way one might think to move a triangle or other object around is to
+            simply modify the vertex position data directly. From the previous tutorial, we learned
+            that the vertex data is stored in a buffer object. This is what
+                <filename>cpuPositionOffset.cpp</filename> does.</para>
+        <para>The modifications are done in two steps. The first step is to generate the X, Y offset
+            that will be applied to each position. The second is to apply that offset to each vertex
+            position. The generation of the offset is done with the
+                <function>ComputePositionOffset</function> function:</para>
+        <example>
+            <title>Computation of Position Offsets</title>
+            <programlisting>void ComputePositionOffsets(float &amp;fXOffset, float &amp;fYOffset)
+{
+    const float fLoopDuration = 5.0f;
+    const float fScale = 3.14159f * 2.0f / fLoopDuration;
+    
+    float fElapsedTime = glutGet(GLUT_ELAPSED_TIME) / 1000.0f;
+    
+    float fCurrTimeThroughLoop = fmodf(fElapsedTime, fLoopDuration);
+    
+    fXOffset = cosf(fCurrTimeThroughLoop * fScale) * 0.5f;
+    fYOffset = sinf(fCurrTimeThroughLoop * fScale) * 0.5f;
+}</programlisting>
+        </example>
+        <para>This function computes offsets in a loop. The offsets produce circular motion, and the
+            offsets will reach the beginning of the circle every 5 seconds (controlled by
+                <varname>fLoopDuration</varname>). The function
+                <function>glutGet(GLUT_ELAPSED_TIME)</function> retrieves the integer time in
+            milliseconds since the application started. The <function>fmodf</function> function
+            computes the floating-point modulus of the time. In lay terms, it takes the first
+            parameter and returns the remainder of the division between that and the second
+            parameter. Thus, it returns a value on the range [0, <varname>fLoopDuration</varname>),
+            which is what we need to create a periodically repeating pattern.</para>
+        <para>The <function>cosf</function> and <function>sinf</function> functions compute the
+            cosine and sine respectively. It isn't important to know exactly how these functions
+            work, but together they effectively trace out a circle of radius 1. Multiplying by
+            0.5f shrinks the circle down to a radius of 0.5.</para>
+        <para>Once the offsets are computed, the offsets have to be added to the vertex data. This
+            is done with the <function>AdjustVertexData</function> function:</para>
+        <example>
+            <title>Adjusting the Vertex Data</title>
+            <programlisting>void AdjustVertexData(float fXOffset, float fYOffset)
+{
+    std::vector&lt;float> fNewData(ARRAY_COUNT(vertexPositions));
+    memcpy(&amp;fNewData[0], vertexPositions, sizeof(vertexPositions));
+    
+    for(int iVertex = 0; iVertex &lt; ARRAY_COUNT(vertexPositions); iVertex += 4)
+    {
+        fNewData[iVertex] += fXOffset;
+        fNewData[iVertex + 1] += fYOffset;
+    }
+    
+    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertexPositions), &amp;fNewData[0]);
+    glBindBuffer(GL_ARRAY_BUFFER, 0);
+}</programlisting>
+        </example>
+        <para>This function works by copying the vertex data into a std::vector, then applying the
+            offset to the X and Y coordinates of each vertex. The last three lines are the
+            OpenGL-relevant parts.</para>
+        <para>First, the buffer object containing the positions is bound to the context. Then the
+            new function <function>glBufferSubData</function> is called to transfer this data to the
+            buffer object.</para>
+        <para>The difference between <function>glBufferData</function> and
+                <function>glBufferSubData</function> is that the SubData function does not
+                <emphasis>allocate</emphasis> memory. <function>glBufferData</function> specifically
+            allocates memory of a certain size; <function>glBufferSubData</function> only transfers
+            data to the already existing memory. Calling <function>glBufferData</function> on a
+            buffer object that has already been allocated tells OpenGL to
+                <emphasis>reallocate</emphasis> this memory, throwing away the previous data and
+            allocating a fresh block of memory. By contrast, calling
+                <function>glBufferSubData</function> on a buffer object that has not yet had memory
+            allocated by <function>glBufferData</function> is an error.</para>
+        <para>Think of <function>glBufferData</function> as a combination of
+                <function>malloc</function> and <function>memcpy</function>, while glBufferSubData
+            is purely <function>memcpy</function>.</para>
+        <para>The <function>glBufferSubData</function> function can update only a portion of the
+            buffer object's memory. The second parameter to the function is the byte offset into the
+            buffer object to begin copying to, and the third parameter is the number of bytes to
+            copy. The fourth parameter is our array of bytes to be copied into that location of the
+            buffer object.</para>
+        <para>The last line of the function is simply unbinding the buffer object. It is not
+            strictly necessary, but it is good form to clean up binds after making them.</para>
+        <formalpara>
+            <title>Buffer Object Usage Hints</title>
+            <para>Every time we draw something, we are changing the buffer object's data. OpenGL has
+                a way to tell it that you will be doing something like this, and it is the purpose
+                of the last parameter of <function>glBufferData</function>. This tutorial changed
+                the allocation of the buffer object slightly, replacing:</para>
+        </formalpara>
+        <programlisting>glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STATIC_DRAW);</programlisting>
+        <para>with this:</para>
+        <programlisting>glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STREAM_DRAW);</programlisting>
+        <para>GL_STATIC_DRAW tells OpenGL that you intend to only set the data in this buffer object
+            once. GL_STREAM_DRAW tells OpenGL that you intend to set this data constantly, generally
+            once per frame. These parameters don't mean <emphasis>anything</emphasis> with regard to
+            the API; they are simply hints to the OpenGL implementation. Proper use of these hints
+            can be crucial for getting good buffer object performance. We will see more of these
+            hints later.</para>
+        <para>The rendering function now has become this:</para>
+        <example>
+            <title>Updating and Drawing the Vertex Data</title>
+            <programlisting>void display()
+{
+    float fXOffset = 0.0f, fYOffset = 0.0f;
+    ComputePositionOffsets(fXOffset, fYOffset);
+    AdjustVertexData(fXOffset, fYOffset);
+    
+    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
+    glClear(GL_COLOR_BUFFER_BIT);
+    
+    glUseProgram(theProgram);
+    
+    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+    glEnableVertexAttribArray(positionAttrib);
+    glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, 0);
+    
+    glDrawArrays(GL_TRIANGLES, 0, 3);
+    
+    glDisableVertexAttribArray(positionAttrib);
+    glUseProgram(0);
+    
+    glutSwapBuffers();
+    glutPostRedisplay();
+}</programlisting>
+        </example>
+        <para>The first three lines get the offset and set the vertex data. Everything but the last
+            line is unchanged from the first tutorial. The last line of the function is there to
+            tell FreeGLUT to constantly call <function>display</function>. Ordinarily,
+                <function>display</function> would only be called when the window's size changes or
+            when the window is uncovered. <function>glutPostRedisplay</function> causes FreeGLUT to
+            call <function>display</function> again. Not immediately, but reasonably fast.</para>
+        <para>If you run the tutorial, you will see a smaller triangle (the size was reduced in this
+            tutorial) that slides around in a circle.</para>
+    </section>
+    <section>
+        <title>A Better Way</title>
+        <para>This is fine for a 3-vertex example. But imagine a scene involving millions of
+            vertices. Moving objects this way means having to copy millions of vertices from the
+            original vertex data, add an offset to each of them, and then upload that data to an
+            OpenGL buffer object. And all of that is <emphasis>before</emphasis> rendering. Clearly
+            there must be a better way; games can't possibly do this every frame and still hold
+            decent framerates.</para>
+        <para>Actually, for quite some time, they did. In the pre-GeForce 256 days, that was how all
+            games worked. Graphics hardware just took a list of vertices in normalized device
+            coordinate space and rasterized them into fragments and pixels. Granted, in those days,
+            we were talking about maybe 10,000 triangles per frame. And while CPUs have come a long
+            way since then, they haven't scaled with the complexity of graphics scenes.</para>
+        <para>The GeForce 256 (note: not a GT 2xx card, but the very first GeForce card) was the
+            first graphics card that actually did some form of vertex processing. It could store
+            vertices in GPU memory, read them, do some kind of transformation on them, and then send
+            them through the rest of the pipeline. The kinds of transformations that the old GeForce
+            256 could do were quite useful, but fairly simple.</para>
+        <para>Having the benefit of modern hardware and OpenGL 3.x, we have something far more
+            flexible: vertex shaders.</para>
+        <para>Remember what it is that we are doing. We compute an offset. Then we apply that offset
+            to each vertex position. Vertex shaders are given each vertex position. So it makes
+            sense to simply give the vertex shader the offset and let it compute the final vertex
+            position. This is what <filename>vertPositionOffset.cpp</filename> does.</para>
+        <para>The vertex shader used here is as follows:</para>
+        <example>
+            <title>Offsetting Vertex Shader</title>
+            <programlisting>#version 150
+
+in vec4 position;
+uniform vec2 offset;
+
+void main()
+{
+    vec4 totalOffset = vec4(offset.x, offset.y, 0.0, 0.0);
+    gl_Position = position + totalOffset;
+}</programlisting>
+        </example>
+        <para>After defining the input <varname>position</varname>, the shader defines a
+            2-dimensional vector <varname>offset</varname>. But it defines it with the term
+                <literal>uniform</literal>, rather than <literal>in</literal> or
+                <literal>out</literal>. This has a particular meaning.</para>
+        <formalpara>
+            <title>Shaders and Granularity</title>
+            <para>Recall that with each execution of a shader, the shader gets new values for
+                variables defined as <literal>in</literal>. Each time a vertex shader is called, it
+                gets a different set of inputs from the vertex attribute arrays and buffers. That is
+                useful for vertex position data, but it is not what we want for the offset. We want
+                each vertex to use the <emphasis>same</emphasis> offset; a <quote>uniform</quote>
+                offset, if you will.</para>
+        </formalpara>
+        <para>Variables defined as <literal>uniform</literal> do not change at the same frequency as
+            variables defined as <literal>in</literal>. Input variables change with every execution
+            of the shader. Uniform variables (called <glossterm>uniforms</glossterm>) change only
+            between executions of rendering calls. And even then, they only change when the user
+            sets them explicitly. </para>
+        <para>Vertex shader inputs come from vertex attribute array definitions and buffer objects.
+            By contrast, uniforms are set directly on program objects.</para>
+        <para>In order to set a uniform in a program, we need two things. The first is a uniform
+            location. Much like with attributes, you must get an index that refers to the uniform
+            name. In this tutorial, this is done in the <function>InitializeProgram</function>
+            function, with this line:</para>
+        <programlisting>offsetLocation = glGetUniformLocation(theProgram, "offset");</programlisting>
+        <para>The function <function>glGetUniformLocation</function> retrieves the uniform location
+            for the uniform named by the second parameter. Note that, just because a uniform is
+            defined in a shader, GLSL does not <emphasis>have</emphasis> to provide a location for
+            it. It will do so only if the uniform is actually used in the program, as it is in our
+            vertex shader.</para>
+        <para>Once we have the uniform location, we can set the uniform's value. However, unlike
+            retrieving the uniform location, setting a uniform's value requires that the program be
+            currently in use with <function>glUseProgram</function>. Thus, the rendering code looks
+            like this:</para>
+        <example>
+            <title>Draw with Calculated Offsets</title>
+            <programlisting>void display()
+{
+    float fXOffset = 0.0f, fYOffset = 0.0f;
+    ComputePositionOffsets(fXOffset, fYOffset);
+    
+    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
+    glClear(GL_COLOR_BUFFER_BIT);
+    
+    glUseProgram(theProgram);
+    
+    glUniform2f(offsetLocation, fXOffset, fYOffset);
+    
+    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+    glEnableVertexAttribArray(positionAttrib);
+    glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, 0);
+    
+    glDrawArrays(GL_TRIANGLES, 0, 3);
+    
+    glDisableVertexAttribArray(positionAttrib);
+    glUseProgram(0);
+    
+    glutSwapBuffers();
+    glutPostRedisplay();
+}</programlisting>
+        </example>
+        <para>We use <function>ComputePositionOffsets</function> to get the offsets, and then use
+                <function>glUniform2f</function> to set the uniform's value. The buffer object's
+            data is never changed; the shader simply does the hard work, which is why those shader
+            stages exist in the first place.</para>
+    </section>
+    <section>
+        <title>More Power to the Shaders</title>
+        <para>It's all well and good that we no longer have to transform vertices manually.
+            But perhaps we can move more things to the vertex shader. Could it be possible to move
+            all of <function>ComputePositionOffsets</function> to the vertex shader?</para>
+        <para>Well, no. The call to <function>glutGet(GLUT_ELAPSED_TIME)</function> can't be moved
+            there, since GLSL code cannot directly call C/C++ functions. But everything else can be
+            moved. This is what <filename>vertCalcOffset.cpp</filename> does.</para>
+        <para>This is the first tutorial that loads its shaders from files rather than using
+            hard-coded data in the .cpp file. The vertex program is found in
+                <filename>data\tut2c.vert</filename>.</para>
+        <example>
+            <title>Offset Computing Vertex Shader</title>
+            <programlisting>#version 150
+
+in vec4 position;
+uniform float loopDuration;
+uniform float time;
+
+void main()
+{
+    float timeScale = 3.14159f * 2.0f / loopDuration;
+    
+    float currTime = mod(time, loopDuration);
+    vec4 totalOffset = vec4(
+        cos(currTime * timeScale) * 0.5f,
+        sin(currTime * timeScale) * 0.5f,
+        0.0f,
+        0.0f);
+    
+    gl_Position = position + totalOffset;
+}</programlisting>
+        </example>
+        <para>This shader takes two uniforms: the duration of the loop and the elapsed time.</para>
+        <para>In this shader, we use a number of standard GLSL functions, like
+                <function>mod</function>, <function>cos</function>, and <function>sin</function>. We
+            saw <function>mix</function> in the last tutorial. And these are just the tip of the
+            iceberg; there are a <emphasis>lot</emphasis> of standard GLSL functions
+            available.</para>
+        <para>The rendering code looks quite similar to the previous rendering code:</para>
+        <example>
+            <title>Rendering with Time</title>
+            <programlisting>void display()
+{
+    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
+    glClear(GL_COLOR_BUFFER_BIT);
+    
+    glUseProgram(theProgram);
+    
+    glUniform1f(elapsedTimeUniform, glutGet(GLUT_ELAPSED_TIME) / 1000.0f);
+    
+    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
+    glEnableVertexAttribArray(positionAttrib);
+    glVertexAttribPointer(positionAttrib, 4, GL_FLOAT, GL_FALSE, 0, 0);
+    
+    glDrawArrays(GL_TRIANGLES, 0, 3);
+    
+    glDisableVertexAttribArray(positionAttrib);
+    glUseProgram(0);
+    
+    glutSwapBuffers();
+    glutPostRedisplay();
+}</programlisting>
+        </example>
+        <para>This time, we don't need any code to compute the offset; we simply pass the elapsed
+            time, unmodified, to the shader.</para>
+        <para>You may be wondering exactly how it is that the <varname>loopDuration</varname>
+            uniform gets set. This is done in our shader initialization routine, and it is done only
+            once:</para>
+        <example>
+            <title>Loading Shaders from Files</title>
+            <programlisting>void InitializeProgram()
+{
+    std::vector&lt;GLuint> shaderList;
+    
+    shaderList.push_back(Framework::LoadShader(GL_VERTEX_SHADER, "calcOffset.vert"));
+    shaderList.push_back(Framework::LoadShader(GL_FRAGMENT_SHADER, "standard.frag"));
+    
+    theProgram = Framework::CreateProgram(shaderList);
+    
+    positionAttrib = glGetAttribLocation(theProgram, "position");
+    elapsedTimeUniform = glGetUniformLocation(theProgram, "time");
+    
+    GLuint loopDurationUnf = glGetUniformLocation(theProgram, "loopDuration");
+    glUseProgram(theProgram);
+    glUniform1f(loopDurationUnf, 5.0f);
+    glUseProgram(0);
+}</programlisting>
+        </example>
+        <para>We get the time uniform as normal with <function>glGetUniformLocation</function>. For
+            the loop duration, we store its location in a local variable. Then we immediately set the current
+            program object, set the uniform to a value, and then unset the current program
+            object.</para>
+        <para>Program objects, like all objects that contain internal state, will retain their state
+            unless you explicitly change it. So the value of <varname>loopDuration</varname> will be
+            5.0f in perpetuity; we do not need to set it every frame.</para>
+    </section>
+    <section>
+        <title>Multiple Shaders</title>
+        <para>Well, moving the triangle around is nice and all, but it would also be good if we
+            could do something time-based in the fragment shader. Fragment shaders cannot affect the
+            position of the object, but they can control its color. And this is what
+                <filename>fragChangeColor.cpp</filename> does.</para>
+        <para>The fragment shader in this tutorial is also loaded from the file
+                <filename>data\tut2d.frag</filename>:</para>
+        <example>
+            <title>Time-based Fragment Shader</title>
+            <programlisting>#version 150
+
+out vec4 outputColor;
+
+uniform float fragLoopDuration;
+uniform float time;
+
+const vec4 firstColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
+const vec4 secondColor = vec4(0.0f, 1.0f, 0.0f, 1.0f);
+
+void main()
+{
+    float currTime = mod(time, fragLoopDuration);
+    float currLerp = currTime / fragLoopDuration;
+    
+    outputColor = mix(firstColor, secondColor, currLerp);
+}</programlisting>
+        </example>
+        <para>This function is similar to the periodic loop in the vertex shader (which did not
+            change from the last time we saw it). Instead of using sin/cos functions to compute the
+            coordinates of a circle, it interpolates between two colors based on how far it is through
+            the loop. When it is at the start of the loop, the triangle will be
+                <varname>firstColor</varname>, and when it is at the end of the loop, it will be
+                <varname>secondColor</varname>.</para>
+        <para>The standard library function <function>mix</function> performs linear interpolation
+            between two values. Like many GLSL standard functions, it can take vector parameters; it
+            will perform component-wise operations on them. So each of the four components of the
+            two parameters will be linearly interpolated by the third parameter. That parameter,
+                <varname>currLerp</varname> in this case, is a value between 0 and 1. When it is 0,
+            the return value from <function>mix</function> will be the first parameter; when it is
+            1, the return value will be the second parameter.</para>
+        <para>Here is the program initialization code:</para>
+        <example>
+            <title>More Shader Creation</title>
+            <programlisting>void InitializeProgram()
+{
+    std::vector&lt;GLuint> shaderList;
+    
+    shaderList.push_back(Framework::LoadShader(GL_VERTEX_SHADER, "calcOffset.vert"));
+    shaderList.push_back(Framework::LoadShader(GL_FRAGMENT_SHADER, "calcColor.frag"));
+    
+    theProgram = Framework::CreateProgram(shaderList);
+
+    positionAttrib = glGetAttribLocation(theProgram, "position");
+    elapsedTimeUniform = glGetUniformLocation(theProgram, "time");
+    
+    GLuint loopDurationUnf = glGetUniformLocation(theProgram, "loopDuration");
+    GLuint fragLoopDurUnf = glGetUniformLocation(theProgram, "fragLoopDuration");
+    
+    
+    glUseProgram(theProgram);
+    glUniform1f(loopDurationUnf, 5.0f);
+    glUniform1f(fragLoopDurUnf, 10.0f);
+    glUseProgram(0);
+}</programlisting>
+        </example>
+        <para>As before, we get the uniform locations for <varname>time</varname> and
+                <varname>loopDuration</varname>, as well as the new
+                <varname>fragLoopDuration</varname>. We then set the two loop durations for the
+            program.</para>
+        <para>You may be wondering how the <varname>time</varname> uniform for the vertex shader and
+            the fragment shader gets set. One of the advantages of the GLSL compilation model, which
+            links vertex and fragment shaders together into a single object, is that uniforms with the
+            same name and type are merged into a single uniform. So there is only one uniform location for
+                <varname>time</varname>, and it refers to the uniform in both shaders.</para>
+        <para>The downside of this is that, if you create one uniform in one shader that has the
+            same name as a uniform in a different shader, but a different <emphasis>type</emphasis>,
+            OpenGL will give you a linker error and fail to generate a program. Also, it is possible
+            to accidentally link two uniforms into one. In the tutorial, the fragment shader's loop
+            duration had to be given a different name, or else the two shaders would have shared the
+            same loop duration.</para>
+        <para>In any case, because of this, the rendering code is unchanged. The time uniform is
+            updated each frame with FreeGLUT's elapsed time.</para>
+        <formalpara>
+            <title>Globals in shaders</title>
+            <para>Variables at global scope in GLSL can be defined with certain storage qualifiers:
+                    <literal>const</literal>, <literal>uniform</literal>, <literal>in</literal>, and
+                    <literal>out</literal>. A <literal>const</literal> value works like it does in
+                C99 and C++: the value doesn't change, period. It must have an initializer. An
+                unqualified variable works like one would expect in C/C++; it is a global value that
+                can be changed. GLSL shaders can call functions, and globals can be shared between
+                functions.</para>
+        </formalpara>
+    </section>
+    <section>
+        <title>Vertex Shader Performance</title>
+        <para>These tutorials are simple and should run fast enough, but it is still important to
+            look at the performance implications of various operations. In this tutorial, we present
+            3 ways of moving vertex data: transform it yourself on the CPU and upload it to buffer
+            objects, generate transform parameters on the CPU and have the vertex shader use them to
+            do the transform, and put as much as possible in the vertex shader and only have the CPU
+            provide the most basic parameters. Which is the best to use?</para>
+        <para>This is not an easy question to answer. However, it is almost always the case that CPU
+            transformations will be slower than doing it on the GPU. The only time it won't be is if
+            you need to do the exact same transformations many times within the same frame. And even
+            then, it is better to do the transformations once on the GPU and save the result of that
+            in a buffer object that you will pull from later. This is called transform feedback, and
+            it will be covered in a later tutorial.</para>
+        <para>Between the other two methods, which is better really depends on the specific case.
+            Take our example. In one case, we compute the offset on the CPU and pass it to the GPU.
+            The GPU applies the offset to each vertex position. In the other case, we simply provide
+            a time parameter, and for every vertex, the GPU must compute the <emphasis>exact
+                same</emphasis> offset. This means that the vertex shader is doing a lot of work
+            that all comes out to the same number.</para>
+        <para>Even so, that doesn't mean it's always slower. What matters is the overhead of
+            changing data. Changing a uniform takes time; changing a vector uniform typically takes
+            no more time than changing a single float, due to the way that many cards handle
+            floating-point math. The real question is how the cost of doing more complex operations
+            in the vertex shader compares with the cost of computing those values on the CPU and
+            updating the uniforms more often.</para>
+        <para>The second vertex shader we use, the one that computes the offset itself, does a lot
+            of complex math. Sine and cosine values are <emphasis>not</emphasis> fast to compute.
+            They require quite a few computations to calculate. And since the offset itself doesn't
+            change for each vertex, it would be best to compute the offset on the CPU and pass the
+            offset as a uniform value.</para>
+        <para>And typically, that is how rendering is done. Vertex shaders are given transformation
+            values that are pre-computed on the CPU. But this is neither the only way nor
+            necessarily the best way. In some cases, it is useful to compute the offsets via
+            parameterized functions in the vertex shader.</para>
+        <para>This is best done when vertex shader inputs are abstracted away. That is, rather than
+            passing a position, the user passes more general information, and the shader generates
+            the position at a particular time or some other parameter. This can be done for particle
+            systems based on forces; the vertex shader executes the force functions based on time,
+            and is thus able to compute the location of the particle at an arbitrary time.</para>
+    </section>
+    <section>
+        <title>In Review</title>
+        <para>In this tutorial, you have learned about uniform variables in shaders. Uniforms are
+            shader variables that change, not with every shader invocation, but between rendering
+            calls. Uniform values are parameters set by the user to control the behavior of the
+            shader. Setting them requires querying a uniform location as well as setting the program
+            to be in use. Uniform state is stored within a program object and preserved until
+            explicitly changed. A uniform that has the same name and type in two different shader
+            stages within the same linked program is the same uniform; setting it will change it for
+            both stages.</para>
+        <para>You have also learned a little about how to update the contents of a buffer object,
+            though we are <emphasis>far</emphasis> from finished with that subject.</para>
+        <section>
+            <title>Further Study</title>
+            <para>There are several things you can test to see what happens with these
+                tutorials.</para>
+            <itemizedlist>
+                <listitem>
+                    <para>With <filename>tut2c.cpp</filename>, change it so that it draws two
+                        triangles moving in a circle, with one a half
+                            <varname>loopDuration</varname> ahead of the other. Simply change the
+                        uniforms after the <function>glDrawArrays</function> call and then make the
+                            <function>glDrawArrays</function> call again. Add half of the loop
+                        duration to the time before setting it the second time.</para>
+                </listitem>
+                <listitem>
+                    <para>In <filename>tut2d.cpp</filename>, change it so that the fragment program
+                        bounces between <varname>firstColor</varname> and
+                            <varname>secondColor</varname>, rather than popping from
+                            <varname>secondColor</varname> back to first at the end of a loop. The
+                        first-to-second-to-first transition should all happen within a single
+                            <varname>fragLoopDuration</varname> time interval. In case you are
+                        wondering, GLSL supports the <literal>if</literal> statement, as well as the
+                        ?: operator. For bonus points, however, do it without an explicit conditional
+                        statement; feel free to use a sin or cos function to do this.
+                </listitem>
+            </itemizedlist>
+        </section>
+        <section>
+            <title>Functions of Note</title>
+            <glosslist>
+                <glossentry>
+                    <glossterm>glBufferSubData</glossterm>
+                    <glossdef>
+                        <para>This function copies memory from the user's memory address into a
+                            buffer object. This function takes a byte offset into the buffer object
+                            to begin copying, as well as a number of bytes to copy.</para>
+                        <para>When this function returns control to the user, you are free to
+                            immediately deallocate the memory you owned. So you can allocate and
+                            fill a piece of memory, call this function, and immediately free that
+                            memory with no hazardous side effects. OpenGL will not store the pointer
+                            or make use of it later.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glGetUniformLocation</glossterm>
+                    <glossdef>
+                        <para>This function retrieves the location of a uniform of the given name
+                            from the given program object. If that uniform does not exist or wasn't
+                            considered in use by GLSL, then this function returns -1, which is not a
+                            valid uniform location.</para>
+                    </glossdef>
+                </glossentry>
+                <glossentry>
+                    <glossterm>glUniform*</glossterm>
+                    <glossdef>
+                        <para>Sets the given uniform in the program currently in use (set by
+                                <function>glUseProgram</function>) to the given value. This is not
+                            merely one function, but an entire suite of functions that take
+                            different types.</para>
+                    </glossdef>
+                </glossentry>
+            </glosslist>
+        </section>
+    </section>
+    <glossary>
+        <title>Glossary</title>
+        <glossentry>
+            <glossterm>Uniforms</glossterm>
+            <glossdef>
+                <para>These are a class of global variable that can be defined in GLSL shaders. They
+                    represent values that are uniform (unchanging) over the course of a single
+                    rendering operation.</para>
+            </glossdef>
+        </glossentry>
+    </glossary>
+</chapter>

Documents/Tutorial 00/Core Graphics.xml

-<?xml version="1.0" encoding="UTF-8"?>
-<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
-<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
-<article xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
-    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="core_graphics">
-</article>

Documents/Tutorial 00/Tutorial 0.xml

-<?xml version="1.0" encoding="UTF-8"?>
-<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
-<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
-<chapter xml:id="tut_00" xmlns="http://docbook.org/ns/docbook"
-    xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"
-    version="5.0">
-    <title>Introduction</title>
-    <para>Unlike most of the tutorials, this tutorial is purely text. There is no source code or
-        project associated with this tutorial.</para>
-    <para>Here, we will be discussing graphical rendering theory and OpenGL. This serves as a primer
-        to the rest of the tutorials.</para>
-    <section>
-        <title>Graphics and Rendering</title>
-        <para>These tutorials are intended for users with any amount of graphics knowledge, or none
-            at all. As such, there is some basic background information you need to know before we
-            can start looking at actual OpenGL code.</para>
-        <para>Everything you see on your screen, even the text you are reading right now (assuming
-            you are reading this on an electronic display device, rather than a printout) is simply
-            a two-dimensional array of pixels. If you take a screenshot of something on your screen,
-            and blow it up, it will look very blocky.</para>
-        <!--TODO: Add an image of blocky pixels here.-->
-        <para>Each of these blocks is a <glossterm>pixel</glossterm>. The word <quote>pixel</quote>
-            is derived from the term <quote><acronym>Pic</acronym>ture
-                <acronym>El</acronym>ement</quote>. Every pixel on your screen has a particular
-            color. A two-dimensional array of pixels is called an
-            <glossterm>image</glossterm>.</para>
-        <para>The purpose of graphics of any kind is therefore to determine what color to put in
-            what pixels. This determination is what makes text look like text, windows look like
-            windows, and so forth.</para>
-        <para>Since all graphics are just a two-dimensional array of pixels, how does 3D work? 3D
-            graphics is a system of producing colors for pixels that convinces you that the
-            scene you are looking at is a 3D world rather than a 2D image. The process of converting
-            a 3D world into a 2D image of that world is called
-            <glossterm>rendering.</glossterm></para>
-        <para>There are several methods of rendering a 3D world. The process used by real-time
-            graphics hardware, such as that found in your computer, involves a great deal of
-            fakery. This process is called <glossterm>rasterization,</glossterm> and a rendering
-            system that uses rasterization is called a <glossterm>rasterizer.</glossterm></para>
-        <para>In rasterizers, all objects that you see are empty shells. There are techniques that
-            are used to allow you to cut open these empty shells, but this simply replaces part of
-            the shell with another shell that shows what the inside looks like. Everything is a
-            shell.</para>
-        <para>All of these shells are made of triangles. Even surfaces that appear to be round are
-            merely triangles if you look closely enough. There are techniques that generate more
-            triangles for objects that appear closer or larger, so that the viewer can almost never
-            see the faceted silhouette of the object. But they are always made of triangles.</para>
-        <note>
-            <para>Some rasterizers use planar quadrilaterals: four-sided objects, where all of the
-                lines lie in the same plane. One of the reasons that graphics hardware always uses
-                triangles is that all of the lines of triangles are guaranteed to be in the same
-                plane.</para>
-        </note>
-        <para>Objects made from triangles are often called <glossterm>geometry</glossterm>, a
-                <glossterm>model</glossterm> or a <glossterm>mesh</glossterm>; these terms are used
-            interchangeably.</para>
-        <para>The process of rasterization has several phases. These phases are ordered into a
-            pipeline, where triangles enter from the top and a 2D image is filled in at the bottom.
-            This is one of the reasons why rasterization is so amenable to hardware acceleration: it
-            operates on each triangle one at a time, in a specific order.</para>
-        <para>This also means that the order in which meshes are submitted to the rasterizer can
-            affect its output.</para>
-        <para>OpenGL is an API for accessing a hardware-based rasterizer. As such, it conforms to
-            the model for rasterizers. A rasterizer receives a sequence of triangles from the user,
-            performs operations on them, and writes pixels based on this triangle data. This is a
-            simplification of how rasterization works in OpenGL, but it is useful for our
-            purposes.</para>
-        <formalpara>
-            <title>Triangles and Vertices</title>
-            <para>Triangles consist of 3 vertices. A vertex consists of a collection of data. For
-                the sake of simplicity (we will expand upon this later), let us say that this data
-                must contain a point in three dimensional space. Any 3 points that are not on the
-                same line create a triangle, so the smallest information for a triangle consists of
-                3 three-dimensional points.</para>
-        </formalpara>
-        <para>A point in 3D space is defined by 3 numbers or coordinates: an X coordinate, a Y
-            coordinate, and a Z coordinate. These are commonly written with parentheses, as in (X,
-            Y, Z).</para>
-        <section>
-            <title>Rasterization Overview</title>
-            <para>The rasterization pipeline, particularly for modern hardware, is very complex.
-                This is a very simplified overview of this pipeline. It is necessary to have a
-                simple understanding of the pipeline before we look at the details of rendering
-                things with OpenGL. Those details can be overwhelming without a high level
-                overview.</para>
-            <formalpara>
-                <title>Clip Space Transformation</title>
-                <para>The first phase of rasterization is to transform the vertices of each triangle
-                    into a certain volume of space. Everything within this volume will be rendered
-                    to the output image, and everything that falls outside of this region will not
-                    be. This region corresponds to the view of the world that the user wants to
-                    render, to some degree.</para>
-            </formalpara>
-            <para>The volume that the triangle is transformed into is called, in OpenGL parlance,
-                    <glossterm>clip space</glossterm>. The positions of the triangle's vertices in
-                clip space are called <glossterm>clip coordinates.</glossterm></para>
-            <para>Clip coordinates are a little different from regular positions. A position in 3D
-                space has 3 coordinates. A position in clip space has <emphasis>four</emphasis>
-                coordinates. The first three are the usual X, Y, Z positions; the fourth is called
-                W. This last coordinate actually defines what the extents of clip space are for
-                this vertex.</para>
-            <para>Clip space can actually be different for different vertices. It is a region of 3D
-                space on the range [-W, W] in each of the X, Y, and Z directions. So vertices with a
-                different W coordinate are in a different clip space cube from other vertices. Since
-                each vertex can have an independent W component, each vertex of a triangle exists in
-                its own clip space.</para>
-            <para>In clip space, the positive X direction is to the right, the positive Y direction
-                is up, and the positive Z direction is away from the viewer.</para>
-            <!--TODO: Add an image of clip space here.-->
-            <para>The process of transforming vertices into clip space is quite arbitrary. OpenGL
-                provides a lot of flexibility in this step. We will cover this step in detail
-                throughout the tutorials.</para>
-            <para>Because clip space is the visible transformed version of the world, any triangles
-                that fall outside of this region are discarded. Any triangles that are partially
-                outside of this region undergo a process called <glossterm>clipping.</glossterm>
-                This breaks the triangle apart into a number of smaller triangles that, together,
-                cover only the portion of the original triangle lying within clip space. Hence the
-                name <quote>clip space.</quote></para>
-            <formalpara>
-                <title>Normalized Coordinates</title>
-                <para>Clip space is interesting, but inconvenient. The extent of this space is
-                    different for each vertex, which makes visualizing a triangle rather difficult.
-                    Therefore, clip space is transformed into a more reasonable coordinate space:
-                        <glossterm>normalized device coordinates</glossterm>.</para>
-            </formalpara>
-            <para>This process is very simple. The X, Y, and Z of each vertex's position is divided
-                by W to get normalized device coordinates. That is all.</para>
-            <para>Therefore, the space of normalized device coordinates is essentially just clip
-                space, except that the ranges of X, Y, and Z are [-1, 1]. The directions are all the
-                same. The division by W is an important part of projecting 3D triangles onto 2D
-                images, but we will cover that in a future tutorial.</para>
-            <formalpara>
-                <title>Window Transformation</title>
-                <para>The next phase of rasterization is to transform the vertices of each triangle
-                    again. This time, they are converted from normalized device coordinates to
-                        <glossterm>window coordinates</glossterm>. As the name suggests, window
-                    coordinates are relative to the window that OpenGL is running within.</para>
-            </formalpara>
-            <para>Even though they refer to the window, they are still three dimensional
-                coordinates. The X goes to the right, Y goes up, and Z goes away, just as for clip
-                space. The only difference is that the bounds for these coordinates depend on the
-                viewable window. It should also be noted that while these are in window coordinates,
-                none of the precision is lost. These are not integer coordinates; they are still
-                floating-point values, and thus they have precision beyond that of a single
-                pixel.</para>
-            <para>The bounds for Z are [0, 1], with 0 being the closest and 1 being the farthest.
-                Vertex positions outside of this range are not visible.</para>
-            <para>Note that window coordinates place the (0, 0) origin at the bottom-left corner.
-                This runs counter to the convention most users are familiar with, where the
-                top-left corner is the origin. There are transform tricks you can play to allow
-                you to work in a top-left coordinate space.</para>
-            <para>The full details of this process will be discussed at length as the tutorials
-                progress.</para>
-            <formalpara>
-                <title>Scan Conversion</title>
-                <para>After converting the coordinates of a triangle to window coordinates, the
-                    triangle undergoes a process called <glossterm>scan conversion</glossterm>.
-                    This process breaks the triangle up according to the arrangement of window
-                    pixels that it covers in the output image.</para>
-            </formalpara>
-            <!--TODO: Show a series of images, starting with a triangle, then overlying it with a pixel grid, followed by one showing
-which pixels get filled in.-->
-            <para>The specifics of which pixels a triangle covers and which it does not are not
-                important. What matters is that if two triangles are perfectly adjacent, such that
-                they share the same input vertex positions along an edge, the rasterized output
-                will have neither holes nor double coverage along that shared edge.</para>
-            <para>The result of scan converting a triangle is a sequence of boxes along the area of
-                the triangle. These boxes are called <glossterm>fragments</glossterm>.</para>
-            <para>Each fragment has certain data associated with it. This data contains the 2D
-                location of the fragment in window coordinates, and the Z value of the fragment.
-                This Z value is known as the depth of the fragment. There may be other information
-                that is part of a fragment, and we will expand on that in later tutorials.</para>
-            <formalpara>
-                <title>Fragment Processing</title>
-                <para>This phase takes a fragment from a scan converted triangle and transforms it
-                    into one or more color values and a single depth value. The order in which
-                    fragments from a single triangle are processed is irrelevant; since a single
-                    triangle lies in a single plane, fragments generated from it cannot possibly
-                    overlap. However, fragments from different triangles can overlap. Since order
-                    is important in a rasterizer, the fragments from one triangle must all be
-                    processed before the fragments from another triangle.</para>
-            </formalpara>
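The ordering rule above can be sketched as a loop: every fragment of one triangle is processed before any fragment of the next, while the order within a single triangle is left unconstrained. The data layout and the `shade` callback are invented for illustration:

```python
def process_fragments(triangles, shade):
    """Process fragments triangle by triangle, preserving submission order.

    Each triangle is a dict holding its scan-converted fragments; shade(frag)
    stands in for whatever computes the fragment's color values.
    """
    results = []
    for tri in triangles:
        for frag in tri["fragments"]:  # order within one triangle is free
            color = shade(frag)        # one or more color values
            results.append((frag["x"], frag["y"], frag["z"], color))
    return results
```

If two triangles produce fragments at the same window position, the second triangle's fragment always comes out later, which is exactly the ordering the rasterizer must preserve.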