Jason McKesson / gltut

Commits

Jason McKesson  committed 14e92df Merge

Merge

  • Participants
  • Parent commits 3e4cfd0, bcfec25
  • Branches SVG Fix

Comments (0)

Files changed (198)

File Documents/Basics/Tutorial 00.xml

             useful for our purposes.</para>
         <formalpara>
             <title>Triangles and Vertices</title>
-            <para>Triangles consist of 3 vertices. A <glossterm>vertex</glossterm> is of a
+            <para>Triangles consist of 3 vertices. A <glossterm>vertex</glossterm> is a
                 collection of arbitrary data. For the sake of simplicity (we will expand upon this
                 later), let us say that this data must contain a point in three dimensional space.
                 It may contain other data, but it must have at least this. Any 3 points that are not
                 <glossterm>colorspace</glossterm>
                 <glossdef>
                     <para>The set of reference colors that define a way of representing a color in
-                        computer graphics. All colors are defined relative to a particular
+                        computer graphics, and the function mapping between those reference colors
+                        and the actual colors. All colors are defined relative to a particular
                         colorspace.</para>
                 </glossdef>
             </glossentry>

File Documents/Basics/Tutorial 01.xml

         <para>An OpenGL shader object is, as the name suggests, an object. So the first step is to
             create the object with <function>glCreateShader</function>. This function creates a
             shader of a particular type (vertex or fragment), so it takes a parameter that tells
-            what kind of object it creates. Since each state has certain syntax rules and
-            pre-defined variables and constants, the </para>
+            what kind of object it creates. Since each shader stage has certain syntax rules and
+            pre-defined variables and constants (thus making different shader stages different
+            dialects of GLSL), the compiler must be told what shader stage is being compiled.</para>
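+        <para>As a rough sketch of how the first few calls fit together (error checking
+            omitted), creating and compiling a fragment shader might look like the following,
+            where <varname>fragmentSource</varname> is simply a hypothetical string holding the
+            shader's text:</para>
+        <programlisting language="cpp">//Illustrative sketch only; fragmentSource is a hypothetical null-terminated
+//string containing the shader's GLSL source.
+GLuint shaderObj = glCreateShader(GL_FRAGMENT_SHADER);
+glShaderSource(shaderObj, 1, &amp;fragmentSource, NULL);
+glCompileShader(shaderObj);</programlisting>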
         <note>
-            <para>Shader and program objects are objects in OpenGL. But they were rather differently
+            <para>Shader and program objects are objects in OpenGL. But they work rather differently
                 from other kinds of OpenGL objects. For example, creating buffer objects, as shown
                 above, uses a function of the form <quote>glGen*</quote> where * is
                     <quote>Buffer</quote>. It takes a number of objects to create and a list to put

File Documents/Illumination/Blinn vs Phong Compare.png

Old
Old image
New
New image

File Documents/Illumination/DarkDayVsNight.png

Added
New image

File Documents/Illumination/Gamma Compare.png

Added
New image

File Documents/Illumination/Gamma Correction.png

Added
New image

File Documents/Illumination/Gamma Lighting.png

Added
New image

File Documents/Illumination/GammaCorrectFunc.mathml

+<?xml version="1.0" encoding="UTF-8"?>
+
+<math xmlns="http://www.w3.org/1998/Math/MathML">
+  <mrow>
+   <mi mathvariant="italic">GammaRGB</mi>
+   <mo>=</mo>
+   <msup>
+    <mi mathvariant="italic">LinearRGB</mi>
+		<mfrac>
+			<mrow>
+				<mn>1</mn>
+			</mrow>
+			<mrow>
+				<mi>γ</mi>
+			</mrow>
+		</mfrac>
+   </msup>
+  </mrow>
+</math>

File Documents/Illumination/GammaCorrectFunc.svg

Added
New image

File Documents/Illumination/GammaFunc.mathml

+<?xml version="1.0" encoding="UTF-8"?>
+
+<math xmlns="http://www.w3.org/1998/Math/MathML">
+  <mrow>
+   <mi mathvariant="italic">LinearRGB</mi>
+   <mo stretchy="false">∝</mo>
+   <msup>
+    <mi mathvariant="italic">Voltage</mi>
+	<mi>γ</mi>
+   </msup>
+  </mrow>
+</math>

File Documents/Illumination/GammaFunc.svg

Added
New image

File Documents/Illumination/GammaFunctionGraph.svg

Added
New image

File Documents/Illumination/GenGammaFunctionGraph.lua

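+--Generates the gamma-function graph SVG: the CRT gamma curve (x^2.2), the
+--gamma correction curve (x^(1/2.2)), and the linear response for reference.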
+require "SvgWriter"
+require "vmath"
+require "Viewport"
+require "SubImage"
+require "GridAxis"
+require "_utils"
+
+local imageSize = vmath.vector{600, 600};
+local subImages = SubImage.SubImage({1, 1}, imageSize, {100, 0});
+
+local coordSize = 1;
+
+--Top, right, bottom, left
+local margins = {20, 20, 50, 50}
+
+local viewportSize = imageSize -
+	vmath.vector{margins[2] + margins[4], margins[1] + margins[3]};
+
+local vp = Viewport.Viewport(viewportSize, {0.5, 0.5}, coordSize, {margins[4], margins[1]})
+local trans2 = Viewport.Transform2D()
+vp:SetTransform(trans2);
+
+local styleLib = SvgWriter.StyleLibrary();
+
+local bkgColor = "whitesmoke";
+local gridLineClr = "lightgrey";
+local gridBorderClr = "grey";
+local gridAxisClr = "black";
+local gridHashClr = "black";
+
+local linearRGBClr = "grey";
+local gammaClr = "red";
+local arrowColor = "darksalmon";
+local curveLabelColor = "darkred";
+
+local axisLabelSize = 15;
+local arrowLabelSize = 20;
+local curveLabelSize = 30;
+
+styleLib:AddStyle(nil, "grid-line", SvgWriter.Style{
+	stroke=gridLineClr, fill="none"});
+styleLib:AddStyle(nil, "grid-background", SvgWriter.Style{
+	stroke="none", fill=bkgColor});
+styleLib:AddStyle(nil, "grid-border", SvgWriter.Style{
+	stroke=gridBorderClr, fill="none"});
+styleLib:AddStyle(nil, "grid-axis", SvgWriter.Style{
+	stroke=gridAxisClr, fill="none", stroke_width="2px"});
+styleLib:AddStyle(nil, "grid-axis-hash", SvgWriter.Style{
+	stroke=gridHashClr, fill="none", stroke_width="0.5px"});
+	
+styleLib:AddStyle(nil, "grid-axis-label", SvgWriter.Style{
+	stroke="none", fill="black", font_family="sans-serif", font_size=tostring(axisLabelSize)});
+styleLib:AddStyle(nil, "grid-axis-label-x", SvgWriter.Style{
+	text_anchor="middle"});
+styleLib:AddStyle(nil, "grid-axis-label-y", SvgWriter.Style{
+	text_anchor="end"});
+styleLib:AddStyle(nil, "grid-axis-hash", SvgWriter.Style{
+	text_anchor="end"});
+	
+styleLib:AddStyle(nil, "linear-rgb",
+	SvgWriter.Style{stroke=linearRGBClr, fill="none"});
+styleLib:AddStyle(nil, "gamma-func",
+	SvgWriter.Style{stroke=gammaClr, fill="none", stroke_width="4px",
+		clip_path=SvgWriter.uriLocalElement("graph-clip")});
+styleLib:AddStyle(nil, "gamma-correct-dash",
+	SvgWriter.Style{stroke_dasharray={21, 6}});
+
+styleLib:AddStyle(nil, "pointer-arrow",
+	SvgWriter.Style{stroke=arrowColor, fill="none",
+		stroke_width="3px",
+		marker_start=SvgWriter.uriLocalElement("arrow-rear"),
+		marker_end=SvgWriter.uriLocalElement("arrow-head")});
+styleLib:AddStyle(nil, "pointer-arrow-dash",
+	SvgWriter.Style{stroke_dasharray={14, 5}});
+styleLib:AddStyle(nil, "pointer-arrow-label",
+	SvgWriter.Style{stroke="none", fill=arrowColor,
+		font_family="sans-serif", font_weight="bold",
+		font_size=tostring(arrowLabelSize).."px", text_anchor="end"});
+styleLib:AddStyle(nil, "pointer-arrow-marker",
+	SvgWriter.Style{stroke="none", fill=arrowColor});
+	
+styleLib:AddStyle(nil, "curve-label",
+	SvgWriter.Style{stroke="none", fill=curveLabelColor,
+		font_family="sans-serif", font_weight="bold",
+		font_size=tostring(curveLabelSize).."px"});
+styleLib:AddStyle(nil, "text-left-justified",
+	SvgWriter.Style{text_anchor="start"});
+styleLib:AddStyle(nil, "text-right-justified",
+	SvgWriter.Style{text_anchor="end"});
+
+	
+
+	
+function DrawLines(writer, lineList, ...)
+	local path = SvgWriter.Path();
+	
+	for i = 1, #lineList, 2 do
+		path:M(lineList[i]):L(lineList[i+1]);
+	end
+	
+	writer:Path(path, ...);
+end
+
+local graphBox =
+{
+	vmath.vec2(0, 0),
+	vmath.vec2(1, 0),
+	vmath.vec2(0, 1),
+	vmath.vec2(1, 1),
+};
+
+graphBox = vp:Transform(graphBox);
+
+local gridLines = {}
+
+for i = 1, 9 do
+	local yValue = i / 10;
+	gridLines[#gridLines + 1] = vmath.vec2(0, yValue);
+	gridLines[#gridLines + 1] = vmath.vec2(1, yValue);
+end
+
+gridLines[#gridLines + 1] = vmath.vec2(0.5, 0);
+gridLines[#gridLines + 1] = vmath.vec2(0.5, 1);
+
+gridLines = vp:Transform(gridLines);
+
+local borderLines =
+{
+	vmath.vec2(0, 1),
+	vmath.vec2(1, 1),
+	vmath.vec2(1, 1),
+	vmath.vec2(1, 0),
+}
+
+borderLines = vp:Transform(borderLines);
+
+local axisLines =
+{
+	vmath.vec2(0, 1),
+	vmath.vec2(0, 0),
+	vmath.vec2(0, 0),
+	vmath.vec2(1, 0),
+}
+
+axisLines = vp:Transform(axisLines);
+
+axisLabels = 
+{
+	"0",
+	"0.1",
+	"0.2",
+	"0.3",
+	"0.4",
+	"0.5",
+	"0.6",
+	"0.7",
+	"0.8",
+	"0.9",
+	"1",
+}
+
+local pxAxisLabelOffset =
+{
+	vmath.vec2(0, axisLabelSize + 5),
+	vmath.vec2(-5, (axisLabelSize / 2) - 2),
+};
+
+local hashSize = 5;
+
+
+local linearLines = {vmath.vec2(0, 0), vmath.vec2(1, 1)};
+linearLines = vp:Transform(linearLines);
+
+local maxDistance = 0.05;
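+--Recursively append line segments approximating the curve y = x^gamma to 'path',
+--splitting each step until the segment is shorter than maxDistance.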
+function CalcPath(currXVal, nextXVal, path, gamma)
+	if(nextXVal > 1.0) then
+		nextXVal = 1.0;
+	end
+	local currVal = vmath.vec2(currXVal, currXVal ^ gamma);
+	local nextVal = vmath.vec2(nextXVal, nextXVal ^ gamma);
+	local len = vmath.length(currVal - nextVal);
+	if(len < maxDistance) then
+		path:L(vp:Transform(nextVal));
+		print(nextVal, len, maxDistance);
+		if(nextXVal == 1.0) then
+			return;
+		else
+			return CalcPath(nextXVal, nextXVal + 0.1, path, gamma); --tail call
+		end
+	else
+--		print("here!");
+		return CalcPath(currXVal, currXVal + (nextXVal - currXVal)/2, path, gamma); --tail call
+	end
+end
+
+local gammaFuncPath = SvgWriter.Path()
+local gammaCorrectPath = SvgWriter.Path()
+
+gammaCorrectPath:M(vp:Transform(vmath.vec2(0, 0)));
+CalcPath(0.0, 0.1, gammaCorrectPath, 0.454545);
+
+gammaFuncPath:M(vp:Transform(vmath.vec2(0, 0)));
+CalcPath(0.0, 0.1, gammaFuncPath, 2.2);
+
+--[[
+local numLineSegments = 150;
+for i = 1, numLineSegments do
+	local xVal = (i - 1) / (numLineSegments - 1);
+	local gammaFunc = vmath.vec2(xVal, xVal ^ 2.2);
+	local gammaCorrect = vmath.vec2(xVal, xVal ^ 0.454545);
+	gammaFunc = vp:Transform(gammaFunc);
+	gammaCorrect = vp:Transform(gammaCorrect);
+	if(i == 1) then
+		gammaFuncPath:M(gammaFunc);
+		gammaCorrectPath:M(gammaCorrect);
+	else
+		gammaFuncPath:L(gammaFunc);
+		gammaCorrectPath:L(gammaCorrect);
+	end
+end
+]]
+
+local curveLabels =
+{
+	{{"gamma", "correction", "1/2.2"}, vmath.vec2(0.325, 0.77), vmath.vec2(0, 0), {"curve-label", "text-right-justified"}},
+	{{"CRT", "gamma", "2.2"}, vmath.vec2(0.7, 0.375), vmath.vec2(0, 0), {"curve-label", "text-left-justified"}},
+}
+
+for i, curveLabel in ipairs(curveLabels) do
+	curveLabel[2] = vp:Transform(curveLabel[2]);
+end
+
+local gammaHalf = 0.5 ^ 2.2;
+local pointers =
+{
+	vmath.vec2(0.5, 0.5);
+	vmath.vec2(0.5, gammaHalf);
+	vmath.vec2(gammaHalf, gammaHalf);
+	vmath.vec2(gammaHalf, 0.5);
+};
+
+pointers = vp:Transform(pointers);
+
+local pointerLabels =
+{
+	{"0.5",   vmath.vec2(pointers[1]), vmath.vec2(-15, 0)},
+	{"0.218", vmath.vec2(pointers[2]), vmath.vec2(-15, 0)},
+	{"0.218", vmath.vec2(pointers[3]), vmath.vec2(-15, 0)},
+	{"0.5",   vmath.vec2(pointers[4]), vmath.vec2(-15, 0)},
+}
+
+local sphereRadius = 7;
+local pointerOffset = 4;
+
+pointers[2][2] = pointers[2][2] - (sphereRadius + pointerOffset);
+pointers[4][2] = pointers[4][2] + (sphereRadius + pointerOffset);
+
+local writer = SvgWriter.SvgWriter(ConstructSVGName(arg[0]), {subImages:Size().x .."px", subImages:Size().y .. "px"});
+	writer:StyleLibrary(styleLib);
+	writer:BeginDefinitions();
+		writer:BeginClipPath(nil, "graph-clip");
+			writer:Rect2Pt(graphBox[1], graphBox[4], nil, nil);
+		writer:EndClipPath();
+		writer:BeginMarker({sphereRadius*2, sphereRadius*2},
+			{sphereRadius, sphereRadius}, nil, true, nil, "arrow-rear");
+			writer:Circle({sphereRadius, sphereRadius}, sphereRadius,
+				{"pointer-arrow-marker"});
+		writer:EndMarker();
+		writer:BeginMarker({sphereRadius*3, sphereRadius*2},
+			{sphereRadius*2, sphereRadius}, "auto", true, nil, "arrow-head");
+			writer:Polygon({
+					{sphereRadius*3, sphereRadius},
+					{0, 0},
+					{5, sphereRadius},
+					{0, sphereRadius*2}},
+				{"pointer-arrow-marker"});
+		writer:EndMarker();
+	writer:EndDefinitions();
+
+	--Draw background.
+	writer:Rect2Pt(graphBox[1], graphBox[4], nil, {"grid-background"});
+	DrawLines(writer, gridLines, {"grid-line"});
+	
+	--Draw the functions.
+	DrawLines(writer, linearLines, {"linear-rgb"});
+	writer:Path(gammaFuncPath, {"gamma-func"});
+	writer:Path(gammaCorrectPath, {"gamma-func", "gamma-correct-dash"});
+	
+	--Draw the curve labels
+	for i, label in ipairs(curveLabels) do
+		writer:TextMultiline(label[1], label[2], curveLabelSize, label[4]);
+	end
+
+	--Draw the pointers.
+	writer:Line(pointers[1], pointers[2], {"pointer-arrow"});
+	writer:Line(pointers[3], pointers[4], {"pointer-arrow", "pointer-arrow-dash"});
+	
+	--Draw pointer labels.
+	for i, label in ipairs(pointerLabels) do
+		writer:Text(label[1], label[2] + label[3], {"pointer-arrow-label"});
+	end
+	
+	--Draw the X-axis labels and hashes.
+	local hashes = {};
+	for i = 1, #axisLabels do
+		local currValue = (i - 1) / 10;
+		local currLoc = vp:Transform(vmath.vec2(currValue, 0));
+		hashes[#hashes + 1] = vmath.vec2(currLoc); --copy to preserve
+		currLoc[2] = currLoc[2] + hashSize;
+		hashes[#hashes + 1] = currLoc;
+		writer:Text(axisLabels[i], currLoc + pxAxisLabelOffset[1],
+			{"grid-axis-label", "grid-axis-label-x"});
+	end
+
+	--Draw the Y-axis labels and hashes.
+	for i = 1, #axisLabels do
+		local currValue = (i - 1) / 10;
+		local currLoc = vp:Transform(vmath.vec2(0, currValue));
+		hashes[#hashes + 1] = vmath.vec2(currLoc); --copy to preserve
+		currLoc[1] = currLoc[1] - hashSize;
+		hashes[#hashes + 1] = currLoc;
+		writer:Text(axisLabels[i], currLoc + pxAxisLabelOffset[2],
+			{"grid-axis-label", "grid-axis-label-y"});
+	end
+
+	DrawLines(writer, hashes, {"grid-axis-hash"});
+
+	--Draw border.
+	DrawLines(writer, borderLines, {"grid-border"});
+	DrawLines(writer, axisLines, {"grid-axis"});
+	
+writer:Close();

File Documents/Illumination/HDR Lighting.png

Added
New image

File Documents/Illumination/Light Clipping.png

Added
New image

File Documents/Illumination/Scene Lighting.png

Added
New image

File Documents/Illumination/Tutorial 09.xml

         <para>A surface looks blue under white light because the surface absorbs all non-blue parts
             of the light and only reflects the blue parts. If one were to shine a red light on the
             surface, the surface would appear very dark, as the surface absorbs non-blue light, and
-            the red light doesn't have much non-blue light in it.</para>
+            the red light doesn't have much blue light in it.</para>
         <figure>
             <title>Surface Light Absorption</title>
             <mediaobject>

File Documents/Illumination/Tutorial 10.xml

                 </imageobject>
             </mediaobject>
         </informalequation>
-        <para>However physically correct this equation is, it has certain drawbacks. And this brings
-            us back to the light intensity problem we touched on earlier.</para>
-        <para>Since our lights are clamped on the [0, 1] range, it doesn't take much distance from
-            the light before the contribution from a light to become effectively nil. In reality,
-            with an unclamped range, we could just pump the light's intensity up to realistic
-            values. But we're working with a clamped range.</para>
-        <para>Therefore, a more common attenuation scheme is to use the inverse of just the distance
-            instead of the inverse of the distance squared:</para>
+        <para>This equation computes physically realistic light attenuation for point-lights. But it
+            often doesn't look very good. Lights seem to have a much sharper intensity falloff than
+            one would expect.</para>
+        <para>There is a reason for this, but it is not one we are ready to get into quite yet. What
+            is often done is to simply use the inverse rather than the inverse-square of the
+            distance:</para>
         <equation>
             <title>Light Attenuation Inverse</title>
             <mediaobject>
                 </imageobject>
             </mediaobject>
         </equation>
-        <para>It looks brighter for more distant lights. It isn't physically correct, but so much
-            about our rendering is at this point that it won't be noticed much.</para>
+        <para>It looks brighter at greater distances than the physically correct model. This is fine
+            for simple examples, but as we get more advanced, it will not be acceptable. This
+            solution is really just a stop-gap; the real solution is one that we will discuss in a
+            few tutorials.</para>
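+        <para>For illustration, the difference between the two schemes is nothing more than the
+            exponent on the distance. A sketch, using a hypothetical attenuation constant
+                <varname>k</varname> and the distance <varname>r</varname> from the surface to
+            the light:</para>
+        <programlisting language="cpp">//Sketch only; k is a hypothetical attenuation constant, r the distance to the light.
+float attenSquared = 1.0f / (1.0f + k * r * r); //inverse-square: physically based falloff
+float attenLinear  = 1.0f / (1.0f + k * r);     //inverse: brighter at greater distances</programlisting>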
         <section>
             <title>Reverse of the Transform</title>
             <para>However, there is a problem. We previously did per-fragment lighting in model

File Documents/Illumination/Tutorial 11.xml

                 but has more other math to it. The two do have a roughness factor that has the same
                 range, (0, 1], and the roughness has the same general meaning in both
                 distributions.</para>
-            <para>If you want to go even farther, investigate the <quote>Cook-Torrance</quote> model
-                of specular reflection. It incorporates several terms. It uses a statistical
-                distribution to determine the number of microfacets oriented in a direction. This
-                distribution can be Gaussian, Beckmann, or some other distribution. It modifies this
-                result based on a geometric component that models microfacet self-shadowing and the
-                possibility for multiple interreflections among a microfaceted surface. And it adds
-                a term to compensate for the Fresnel effect: an effect where specular reflection
-                from a surface is more intense when viewed edge-on than directly top-down.</para>
+            <para>If you want to go even farther, investigate the Cook-Torrance model of specular
+                reflection. It incorporates several terms. It uses a statistical distribution to
+                determine the number of microfacets oriented in a direction. This distribution can
+                be Gaussian, Beckmann, or some other distribution. It modifies this result based on
+                a geometric component that models microfacet self-shadowing and the possibility for
+                multiple interreflections among a microfaceted surface. And it adds a term to
+                compensate for the Fresnel effect: an effect where specular reflection from a
+                surface is more intense when viewed edge-on than directly top-down.</para>
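+            <para>Schematically, and only as a sketch (the exact normalization varies between
+                formulations of the model), those terms combine like this:</para>
+            <programlisting language="cpp">//Rough sketch of the Cook-Torrance specular term. D, G and F are the
+//distribution, geometry and Fresnel terms described above; NdotL and NdotV
+//are the cosines of the angles between the normal and the light/view directions.
+float cookTorrance = (D * G * F) / (4.0f * NdotL * NdotV);</programlisting>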
         </section>
         <section>
             <title>GLSL Functions of Note</title>

File Documents/Illumination/Tutorial 12.xml

+<?xml version="1.0" encoding="UTF-8"?>
+<?oxygen RNGSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng" type="xml"?>
+<?oxygen SCHSchema="http://docbook.org/xml/5.0/rng/docbookxi.rng"?>
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
+    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
+    <?dbhtml filename="Tutorial 12.html" ?>
+    <title>Dynamic Range</title>
+    <para>Thus far, our lighting examples have been fairly prosaic. A single light source
+        illuminating a simple object hovering above flat terrain. This tutorial will demonstrate how
+        to use multiple lights among a larger piece of terrain in a dynamic lighting environment. We
+        will demonstrate how to properly light a scene. This is less about the artistic qualities of
+        how a scene should look and more about how to make a scene look a certain way if that is how
+        you desire it to look.</para>
+    <section>
+        <?dbhtml filename="Tut12 Setting the Scene.html" ?>
+        <title>Setting the Scene</title>
+        <para>The intent for this scene is to be dynamic. The terrain will be large and hilly,
+            unlike the flat plain we've seen in previous tutorials. It will use vertex colors where
+            appropriate to give it terrain-like qualities. There will also be a variety of objects
+            on the terrain, each with its own set of reflective characteristics. This will help show
+            off the dynamic nature of the scene.</para>
+        <para>The very first step in lighting a scene is to explicitly detail what you want; without
+            that, you're probably not going to find your way there. In this case, the scene is
+            intended to be outdoors, so there will be a single massive directional light shining
+            down. There will also be a number of smaller, weaker lights. All of these lights will
+            have animated movement.</para>
+        <para>The biggest thing here is that we want the scene to dynamically change lighting
+            levels. Specifically, we want a full day/night cycle. The sun will sink, gradually
+            losing intensity until it has none. There, it will remain until the dawn of the next
+            day, when it will gain strength until full and rise again. The other lights should be
+            much weaker in overall intensity than the sun.</para>
+        <para>One thing that this requires is a dynamic ambient lighting range. Remember that the
+            ambient light is an attempt to resolve the global illumination problem: that light
+            bounces around in a scene and can therefore come from many sources. When the sun is at
+            full intensity, the ambient lighting of the scene should be bright as well. This will
+            mean that surfaces facing away from the sunlight will still be relatively bright, which
+            is the case we see outside. When it is night, the ambient light should be virtually nil.
+            Only surfaces directly facing one of the lights should be illuminated.</para>
+        <para>The <phrase role="propername">Scene Lighting</phrase> tutorial demonstrates the first
+            version of attempting to replicate this scene.</para>
+        <figure>
+            <title>Scene Lighting</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="Scene%20Lighting.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>The camera is rotated and zoomed as in prior tutorials. Where this one differs is that
+            the camera's target point can be moved. The <keycap>W</keycap>, <keycap>A</keycap>,
+                <keycap>S</keycap>, and <keycap>D</keycap> keys move the camera forward/backwards
+            and left/right, relative to the camera's current orientation. The <keycap>Q</keycap> and
+                <keycap>E</keycap> keys raise and lower the camera, again relative to its current
+            orientation. Holding <keycap>Shift</keycap> with these keys will move in smaller
+            increments. You can toggle viewing of the current target point by pressing
+                <keycap>T</keycap>.</para>
+        <para>Because the lighting in this tutorial is very time based, there are specialized
+            controls for playing with time. There are two sets of timers: one that controls the
+            sun's position (as well as attributes associated with this, like the sun's intensity,
+            ambient intensity, etc), and another set of timers that control the positions of other
+            lights in the scene. Commands that affect timers can affect the sun only, the other
+            lights only, or both at the same time.</para>
+        <para>To have timer commands affect only the sun, press <keycap>2</keycap>. To have timer
+            commands affect only the other lights, press <keycap>3</keycap>. To have timer commands
+            affect both, press <keycap>1</keycap>.</para>
+        <para>To rewind time by one second (of real-time), press the <keycap>-</keycap> key. To jump
+            forward one second, press the <keycap>=</keycap> key. To toggle pausing, press the
+                <keycap>p</keycap> key. These commands only affect the currently selected timers.
+            Also, pressing the <keycap>SpaceBar</keycap> will print out the current sun-based time,
+            in 24-hour notation.</para>
+        <section>
+            <title>Materials and UBOs</title>
+            <para>The source code for this tutorial is much more complicated than prior ones. Due to
+                this complexity, it is spread over several files. All of the tutorial projects for
+                this tutorial share the <filename>Scene.h/cpp</filename> and
+                    <filename>Lights.h/cpp</filename> files. The Scene files set up the objects in
+                the scene and render them. This file contains the surface properties of the
+                the scene and render them. The Scene files also contain the surface properties of the
+            <para>A lighting function requires two specific sets of parameters: values that
+                represent the light, and values that represent the surface. Surface properties are
+                often called <glossterm>material properties</glossterm>. Each object has its own
+                material properties, as defined in <filename>Scene.cpp</filename>.</para>
+            <para>The scene has 6 objects: the terrain, a tall cylinder in the middle, a
+                multicolored cube, a sphere, a spinning tetrahedron, and a mysterious black obelisk.
+                Each object has its own material properties defined by the
+                    <function>GetMaterials</function> function.</para>
+            <para>These properties are all stored in a uniform buffer object. We have seen these
+                before for data that is shared among several programs; here, we use it to quickly
+                before for data that is shared among several programs; here, we use one to quickly
+                them once and don't change them ever again. This is primarily for demonstration
+                purposes, but it could have a practical effect.</para>
+            <para>Each object's material data is defined as the following struct:</para>
+            <example>
+                <title>Material Uniform Block</title>
+                <programlisting language="glsl">//GLSL
+layout(std140) uniform;
+
+uniform Material
+{
+    vec4 diffuseColor;
+    vec4 specularColor;
+    float specularShininess;
+} Mtl;</programlisting>
+                <programlisting language="cpp">//C++
+struct MaterialBlock
+{
+    glm::vec4 diffuseColor;
+    glm::vec4 specularColor;
+    float specularShininess;
+    float padding[3];
+};</programlisting>
+            </example>
+            <para>The <varname>padding</varname> variable in the C++ definition represents the fact
+                that the GLSL definition of this uniform block will be padded out to a size of 12
+                floats. This is due to the nature of <quote>std140</quote> layout (feel free to read
+                the appropriate section in the OpenGL specification to see why). Note the global
+                definition of <quote>std140</quote> layout; this sets all uniform blocks to use
+                    <quote>std140</quote> layout unless they specifically override it. That way, we
+                don't have to write <quote>layout(std140)</quote> for each of the three uniform
+                blocks we use in each shader file.</para>
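+            <para>If you are worried about the two definitions drifting apart, a compile-time
+                check of the C++ struct's size against the 12-float std140 size is a cheap
+                safeguard (assuming a C++11 compiler for
+                <literal>static_assert</literal>):</para>
+            <programlisting language="cpp">//The GLSL block is padded to 12 floats under std140; if the C++ mirror
+//struct does not match, copying it into the buffer will be misaligned.
+static_assert(sizeof(MaterialBlock) == 12 * sizeof(float),
+    "MaterialBlock does not match the std140 size of the Material block");</programlisting>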
+            <para>Also, note the use of <literal>Mtl</literal> at the foot of the uniform block
+                definition. When nothing is placed there, then the names in the uniform block are
+                global. If an identifier is placed there, then that name must be used to qualify
+                access to the names within that block. This allows us to have the <literal>in vec4
+                    diffuseColor</literal> be separate from the material definition's
+                    <varname>Mtl.diffuseColor</varname>.</para>
+            <para>What we want to do is put 6 material blocks in a single uniform buffer. One might
+                naively think that one could simply allocate a buffer object 6 times the
+                    <literal>sizeof(MaterialBlock)</literal>, and simply store the data as a C++
+                array. Sadly, this will not work due to a UBO limitation.</para>
+            <para>When you use <function>glBindBufferRange</function> to bind a UBO, OpenGL requires
+                that the offset parameter, the parameter that tells where the beginning of the
+                uniform block's data is within the buffer object, be aligned to a specific value.
+                What value, you may ask?</para>
+            <para>Welcome to the world of implementation-dependent values. This means that it can
+                (and most certainly will) change depending on what platform you're running on. This
+                code was tested on two different hardware platforms; one has a minimum alignment of
+                64, the other an alignment of 256.</para>
+            <para>To retrieve the implementation-dependent value, we must use a previously-unseen
+                function: <function>glGetIntegerv</function>. This is a function that does one
+                simple thing: gets integer values from OpenGL. However, the meaning of the value
+                retrieved depends on the enumerator passed as the first parameter. Basically, it's a
+                way to have state retrieval functions that can easily be extended by adding new
+                enumerators rather than new function entrypoints.</para>
+            <para>The code that builds the material uniform buffer is as follows:</para>
+            <example>
+                <title>Material UBO Construction</title>
+                <programlisting>int uniformBufferAlignSize = 0;
+glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &amp;uniformBufferAlignSize);
+
+m_sizeMaterialBlock = sizeof(MaterialBlock);
+m_sizeMaterialBlock += uniformBufferAlignSize -
+	(m_sizeMaterialBlock % uniformBufferAlignSize);
+
+int sizeMaterialUniformBuffer = m_sizeMaterialBlock * MATERIAL_COUNT;
+
+std::vector&lt;MaterialBlock> materials;
+GetMaterials(materials);
+assert(materials.size() == MATERIAL_COUNT);
+
+std::vector&lt;GLubyte> mtlBuffer;
+mtlBuffer.resize(sizeMaterialUniformBuffer, 0);
+
+GLubyte *bufferPtr = &amp;mtlBuffer[0];
+
+for(size_t mtl = 0; mtl &lt; materials.size(); ++mtl)
+	memcpy(bufferPtr + (mtl * m_sizeMaterialBlock), &amp;materials[mtl], sizeof(MaterialBlock));
+
+glGenBuffers(1, &amp;m_materialUniformBuffer);
+glBindBuffer(GL_UNIFORM_BUFFER, m_materialUniformBuffer);
+glBufferData(GL_UNIFORM_BUFFER, sizeMaterialUniformBuffer, bufferPtr, GL_STATIC_DRAW);
+glBindBuffer(GL_UNIFORM_BUFFER, 0);</programlisting>
+            </example>
+            <para>We use <function>glGetIntegerv</function> to retrieve the alignment requirement.
+                Then we compute the size of a material block, plus enough padding to satisfy the
+                alignment requirements. From there, it's fairly straightforward. The
+                    <varname>mtlBuffer</varname> is just a clever way to allocate a block of memory
+                without having to directly use new/delete. And yes, that is perfectly valid and
+                legal C++.</para>
+            <para>When the scene is rendered, it uses <function>glBindBufferRange</function> to bind
+                the proper region within the buffer object for rendering.</para>
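+            <para>As a sketch of that per-object bind (the names
+                    <varname>materialBlockIndex</varname> and <varname>mtlIndex</varname> here
+                are only illustrative), binding one aligned material block looks like
+                this:</para>
+            <programlisting language="cpp">//Illustrative sketch: bind object mtlIndex's material block to the uniform
+//block binding index that the shaders expect.
+glBindBufferRange(GL_UNIFORM_BUFFER, materialBlockIndex, m_materialUniformBuffer,
+    mtlIndex * m_sizeMaterialBlock, sizeof(MaterialBlock));</programlisting>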
+        </section>
+        <section>
+            <title>Lighting</title>
+            <para>The code for lighting is rather more complicated. It uses two aspects of the
+                framework library to do its job: interpolators and timers.
+                    <classname>Framework::Timer</classname> is a generally useful class that can
+                keep track of a looped range of time, converting it into a [0, 1) range. The
+                interpolators are used to convert a [0, 1) range to a particular value based on a
+                series of possible values. Exactly how they work is beyond the scope of this
+                discussion, but some basic information will be presented.</para>
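+            <para>Conceptually, the looping simply maps elapsed time onto the [0, 1) range. This
+                is only a sketch, not the actual <classname>Framework::Timer</classname>
+                implementation:</para>
+            <programlisting language="cpp">//Conceptual sketch of a looping timer: wrap elapsed seconds into [0, 1).
+float LoopedAlpha(float elapsedTime, float loopDuration)
+{
+    return fmodf(elapsedTime, loopDuration) / loopDuration;
+}</programlisting>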
+            <para>The <classname>LightManager</classname> class controls all timers. It has all of
+                the fast-forwarding, rewinding, and so forth controls built into it. Its basic
+                functionality is to compute all of the lighting values for a particular time. It
+                does this based on information given to it by the main tutorial source file,
+                    <filename>Scene Lighting.cpp</filename>. The important values are sent in the
+                    <function>SetupDaytimeLighting</function> function.</para>
+            <example>
+                <title>Daytime Lighting</title>
+                <programlisting language="cpp">SunlightValue values[] =
+{
+    { 0.0f/24.0f, /*...*/},
+    { 4.5f/24.0f, /*...*/},
+    { 6.5f/24.0f, /*...*/},
+    { 8.0f/24.0f, /*...*/},
+    {18.0f/24.0f, /*...*/},
+    {19.5f/24.0f, /*...*/},
+    {20.5f/24.0f, /*...*/},
+};
+
+g_lights.SetSunlightValues(values, 7);
+
+g_lights.SetPointLightIntensity(0, glm::vec4(0.2f, 0.2f, 0.2f, 1.0f));
+g_lights.SetPointLightIntensity(1, glm::vec4(0.0f, 0.0f, 0.3f, 1.0f));
+g_lights.SetPointLightIntensity(2, glm::vec4(0.3f, 0.0f, 0.0f, 1.0f));</programlisting>
+            </example>
+            <para>For the sake of clarity, the actual lighting parameters were removed from the main
+                table. The <classname>SunlightValue</classname> struct defines the parameters that
+                vary based on the sun's position. Namely, the ambient intensity, the sun's light
+                intensity, and the background color. The first parameter of the struct is the time,
+                on the [0, 1) range, when the parameters should have this value. A time of 0
+                represents noon, and a time of 0.5 represents midnight. For clarity's sake, I used
+                24-hour notation (where 0 is noon rather than midnight).</para>
+            <para>We will discuss the actual lighting values later.</para>
+            <para>The main purpose of the <classname>LightManager</classname> is to retrieve the
+                light parameters. This is done by the function
+                    <function>GetLightInformation</function>, which takes a matrix (to transform the
+                light positions and directions into camera space) and returns a
+                    <classname>LightBlock</classname> object. This is an object that represents a
+                uniform block defined by the shaders:</para>
+            <example>
+                <title>Light Uniform Block</title>
+                <programlisting language="glsl">struct PerLight
+{
+    vec4 cameraSpaceLightPos;
+    vec4 lightIntensity;
+};
+
+const int numberOfLights = 4;
+
+uniform Light
+{
+    vec4 ambientIntensity;
+    float lightAttenuation;
+    PerLight lights[numberOfLights];
+} Lgt;</programlisting>
+                <programlisting language="cpp">struct PerLight
+{
+    glm::vec4 cameraSpaceLightPos;
+    glm::vec4 lightIntensity;
+};
+
+const int NUMBER_OF_LIGHTS = 4;
+
+struct LightBlock
+{
+    glm::vec4 ambientIntensity;
+    float lightAttenuation;
+    float padding[3];
+    PerLight lights[NUMBER_OF_LIGHTS];
+};</programlisting>
+            </example>
+            <para>Again, there is the need for a bit of padding in the C++ version of the struct.
+                Also, you might notice that we have both arrays and structs in GLSL for the first
+                time. They work pretty much like C/C++ structs and arrays (outside of pointer logic,
+                since GLSL doesn't have pointers), though arrays have certain caveats.</para>
+        </section>
+        <section>
+            <title>Many Lights Shader</title>
+            <para>In this tutorial, we use 4 shaders. Two of these take their diffuse color from
+                values passed by the vertex shader. The other two use the material's diffuse color.
+                The other difference is that two do specular reflection computations, and the others
+                do not. This represents the variety of our materials.</para>
+            <para>Overall, the code is nothing you haven't seen before. We use Gaussian specular and
+                an inverse-squared attenuation, in order to be as physically correct as we currently
+                can be. One of the big differences is in the <function>main</function>
+                function.</para>
+            <example>
+                <title>Many Lights Main Function</title>
+                <programlisting language="glsl">void main()
+{
+    vec4 accumLighting = diffuseColor * Lgt.ambientIntensity;
+    for(int light = 0; light &lt; numberOfLights; light++)
+    {
+        accumLighting += ComputeLighting(Lgt.lights[light]);
+    }
+    
+    outputColor = accumLighting;
+}</programlisting>
+            </example>
+            <para>Here, we compute the lighting due to the ambient correction. Then we loop over
+                each light and compute the lighting for it, adding it into our accumulated value.
+                Loops and arrays are generally fine.</para>
+            <para>The other trick is how we deal with positional and directional lights. The
+                    <classname>PerLight</classname> structure doesn't explicitly say whether a light
+                is positional or directional. However, the W component of the
+                    <varname>cameraSpaceLightPos</varname> is what we use to differentiate them;
+                this is a time-honored technique. If the W component is 0.0, then it is a
+                directional light; otherwise, it is a point light.</para>
+            <para>The only differences between directional and point lights in the lighting function
+                are attenuation (directional lights don't use attenuation) and how the light
+                direction is computed. So we simply compute these based on the W component:</para>
+            <programlisting>vec3 lightDir;
+vec4 lightIntensity;
+if(lightData.cameraSpaceLightPos.w == 0.0)
+{
+    lightDir = vec3(lightData.cameraSpaceLightPos);
+    lightIntensity = lightData.lightIntensity;
+}
+else
+{
+    float atten = CalcAttenuation(cameraSpacePosition,
+        lightData.cameraSpaceLightPos.xyz, lightDir);
+    lightIntensity = atten * lightData.lightIntensity;
+}</programlisting>
+        </section>
+        <section>
+            <title>Lighting Problems</title>
+            <para>There are a few problems with our current lighting setup. It looks (mostly) fine
+                in daylight. The moving point lights have a small visual effect, but mostly they're
+                not very visible. And this is what one would expect in broad daylight; flashlights
+                don't make much impact in the day.</para>
+            <para>But at night, everything is exceedingly dark. The point lights, the only active
+                source of illumination, are all too dim to be very visible. The terrain almost
+                completely blends into the black background.</para>
+            <para>There is an alternative set of light parameters that corrects this problem. Press <keycombo>
+                    <keycap>Shift</keycap>
+                    <keycap>L</keycap>
+                </keycombo>; that switches to a night-time optimized version (press
+                    <keycap>L</keycap> to switch back to day-optimized lighting). Here, the point
+                lights provide reasonable lighting at night. The ground is still dark when facing
+                away from the lights, but we can reasonably see things.</para>
+            <figure>
+                <title>Darkness, Day vs. Night</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="DarkDayVsNight.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>The problem is that, in daylight, the night-optimized point lights are too
+                powerful. They are very visible and have very strong effects on the scene. Also,
+                they cause some light problems when one of the point lights is in the right
+                position. At around 12:50, find the floating white light near the cube:</para>
+            <figure>
+                <title>Light Clipping</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Light%20Clipping.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>Notice the patch of iridescent green. This is <glossterm>light
+                    clipping</glossterm> or light clamping, and it is usually a very undesirable
+                outcome. It happens when the computed light intensity falls outside of the [0, 1]
+                range, usually in the positive direction (like in this case). The object can't be
+                shown to be brighter, so it becomes a solid color that loses all detail.</para>
+            <para>The obvious solution to our lighting problem is to simply change the point light
+                intensity based on the time of day. However, this is not realistic; flashlights
+                don't actually get brighter at night. So if we have to do something that
+                antithetical to reality, then there's probably some aspect of reality that we aren't
+                properly modelling.</para>
+        </section>
+    </section>
+    <section>
+        <?dbhtml filename="Tut12 High Dynamic Range.html" ?>
+        <title>High Dynamic Range</title>
+        <para>In order to answer this question, we must first determine why flashlights appear
+            brighter at night than in the daytime. Much of the answer has to do with our
+            eyes.</para>
+        <para>The pupil is the hole in our eyes that allows light to pass through it; cameras call
+            this hole the aperture. The hole is small, relative to the world, which helps with
+            resolving an image. However, the quantity of light that passes through the hole depends
+            on how large it is. Our irises can expand and contract to control the size of the pupil.
+            When the pupil is large, more light is allowed to enter the eye; when the pupil is
+            small, less light can enter.</para>
+        <para>The iris contracts automatically in the presence of bright light, since too much light
+            can damage the retina (the surface of the eye that detects light). However, in the
+            absence of light, the iris slowly relaxes, expanding the pupil. This has the effect of
+            allowing more light to enter the eye, which adds to the apparent brightness of the
+            scene.</para>
+        <para>So what we need is not to change the overall light levels, but instead apply the
+            equivalent of an iris to the final lighting computations of a scene. That is, we
+            determine the overall illumination at a point, but we then filter out some of this light
+            based on a global setting. In dark scenes, we filter less light, and in bright scenes,
+            we filter more light.</para>
+        <para>This overall process is called <glossterm>high dynamic range lighting</glossterm>
+                (<acronym>HDR</acronym>). It is fairly simple and requires very few additional math
+            computations compared to our current model.</para>
+        <note>
+            <para>You may have heard this term in conjunction with pictures where bright objects
+                seem to glow. While HDR is typically associated with that glowing effect, that is a
+                different effect called bloom. It is a woefully over-used effect, and we will
+                discuss how to implement it later. HDR and bloom do interact, but you can use one
+                without the other.</para>
+        </note>
+        <para>The first step is to end the myth that lights themselves have a global maximum
+            brightness. That is, light intensity is no longer on the range [0, 1]; lights can now
+            have any positive intensity value.</para>
+        <para>This also means that our accumulated lighting intensity, the value we originally wrote
+            to the fragment shader output, is no longer on the [0, 1] range. And that poses a
+            problem. We can perform lighting computations with a high dynamic range, but monitors
+            can only display colors on the [0, 1] range. We therefore must map from the HDR to the
+            low dynamic range (<acronym>LDR</acronym>).</para>
+        <para>This part of HDR rendering is called <glossterm>tone mapping</glossterm>. There are
+            many possible tone mapping functions, but we will use one that simulates a flexible
+            aperture. It's quite a simple function, really. First, we pick a maximum intensity value
+            for the scene; intensity values above this will be clamped. Then, we just divide the HDR
+            value by the maximum intensity to get the LDR value.</para>
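+        <para>The tone mapping operation itself is therefore just a division applied to the
+            accumulated lighting value. As a sketch, written here in C++ with GLM purely for
+            illustration:</para>
+        <programlisting language="cpp">//Sketch of the simple aperture-style tone mapping described above.
+//hdrColor is the accumulated HDR lighting value; maxIntensity is the scene's
+//current maximum intensity. Anything above maxIntensity will clamp on output.
+glm::vec4 ldrColor = hdrColor / maxIntensity;</programlisting>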
+        <para>It is the maximum intensity that we will vary. As the sun goes down, the intensity
+            will go down with it. This will allow the sun to be much brighter in the day than the
+            lights, thus overwhelming their contributions to the scene's illumination. But at night,
+            when the maximum intensity is much lower, the other lights will have an apparently
+            higher brightness.</para>
+        <para>This is implemented in the <phrase role="propername">HDR Lighting</phrase>
+            tutorial.</para>
+        <para>This tutorial uses the same controls as the previous one, except that the <keycap>K</keycap> key
+            will activate HDR lighting. Pressing <keycap>L</keycap> or <keycombo>
+                <keycap>Shift</keycap>
+                <keycap>L</keycap>
+            </keycombo> will go back to day or night-time LDR lighting from the last tutorial,
+            respectively.</para>
+        <figure>
+            <title>HDR Lighting</title>
+            <mediaobject>
+                <imageobject>
+                    <imagedata fileref="HDR%20Lighting.png"/>
+                </imageobject>
+            </mediaobject>
+        </figure>
+        <para>The code is quite straightforward. We add a floating-point field to the
+                <classname>Light</classname> uniform block and the <classname>LightBlock</classname>
+            struct in C++. Technically, we just steal one of the padding floats, so the size remains
+            the same:</para>
+        <example>
+            <title>HDR LightBlock</title>
+            <programlisting language="cpp">struct LightBlockHDR
+{
+    glm::vec4 ambientIntensity;
+    float lightAttenuation;
+    float maxIntensity;
+    float padding[2];
+    PerLight lights[NUMBER_OF_LIGHTS];
+};</programlisting>
+        </example>
+        <para>We also add a new field to <classname>SunlightValue</classname>: the maximum light
+            intensity. There is also a new function in the <classname>LightManager</classname> that
+            computes the HDR-version of the light block:
+            <function>GetLightInformationHDR</function>. Technically, all of this code was already
+            in <filename>Light.h/cpp</filename>, since these files are shared among all of the
+            tutorials here.</para>
+        <section>
+            <title>Scene Lighting in HDR</title>
+            <para>Lighting a scene in HDR is a different process from LDR. Having a varying maximum
+                intensity value, as well as the ability to use light intensities greater than 1.0,
+                changes much about how you set up a scene.</para>
+            <para>In this case, everything in the lighting was designed to match up to the daytime
+                version of LDR in the day, and the nighttime version of LDR at night, once the
+                division by the maximum intensity was taken into account.</para>
+            <table frame="none">
+                <title>Scene Lighting Values</title>
+                <tgroup cols="11">
+                    <colspec colname="c1" colnum="1" colwidth="3.0*"/>
+                    <colspec colname="c2" colnum="2" colwidth="1.0*"/>
+                    <colspec colname="c3" colnum="3" colwidth="1.0*"/>
+                    <colspec colname="c4" colnum="4" colwidth="1.0*"/>
+                    <colspec colname="c5" colnum="5" colwidth="1.0*"/>
+                    <colspec colname="c6" colnum="6" colwidth="1.0*"/>
+                    <colspec colname="c7" colnum="7" colwidth="1.0*"/>
+                    <colspec colname="c8" colnum="8" colwidth="1.0*"/>
+                    <colspec colname="c9" colnum="9" colwidth="1.0*"/>
+                    <colspec colname="c10" colnum="10" colwidth="1.0*"/>
+                    <colspec colname="c11" colnum="11" colwidth="1.0*"/>
+                    <thead>
+                        <row>
+                            <entry/>
+                            <entry namest="c2" nameend="c5" align="center">
+                                <para>HDR</para>
+                            </entry>
+                            <entry namest="c6" nameend="c8" align="center">
+                                <para>LDR Day-optimized</para>
+                            </entry>
+                            <entry namest="c9" nameend="c11" align="center">
+                                <para>LDR Night-optimized</para>
+                            </entry>
+                        </row>
+                    </thead>
+                    <tbody>
+                        <row>
+                            <entry>Noon Sun Intensity</entry>
+                            <entry>1.8</entry>
+                            <entry>1.8</entry>
+                            <entry>1.8</entry>
+                            <entry>(3.0)</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                        </row>
+                        <row>
+                            <entry>Noon Ambient Intensity</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry/>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                        </row>
+                        <row>
+                            <entry>Evening Sun Intensity</entry>
+                            <entry>0.45</entry>
+                            <entry>0.15</entry>
+                            <entry>0.15</entry>
+                            <entry>(1.5)</entry>
+                            <entry>0.3</entry>
+                            <entry>0.1</entry>
+                            <entry>0.1</entry>
+                            <entry>0.3</entry>
+                            <entry>0.1</entry>
+                            <entry>0.1</entry>
+                        </row>
+                        <row>
+                            <entry>Evening Ambient Intensity</entry>
+                            <entry>0.225</entry>
+                            <entry>0.075</entry>
+                            <entry>0.075</entry>
+                            <entry/>
+                            <entry>0.15</entry>
+                            <entry>0.05</entry>
+                            <entry>0.05</entry>
+                            <entry>0.15</entry>
+                            <entry>0.05</entry>
+                            <entry>0.05</entry>
+                        </row>
+                        <row>
+                            <entry>Circular Light Intensity</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry/>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                            <entry>0.2</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                            <entry>0.6</entry>
+                        </row>
+                        <row>
+                            <entry>Red Light Intensity</entry>
+                            <entry>0.7</entry>
+                            <entry>0.0</entry>
+                            <entry>0.0</entry>
+                            <entry/>
+                            <entry>0.3</entry>
+                            <entry>0.0</entry>
+                            <entry>0.0</entry>
+                            <entry>0.7</entry>
+                            <entry>0.0</entry>
+                            <entry>0.0</entry>
+                        </row>
+                        <row>
+                            <entry>Blue Light Intensity</entry>
+                            <entry>0.0</entry>
+                            <entry>0.0</entry>
+                            <entry>0.7</entry>
+                            <entry/>
+                            <entry>0.0</entry>
+                            <entry>0.0</entry>
+                            <entry>0.3</entry>
+                            <entry>0.0</entry>
+                            <entry>0.0</entry>
+                            <entry>0.7</entry>
+                        </row>
+                    </tbody>
+                </tgroup>
+            </table>
+            <para>The numbers in parentheses represent the max intensity at that time.</para>
+            <para>In order to keep the daytime lighting the same, we simply multiplied the LDR day's
+                sun and ambient intensities by the ratio between the sun intensity and the intensity
+                of one of the lights. This ratio is 3:1, so the sun and ambient intensities are
+                increased by a factor of 3.</para>
+            <para>The maximum intensity was derived similarly. In the LDR case, the difference
+                between the max intensity (1.0) and the sum of the sun and ambient intensities is
+                0.2 (1.0 - (0.6 + 0.2)). To maintain this, we set the max intensity to the sum of
+                the ambient and sun intensities, plus 3 times the original difference.</para>
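+            <para>To make that arithmetic concrete, here are the noon numbers from the table
+                worked out:</para>
+            <programlisting language="cpp">//Day-optimized LDR at noon: sun 0.6, ambient 0.2, so the unused headroom is
+//1.0 - (0.6 + 0.2) = 0.2.
+float hdrSun       = 0.6f * 3.0f;                        //1.8, as in the table
+float hdrAmbient   = 0.2f * 3.0f;                        //0.6, as in the table
+float maxIntensity = hdrSun + hdrAmbient + 3.0f * 0.2f;  //3.0, as in the table</programlisting>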
+            <para>This effectively means that the light, as far as the sun and ambient are
+                concerned, works the same way in HDR daytime as in the LDR day-optimized settings.
+                To get the other lights to work at night, we simply kept their values the same as
+                the LDR night-optimized case.</para>
+        </section>
+    </section>
+    <section>
+        <?dbhtml filename="Tut12 Monitors and Gamma.html" ?>
+        <title>Monitors and Gamma</title>
+        <para>There is one major issue left, and it is one that has been glossed over since the
+            beginning of our look at lighting: your screen.</para>
+        <para>The fundamental assumption underlying all of our lighting equations is the idea that
+            the surface colors and light intensities are all in a linear
+                <glossterm>colorspace</glossterm>. A colorspace defines how we translate from a set
+            of numerical values to actual, real colors that you can see. A colorspace is a
+                <glossterm>linear colorspace</glossterm> if doubling any value in that colorspace
+            results in a color that is twice as bright. The linearity refers to the relationship
+            between values and overall brightness of the resulting color.</para>
+        <para>This assumption can be taken as a given for our data thus far. All of our diffuse and
+            specular color values were given by us, so we can know that they represent values in a
+            linear RGB colorspace. The light intensities are likewise in a linear colorspace. When
+            we multiplied the sun and ambient intensities by 3 in the last section, we were
+            increasing the brightness by 3x. Multiplying the maximum intensity by 3 had the effect
+            of reducing the overall brightness by 3x.</para>
+        <para>There's just one problem. Your screen doesn't work that way. Time for a short history
+            of television/monitors.</para>
+        <para>The original televisions used an electron gun fired at a phosphor surface to generate
+            light and images; this is called a <acronym>CRT</acronym> display (cathode ray tube).
+            The strength of the electron beam determined the brightness of that part of the image.
+            However, the brightness of the image did not vary linearly with the strength of the
+            beam.</para>
+        <para>The easiest way to deal with that in the earliest days of TV was to simply modify the
+            incoming image at the source. TV broadcasts sent image data that was non-linear in the
+            opposite direction of the CRT's normal non-linearity. The net result was color
+            reproduction that was effectively linear.</para>
+        <para>The term for this process, de-linearizing an image to compensate for a non-linear
+            display, is <glossterm>gamma correction</glossterm>.</para>
+        <para>You may be wondering why this matters. After all, odds are, you don't have a CRT-based
+            monitor; you probably have some form of LCD, plasma, LED, or similar technology. So why
+            do the vagaries of CRT monitors matter to you?</para>
+        <para>Because gamma correction is everywhere. It's in DVDs, video-tapes, and Blu-Ray discs.
+            Every digital camera does it. And this is how it has been for a long time. Because of
+            that, you couldn't sell an LCD monitor that tried to do linear color reproduction;
+            nobody would buy it because all media for it (including your OS) was designed and
+            written expecting CRT-style non-linear displays.</para>
+        <para>This means that every non-CRT display must mimic the CRT's non-linearity; this is
+            built into the basic video processing logic of every display device.</para>
+        <para>So for twelve tutorials now, we have been outputting linear RGB values to a display
+            device that expects gamma-corrected non-linear RGB values. But before we started doing
+            lighting, we were just picking nice-looking colors, so it didn't matter. Now that we're
+            doing something vaguely realistic, we need to perform gamma-correction. This will let us
+            see what we've <emphasis>actually</emphasis> been rendering, instead of what our
+            monitor's gamma-correction circuitry has been mangling.</para>
+        <section>
+            <title>Gamma Functions</title>
+            <para>A <glossterm>gamma function</glossterm> is the function mapping linear RGB space
+                to non-linear RGB space. The gamma function for CRT displays was fairly standard,
+                and all non-CRT displays mimic this standard. It ultimately derives from the
+                physics of CRT displays: the strength of the electron beam is controlled by the
+                voltage passed through it, and that voltage correlates with the light intensity as
+                follows:</para>
+            <equation>
+                <title>Display Gamma Function</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="GammaFunc.svg" format="SVG"/>
+                    </imageobject>
+                </mediaobject>
+            </equation>
+            <para>This is called a gamma function because the exponent is traditionally represented
+                by the Greek letter γ (gamma). The input signal directly controls the voltage, so
+                the input signal had to be adjusted to compensate for this power of gamma.</para>
+            <para>Modern displays usually have gamma adjustments that allow the user to set the
+                display's gamma. The default is usually a gamma of around 2.2; this is a useful
+                compromise value and an excellent default for our gamma-correction code.</para>
+            <para>So, given the gamma function above, we need to output values from our shader that
+                will result in our original linear values after the gamma function is applied. This
+                is gamma correction, and the function for that is straightforward.</para>
+            <equation>
+                <title>Gamma Correction Function</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="GammaCorrectFunc.svg" format="SVG"/>
+                    </imageobject>
+                </mediaobject>
+            </equation>
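+            <para>Expressed as code rather than equations, the two functions above look roughly
+                like the following C++ sketch. The function names are illustrative only; the
+                tutorial itself applies the correction in a fragment shader.</para>
+            <programlisting language="cpp">#include &lt;cmath&gt;
+
+// What the display does to the value we hand it (gamma defaults to 2.2).
+float DisplayGamma(float gammaRGB, float gamma = 2.2f)
+{
+    return std::pow(gammaRGB, gamma);    // e.g. DisplayGamma(0.5f) is roughly 0.218
+}
+
+// What we must do to a linear value so that the display's gamma undoes it.
+float GammaCorrect(float linearRGB, float gamma = 2.2f)
+{
+    return std::pow(linearRGB, 1.0f / gamma);
+}</programlisting>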
+            <para>Graphing these functions makes it easier to predict what we will see in our
+                gamma-correct images.</para>
+            <figure>
+                <title>Gamma Function Graph</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="GammaFunctionGraph.svg" format="SVG"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>Without gamma correction, our linearRGB colors (the diagonal line in the graph)
+                are displayed along the CRT gamma curve at the bottom. This means that what we have been
+                seeing is a <emphasis>severely</emphasis> darkened version of our colors. A
+                linearRGB value of 0.5 drops to an intensity of 0.218; that's more than half of the
+                brightness gone.</para>
+            <para>With proper gamma correction, we can expect to see our scene become much
+                brighter.</para>
+        </section>
+        <section>
+            <title>Gamma in Action</title>
+            <para>Gamma correction is implemented in the <phrase role="propername">Gamma
+                    Correction</phrase> tutorial.</para>
+            <para>The <keycap>K</keycap> key toggles gamma correction. The default gamma value is
+                2.2, but it can be raised and lowered with the <keycap>Y</keycap> and
+                    <keycap>H</keycap> keys respectively.</para>
+            <figure>
+                <title>Gamma Correction</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Gamma%20Correction.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>That is very bright; it uses the same HDR-based lighting environment from the
+                previous tutorials. Let's look at some code.</para>
+            <para>The gamma value is an odd kind of value. Conceptually, it has nothing to do with
+                lighting per se. It is a global value across many shaders, so it should be in a UBO
+                somewhere. But it isn't a material parameter; it doesn't change from object to
+                object. In this tutorial, we stick it in the <classname>Light</classname> uniform
+                block and the <classname>LightBlockGamma</classname> struct. Again, we steal a float
+                from the padding:</para>
+            <example>
+                <title>Gamma LightBlock</title>
+                <programlisting language="cpp">struct LightBlockGamma
+{
+    glm::vec4 ambientIntensity;
+    float lightAttenuation;
+    float maxIntensity;
+    float gamma;
+    float padding;
+    PerLight lights[NUMBER_OF_LIGHTS];
+};</programlisting>
+            </example>
+            <para>For the sake of clarity in this tutorial, we send the actual gamma value. For
+                performance's sake, we should send 1/gamma, so that we do not have to compute the
+                reciprocal in every fragment.</para>
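+            <para>If we did send the reciprocal, the upload might look like the following sketch
+                (the names <varname>lightData</varname> and <varname>g_gammaValue</varname> are
+                assumptions, not the tutorial's actual variables). The fragment shader would then
+                use <literal>Lgt.gamma</literal> directly as its exponent instead of computing
+                    <literal>1.0 / Lgt.gamma</literal>.</para>
+            <programlisting language="cpp">LightBlockGamma lightData;
+
+// Upload the reciprocal once, so the fragment shader can feed Lgt.gamma
+// straight into pow() without a per-fragment divide.
+lightData.gamma = 1.0f / g_gammaValue;    // g_gammaValue defaults to 2.2</programlisting>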
+            <para>The gamma is applied in the fragment shader as follows:</para>
+            <example>
+                <title>Fragment Gamma Correction</title>
+                <programlisting language="glsl">accumLighting = accumLighting / Lgt.maxIntensity;
+vec4 gamma = vec4(1.0 / Lgt.gamma);
+gamma.w = 1.0;
+outputColor = pow(accumLighting, gamma);</programlisting>
+            </example>
+            <para>Otherwise, the code is mostly unchanged from the HDR tutorial. Speaking of which,
+                gamma correction does not require HDR per se, but both are necessary for quality
+                lighting results.</para>
+        </section>
+        <section>
+            <title>Gamma Correct Lighting</title>
+            <para>This is what happens when you apply HDR lighting to a scene whose light properties
+                were defined <emphasis>without</emphasis> gamma correction. Look at the scene at
+                night; the point lights are extremely bright, and their lighting effects seem to go
+                farther than before. This last point bears investigating.</para>
+            <para>When we first talked about light attenuation, we said that the correct attenuation
+                function for a point light was an inverse-square relationship with respect to the
+                distance to the light. We also said that this usually looked wrong, so people often
+                used a plain inverse attenuation function.</para>
+            <para>Gamma is the reason for this. Or rather, lack of gamma correction is the reason.
+                Without correcting for the display's gamma function, the attenuation of <inlineequation>
+                    <mathphrase>1/r<superscript>2</superscript></mathphrase>
+                </inlineequation> becomes <inlineequation>
+                    <mathphrase>(1/r<superscript>2</superscript>)<superscript>2.2</superscript></mathphrase>
+                </inlineequation>, which is <inlineequation>
+                    <mathphrase>1/r<superscript>4.4</superscript></mathphrase>
+                </inlineequation>. The lack of proper gamma correction magnifies the effective
+                attenuation of lights. A simple <inlineequation>
+                    <mathphrase>1/r</mathphrase>
+                </inlineequation> relationship looks better without gamma correction because the
+                display's gamma function turns it into something that is much closer to being
+                physically correct: <inlineequation>
+                    <mathphrase>1/r<superscript>2.2</superscript></mathphrase>
+                </inlineequation>.</para>
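+            <para>To put numbers on that, here is a short, purely illustrative C++ program that
+                evaluates both attenuation functions at a distance of 4 units, before and after an
+                uncorrected display applies its gamma of 2.2:</para>
+            <programlisting language="cpp">#include &lt;cmath&gt;
+#include &lt;cstdio&gt;
+
+int main()
+{
+    const float r = 4.0f;
+    const float displayGamma = 2.2f;
+
+    float invSquare = 1.0f / (r * r);    // 0.0625: the physically correct falloff
+    float invLinear = 1.0f / r;          // 0.25:   the commonly used substitute
+
+    // What an uncorrected display actually shows for each of them.
+    float shownInvSquare = std::pow(invSquare, displayGamma);    // ~0.0022 = 1/r^4.4: far too dark
+    float shownInvLinear = std::pow(invLinear, displayGamma);    // ~0.047  = 1/r^2.2: near the correct 0.0625
+
+    std::printf("%f %f %f %f\n", invSquare, invLinear, shownInvSquare, shownInvLinear);
+    return 0;
+}</programlisting>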
+            <para>Since this lighting was not designed while looking at gamma correct results, let's
+                look at some scene lighting that was developed that way. Turn on gamma correction
+                and set the gamma value to 2.2 (the default if you did not change it). Then press <keycombo>
+                    <keycap>Shift</keycap>
+                    <keycap>L</keycap>
+                </keycombo>:</para>
+            <figure>
+                <title>Gamma Lighting</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Gamma%20Lighting.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>This is more like it.</para>
+            <para>If there is one point you should learn from this exercise, it is this: make sure
+                that you implement gamma correction and HDR <emphasis>before</emphasis> trying to
+                light your scenes. If you don't, then you may have to adjust all of the lighting
+                parameters again. In this case, it wasn't even possible to use simple math to make
+                the lighting work right. This lighting environment was developed from
+                scratch.</para>
+            <para>One thing we can notice when looking at the gamma correct lighting is that proper
+                gamma correction improves shadow details substantially:</para>
+            <figure>
+                <title>Gamma Shadow Details</title>
+                <mediaobject>
+                    <imageobject>
+                        <imagedata fileref="Gamma%20Compare.png"/>
+                    </imageobject>
+                </mediaobject>
+            </figure>
+            <para>These two images use HDR lighting; the one on the left doesn't have gamma
+                correction, and the one on the right does. Notice how easy it is to make out the
+                details in the hills near the triangle on the right.</para>
+            <para>Looking at the gamma function, this makes sense. Without proper gamma correction,
+                fully half of the linearRGB range is shoved into the bottom one-fifth of the
+                available light intensity. That doesn't leave much room for areas that are dark, but
+                still bright enough to make out detail.</para>
+            <para>As such, gamma correction is a key process for producing color-accurate rendered
+                images.</para>
+        </section>
+    </section>
+    <section>
+        <?dbhtml filename="Tut12 In Review.html" ?>
+        <title>In Review</title>
+        <para>In this tutorial, you have learned the following:</para>
+        <itemizedlist>
+            <listitem>
+                <para>How to build and light a scene containing multiple objects and multiple light
+                    sources.</para>
+            </listitem>
+            <listitem>
+                <para>High dynamic range lighting means using a maximum illumination that can vary
+                    from frame to frame, rather than a single, fixed value.</para>
+            </listitem>
+            <listitem>
+                <para>Color values have a space, just like positions or normals. Lighting equations
+                    work in a linear colorspace, where twice the brightness of a value is achieved
+                    by multiplying its value times two. It is vital for proper imaging results to
+                    make sure that the final result of lighting is in the colorspace that the output
+                    display expects. This process is called gamma correction.</para>
+            </listitem>
+        </itemizedlist>
+        <section>
+            <title>Further Study</title>
+            <para>Try doing these things with the given programs.</para>
+            <itemizedlist>
+                <listitem>
+                    <para>Add a fifth light, a directional light representing the moon, to the
+                        gamma-correct scene. This will require creating another set of interpolators
+                        and expanding the SunlightValues structure to hold the lighting intensity of
+                        the moon. It also means expanding the number of lights the shaders use from
+                        4 to 5 (or removing one of the point lights). The moon should be much less
+                        bright than the sun, but it should still have a noticeable effect on
+                        brightness.</para>
+                </listitem>
+                <listitem>
+                    <para>Play with the ambient lighting intensity in the gamma-correct scene,
+                        particularly in the daytime. A little ambient, even with a maximum intensity
+                        as high as 10, really goes a long way to bringing up the level of brightness
+                        in a scene.</para>
+                </listitem>
+            </itemizedlist>
+        </section>
+        <section>
+            <title>Further Research</title>
+            <para>HDR is a pretty large field. This tutorial covered perhaps the simplest form of
+                tone mapping, but there are many equations one can use. There are tone mapping
+                functions that map the full [0, ∞) range to [0, 1]. This wouldn't be useful for a
+                scene that needs a dynamic aperture size, but if the aperture is static, it does
+                allow the use of a large range of lighting values.</para>
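+            <para>The best-known example of such a function is probably the Reinhard operator,
+                which maps [0, ∞) onto [0, 1). A minimal C++ sketch is shown below; the tutorial
+                does not use this operator, it is here only to make the idea concrete.</para>
+            <programlisting language="cpp">// Reinhard tone mapping: 0 maps to 0, and the output approaches (but never
+// reaches) 1 as the input intensity grows without bound.
+float ToneMapReinhard(float linearIntensity)
+{
+    return linearIntensity / (1.0f + linearIntensity);
+}</programlisting>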
+            <para>When doing tone mapping with some form of variable aperture setting, computing the
+                proper maximum intensity value can be difficult. Having a hard-coded value, even one
+                that varies, works well enough for some scenes. But for scenes where the user can
+                control where the camera faces, it can be inappropriate. Many modern games
+                actually read portions of the rendered image back to the CPU and do some
+                computational analysis to determine what the maximum intensity should be for the
+                next frame. This is delayed, of course, but it allows for an aperture that varies
+                based on what the player is currently looking at, rather than hard-coded
+                values.</para>
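+            <para>A very rough sketch of that idea is shown below. This is not the tutorial's code:
+                it assumes the scene was rendered into a floating-point framebuffer (so the values
+                read back are unclamped linear intensities) and that a global
+                    <varname>g_maxIntensity</varname> feeds the next frame's tone mapping.</para>
+            <programlisting language="cpp">// Read a small block of pixels back from the framebuffer and let its
+// brightest value drive the next frame's maximum intensity. Real games use
+// smarter analysis and avoid stalling the pipeline the way glReadPixels does.
+void UpdateMaxIntensity(int centerX, int centerY)
+{
+    const int size = 16;
+    float pixels[size * size * 3];
+    glReadPixels(centerX - size / 2, centerY - size / 2, size, size,
+        GL_RGB, GL_FLOAT, pixels);
+
+    float brightest = 0.0f;
+    for(int i = 0; i &lt; size * size * 3; ++i)
+    {
+        if(pixels[i] > brightest)
+            brightest = pixels[i];
+    }
+
+    // Ease toward the new value so the "aperture" does not snap instantly.
+    g_maxIntensity = (g_maxIntensity * 0.95f) + (brightest * 0.05f);
+}</programlisting>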
+            <para>Just remember: pick your HDR and tone mapping algorithms
+                    <emphasis>before</emphasis> you start putting the scene together. If you try to
+                change them mid-stream, you will have to redo a lot of work.</para>
+        </section>
+        <section>
+            <title>OpenGL Functions of Note</title>
+            <glosslist>
+                <glossentry>
+                    <glossterm>glGetIntegerv</glossterm>
+                    <glossdef>
+                        <para>Retrieves implementation-dependent integer values and a number of
+                            context state integer values. There are also
+                                <function>glGetFloatv</function>,
+                            <function>glGetBooleanv</function>, and various other typed
+                                <function>glGet*</function> functions. The number of values this
+                            retrieves depends on the enumerator passed to it.</para>
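+                        <para>For example (this snippet is illustrative, not taken from the
+                            tutorial's code):</para>
+                        <programlisting language="cpp">GLint majorVersion = 0;
+GLint maxUniformBindings = 0;
+glGetIntegerv(GL_MAJOR_VERSION, &amp;majorVersion);
+glGetIntegerv(GL_MAX_UNIFORM_BUFFER_BINDINGS, &amp;maxUniformBindings);</programlisting>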
+                    </glossdef>
+                </glossentry>
+            </glosslist>
+        </section>
+    </section>
+    <section>
+        <?dbhtml filename="Tut12 Glossary.html" ?>
+        <title>Glossary</title>
+        <glosslist>
+            <glossentry>
+                <glossterm>material properties</glossterm>
+                <glossdef>
+                    <para>The set of inputs to the lighting equation that represent the
+                        characteristics of a surface. This includes the surface's normal, diffuse
+                        reflectance, specular reflectance, any specular power values, and so forth.
+                        The source of these values can come from many sources: uniform values for an
+                        object, fragment-shader inputs, or potentially other sources.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>light clipping</glossterm>
+                <glossdef>
+                    <para>Light values drawn to the screen are clamped to the range [0, 1]. When
+                        lighting produces values outside of this range, the light is said to be
+                        clipped by the range. This produces a very bright, flat section that loses
+                        all detail and distinction in the image. It is something best
+                        avoided.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>high dynamic range lighting</glossterm>
+                <glossdef>
+                    <para>Lighting that uses values outside of the [0, 1] range. This allows for the
+                        use of a full range of lighting intensities.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>tone mapping</glossterm>
+                <glossdef>
+                    <para>The process of mapping HDR values to a [0, 1] range. This may or may not
+                        be a linear mapping.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>colorspace</glossterm>
+                <glossdef>
+                    <para>The set of reference colors that define a way of representing a color in
+                        computer graphics, and the function mapping between those reference colors
+                        and the actual colors. All colors are defined relative to a particular
+                        colorspace.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>linear colorspace</glossterm>
+                <glossdef>
+                    <para>A colorspace where the brightness of a color varies linearly with its
+                        values. Doubling the value of a color doubles its brightness.</para>
+                </glossdef>
+            </glossentry>
+            <glossentry>
+                <glossterm>gamma correction</glossterm>
+                <glossdef>
+                    <para>The process of converting from a linear colorspace to a non-linear
+                        colorspace that a display device expects, usually through the use of a power
+                        function. This process ensures that the intensities the display actually
+                        produces are linear with respect to our original values.</para>
+                </glossdef>
+            </glossentry>
+        </glosslist>
+        
+    </section>
+</chapter>

File Documents/Positioning/Tutorial 03.xml

View file
             which is what we need to create a periodically repeating pattern.</para>
         <para>The <function>cosf</function> and <function>sinf</function> functions compute the
             cosine and sine respectively. It isn't important to know exactly how these functions
-            work, but they effectively compute a circle of radius 2. By multiplying by 0.5f, it
-            shrinks the circle down to a radius of 1.</para>
+            work, but they effectively compute a circle of diameter 2. By multiplying by 0.5f, it
+            shrinks the circle down to a diameter of 1.</para>
         <para>Once the offsets are computed, the offsets have to be added to the vertex data. This
             is done with the <function>AdjustVertexData</function> function:</para>
         <example>
             that is <emphasis>before</emphasis> rendering. Clearly there must be a better way; games
             can't possibly do this every frame and still hold decent framerates.</para>
         <para>Actually for quite some time, they did. In the pre-GeForce 256 days, that was how all
-            games worked. Graphics hardware just took a list of vertices in normalized device
-            coordinate space and rasterized them into fragments and pixels. Granted, in those days,
+            games worked. Graphics hardware just took a list of vertices in clip space and
+            rasterized them into fragments and pixels. Granted, in those days,
             we were talking about maybe 10,000 triangles per frame. And while CPUs have come a long
             way since then, they haven't scaled with the complexity of graphics scenes.</para>
         <para>The GeForce 256 (note: not a GT 2xx card, but the very first GeForce card) was the

File Documents/Positioning/Tutorial 04.xml

View file
                 so simple that it has been built into graphics hardware since the days of the
                 earliest 3Dfx card and even prior graphics hardware.</para>
             <para>You might notice that the scaling can be expressed as a division operation
-                (dividing by the reciprocal). And you may recall that the difference between clip
+                (multiplying by the reciprocal). And you may recall that the difference between clip
                 space and normalized device coordinate space is a division by the W coordinate. So
                 instead of doing the divide in the shader, we can simply set the W coordinate of
                 each vertex correctly and let the hardware handle it.</para>
                 positive Z is away.</para>
             <para>Our perspective projection transform will be specific to this space. As previously
                 stated, the projection plane shall be a region [-1, 1] in the X and Y axes, and at a
-                Z value of 0. The projection will be from vertices in the -Z direction onto this
+                Z value of -1. The projection will be from vertices in the -Z direction onto this
                 plane; vertices that have a positive Z value are behind the projection plane.</para>
             <para>Now, we will make one more simplifying assumption: the location of the center of
                 the perspective plane is fixed at (0, 0, -1) in camera space. Therefore, since the
                 </imageobject>
             </mediaobject>
         </equation>
-        <para>The odd spacing is intensional. For laughs, let's add a bunch of meaningless terms
+        <para>The odd spacing is intentional. For laughs, let's add a bunch of meaningless terms
             that don't change the equation, but starts to develop an interesting pattern:</para>
         <equation>
             <title>Camera to Clip Expanded Equations</title>
         </example>
         <para>A 4x4 matrix contains 16 values. So we start by creating an array of 16 floating-point
             numbers called <varname>theMatrix</varname>. Since most of the values are zero, we can
-            just set the whole thing to zero.</para>
+            just set the whole thing to zero. This works because IEEE 32-bit floating-point numbers
+            represent a zero as 4 bytes that all contain zero.</para>
         <para>The next few functions set the particular values of interest into the matrix. Before
             we can understand what's going on here, we need to talk a bit about ordering.</para>
         <para>A 4x4 matrix is technically 16 values, so a 16-entry array can store a matrix. But
                         vector. Swizzle operations look like this:</para>
                     <programlisting>vec2 firstVec;
 vec4 secondVec = firstVec.xyxx;
-vec3 thirdVec = secondVec.wzyx;</programlisting>
+vec3 thirdVec = secondVec.wzy;</programlisting>
                     <para>Swizzle selection is, in graphics hardware, considered an operation so
                         fast as to be instantaneous. That is, graphics hardware is built with
                         swizzle selection in mind.</para>

File Documents/Positioning/Tutorial 05.xml

View file
             <para>These are the most common depth testing parameters. It turns on depth testing,
                 sets the test function to less than or equal to, and sets the range mapping to the
                 full accepted range.</para>
-            <para>It is comment to use <literal>GL_LEQUAL</literal> instead of
+            <para>It is common to use <literal>GL_LEQUAL</literal> instead of
                     <literal>GL_LESS</literal>. This allows for the use of multipass algorithms,
                 where you render the same geometry with the same vertex shader, but linked with a
                 different fragment shader. We'll look at those much, much later.</para>

File Documents/Positioning/Tutorial 06.xml

View file
                 <para>Callee-save is probably a better convention to use. With caller-save, a
                     function that takes a matrix stack must be assumed to modify it (if it takes the
                     object as a non-const reference), so it will have to do a push/pop. Whereas with
-                    callee-save, you only push/pop as you explicitly need: at the cite where you are
+                    callee-save, you only push/pop as you explicitly need: at the site where you are
                     modifying the matrix stack. It groups the code together better.</para>
             </sidebar>
         </section>

File Documents/Positioning/Tutorial 07.xml

View file
             <para>Spherical coordinates are three dimensional, so they have 3 values. One value,
                 commonly given the name <quote>r</quote> (for radius) represents the distance of the
                 coordinate from the center of the coordinate system. This value is on the range [0,
-                ∞). The second value, called <quote>φ</quote> (rho), represents the angle in the
+                ∞). The second value, called <quote>φ</quote> (phi), represents the angle in the
                 elliptical plane. This value extends on the range [0, 360). The third value, called
                     <quote>θ</quote> (theta), represents the angle above and below the elliptical
                 plane. This value is on the range [0, 180], where 0 means straight up and 180 means
 }</programlisting>
             </example>
             <para>The global variable <varname>g_sphereCamRelPos</varname> contains the spherical
-                coordinates. The X value contains Rho, the Y value contains Theta, and the Z value
-                is the radius.</para>
+                coordinates. The X value contains φ, the Y value contains θ, and the Z value is the
+                radius.</para>
             <para>The Theta value used in our spherical coordinates is slightly different from the
                 usual. Instead of being on the range [0, 180], it is on the range [-90, 90]; this is
                 why there is an addition by 90 degrees before computing the Theta angle in
                     <glossterm>glBindBufferRange</glossterm>
                     <glossdef>
                         <para>Binds a buffer object to a particular indexed location, as well as
-                            binding it to a target. When used with GL_UNIFORM_BUFFER, it binds the
+                            binding it to the given target. When used with GL_UNIFORM_BUFFER, it binds the
                             buffer object to a particular uniform buffer binding point. It has range
                             parameters that can be used to effectively bind part of the buffer
                             object to an indexed location.</para>

File Documents/Tools/SubImage.lua

View file
 				  subImageHSpacing, subImageVSpacing)
 
 	if(not subImageHeight) then
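+		-- subImageHeight was omitted: the earlier parameters were passed as
+		-- {x, y} tables, so unpack them into individual values.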
-		subImageVSpacing = subImageWidth.y;
-		subImageHSpacing = subImageWidth.x;
-		subImageHeight = subImagesY.y;
-		subImageWidth = subImagesY.x;
-		subImagesY = subImagesX.y;
-		subImagesX = subImagesX.x;
+		subImageVSpacing = subImageWidth[2];
+		subImageHSpacing = subImageWidth[1];
+		subImageHeight = subImagesY[2];
+		subImageWidth = subImagesY[1];
+		subImagesY = subImagesX[2];
+		subImagesX = subImagesX[1];