point, light position, and surface normal must all be in the same space for this equation to

+ <?dbhtml filename="Tut09 Vertex Point Lighting.html" ?>

<title>Vertex Point Lighting</title>

<para>Thus far, we have computed the lighting equation at each vertex and interpolated the

results across the surface of the triangle. We will continue to do so for point lights.

<function>normalize</function> call is just to convert it into a unit vector.</para>

+ <?dbhtml filename="Tut09 Interpolation.html" ?>

<title>Interpolation</title>

<para>As you can see, doing point lighting is quite simple. Unfortunately, the visual results are not.</para>

<para>And our lighting equation is this:</para>

<!--TODO: eq: D * I * dot(L, N)-->

<para>If the surface normal N is being interpolated, then at any particular point on the

- surface, we get this equation:</para>

+ surface, we get this equation for a directional light (the light direction L does not

<!--TODO: eq: D * I * dot(L, Na*α + Nb*(1-α))-->

<para>The dot product is distributive, like scalar multiplication. So we can distribute the

L to both sides of the dot product term:</para>

+ <?dbhtml filename="Tut09 Fragment Lighting.html" ?>

<title>Fragment Lighting</title>

<para>So, in order to deal with interpolation artifacts, we need to interpolate the actual

light direction and normal, instead of just the results of the lighting equation. This
is called per-fragment lighting or, simply, fragment lighting.</para>

<para>The <phrase role="propername">Fragment Point Lighting</phrase> tutorial shows off how

fragment lighting works.</para>

+ <para>This tutorial is controlled as before, with a few exceptions. Pressing the

+ <keycap>t</keycap> key will toggle a scale factor to be applied to the

+ cylinder, and pressing the <keycap>h</keycap> key will toggle between per-fragment

+ lighting and per-vertex lighting.</para>

+ <!--TODO: Show a picture of the tutorial.-->

+ <para>Much better.</para>

+ <para>The rendering code has changed somewhat, considering the use of model space for

+ lighting instead of camera space. The start of the rendering looks as follows:</para>

+ <title>Initial Per-Fragment Rendering</title>

+ <programlisting language="cpp">Framework::MatrixStack modelMatrix;

+modelMatrix.SetMatrix(g_mousePole.CalcMatrix());

+const glm::vec4 &worldLightPos = CalcLightPosition();

+glm::vec4 lightPosCameraSpace = modelMatrix.Top() * worldLightPos;</programlisting>

+ <para>The new code is the last line, where we transform the world-space light into camera

+ space. This is done to make the math much easier. Since our matrix stack is building up

+ the transform from model to camera space, the inverse of this matrix would be a

+ transform from camera space to model space. So we need to put our light position into

+ camera space before we transform it by the inverse.</para>

+ <para>After doing that, it uses a variable to switch between per-vertex and per-fragment

+ lighting. This just selects which shaders to use; both sets of shaders take the same

+ uniform values, even though they use them in different program stages.</para>

+ <para>The ground plane is rendered with this code:</para>

+ <title>Ground Plane Per-Fragment Rendering</title>

+ <programlisting language="cpp">glUseProgram(pWhiteProgram->theProgram);

+glUniformMatrix4fv(pWhiteProgram->modelToCameraMatrixUnif, 1, GL_FALSE,

+ glm::value_ptr(modelMatrix.Top()));

+glm::mat4 invTransform = glm::inverse(modelMatrix.Top());

+glm::vec4 lightPosModelSpace = invTransform * lightPosCameraSpace;

+glUniform3fv(pWhiteProgram->modelSpaceLightPosUnif, 1, glm::value_ptr(lightPosModelSpace));

+glUseProgram(0);</programlisting>

+ <para>We compute the inverse matrix using <function>glm::inverse</function> and store it.

+ Then we use that to compute the model space light position and pass that to the shader.

+ Then the plane is rendered.</para>

+ <para>The cylinder is rendered using similar code. It simply does a few transformations to

+ the model matrix before computing the inverse and rendering.</para>

+ <para>The shaders are where the real action is. As with previous lighting tutorials, there

+ are two sets of shaders: one that takes a per-vertex color, and one that uses a constant

+ white color. The vertex shaders that do per-vertex lighting computations should be

+ <title>Model Space Per-Vertex Lighting Vertex Shader</title>

+ <programlisting language="glsl">#version 330
+
+layout(location = 0) in vec3 position;
+layout(location = 1) in vec4 inDiffuseColor;
+layout(location = 2) in vec3 normal;
+
+smooth out vec4 interpColor;
+
+uniform vec3 modelSpaceLightPos;
+
+uniform vec4 lightIntensity;
+uniform vec4 ambientIntensity;
+
+uniform mat4 cameraToClipMatrix;
+uniform mat4 modelToCameraMatrix;
+
+void main()
+{
+    gl_Position = cameraToClipMatrix * (modelToCameraMatrix * vec4(position, 1.0));
+
+    vec3 dirToLight = normalize(modelSpaceLightPos - position);
+    float cosAngIncidence = dot(normal, dirToLight);
+    cosAngIncidence = clamp(cosAngIncidence, 0, 1);
+
+    interpColor = (lightIntensity * cosAngIncidence * inDiffuseColor) +
+        (ambientIntensity * inDiffuseColor);
+}</programlisting>

+ <para>The main differences between this version and the previous version are simply what one

+ would expect from the change from camera-space lighting to model space lighting. The

+ per-vertex inputs are used directly, rather than being transformed into camera space.

+ There is a second version that omits the <varname>inDiffuseColor</varname> input.</para>

+ <para>With per-vertex lighting, we have two vertex shaders:

+ <filename>ModelPosVertexLighting_PCN.vert</filename> and

+ <filename>ModelPosVertexLighting_PN.vert</filename>. With per-fragment lighting, we

+ also have two shaders: <filename>FragmentLighting_PCN.vert</filename> and

+ <filename>FragmentLighting_PN.vert</filename>. They are disappointingly similar.</para>

+ <title>Model Space Per-Fragment Lighting Vertex Shader</title>

+ <programlisting language="glsl">#version 330
+
+layout(location = 0) in vec3 position;
+layout(location = 1) in vec4 inDiffuseColor;
+layout(location = 2) in vec3 normal;
+
+out vec4 diffuseColor;
+out vec3 vertexNormal;
+out vec3 modelSpacePosition;
+
+uniform mat4 cameraToClipMatrix;
+uniform mat4 modelToCameraMatrix;
+
+void main()
+{
+    gl_Position = cameraToClipMatrix * (modelToCameraMatrix * vec4(position, 1.0));
+
+    modelSpacePosition = position;
+    vertexNormal = normal;
+    diffuseColor = inDiffuseColor;
+}</programlisting>

+ <para>Since our lighting is done in the fragment shader, there isn't much to do except pass

+ variables through and set the output clip-space position. The version that takes no

+ diffuse color just passes a <type>vec4</type> containing just 1.0.</para>

+ <para>The fragment shader is much more interesting:</para>

+ <title>Per-Fragment Lighting Fragment Shader</title>

+ <programlisting language="glsl">#version 330
+
+in vec4 diffuseColor;
+in vec3 vertexNormal;
+in vec3 modelSpacePosition;
+
+out vec4 outputColor;
+
+uniform vec3 modelSpaceLightPos;
+
+uniform vec4 lightIntensity;
+uniform vec4 ambientIntensity;
+
+void main()
+{
+    vec3 lightDir = normalize(modelSpaceLightPos - modelSpacePosition);
+
+    float cosAngIncidence = dot(normalize(vertexNormal), lightDir);
+    cosAngIncidence = clamp(cosAngIncidence, 0, 1);
+
+    outputColor = (diffuseColor * lightIntensity * cosAngIncidence) +
+        (diffuseColor * ambientIntensity);
+}</programlisting>

+ <para>The math is essentially identical between the per-vertex and per-fragment case. The

+ main difference is the normalization of <varname>vertexNormal</varname>. This is

+ necessary because interpolating between two unit vectors does not mean you will get a

+ unit vector after interpolation. Indeed, interpolating the 3 components guarantees that

+ you will not get a unit vector.</para>

+ <title>Caught in a Lie</title>

+ <para>While this may look perfect, there is still one problem. Use the <keycombo>

+ </keycombo> key to move the light really close to the cylinder, but without putting

+ the light inside the cylinder. You should see something like this:</para>

+ <!--TODO: Picture of the cylinder with a close light.-->

+ <para>This looks like the same problem we had before. Wasn't doing lighting at the

+ fragment level supposed to fix this?</para>

+ <para>Actually, this is a completely different problem. And it is one that is

+ essentially impossible to solve. Well, not without changing our geometry.</para>

+ <para>The source of the problem is this: the light finally caught us in our lie.

+ Remember what we are actually doing. We are not rendering a true cylinder; we are

+ rendering a group of flat triangles that we attempt to make

+ <emphasis>appear</emphasis> to be a round cylinder. We are rendering a polygonal

+ approximation of a cylinder, then using our lighting computations to make it seem

+ like the faceted shape is really round.</para>

+ <para>This works quite well when the light is fairly far from the surface. But when the

+ light is very close, as it is here, it reveals our fakery for what it is.</para>

+ <para>Let's take a top-down view of our cylinder approximation, but let's also draw what

+ the actual cylinder would look like:</para>

+ <!--TODO: Show a diagram of a faceted cylinder inscribed within a circle.-->

+ <para>Now, consider our lighting equation at a particular point on the fake
+ cylinder.</para>

+ <!--TODO: Extend the previous diagram with two lights, one close and one far. Also show a point on the fake

+cylinder, and a normal. And show the corresponding point/normal.-->

+ <para>The problem comes from the difference between the actual position being rendered

+ and the corresponding position on the circle that has the normal that we are

+ pretending our point has. When the light is somewhat distant, the difference between

+ the actual light direction we use, and the one we would use on a real cylinder is

+ pretty small. But when the light is very close, the difference is quite large.</para>

+ <para>The lighting computations are correct for the vertices near the edges of a

+ triangle. But the farther from the edges you get, the more incorrect it becomes.</para>

+ <para>The key point is that there isn't much we can do about this problem. The only

+ solution is to add more vertices to the approximation of the cylinder. It should

+ also be noted that adding more triangles would make per-vertex lighting look
+ more correct as well. That makes the whole exercise of using per-fragment lighting
+ somewhat pointless; if the mesh is fine enough that each vertex effectively becomes a
+ fragment, then there is no visible difference between them.</para>

+ <para>For our simple case, adding triangles is easy, since a cylinder is a

+ mathematically defined object. For a complicated case like a human being, this is

+ usually rather more difficult. It also costs performance, since there are

+ more vertices, more executions of our (admittedly simple) vertex shader, and more

+ triangles rasterized or discarded for being back-facing.</para>

- <title>Distance and Points</title>

+ <?dbhtml filename="Tut09 Distant Points of Light.html" ?>

+ <title>Distant Points of Light</title>

+ <para>There is another issue with our current example. Use the <keycap>i</keycap> key to

+ raise the light up really high. Notice how bright all of the upwardly-facing surfaces
+ get:</para>

+ <!--TODO: Show a picture of a high light's effect on the surface.-->

+ <para>You probably have no experience with this in real life. Holding a light farther from

+ the surface in reality does not make the light brighter. So obviously something is

+ happening in reality that our simple lighting model is not accounting for.</para>

+ <para>In reality, lights emit a certain quantity of light per unit time. For a point-like

+ light such as a light bulb, it emits this light radially, in all directions. The farther

+ from the light source one gets, the larger the area that this light must ultimately cover.</para>

+ <para>Light is essentially a wave. The farther away from the source of the wave, the less

+ intense the wave is. For light, this is called <glossterm>light
+ attenuation</glossterm>.</para>

+ <para>Our model does not include light attenuation, so let's fix that.</para>

+ <para>Attenuation is a well-understood physical phenomenon. In the absence of other factors

+ (atmospheric light scattering, etc), the light intensity varies with the inverse of the

+ square of the distance. An object 2 units away from the light receives one-fourth

+ the intensity. So our equation for light attenuation is as follows:</para>

+ <!--TODO: An equation for light attenuation. AttenLight = Light * (1 / (1.0 + k * r^2))-->

+ <para>There is a constant in the equation, which is used for unit correction. Of course, we

+ can (and will) use it as a fudge factor to make things look right.</para>

+ <para>The constant can take on a physical meaning. If you want to specify a distance at

+ which half of the light intensity is lost, <varname>k</varname> simply becomes:</para>

+ <!--TODO: An equation for k = 1/khalf^2-->

+ <para>However physically correct this equation is, it has certain drawbacks. And this brings

+ us back to the light intensity problem we touched on earlier.</para>

+ <para>Since our lights are clamped to the [0, 1] range, it doesn't take much distance from
+ the light before its contribution becomes effectively nil. In reality,

+ with an unclamped range, we could just pump the light's intensity up to realistic

+ values. But we're working with a clamped range.</para>

+ <para>Therefore, a more common attenuation scheme is to use the inverse of just the distance

+ instead of the inverse of the distance squared:</para>

+ <!--TODO: Light equation with 1/(1.0 + kr)-->

+ <para>This makes distant lights look brighter. It isn't physically correct, but neither is
+ much else about our rendering at this point, so it won't be noticed much.</para>

+ <title>Reverse of the Transform</title>

+ <para>However, there is a problem. We previously did per-fragment lighting in model

+ space. And while this is a perfectly useful space to do lighting in, model space is

+ not world space.</para>

+ <para>We want to specify the attenuation constant factor in terms of world space

+ distances. But we aren't dealing in world space; we are in model space. And model

+ space distances are, naturally, in model space, which may well be scaled relative to

+ world space. Here, any kind of scale in the model-to-world transform is a problem,

+ not just non-uniform scales. Although if there were a uniform scale, we could
+ theoretically apply the scale to the attenuation constant.</para>

+ <para>So now we cannot use model space. Fortunately, camera space is a space that has

+ the same scale as world space, just with a rotation/translation applied to it. So we

+ can do our lighting in that space.</para>

+ <para>Doing it in camera space requires computing a camera space position and passing it

+ to the fragment shader to be interpolated. And while we could do this, that's not

+ clever enough. Isn't there some way to get around that?</para>

+ <para>Yes, there is. Recall <varname>gl_FragCoord</varname>, an intrinsic value given to

+ every fragment shader. It represents the location of the fragment in window space.

+ So instead of transforming from model space to camera space, we will transform from

+ window space to camera space.</para>

+ <para>The use of this reverse-transformation technique here should not be taken as a

+ suggestion to use it in all, or even most cases like this. In all likelihood, it

+ will be much slower than just passing the camera space position to the fragment

+ shader. It is here primarily for demonstration purposes; it is useful for other

+ techniques that we will see in the future.</para>

+ <para>The sequence of transformations that take a position from camera space to window

+ space is as follows:</para>

+ <!--TODO: The sequence of transforms to get gl_FragCoord from camera space.-->

+ <para>Therefore, given <varname>gl_FragCoord</varname>, we will need to perform the

+ reverse of these:</para>

+ <!--TODO: Invert the sequence and the functions.-->

+ <para>In order for our fragment shader to perform this transformation, it must be given

+ the following values:</para>

+ <para>The inverse projection matrix.</para>

+ <para>The viewport width/height.</para>

+ <para>The depth range.</para>

+ <title>Applied Attenuation</title>

+ <para>The <phrase role="propername">Fragment Attenuation</phrase> tutorial performs

+ per-fragment attenuation, both with linear and quadratic attenuation.</para>

+ <!--TODO: Show a picture of the tutorial.-->

+ <para>This tutorial controls as before, with the following exceptions. The

+ <keycap>O</keycap> and <keycap>U</keycap> keys increase and decrease the

+ attenuation constant. However, remember that decreasing the constant makes the

+ attenuation less, which makes the light appear <emphasis>brighter</emphasis> at a

+ particular distance. Using the shift key in combination with them will

+ increase/decrease the attenuation by smaller increments. The <keycap>H</keycap> key

+ swaps between the linear and quadratic interpolation functions.</para>

+ <para>The drawing code is mostly the same as we saw in the per-vertex point light

+ tutorial, since both this and that one perform lighting in camera space. The vertex

+ shader is also nothing new; it simply passes the vertex normal and color to the fragment

+ shader. The vertex normal is multiplied by the normal matrix, which allows us to use

+ non-uniform scaling.</para>

+ <title>New Uniform Types</title>

+ <para>The more interesting part is the fragment shader. The definitions are not much

+ changed from the last one, but there have been some additions:</para>

+ <title>Light Attenuation Fragment Shader Definitions</title>

+ <programlisting language="glsl">uniform mat4 clipToCameraMatrix;

+uniform ivec2 windowSize;

+uniform vec2 depthRange;

+uniform float lightAttenuation;

+uniform bool bUseRSquare;</programlisting>

+ <para>The first three lines are the information we need to perform the

+ previously-discussed reverse-transformation, so that we can turn

+ <varname>gl_FragCoord</varname> into a camera-space position. Notice that the

+ <varname>windowSize</varname> uses a new type: <type>ivec2</type>. This is a

+ 2-dimensional vector of integers.</para>

+ <para>Thus far, we have only modified uniforms of floating-point type. All of the vector

+ uploading functions we have used have been of the forms

+ <function>glUniform#f</function> or <function>glUniform#fv</function>, where the

+ # is a number 1-4. The <quote>v</quote> represents a vector version, which takes a

+ pointer instead of 1-4 parameters.</para>

+ <para>Uploading to an integer uniform uses functions of the form

+ <function>glUniform#i</function> and <function>glUniform#iv</function>. So the

+ code we use to set the window size uniform is this:</para>

+ <programlisting language="cpp">glUniform2i(g_FragWhiteDiffuseColor.windowSizeUnif, w, h);</programlisting>

+ <para>The <varname>w</varname> and <varname>h</varname> values are the width and height

+ passed to the <function>reshape</function> function. That is where this line of code

+ is, and that is where we set both the clip-to-camera matrix and the window size. The

+ <varname>depthRange</varname> is set in the initialization code, since it never
+ changes.</para>

+ <para>The <varname>lightAttenuation</varname> uniform is just a float, but

+ <varname>bUseRSquare</varname> uses a new type: boolean.</para>

+ <para>GLSL has the <type>bool</type> type just like C++ does. The

+ <literal>true</literal> and <literal>false</literal> values work just like C++'s

+ equivalents. Where they differ is that GLSL also has vectors of bools, called

+ <type>bvec#</type>, where the # can be 2, 3, or 4. We don't use that here, but

+ it is important to note.</para>

+ <para>OpenGL's API, however, is still a C API. And C (at least, pre-C99) has no

+ <type>bool</type> type. Uploading a boolean value to a shader looks like this:</para>

+ <programlisting language="cpp">glUniform1i(g_FragWhiteDiffuseColor.bUseRSquareUnif, g_bUseRSquare ? 1 : 0);</programlisting>

+ <para>The integer form of uniform uploading is used, but the floating-point form could

+ be used as well. The number 0 represents false, and any other number is true.</para>

+ <title>Functions in GLSL</title>

+ <para>For the first time, we have a shader complex enough that splitting it into

+ different functions makes sense. So we do that. The first function is one that

+ computes the camera-space position:</para>

+ <title>Window to Camera Space Function</title>

+ <programlisting language="glsl">vec3 CalcCameraSpacePosition()
+{
+    vec3 ndcPos;
+    ndcPos.xy = ((gl_FragCoord.xy / windowSize.xy) * 2.0) - 1.0;
+    ndcPos.z = (2.0 * gl_FragCoord.z - depthRange.x - depthRange.y) /
+        (depthRange.y - depthRange.x);
+
+    vec4 clipPos;
+    clipPos.w = 1.0 / gl_FragCoord.w;
+    clipPos.xyz = ndcPos.xyz * clipPos.w;
+
+    return vec3(clipToCameraMatrix * clipPos);
+}</programlisting>

+ <para>Not surprisingly, GLSL functions are defined much like C and C++ functions.</para>

+ <para>The first three lines compute the position in normalized device coordinates.

+ Notice that the computation of the X and Y coordinates is simplified from the

+ original function. This is because our viewport always sets the lower-left position

+ of the viewport to (0, 0). This is what you get when you plug zeros into that
+ equation.</para>

+ <para>From there, the clip-space position is computed as previously shown. Then the
+ result is multiplied by the clip-to-camera matrix, and that vector is returned to the
+ caller.</para>

+ <para>This is a simple function that uses only uniforms to compute a value. It takes no
+ inputs and returns its result directly. The second function is not quite as simple.</para>

+ <title>Light Intensity Application Function</title>

+ <programlisting language="glsl">vec4 ApplyLightIntensity(in vec3 cameraSpacePosition, out vec3 lightDirection)
+{
+    vec3 lightDifference = cameraSpaceLightPos - cameraSpacePosition;
+    float lightDistanceSqr = dot(lightDifference, lightDifference);
+    lightDirection = lightDifference * inversesqrt(lightDistanceSqr);
+
+    float distFactor = bUseRSquare ? lightDistanceSqr : sqrt(lightDistanceSqr);
+
+    return lightIntensity * (1 / (1.0 + lightAttenuation * distFactor));
+}</programlisting>

+ <para>The function header looks rather different from the standard C/C++ function

+ definition syntax. Parameters to GLSL functions are designated as being inputs,

+ outputs, or inputs and outputs.</para>

+ <para>Parameters designated with <literal>in</literal> are input parameters. Functions
+ can change these values, but the changes will have no effect on the variable or
+ expression used in the function call; any changes remain local to the function. This is
+ much like the default in C/C++, where parameter changes are local. Naturally, this is
+ the default behavior for GLSL parameters if you do not specify a qualifier.</para>

+ <para>Parameters designated with <literal>out</literal> can be written to, and their values

+ will be returned to the calling function. These are similar to non-const reference

+ parameter types in C++. And just as with reference parameters, the caller of a

+ function must call it with a real variable (called an <quote>l-value</quote>). And

+ this variable must be a variable that can be <emphasis>changed</emphasis>, so you

+ cannot pass a uniform or shader stage input value as this parameter.</para>

+ <para>However, the initial value of a parameter declared as an output is
+ <emphasis>not</emphasis> copied in from the calling function. This means that
+ its value at the start of the function is undefined (ie: it could be

+ anything). Because of this, you can pass shader stage outputs as

+ <literal>out</literal> parameters. Shader stage output variables can be written

+ to, but <emphasis>never</emphasis> read from.</para>

+ <para>Parameters designated as <literal>inout</literal> will have their values initialized

+ by the caller and have the final value returned to the caller. These are exactly

+ like non-const reference parameters in C++. The main difference is that the value is

+ initialized with the one that the user passed in, which forbids the passing of

+ shader stage outputs as <literal>inout</literal> parameters.</para>

+ <para>This function is semi-complex, as an optimization. Previously, our functions

+ simply normalized the difference between the vertex position and the light position.

+ In computing the attenuation, we need the distance between the two. And the process

+ of normalization computes the distance. So instead of calling the GLSL function to

+ normalize the direction, we do it ourselves, so that the distance is not computed

+ twice (once in the GLSL function and once for us).</para>

+ <para>The second line performs a dot product with the same vector. Remember that the dot

+ product between two vectors is the cosine of the angle between them, multiplied by

+ each of the lengths of the vectors. Well, the angle between a vector and itself is

+ zero, and the cosine of zero is always one. So what you get is just the length of

+ the two vectors times one another. And since the vectors are the same, the lengths

+ are the same. Thus, the dot product of a vector with itself is the square of its
+ length.</para>

+ <para>To normalize a vector, we must divide the vector by its length. And the length of

+ <varname>lightDifference</varname> is the square root of

+ <varname>lightDistanceSqr</varname>. The <function>inversesqrt</function>

+ computes 1 / the square root of the given value, so all we need to do is multiply

+ this with the <varname>lightDifference</varname> to get the light direction as a

+ normalized vector. This value is written to our output variable.</para>

+ <para>The next line computes our lighting term. Notice the use of the ?: operator. This

+ works just like in C/C++. If we are using the square of the distance, that's what we

+ store. Otherwise we get the square-root and store that.</para>

+ <para>The assumption in using ?: here is that only one or the other of the two

+ expressions will be evaluated. That's why the expensive call to

+ <function>sqrt</function> is done here. However, this may not be the case.

+ It is entirely possible (and quite likely) that the shader will always evaluate

+ <emphasis>both</emphasis> expressions and simply store one value or the

+ other as needed. So do not rely on such conditional logic to save performance.</para>

+ <para>After that, things proceed as expected.</para>

+ <para>Making these separate functions makes the main function look almost identical to
+ the per-vertex case:</para>

+ <title>Main Light Attenuation</title>

+ <programlisting language="glsl">void main()
+{
+    vec3 cameraSpacePosition = CalcCameraSpacePosition();
+
+    vec3 lightDir = vec3(0.0);
+    vec4 attenIntensity = ApplyLightIntensity(cameraSpacePosition, lightDir);
+
+    float cosAngIncidence = dot(normalize(vertexNormal), lightDir);
+    cosAngIncidence = clamp(cosAngIncidence, 0, 1);
+
+    outputColor = (diffuseColor * attenIntensity * cosAngIncidence) +
+        (diffuseColor * ambientIntensity);
+}</programlisting>

+ <para>Function calls appear very similar to C/C++, with the exceptions about parameters

+ noted before. The camera-space position is determined. Then the light intensity,

+ modified by attenuation, is computed. From there, things proceed as before.</para>

+ <title>Alternative Attenuation</title>

+ <para>As nice as these somewhat-realistic attenuation schemes are, it is often

+ useful to model light attenuation in a very different way. This is in no way

+ physically accurate, but it can look reasonably good.</para>

+ <para>We simply do linear interpolation based on the distance. When the distance is

+ 0, the light has full intensity. When the distance is beyond a given distance,

+ the maximum light range (which varies per-light), the intensity is 0.</para>

+ <para>Note that <quote>reasonably good</quote> depends on your needs. The closer you

+ get in other ways to providing physically accurate lighting, the closer you get

+ to photorealism, the less you can rely on less accurate phenomena. It does no

+ good to implement a complicated sub-surface scattering lighting model that

+ includes Fresnel factors and so forth, while simultaneously using a simple

+ interpolation lighting attenuation model.</para>

<?dbhtml filename="Tut09 In Review.html" ?>

+ <para>In this tutorial, you have learned the following:</para>

+ <para>Point lights are lights that have a position within the world, radiating light

+ equally in all directions. The light direction at a particular point on the

+ surface must be computed using the position at that point and the position of
+ the light.</para>

+ <para>Attempting to perform per-vertex lighting computations with point lights leads
+ to interpolation artifacts.</para>

+ <para>Lighting can be computed per-fragment by passing the fragment's position in an

+ appropriate space.</para>

+ <para>Lighting can be computed in model space.</para>

+ <para>Point lights have a falloff with distance, called attenuation. Not performing

+ this can cause odd effects, where a light appears to be brighter when it moves

+ farther from a surface. Light attenuation varies with the inverse of the square

+ of the distance, but other attenuation models can be used.</para>

+ <para>Fragment shaders can compute the camera space position of the fragment in

+ question by using <varname>gl_FragCoord</varname> and a few uniform variables

+ holding information about the camera to window space transform.</para>

+ <para>GLSL can have integer vectors, boolean values, and functions.</para>

<title>Further Study</title>

<para>Try doing these things with the given programs.</para>

the same stack, so long as the pop function puts the two matrices in the

+ <para>Implement the alternative attenuation described at the end of the section on
+ attenuation.</para>

+ <title>GLSL Functions of Note</title>

+ <funcdef>vec <function>inversesqrt</function></funcdef>

+ <paramdef>vec <parameter>x</parameter></paramdef>

+ <para>This function computes 1 / the square root of <varname>x</varname>. This is a

+ component-wise computation, so vectors may be used. The return value will have the

+ same type as <varname>x</varname>.</para>

+ <funcdef>vec <function>sqrt</function></funcdef>

+ <paramdef>vec <parameter>x</parameter></paramdef>

+ <para>This function computes the square root of <varname>x</varname>. This is a

+ component-wise computation, so vectors may be used. The return value will have the

+ same type as <varname>x</varname>.</para>

<?dbhtml filename="Tut09 Glossary.html" ?>

- <glossterm>point light</glossterm>

+ <glossterm>point light source</glossterm>

+ <para>A light source that emits light from a particular location in the world.

+ The light is emitted in all directions evenly.</para>

<glossterm>fragment lighting</glossterm>

+ <para>Evaluating the lighting equation at every fragment.</para>

+ <para>This is also called Phong shading, in contrast with Gouraud shading, but

+ this name has fallen out of favor due to similarities with names for other

+ lighting models.</para>

+ <glossterm>light attenuation</glossterm>

+ <para>The decrease in the intensity of light with distance from the source of
+ that light.</para>