# HG changeset patch
# User Alfonse
# Date 1329888087 28800
# Node ID e9263b938503859583a6a924fb0a021df3191fe3
# Parent 39e880e6cf53ecf61b21d1cfb68486c8de985bd8
Tut17: Finished Projected Texture text.
diff --git a/Documents/Texturing/Tutorial 17.xml b/Documents/Texturing/Tutorial 17.xml
--- a/Documents/Texturing/Tutorial 17.xml
+++ b/Documents/Texturing/Tutorial 17.xml
@@ -177,7 +177,8 @@
Specifically in this case, the number of lights is a uniform, not a uniform block.
To do this, we need to use a uniform state binder,
g_lightNumBinder, and set it into all of the nodes in the
- scene. Also, we need references to those nodes to do rotation animations.
+ scene. This binder allows us to set the uniform for all of the objects (regardless
+ of which program they use).
The p_unlit shader is never actually used in the scene graph;
we just use the scene graph as a convenient way to load the shader. Similarly, the
m_sphere mesh is not used in a scene graph node. We pull
@@ -195,7 +196,8 @@
loaded and an error message is displayed.
Two of the objects in the scene rotate. This is easily handled using our list of
- objects. In the display method:
+ objects. In the display method, we access certain nodes and
+ change their transforms:
g_nodes[0].NodeSetOrient(glm::rotate(glm::fquat(),
360.0f * g_timer.GetAlpha(), glm::vec3(0.0f, 1.0f, 0.0f)));
@@ -209,11 +211,11 @@
Multiple Scenes
The split-screen trick used here is actually quite simple to pull off. It's also
one of the advantages of the scene graph: the ability to easily re-render the same
- scene from a different perspective.
+ scene multiple times.
The first thing that must change is that the projection matrix cannot be set in
- the old reshape function. It simply sets the width and height
- of the screen into global variables. This is important because we will be using two
- projection matrices.
+ the old reshape function. That function now only sets the new
+ width and height of the screen into global variables. This is important because we
+ will be using two projection matrices.
The projection matrix used for the left scene is set up like this:
Left Projection Matrix
@@ -288,27 +290,22 @@
we must prove that:
This might look like a simple proof by inspection due to the associative nature of
- these, but it is not. Vector math and matrix math simply don't combine that way.
- However, it does provide a clue to the real proof.
- This last part is really where much of the confusion dissipates. See, W (the W
- component of the post-projection vertex) is not the same as W' on the other side of
- the equation. W' is the W component of V after the
- transformation by T.
- So let us look at things symbolically. For the sake of simplicity, let's look at
- just the X component of the output. The X component of the left side of the equation
- is as follows:
-
- The right side looks like this:
-
- This tells us that the above equation is true if, and only if, the bottom row of T
- is the vector (0, 0, 0, 1). If it is anything else, then we get different numbers.
- Fortunately, the only matrix we have that has a different bottom row is the
- projection matrix, and T is the rotation matrix we apply after translation.
+ these, but it is not. The reason is quite simple: W and W' may not be the same. W is
+ the fourth component of v; W' is the fourth component of what results from T*v. If T
+ changes W, then the equation is not true. But if T does not change W, so that
+ W == W', then the equation is true.
+ Well, that makes things quite simple. We simply need to ensure that our T does not
+ alter W. Matrix multiplication tells us that W' is the dot product of v and the
+ bottom row of T.
+
+ Therefore, if the bottom row of T is (0, 0, 0, 1), then W == W'. And therefore, we
+ can use T before the division. Fortunately, the only matrix we have that has a
+ different bottom row is the projection matrix, and T is the rotation matrix we apply
+ after projection.
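To make the bottom-row condition concrete, here is a small CPU-side sketch (not part of the tutorial's code; the types and function names are invented for illustration) that computes W' as the dot product of a matrix's bottom row with v:

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // row-major: rows of T

// Standard matrix * vector. Note that the output's fourth component (W')
// is exactly the dot product of the matrix's bottom row with v.
Vec4 transform(const Mat4 &m, const Vec4 &v) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * v[c];
    return out;
}
```

For a rotation-plus-translation matrix, whose bottom row is (0, 0, 0, 1), `transform` leaves W untouched; for a perspective-style matrix, whose bottom row is (0, 0, -1, 0), W' becomes -Z instead.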
So this works, as long as we use the right matrices. We can rotate, translate, and
- scale post-projective clip-space exactly as we would post-projective NDC
- space.
+ scale post-projective clip-space exactly as we would post-projective NDC space.
+ Which is good, because we get to preserve the W component for perspective-correct
+ interpolation.
The take-home lesson here is very simple: projections are not that special as far
as transforms are concerned. Post-projective space is mostly just another space. It
may be a 4-dimensional homogeneous coordinate system, and that may be an odd thing
@@ -319,9 +316,191 @@
Projective Texture



+ In order to create our flashlight effect, we need to do something called
+ projective texturing. Projective texturing is a special form
+ of texture mapping. It is a way of generating texture coordinates for a texture, such
+ that it appears that the texture is being projected onto a scene, in much the same way
+ that a film projector projects light. Therefore, we need to do two things: implement
+ projective texturing, and then use the value we sample from the projected texture as the
+ light intensity.
+ The key to understanding projected texturing is to think backwards, compared to the
+ visual effect we are trying to achieve. We want to take a 2D texture and make it look
+ like it is projected onto the scene. To do this, we therefore do the opposite: we
+ project the scene onto the 2D texture. We want to take the vertex
+ positions of every object in the scene and project them into the space of the
+ texture.
+ Since this is a perspective projection operation, and it involves transforming vertex
+ positions, naturally we need a matrix. This is math we already know: we have vertex
+ positions in model space. We transform them to a camera space, one that is different
+ from the one we use to view the scene. Then we use a perspective projection matrix to
+ transform them to clip-space; both the matrix and this clip-space are again different
+ spaces from what we use to render the scene. One perspective divide later, and we're
+ done.
+ That last part is the small stumbling block. See, after the perspective divide, the
+ visible world, the part of the world that is projected onto the texture, lives in a [-1,
+ 1] sized cube. That is the size of NDC space, though it is again a different NDC space
+ from the one we use to render. The problem is that the range of the texture coordinates,
+ the space of the 2D texture itself, is [0, 1].
+ This is why we needed the prior discussion of post-projective transforms. Because we
+ need to do a post-projective transform here: we have to transform the XY coordinates of
+ the projected position from [-1, 1] to [0, 1] space. And again, we do not want to have
+ to perform the perspective divide ourselves; OpenGL has special functions for texture
+ accesses with a divide. Therefore, we encode the translation and scale as a
+ post-projective transformation. As previously demonstrated, this is mathematically
+ identical to doing the transform after the division.
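The equivalence can be checked with plain arithmetic. The following is a small standalone sketch (not tutorial code; the function names are made up) that remaps one clip-space component to [0, 1] texture space both ways, pre-divide and post-divide:

```cpp
#include <cassert>
#include <cmath>

// Pre-divide: fold the scale-by-0.5 and translate-by-0.5 into the homogeneous
// coordinate. The bottom row of that transform is still (0, 0, 0, 1), so W is
// untouched and the divide can happen afterward.
float remapPreDivide(float x, float w) {
    return (0.5f * x + 0.5f * w) / w;
}

// Post-divide: perform the perspective divide first, then remap the resulting
// NDC value from [-1, 1] to [0, 1].
float remapPostDivide(float x, float w) {
    return 0.5f * (x / w) + 0.5f;
}
```

For any W != 0 the two functions agree; for example, with x = 0.5 and w = 2.0 both yield 0.625.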
+ This is done in the Projected Light project. This
+ tutorial uses a similar scene to the one before, though with slightly different numbers
+ for lighting. The main difference, scene-wise, is the addition of a textured background
+ box.
+
+ The camera controls work the same way as before. The projected flashlight, represented
+ by the red, green, and blue axes, is moved with the IJKL keyboard keys, with O and U
+ moving up and down, respectively. The right mouse button rotates the flashlight around;
+ the blue line points in the direction of the light. The flashlight's position and
+ orientation are built around the camera controls, so it rotates around a point in front
+ of the flashlight. It translates relative to its current facing as well. As usual,
+ holding down the Shift key will cause the flashlight to move more
+ slowly.
+ Pressing the G key will toggle all of the regular lighting on and
+ off. This makes it easier to see just the light from our projected texture.
+
+ Flashing the Light
+ Let us first look at how we achieve the projected texture effect. We want to take
+ the model space positions of the vertices and project them onto the texture.
+ However, there is one minor problem: the scene graph system provides a transform
+ from model space into the visible camera space. We need a transform to our special
+ projected texture camera space, which has a different position and
+ orientation.
+ We resolve this by being clever. We already have positions in the viewing camera
+ space. So we simply start there and construct a matrix from view camera space into
+ our texture camera space.
+
+ View Camera to Projected Texture Transform
+ glutil::MatrixStack lightProjStack;
//Texture-space transform
+lightProjStack.Translate(0.5f, 0.5f, 0.0f);
+lightProjStack.Scale(0.5f, 0.5f, 1.0f);
//Project. Z-range is irrelevant.
+lightProjStack.Perspective(g_lightFOVs[g_currFOVIndex], 1.0f, 1.0f, 100.0f);
+//Transform from main camera space to light camera space.
+lightProjStack.ApplyMatrix(lightView);
+lightProjStack.ApplyMatrix(glm::inverse(cameraMatrix));
+
+g_lightProjMatBinder.SetValue(lightProjStack.Top());
+
+ Reading the modifications to lightProjStack in bottom-to-top
+ order, we begin by using the inverse of the view camera matrix. This transforms all
+ of our vertex positions back to world space, since the view camera matrix is a
+ world-to-camera matrix. We then apply the world-to-texture-camera matrix. This is
+ followed by a projection matrix, which uses an aspect ratio of 1.0. The last two
+ transforms move us from [-1, 1] NDC space to the [0, 1] texture space.
+ We use a matrix uniform binder to associate that transformation matrix with all of
+ the objects in the scene. Our shader takes care of things in the obvious way:
+ lightProjPosition = cameraToLightProjMatrix * vec4(cameraSpacePosition, 1.0);
+ Note that this line is part of the vertex shader;
+ lightProjPosition is passed to the fragment shader. One might
+ think that the projection would work best in the fragment shader, but doing it
+ per-vertex is actually just fine. The only time one would need to do the projection
+ per-fragment would be if one was using imposters or was otherwise modifying the
+ depth of the fragment. Indeed, because it works per-vertex, projected textures were
+ a preferred way of doing cheap lighting in many situations.
+ In the fragment shader, we want to use the projected texture as a light. We have
+ the ComputeLighting function in this shader from prior
+ tutorials. All we need to do is make our projected light appear to be a regular
+ light.
+ PerLight currLight;
+currLight.cameraSpaceLightPos = vec4(cameraSpaceProjLightPos, 1.0);
+currLight.lightIntensity =
+ textureProj(lightProjTex, lightProjPosition.xyw) * 4.0;
+
+currLight.lightIntensity = lightProjPosition.w > 0 ?
+ currLight.lightIntensity : vec4(0.0);
+ We create a simple structure that we fill in. Later, we pass this structure to
+ ComputeLighting, and it does the usual thing.
+ The view camera space position of the projected light is passed in as a uniform.
+ It is necessary for our flashlight to properly obey attenuation, as well as to find
+ the direction towards the light.
+ The next line is where we do the actual texture projection. The
+ textureProj is a texture accessing function that does
+ projective texturing. Even though lightProjTex is a
+ sampler2D (for 2D textures), the texture coordinate has three
+ dimensions. All forms of textureProj take one extra texture
+ coordinate compared to the regular texture function. This extra
+ texture coordinate is divided into the previous one before being used to access the
+ texture. Thus, it performs the perspective divide for us.
+
+ Mathematically, there is virtually no difference between using
+ textureProj and doing the divide ourselves and calling
+ texture with the results. While there may not be a
+ mathematical difference, there very well may be a performance difference. There
+ may be specialized hardware that does the division much faster than the
+ general-purpose opcodes in the shader. Then again, there may not. However, using
+ textureProj will certainly be no slower than
+ texture in the general case, so it's still a good
+ idea.
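The equivalence is easy to state outside of GLSL. Below is a CPU-side sketch (not tutorial code; the tiny texture and function names are invented for illustration) of what a textureProj-style access does with a 3-component coordinate:

```cpp
#include <array>
#include <cassert>

// A 2x2 single-channel "texture", row-major, nearest-neighbor sampled.
const std::array<float, 4> kTexels = {0.1f, 0.2f, 0.3f, 0.4f};

float sampleNearest(float s, float t) {
    int x = s < 0.5f ? 0 : 1; // nearest texel along S
    int y = t < 0.5f ? 0 : 1; // nearest texel along T
    return kTexels[y * 2 + x];
}

// textureProj-style access: the extra coordinate q is divided into the
// other coordinates before the actual sample is taken.
float sampleProj(float s, float t, float q) {
    return sampleNearest(s / q, t / q);
}
```

Dividing by q = 1 changes nothing, and in general sampleProj(s, t, q) gives the same result as dividing manually and then sampling, which is the mathematical relationship between textureProj and texture.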
+
+ Notice that the value pulled from the texture is scaled by 4.0. This is done
+ because the color values stored in the texture are clamped to the [0, 1] range. To
+ bring it up to our high dynamic range, we need to scale the intensity
+ appropriately.
+ The last statement is special. It compares the W component of the interpolated
+ position against zero, and sets the light intensity to zero if the W component is
+ less than or equal to 0. What is the purpose of this?
+ It stops this from happening:
+
+ The projection math doesn't care what side of the center of projection an object
+ is on; it will work either way. And since we do not actually do clipping on our
+ texture projection, we need some way to prevent this from happening. We effectively
+ need to do some form of clipping.
+ Recall that, given the standard projection transform, the W component is the
+ negation of the camera-space Z. Since the camera in our camera space is looking down
+ the negative Z axis, all positions that are in front of the camera must have a W >
+ 0. Therefore, if W is less than or equal to 0, then the position is behind the
+ camera.
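A tiny sketch of why this test works (invented for illustration, not tutorial code): with a standard projection matrix, whose bottom row is (0, 0, -1, 0), clip-space W is the negation of camera-space Z.

```cpp
#include <cassert>

// With a standard perspective projection (bottom row (0, 0, -1, 0)),
// the clip-space W of a camera-space point is simply -Z.
float clipW(float cameraZ) { return -cameraZ; }

// The fragment shader's test: keep the projected light only where W > 0,
// i.e. only for points in front of the projecting "camera".
bool litByProjector(float cameraZ) { return clipW(cameraZ) > 0.0f; }
```

Points in front of the projector (negative camera-space Z) pass the test; points behind it or in its plane are rejected, which is the cheap form of clipping described above.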
+
+
+ Spotlight Tricks
+ The size of the flashlight can be changed simply by modifying the field of view in
+ the texture projection matrix. Pressing the Y key will increase the
+ FOV, and pressing the N key will decrease it. An increase to the
+ FOV means that the light is projected over a greater area. At a large FOV, we
+ effectively have an entire hemisphere of light.
+ Another interesting trick we can play is to have multicolored lights. Press the
+ 2 key; this will change to a texture that contains spots of various
+ different colors.
+
+ This kind of complex light emitter would not be possible without using a texture.
+ Well, it could be done without textures, but it would require a lot more
+ processing power than a few matrix multiplies, a division in the fragment shader,
+ and a texture access. Press the 1 key to go back to the flashlight
+ texture.
+ There is one final issue that can and will crop up with projected textures: what
+ happens when the texture coordinates are outside of the [0, 1] boundary. With
+ previous textures, we used either GL_CLAMP_TO_EDGE or
+ GL_REPEAT for the S and T texture coordinate wrap modes.
+ Repeat is obviously not a good idea here; thus far, our sampler objects have been
+ clamping to the texture's edge. That worked fine because our edge texels have all
+ been zero. To see what happens when they are not, press the 3
+ key.
+
+ That rather ruins the effect. Fortunately, OpenGL does provide a way to resolve
+ this. It gives us a way to say that texels fetched outside of the [0, 1] range
+ should return a particular color. As before, this is set up with the sampler
+ object:
+
+ Border Clamp Sampler Objects
+ glSamplerParameteri(g_samplers[1], GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
+glSamplerParameteri(g_samplers[1], GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
+
+float color[4] = {0.0f, 0.0f, 0.0f, 1.0f};
+glSamplerParameterfv(g_samplers[1], GL_TEXTURE_BORDER_COLOR, color);
+
+ The S and T wrap modes are set to GL_CLAMP_TO_BORDER. Then the
+ border's color is set to zero. To toggle between the edge clamping sampler and the
+ border clamping one, press the H key.
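The difference between the two wrap modes can be sketched in one dimension (a made-up CPU model, not how the sampler hardware actually works):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// A 4-texel row; the edge texels are deliberately nonzero.
const std::array<float, 4> kRow = {0.9f, 0.5f, 0.5f, 0.9f};
const float kBorderColor = 0.0f; // as set with GL_TEXTURE_BORDER_COLOR

// GL_CLAMP_TO_EDGE: out-of-range fetches repeat the edge texel forever.
float fetchClampToEdge(int i) {
    return kRow[std::clamp(i, 0, 3)];
}

// GL_CLAMP_TO_BORDER: out-of-range fetches return the border color instead.
float fetchClampToBorder(int i) {
    return (i < 0 || i > 3) ? kBorderColor : kRow[i];
}
```

Inside the texture the two modes agree; outside it, edge clamping smears the nonzero edge texel across everything the projection touches, while border clamping falls to zero and the light simply stops at the texture's edge.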
+
+ That's much better now.
+
@@ -357,6 +536,20 @@
Try doing these things with the given programs.
+ In the spotlight project, change the projection texture coordinate from a
+ full 4D coordinate to a 2D one. Do this by performing the divide-by-W step
+ directly in the vertex shader, and simply pass the ST coordinates to the
+ fragment shader. Just use texture instead of
+ textureProj in the fragment shader. See how that
+ affects things. Also, try doing the perspective divide in the fragment
+ shader and see how this differs from doing it in the vertex shader.
+
+
+ In the spotlight project, change the interpolation style from
+ smooth to noperspective. See how
+ non-perspective-correct interpolation changes the projection.
+
+
Instead of using a projective texture, build a lighting system for spot
lights entirely within the shader. It should have a maximum angle; the
larger the angle, the wider the spotlight. It should also have an inner
@@ -405,6 +598,12 @@
+ projective texturing
+
+
+
+
+
diff --git a/Tut 17 Spotlight on Textures/data/cubeLight.frag b/Tut 17 Spotlight on Textures/data/cubeLight.frag
--- a/Tut 17 Spotlight on Textures/data/cubeLight.frag
+++ b/Tut 17 Spotlight on Textures/data/cubeLight.frag
@@ -83,10 +83,7 @@
accumLighting += ComputeLighting(diffuseColor, Lgt.lights[light]);
}
-// accumLighting = vec4(0.0f);
accumLighting += ComputeLighting(diffuseColor, currLight);
outputColor = accumLighting / Lgt.maxIntensity;
-
-// outputColor = currLight.lightIntensity;
}
diff --git a/Tut 17 Spotlight on Textures/data/litTexture.frag b/Tut 17 Spotlight on Textures/data/litTexture.frag
--- a/Tut 17 Spotlight on Textures/data/litTexture.frag
+++ b/Tut 17 Spotlight on Textures/data/litTexture.frag
@@ -73,6 +73,4 @@
}
outputColor = accumLighting / Lgt.maxIntensity;
-
-// outputColor = diffuseColor;
}
diff --git a/Tut 17 Spotlight on Textures/data/projLight.frag b/Tut 17 Spotlight on Textures/data/projLight.frag
--- a/Tut 17 Spotlight on Textures/data/projLight.frag
+++ b/Tut 17 Spotlight on Textures/data/projLight.frag
@@ -73,9 +73,9 @@
PerLight currLight;
currLight.cameraSpaceLightPos = vec4(cameraSpaceProjLightPos, 1.0);
currLight.lightIntensity =
- textureProj(lightProjTex, lightProjPosition.xyw) * 4.0f;
+ textureProj(lightProjTex, lightProjPosition.xyw) * 4.0;
- currLight.lightIntensity = lightProjPosition.z > 2.0202 ?
+ currLight.lightIntensity = lightProjPosition.w > 0 ?
currLight.lightIntensity : vec4(0.0);
vec4 accumLighting = diffuseColor * Lgt.ambientIntensity;
@@ -84,10 +84,7 @@
accumLighting += ComputeLighting(diffuseColor, Lgt.lights[light]);
}
-// accumLighting = vec4(0.0f);
accumLighting += ComputeLighting(diffuseColor, currLight);
outputColor = accumLighting / Lgt.maxIntensity;
-
-// outputColor = currLight.lightIntensity;
}