/*
* Copyright 2010 Cliff L. Biffle. All Rights Reserved.
* Use of this source code is governed by the Apache License 2.0,
* which can be found in the LICENSE file.
*/
/*
* Kinect Projection Routines
*
* Since the Kinect is, fundamentally, a camera, its data has already
* undergone perspective projection. Closer objects seem relatively
* larger, and distant objects seem small. If we left the POV fixed,
* we could simply display the Kinect's raw data using an orthographic
* projection -- but what's the fun in that?
*
* This shader maps from one perspective space (the Kinect's) into
* another (GL's modelview/projection space). We receive unprocessed
* vertices from the CPU, which are structured like this:
* X and Y: -1.0 .. 1.0, normalized pixel location in the input.
* Z: the Kinect's raw 11-bit non-linear depth sample.
*
* (Note: you might expect the Kinect's raw depth samples to measure
* distance from the Kinect -- i.e. the length of the ray cast from
* the lens to the object. Not so! The Kinect pre-processes depths,
* and by the time we receive them, they are the distance from the
* object to the nearest point on the camera's XY plane.)
*
* Our job is to:
* 1. Linearize and scale the depth. In this case, we convert it
* such that 1 GL Unit = 1 meter. Because depths are already
* measured from the XY plane, we don't have to do any raycasting.
* 2. Reverse the perspective warping of the XY data, using what we
* know about the Kinect's optics.
*/
// The Kinect's lens appears to have a 57-degree horizontal FOV.
const float kinectFovXDegrees = 57.0;
// Multiplying this by our normalized XY gives us ray angles.
const float halfFov = (kinectFovXDegrees / 180.0 * 3.14159265358) / 2.0;
/*
* Reverses the Kinect's perspective projection. The input point must
* be in our "raw" format, that is:
* X and Y: -1.0 .. 1.0, normalized pixel location in the input.
* Z: the Kinect's raw 11-bit non-linear depth sample.
*
* The result is the point's original location in orthonormal space,
* with 1 GL unit per meter.
*/
vec3 kinect_unproject(vec3 point) {
// Linearization equation derived by the ROS folks at CCNY.
  float linearZ = -325.616 / (point.z - 1084.61);
vec2 unprojected = linearZ * tan(halfFov) * point.xy;
return vec3(unprojected, linearZ);
}
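/*
 * (A CPU-side sketch of the same math, useful for sanity-checking raw
 * Kinect samples off-line. The constants mirror the shader above; the
 * function names and the sample raw value are mine, for illustration
 * only.)
 */

```python
import math

# Mirror the shader's constants.
KINECT_FOV_X_DEGREES = 57.0
HALF_FOV = math.radians(KINECT_FOV_X_DEGREES) / 2.0

def linearize_depth(raw):
    """CCNY linearization: raw 11-bit sample -> meters from the XY plane."""
    return -325.616 / (raw - 1084.61)

def kinect_unproject(x, y, raw_z):
    """Reverse the Kinect's perspective projection, as in the shader."""
    z = linearize_depth(raw_z)
    scale = z * math.tan(HALF_FOV)
    return (scale * x, scale * y, z)

# A raw sample of 600 lands roughly two-thirds of a meter from the
# camera's XY plane.
print(kinect_unproject(0.5, 0.0, 600.0))
```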
/*
 * In the incoming data, normals have been computed in Kinect-space.
 * This function converts them into GL space by unprojecting the tip
 * of the normal (cameraOrigin + normal, in raw Kinect coordinates)
 * and subtracting the already-unprojected surface point, glOrigin
 * (i.e. kinect_unproject(cameraOrigin)). I am not spectacularly
 * happy with the approach and may try again later.
 *
 * Note that the result may not, itself, be normalized.
 */
vec3 kinect_unproject_normal(vec3 normal, vec3 cameraOrigin, vec3 glOrigin) {
return kinect_unproject(normal + cameraOrigin) - glOrigin;
}
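/*
 * (The normal trick, sketched on the CPU: unproject the normal's tip
 * and subtract the unprojected surface point. This assumes glOrigin
 * is kinect_unproject(cameraOrigin), which is how the math only makes
 * sense; the function names are mine, for illustration only.)
 */

```python
import math

HALF_FOV = math.radians(57.0) / 2.0

def kinect_unproject(p):
    """Same unprojection as the shader, on a 3-tuple (x, y, raw_z)."""
    z = -325.616 / (p[2] - 1084.61)
    s = z * math.tan(HALF_FOV)
    return (s * p[0], s * p[1], z)

def kinect_unproject_normal(normal, camera_origin):
    """Unproject the normal's tip, then subtract the surface point."""
    tip = tuple(n + c for n, c in zip(normal, camera_origin))
    gl_origin = kinect_unproject(camera_origin)
    unprojected_tip = kinect_unproject(tip)
    return tuple(t - g for t, g in zip(unprojected_tip, gl_origin))

def normalize(v):
    """The result is generally not unit length; re-normalize for lighting."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```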