Lightcuts are a scalable approach to illumination computation: a set of algorithms that approximates illumination with many point lights in order to reduce computational cost.\\
\np Much work has been done to achieve better illumination results at lower computational cost, although most of it focuses on individual lights. Several techniques were developed to deal with larger numbers of lights.\\
% Summary of WARD, G. 1994. Adaptive shadow testing for ray tracing. In Photorealistic Rendering in Computer Graphics (Proceedings of the Second Eurographics Workshop on Rendering), Springer-Verlag, New York, 11-20.
Ward \cite{WardG94} presented an approach that trades accuracy (as opposed to storage) for speed. The method provides a speed increase ranging from 20\% to 80\%, and it allows the user to control the reliability and accuracy of the technique.
Calculating the radiance \texttt{L} at a surface point \texttt{x} can be a costly operation as the number of lights increases. The lightcuts approach attempts to reduce this cost by approximating the contribution of a group of lights. A cluster of lights is defined, with one of its lights chosen as the representative light; the cluster's position is that of the representative light, and the cluster reuses its material, geometric and visibility terms. The cluster's emitted intensity is the sum of the intensities of the lights it contains. \\
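The approximation can be sketched in a few lines of Python (a hypothetical illustration, not the authors' implementation): the representative light's material ($M$), geometric ($G$) and visibility ($V$) terms are evaluated once and multiplied by the cluster's total intensity.

```python
# Sketch of the lightcuts cluster approximation (illustrative names).
# Exact radiance:   L = sum_i M_i * G_i * V_i * I_i
# Approximation:    L ~ M_j * G_j * V_j * sum_i I_i   (j = representative)

def exact_radiance(lights, shade):
    """Sum each light's material (M), geometric (G) and visibility (V)
    terms weighted by its intensity I. `shade` returns (M, G, V)."""
    total = 0.0
    for light in lights:
        m, g, v = shade(light)
        total += m * g * v * light["intensity"]
    return total

def cluster_radiance(cluster, shade):
    """Evaluate the representative light once and reuse its terms for
    the whole cluster: the core lightcuts approximation."""
    m, g, v = shade(cluster["representative"])
    return m * g * v * cluster["total_intensity"]

# Toy example: two co-located lights, so the approximation is exact.
lights = [{"pos": (0, 0, 1), "intensity": 2.0},
          {"pos": (0, 0, 1), "intensity": 3.0}]
shade = lambda light: (0.5, 1.0 / light["pos"][2] ** 2, 1.0)  # M, G, V
cluster = {"representative": lights[0], "total_intensity": 5.0}

print(exact_radiance(lights, shade))     # 2.5
print(cluster_radiance(cluster, shade))  # 2.5
```

When the clustered lights differ in position or orientation the two values diverge, which is exactly the error the cut-selection step must bound.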
The cluster intensity can be precomputed and stored, thus reducing the cost of evaluating all the lights in a cluster to the cost of evaluating a single one. This approximation leads to some error, which must be kept low enough to produce an image with no visible artifacts. The challenge is to group lights so that the error is sufficiently low. \\
Because the cuts can vary from point to point, visual artifacts can occur. Figure \ref{fig:light_cut} depicts some cuts and the visual error produced. A lightcut is chosen when the relative error for that cut is below a threshold. The algorithm starts with a coarse cut and progressively refines it until the error threshold is reached. For each node in the cut, both its cluster contribution and an upper bound on its error are estimated. An additional stopping criterion, a maximum cut size, guarantees that the algorithm eventually terminates. \\
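The coarse-to-fine refinement loop can be sketched with a max-heap keyed on each node's error bound; the data layout, the `estimate`/`error_bound` callbacks and the toy tree below are hypothetical, not the paper's code.

```python
import heapq

def select_cut(root, estimate, error_bound, rel_threshold=0.02, max_cut=1000):
    """Refine a cut until every node's error bound falls below
    rel_threshold * (total estimated radiance), or the cut reaches
    max_cut nodes. Nodes are dicts with an optional "children" list."""
    cut = [root]
    total = estimate(root)
    # Max-heap on error bound (negated for heapq's min-heap); the
    # counter breaks ties so dicts are never compared.
    heap = [(-error_bound(root), 0, root)]
    counter = 1
    while heap and len(cut) < max_cut:
        neg_err, _, node = heapq.heappop(heap)
        if -neg_err <= rel_threshold * total:
            break  # the worst remaining error is already acceptable
        if not node.get("children"):
            continue  # a leaf (individual light) cannot be refined
        # Replace the node in the cut by its two children.
        cut.remove(node)
        total -= estimate(node)
        for child in node["children"]:
            cut.append(child)
            total += estimate(child)
            heapq.heappush(heap, (-error_bound(child), counter, child))
            counter += 1
    return cut, total

# Toy tree: the root cluster has a large error bound, its leaves none,
# so the cut is refined down to the two individual lights.
leaf_a = {"intensity": 1.0}
leaf_b = {"intensity": 1.0}
root = {"intensity": 2.0, "children": [leaf_a, leaf_b]}
cut, total = select_cut(root, lambda n: n["intensity"],
                        lambda n: 1.0 if "children" in n else 0.0)
print(len(cut), total)  # 2 2.0
```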
The implementation supports three types of point lights: omni, oriented, and directional, each with its own light tree. Ideally, the light tree groups point lights with similar orientation and spatial proximity in order to improve cluster quality. The cluster error bound (the difference between the exact and approximate representations) is calculated by multiplying upper bounds on the material, geometric and visibility terms. This is further explained in section \ref{metrics}. \\
The reconstruction cuts technique attempts to further reduce the computational cost of descending the light tree (to reduce the amount of error). Given a set of nearby samples (locations where lightcuts have already been computed), if all of them agree that a node is occluded, it is discarded. If a node's illumination is very similar across those samples, that node is cheaply interpolated using impostor lights. Otherwise, the normal lightcuts algorithm proceeds. There are a few exceptions; for example, no interpolation is allowed inside glossy highlights, as it could lead to visible artifacts. Interpolating or discarding nodes, especially high up in the tree, provides great cost savings. By exploiting spatial coherence, reconstruction cuts can shade points using far fewer shadow rays than lightcuts; this allows the generation of much higher quality images, with antialiasing, at a much lower cost than with lightcuts alone. \\
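The per-node decision made by reconstruction cuts can be sketched as a simple classifier over the illumination values recorded at the nearby samples. This is a simplified, hypothetical sketch: the actual technique also excludes glossy highlights and other special cases, and the similarity threshold here is invented.

```python
def classify_node(node_values, similarity=0.1):
    """Given the illumination recorded for one light-tree node at
    several nearby samples, return 'discard', 'interpolate', or
    'refine' (simplified sketch; threshold is illustrative)."""
    if all(v == 0.0 for v in node_values):
        return "discard"       # every sample agrees the node is occluded
    lo, hi = min(node_values), max(node_values)
    if (hi - lo) / hi <= similarity:
        return "interpolate"   # cheap interpolation via impostor lights
    return "refine"            # fall back to the normal lightcuts descent

print(classify_node([0.0, 0.0, 0.0]))    # discard
print(classify_node([1.0, 1.02, 0.98]))  # interpolate
print(classify_node([0.1, 2.0]))         # refine
```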
The authors rely on two main metrics: one to build the light tree and one to define a cut in that tree. The light tree is built so as to maximize the quality of the clusters it defines; to that end, the lights with the greatest similarity, based on proximity and orientation, are grouped together. In fact there is a tree for each type of light: omni, oriented, and directional. \\
Each cluster records its two children, its representative light, its total intensity, an axis-aligned bounding box, and an orientation bounding cone. The similarity metric used when clustering combines the cluster's intensity, the diagonal length of its bounding box, and the half-angle of its bounding cone. The probability of a light being chosen as the representative of a cluster is proportional to its intensity. Each tree is built using a greedy, bottom-up approach that progressively combines pairs of lights and/or clusters. \\
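The greedy bottom-up construction can be sketched as repeatedly merging the pair of clusters with the smallest combined size metric. The metric below (intensity times squared bounding-box diagonal) is a simplification that ignores the orientation cone, and the $O(n^3)$ pairwise search favours clarity over the paper's performance; all names are illustrative.

```python
import itertools, random

def merge(a, b):
    """Merge two clusters: union the bounding boxes, add intensities,
    and pick a representative with probability proportional to intensity."""
    lo = [min(x, y) for x, y in zip(a["lo"], b["lo"])]
    hi = [max(x, y) for x, y in zip(a["hi"], b["hi"])]
    total = a["intensity"] + b["intensity"]
    rep = a["rep"] if random.random() < a["intensity"] / total else b["rep"]
    return {"lo": lo, "hi": hi, "intensity": total, "rep": rep,
            "children": (a, b)}

def size_metric(c):
    """Simplified cluster size: intensity times squared diagonal
    (the paper's metric also penalises the orientation cone)."""
    diag2 = sum((h - l) ** 2 for l, h in zip(c["lo"], c["hi"]))
    return c["intensity"] * diag2

def build_tree(clusters):
    """Greedily merge the pair whose merged cluster is smallest,
    until a single root remains."""
    clusters = list(clusters)
    while len(clusters) > 1:
        a, b = min(itertools.combinations(clusters, 2),
                   key=lambda p: size_metric(merge(*p)))
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(merge(a, b))
    return clusters[0]

def leaf(pos, intensity):
    return {"lo": pos, "hi": pos, "intensity": intensity, "rep": pos}

# Two nearby lights and one far away: the close pair merges first.
root = build_tree([leaf((0, 0, 0), 1.0), leaf((0.1, 0, 0), 1.0),
                   leaf((5, 0, 0), 1.0)])
print(root["intensity"])  # 3.0
```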
The process of defining, for each shading point, the cut that provides the greatest quality is more involved. An upper bound on the cluster error is calculated by computing upper bounds on the material, geometric and visibility terms and multiplying them. The material term is the BRDF times the cosine of the angle between the light's direction and the surface normal. The BRDF must also be bounded, which is simple for diffuse surfaces; for specular surfaces the authors propose some techniques, noting that, in principle, lightcuts can work with any BRDF as long as there is a good method to bound its maximum value over a cluster. The geometric term is a function of the light position and, for oriented lights, of the angle between the direction of maximum emission and the direction to the point being shaded. The visibility term is 1 if the light is visible, 0 if it is not, or a fractional value in the case of semi-transparent surfaces. \\
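For omni lights the geometric term is $1/d^2$, so it can be bounded over a whole cluster by using the minimum distance from the shading point to the cluster's bounding box. A minimal sketch of that single bound (the paper also bounds the cosine terms of oriented lights):

```python
def min_dist2_to_aabb(p, lo, hi):
    """Squared minimum distance from point p to an axis-aligned box:
    per axis, distance to the nearest face (0 if inside the slab)."""
    return sum(max(l - x, 0.0, x - h) ** 2 for x, l, h in zip(p, lo, hi))

def omni_geometric_bound(p, lo, hi):
    """Upper bound on the 1/d^2 geometric term of every omni light in
    a cluster, via the closest point of its bounding box (sketch)."""
    d2 = min_dist2_to_aabb(p, lo, hi)
    return float("inf") if d2 == 0.0 else 1.0 / d2

# Shading point at the origin; cluster box spans [0,1]x[0,1]x[2,3],
# so the closest possible light is 2 units away and G <= 1/4.
print(omni_geometric_bound((0, 0, 0), (0, 0, 2), (1, 1, 3)))  # 0.25
```

When the shading point lies inside the bounding box the bound degenerates to infinity, which simply forces the cut to refine that cluster further.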
After studying the results for point lights, further tests were conducted for other light types.
The lightcuts implementation was tested on five scenes. All results use a 2\% error threshold and a maximum cut size of 1000 nodes. All images in this section have a resolution of $640 \times 480$ with one eye ray per pixel. \\
In all tests there are no visible differences compared with evaluating all lights individually (no clustering). The first scene (figure \ref{fig:kitchen}) contains several area lights that are approximated by a few point lights; in the second (figure \ref{fig:tableu_hdr}), the HDR environment map is approximated by a few thousand directional lights. The other scenes mix illumination sources and have more complex geometry; there, the reconstruction cuts technique was used to reduce aliasing without increasing the computation time too much, generating higher quality, antialiased images at much lower cost than with lightcuts alone. \\