


\usepackage{aeguill}  % useful for pdflatex



\title{ Synthesis on \\Lightcuts: A Scalable Approach to Illumination \\ 
	\small MSc in Informatics Engineering \\
	\small University of Minho - Portugal \\ }

Bruno G. Costa \\ \small
Nuno A. Silva\\ \small





Lightcuts are a scalable approach to computing illumination: a new set of algorithms that approximates the illumination from many point lights in order to reduce computational cost.\\

Using a binary tree and perceptual metrics, it partitions the lights into groups in order to control the trade-off between approximation error and computational cost. It supports non-diffuse materials and any geometry that can be ray traced.
Unifying different light source types in a single framework brings additional benefits, since the illumination from the other sources helps mask individual approximation errors.\\

The paper also demonstrates reconstruction cuts, a technique that further reduces computational cost by sparsely computing illumination over the image and then interpolating it, while preserving high-frequency details (shadow boundaries and glossy highlights). \\

This paper provides a synthesis of the original work by Walter et al.\cite{Walter:2005:LSA}. In the next section we give an overview of the related work. Then the lightcuts framework is explained, followed by its results. Finally, we draw some conclusions and opinions about the proposed framework. \\

\section{Related Work}

Much work has been done to compute illumination with better results at lower computational cost. Most techniques focus on individual lights and scale linearly with the number of lights; to deal with larger numbers of lights, other techniques were developed.\\

Ward \cite{WardG94} presented an approach that trades accuracy (rather than storage) for speed, and lets the user control the reliability and accuracy of the technique through an error factor. The method estimates the contribution of untested sources probabilistically. When testing, Ward observed that the more lights there are in a scene, the more efficient the algorithm becomes: the most important lights are tested first, and less important ones are tested only if their visibility is considered significant for the result.\\

Paquette et al.\cite{PPD98} present a hierarchical approximation that builds an octree of the point lights in a scene, where each node acts as a virtual light source. When shading a point, error bounds are computed to guide a hierarchical shading algorithm: if the result is satisfactory, no further descent into the tree is needed, lowering the cost; otherwise the algorithm descends the tree until the required accuracy is reached. This technique provides error limits and good scalability, yet it is incapable of computing shadows, which limits its applicability.\\

Agarwal et al.\cite{Agarwal03structuredimportance} managed to convert HDR environment maps into directional lights, although many lights are needed for a high-quality result. Their method is deterministic and is based on stratifying and sampling the environment map; each stratum is then converted into a directional light at its center.
Keller \cite{Keller97instantradiosity} uses instant radiosity, an algorithm based on stochastic light particle tracing and virtual point lights. It is a good candidate for lightcuts, despite previously being restricted to coarse approximations. Wald used it in an interactive system and added techniques to increase its resolution. Photon mapping is another approach, but it requires a hemispherical final gather (200 to 5000 rays) for good results, whereas lightcuts uses far fewer rays.\\

Hierarchical structures and clusters have also been used in radiosity techniques, yet these compute illumination independently of the view, which increases computational time, and they are prone to geometric failures such as coincident or intersecting polygons. \\

Another way of reducing computational cost is to interpolate illumination; the paper presents a novel interpolation technique that integrates with the lightcuts framework it proposes. \\

\section{The Lightcuts Approach}
Calculating the radiance $L$ at a surface point $x$ becomes costly as the number of lights increases. The lightcuts approach reduces the cost of this operation by approximating the contribution of a group of lights. A cluster of lights is defined, with one of its lights acting as the representative: the cluster reuses the representative's material, geometric and visibility terms, while its emitted intensity becomes the sum of the intensities of all the lights in the cluster. \\
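Concretely, following the notation of the original paper, the contribution of a cluster $C$ with representative light $j$ is approximated as
\[
\sum_{i \in C} M_i\, G_i\, V_i\, I_i \;\approx\; M_j\, G_j\, V_j \sum_{i \in C} I_i ,
\]
where $M_i$, $G_i$ and $V_i$ are the material, geometric and visibility terms of light $i$, and $I_i$ is its intensity.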

The cluster intensity can be precomputed and stored, reducing the cost of evaluating all the lights in a cluster to the cost of evaluating a single one. This approximation introduces some error, which must remain low enough to produce an image with no visible artifacts; the challenge is to group lights so that the error is sufficiently small. \\

	\parbox{0.75\textwidth}{\caption{A light tree. The leaves are individual lights; the other nodes are clusters. }}

Another challenge the authors faced is that no single partition of the lights into clusters works well (i.e. maintains a low error) over the entire image, and dynamically finding a new partition per point could easily prove prohibitively expensive. A light tree was therefore implemented to rapidly compute locally adaptive cluster partitions. A light tree is a binary tree where the leaves are individual lights and the interior nodes are light clusters containing the clusters (and eventually individual lights) below them in the tree. Figure 1 illustrates a simple light tree. \\

A (horizontal) cut in that tree defines a partition of the lights into clusters: a cut is a set of nodes such that every path from the root to a leaf contains exactly one node from the set. The more nodes the cut contains, the higher the quality of the illumination approximation, at the price of more computation time. \\

	\parbox{0.70\textwidth}{\caption{Light cuts and the error they produce. The colored regions represent areas where the error is low. }}

Because the cuts can vary from point to point, visual artifacts can occur; figure \ref{fig:lightCut} depicts some cuts and the visual error they produce. A lightcut is accepted when its relative error falls below a threshold: the algorithm starts with a coarse cut and progressively refines it until the error threshold is reached, estimating for each node in the cut both its cluster contribution and an upper bound on its error. An additional stopping criterion, a maximum cut size, guarantees that the algorithm eventually terminates. \\
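The refinement loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the \texttt{Node} structure is a hypothetical simplification, while the 2\% error ratio and the cut size limit of 1000 follow the values reported in the paper.

```python
import heapq

class Node:
    """A light-tree node with precomputed estimates (illustrative only)."""
    def __init__(self, contribution, error_bound, children=()):
        self.contribution = contribution  # estimated cluster contribution
        self.error_bound = error_bound    # upper bound on the cluster error
        self.children = children          # empty tuple for individual lights

def select_cut(root, error_ratio=0.02, max_cut_size=1000):
    """Refine a cut until every node's error bound is below
    error_ratio * (total estimate), or the cut reaches max_cut_size."""
    total = root.contribution
    heap = [(-root.error_bound, 0, root)]  # max-heap on error via negation
    tiebreak = 1                           # avoids comparing Node objects
    cut = []
    while heap:
        neg_err, _, node = heapq.heappop(heap)
        if (-neg_err <= error_ratio * total
                or not node.children
                or len(cut) + len(heap) + 1 >= max_cut_size):
            cut.append(node)               # good enough: keep in the cut
            continue
        # Refine: replace the node by its children, updating the estimate.
        total += sum(c.contribution for c in node.children) - node.contribution
        for c in node.children:
            heapq.heappush(heap, (-c.error_bound, tiebreak, c))
            tiebreak += 1
    return cut
```

The node with the largest error bound is always refined first, so computation is spent where the approximation is worst.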

The implementation supports three types of point lights: omni, oriented, and directional, each with its own light tree. Ideally the light tree groups point lights with similar orientation and spatial proximity, in order to improve cluster quality. The cluster error bound (the difference between the exact and approximate contributions) is calculated by multiplying upper bounds on the material, geometric and visibility terms. This is further explained in section \ref{metrics}. \\

The reconstruction cuts technique attempts to further reduce the computational cost of descending the light tree (which is done to reduce error). When shading a point, given a set of nearby samples (locations where lightcuts have already been computed): if all samples agree that a node is occluded, it is discarded; if a node's illumination is very similar across those samples, the node is cheaply interpolated using impostor lights; otherwise, if the samples disagree on visibility, a shadow ray is cast and the normal lightcuts algorithm proceeds. Figure \ref{fig:ReconCut} provides an example of this method. \\

	\parbox{0.60\textwidth}{\caption{Illustration of reconstruction cuts. Samples 1 and 2 are the already computed lightcuts of nearby shading points. }}

There are a few exceptions; for example, no interpolation is allowed inside glossy highlights, as it could lead to visible artifacts. Interpolating or discarding nodes, especially high up in the tree, provides great cost savings. By exploiting spatial coherence, reconstruction cuts can shade points using far fewer shadow rays than lightcuts, which allows the generation of much higher quality images, with anti-aliasing, at a much lower cost than with lightcuts alone. \\
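The per-node decision just described can be summarized in a small sketch. The data layout is a hypothetical simplification (each sample maps a node to its recorded contribution, with \texttt{None} marking occlusion), and the similarity threshold is an assumed parameter, not a value from the paper.

```python
def classify_node(node_id, samples, similarity=0.1):
    """Decide what reconstruction cuts does with one tree node, given
    nearby samples where lightcuts were already computed (illustrative)."""
    values = [s.get(node_id) for s in samples]
    if all(v is None for v in values):
        return "discard"        # every sample found the node occluded
    if any(v is None for v in values):
        return "shadow_ray"     # samples disagree on visibility
    lo, hi = min(values), max(values)
    if hi - lo <= similarity * max(hi, 1e-12):
        return "interpolate"    # similar everywhere: use impostor lights
    return "refine"             # contributions vary: descend the tree
```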

\section{Metric Systems} \label{metrics}
The authors rely on two main metrics: one to build the light tree and one to define a cut in that tree. The light tree is built to maximize the quality of the clusters it defines; for that, the lights with the greatest similarity, based on proximity and orientation, are grouped together. There is in fact one tree for each type of light: omni, oriented and directional. \\

Each cluster records its two children, its representative light, its total intensity, an axis-aligned bounding box, and an orientation bounding cone. The similarity metric used when clustering combines a cluster's total intensity with the diagonal length of its bounding box and the half-angle of its bounding cone. The probability of a light being the representative of a cluster is proportional to its intensity. Each tree is built using a greedy, bottom-up approach, progressively combining pairs of lights and/or clusters.  \\
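A minimal sketch of this greedy bottom-up build, for omni lights only, could look as follows. The pairwise cost below (total intensity times the squared bounding-box diagonal) is a simplification of the paper's metric: the bounding-cone half-angle term used for oriented lights is omitted, and the quadratic pair search is naive.

```python
import random

def bbox_union(a, b):
    return (tuple(map(min, a[0], b[0])), tuple(map(max, a[1], b[1])))

def diag2(box):
    """Squared diagonal length of an axis-aligned bounding box."""
    return sum((hi - lo) ** 2 for lo, hi in zip(box[0], box[1]))

def leaf(position, intensity):
    return {"bbox": (position, position), "intensity": intensity,
            "rep": position, "children": None}

def merge(x, y):
    """Combine two nodes into a cluster; the representative is chosen
    with probability proportional to each child's intensity."""
    rep = random.choices([x["rep"], y["rep"]],
                         weights=[x["intensity"], y["intensity"]])[0]
    return {"bbox": bbox_union(x["bbox"], y["bbox"]),
            "intensity": x["intensity"] + y["intensity"],
            "rep": rep, "children": (x, y)}

def build_light_tree(lights):
    """Greedily merge the pair whose cluster minimizes
    intensity * squared bbox diagonal, until one root remains."""
    nodes = list(lights)
    while len(nodes) > 1:
        best = None
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                c = merge(nodes[i], nodes[j])
                cost = c["intensity"] * diag2(c["bbox"])
                if best is None or cost < best[0]:
                    best = (cost, i, j, c)
        _, i, j, c = best
        del nodes[j]        # j > i, so deleting j first keeps i valid
        nodes[i] = c
    return nodes[0]
```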

The process of defining, for each shading point, the cut that provides the greatest quality is more involved. An upper bound on the cluster error is obtained by computing upper bounds on the material, geometric and visibility terms and multiplying them. The material term equals the BRDF times the cosine of the angle between the light's direction and the surface normal. The BRDF itself must also be bounded, which is simple for diffuse surfaces; for specular surfaces the authors propose specific techniques, noting that in principle lightcuts can work with any BRDF, as long as there is a good method to bound its maximum value over a cluster. The geometric term is a function of the light position and, for oriented lights, of the angle between the light's direction of maximum emission and the direction to the point being shaded. The visibility term is 1 if the light is visible, 0 if it is not, or a fractional value in the case of semitransparent surfaces. \\
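In symbols, the bound used to test a cluster against the error threshold can be written as
\[
\bigl|\, L_C - \tilde{L}_C \,\bigr| \;\le\; M_{ub}\, G_{ub}\, V_{ub} \sum_{i \in C} I_i ,
\]
where $L_C$ is the exact cluster contribution, $\tilde{L}_C$ its representative-light approximation, and $M_{ub}$, $G_{ub}$ and $V_{ub}$ are the upper bounds on the material, geometric and visibility terms over the cluster (with $V_{ub}$ trivially equal to $1$).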

\section{More Illumination Applications}

After studying the results for point lights, further tests were conducted for other light types: area lights, HDR (High Dynamic Range) environment maps, and indirect illumination.\\

\begin{itemize}
	\item \textbf{Area Lights}
		Unlike point lights, area lights are difficult to compute. The most common solution is to approximate an area light's contribution with many point lights, whose number varies with the configuration: shading points near area lights require more point lights, while those farther away require fewer. Lightcuts creates multiple point lights per area light, and the system automatically and adaptively chooses the number of samples to use.

	\item \textbf{HDR}
		HDR illumination relies on creating multiple directional lights from the environment map, and computing it accurately can be quite expensive: few lights create artifacts, while many lights increase the resources needed. Through lightcuts this method becomes more efficient.

	\item \textbf{Indirect Illumination}
		Although it produces quite realistic, high-quality results, indirect illumination is expensive. Instant radiosity, an approach implemented to reduce costs while keeping the results free of artifacts, can benefit from lightcuts: it works by creating virtual point lights at the locations where light particles scatter. Previous attempts were limited to tens or hundreds of virtual point lights, but through lightcuts it is possible to use thousands or millions of virtual lights. To reduce noise artifacts, the error ratio parameter used when selecting the lightcut must be halved.
\end{itemize}
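The virtual point light generation behind instant radiosity can be sketched roughly as below. This is an illustrative simplification, not the paper's implementation: the \texttt{trace} scene interface is a hypothetical placeholder, paths are cut off at a fixed bounce count, and directional effects of the BRDF are ignored.

```python
def generate_vpls(light_pos, light_power, trace, n_paths=4, max_bounces=3):
    """Deposit a virtual point light (VPL) at every surface hit along
    particle paths traced from the light source (illustrative sketch)."""
    vpls = []
    for _ in range(n_paths):
        origin = light_pos
        power = light_power / n_paths   # each path carries an equal share
        for _ in range(max_bounces):
            hit = trace(origin)         # -> (hit_position, albedo) or None
            if hit is None:
                break                   # the particle left the scene
            position, albedo = hit
            vpls.append((position, power))
            power *= albedo             # energy attenuated at each bounce
            origin = position
    return vpls
```

The resulting VPL list is then fed to the normal lightcuts machinery, which is what lets the method scale to millions of virtual lights.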

\section{Lightcut Results}
The lightcuts implementation was tested on five scenes. All results use a 2\% error threshold and a maximum cut size of 1000 nodes. All images in this section have a resolution of 640$\times$480 with one eye ray per pixel. \\

In all tests there are no visual differences compared with renders that consider all lights individually (no clustering). The first scene (figure \ref{fig:kitchen}) contains several area lights that are approximated by a few point lights; in the second (figure \ref{fig:tableu_hdr}), the HDR environment map is approximated by a few thousand directional lights. The other scenes mix illumination sources and have more complex geometry; there, the reconstruction cuts technique was used to reduce aliasing without increasing the computation time too much, generating higher quality, anti-aliased images at much lower cost than with lightcuts alone.  \\

 \parbox{0.9\textwidth}{\caption{Kitchen scene with direct light from 72 area sources. }}
 \parbox{0.9\textwidth}{\caption{Tableau scene illuminated by an HDR environment map. In parentheses are averages over only pixels containing geometry.}}

The image cost correlates mostly with the amount of occlusion in the scene: shadow rays are the dominant cost at 50\%, computing the error bounds accounts for 20\%, shading for 10\%, and the remaining time is spent on various smaller operations. Scalability is superior (sublinear) to other algorithms such as Ward's \cite{WardG94}, as shown in figure \ref{fig:scalability}, largely because of the use of cheap bounds on the maximum contribution (and error) of a cluster.  \\

	\parbox{0.70\textwidth}{\caption{Lightcut performance scales fundamentally better (i.e. sublinearly) as the number of point lights increases.}}

\section{Conclusions}
Lightcuts are a new scalable approach to illumination that makes it possible to render large numbers of point lights quickly, by adaptively selecting a different subset of the available lights at each shading point according to perceptual metrics. This means an artist can quickly get a sense of how the result will look, instead of waiting for the complete rendering. With this algorithm the computational cost grows sublinearly with the number of lights, and it is not limited to point lights: it can also be applied to area lights, HDR environment maps and indirect illumination. \\

The cost is reduced by approximating the contribution of a group of lights: a cluster of lights is replaced by a single light whose emitted intensity is the sum of the intensities of the lights in that cluster, which can greatly reduce the number of shadow rays required to shade a point. The clusters are built to minimize the error they produce, i.e. lights with similar orientation and positions will most likely be grouped together.\\

When shading a point, several clusters of lights are selected; this selection is defined as a cut in the light tree. A cut higher in the tree contains fewer clusters, and so produces more visual error but requires less computation time. Area lights, HDR environment maps and indirect illumination are approximated by many point lights, after which the normal lightcuts approach is followed. \\

Often, in the interior of a surface, a shading point and its neighbors have very similar illumination; this is particularly true for diffuse surfaces. The reconstruction cuts technique exploits this coherence and interpolates the illumination across shading points. The authors state that only highly glossy surfaces cannot use this technique. \\

Although the framework presented here shows great results, there are not many implementations of it, at least in the open source community. Instead, many implementations are said to be ``inspired'' by the lightcuts approach, as they mainly reuse the idea of light clusters and a few other details, like, for example, a GPU implementation\cite{KiH08LP}. There is, however, one notable lightcuts implementation integrated into Blender\cite{lightcutsBlender}, developed as part of the Google Summer of Code 2008\cite{lightcutsgoogle}. On the other hand, much research has been built on top of the lightcuts framework, such as multidimensional lightcuts\cite{Walter:2006:ML} and single-pass scalable subsurface rendering\cite{Arbree2008}. \\