\small University of Minho - Portugal \\ }
Bruno G. Costa \\ \small pg17778@alunos.uminho.pt
Nuno A. Silva\\ \small pg17455@alunos.uminho.pt
%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%
Lightcuts is a scalable approach to computing illumination: a new set of algorithms that approximates the illumination from many point lights in order to reduce computational cost.\\
Using a binary light tree and a perceptual metric, it partitions the lights into groups so as to control the trade-off between approximation error and computational cost. It supports non-diffuse materials and any geometry that can be ray traced.
%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Related Work}
%%%%%%%%%%%%%%%%%%%%%%%%%%
Much work has been done on achieving better illumination results at lower computational cost, although most of it focuses on individual lights. To deal with larger numbers of lights, several techniques were developed.\\
Ward \cite{WardG94} presented an approach that trades accuracy (as opposed to storage) for speed, providing speedups ranging from 20\% to 80\%. The user can also control the reliability and accuracy of the technique through an error factor. Rather than testing every source for visibility, the method uses the probability that untested sources are visible to estimate their contribution, allowing smooth shading with no apparent compromise in accuracy.\\
When testing, Ward found that the more lights a scene contains, the more efficient the algorithm becomes: the most important lights are tested first, and the less important ones are tested only if their visibility is considered significant for the calculation.\\
Ward's algorithm avoids stochastic sampling, thereby reducing noise and producing a more pleasing result faster.\\
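The adaptive shadow-testing idea described above can be sketched in a few lines. This is a minimal illustration, not Ward's actual implementation: the function names, the fixed visibility probability, and the stopping rule are all simplifying assumptions.

```python
# Sketch of Ward-style adaptive shadow testing (hypothetical names).
# Lights are sorted by potential contribution; shadow rays are traced
# only while the remaining untested energy could still visibly change
# the result, which is controlled by a user error tolerance.

def shade_point(potential, trace_shadow_ray, tolerance=0.05, hit_rate=0.5):
    """potential: list of each light's unoccluded contribution.
    trace_shadow_ray(i) -> True if light i is visible.
    hit_rate: assumed probability that an untested source is visible,
    used to estimate the contribution of the sources never tested."""
    order = sorted(range(len(potential)), key=lambda i: -potential[i])
    total = sum(potential)
    result = 0.0
    accounted = 0.0
    for i in order:
        remaining = total - accounted
        # Stop tracing once the untested energy is below the tolerance
        # and account for it statistically instead.
        if remaining <= tolerance * total:
            result += hit_rate * remaining
            break
        if trace_shadow_ray(i):
            result += potential[i]
        accounted += potential[i]
    return result
```

Note how the scheme matches the observation above: with more lights, a larger fraction of the energy is concentrated in the few sources tested first, so proportionally fewer shadow rays are traced.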
Paquette et al. \cite{PPD98} present a hierarchical approximation that builds an octree of the point lights in a scene, where each node acts as a virtual light source. When shading a point, error bounds are computed to guide a hierarchical shading algorithm: if the approximation at a node is satisfactory, no further descent into the tree is needed, lowering the cost; otherwise the algorithm descends the tree until the required accuracy is reached.\\
The method provides error bounds and good scalability, yet it is incapable of computing shadows, which limits its applicability.\\
Agarwal et al. \cite{Agarwal03structuredimportance} convert HDR environment maps into directional point lights, although many lights are needed for a high-quality result.\\
Their approach stratifies and samples the environment map, allowing the illumination within each stratum to be pre-integrated to eliminate noise, at the cost of additional bias. This reduces the number of samples by one to two orders of magnitude for an image of the same quality.\\
Keller \cite{Keller97instantradiosity} uses instant radiosity, an algorithm based on stochastic light-particle tracing and virtual lights. It is a good candidate for lightcuts, despite previously being restricted to coarse approximations. Wald used it in an interactive system and added techniques to increase resolution.
Photon mapping is another approach; it requires a hemispherical final-gathering step (200 to 5000 rays) for good results, whereas lightcuts uses fewer rays.\\
Hierarchical structures and clusters have also been used in radiosity techniques, yet these compute illumination independently of the viewpoint, which increases computational time, and they are prone to geometrical failures such as coincident or intersecting polygons.\\
The cluster intensity can be precomputed and stored, reducing the cost of evaluating all the lights in a cluster to the cost of evaluating a single one. This approximation introduces some error, which must be kept low enough to produce an image with no visible artifacts. The challenge is to group lights so that the error is sufficiently small. \\
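A minimal sketch of how such a tree of clusters with precomputed intensities might be assembled. The greedy nearest-pair merging and the intensity-weighted representative position are simplifications for illustration; the paper uses a more elaborate similarity metric and selects a representative light from within each cluster rather than averaging.

```python
# Hypothetical sketch: build a binary light tree bottom-up. Leaves are
# individual point lights; each interior node stores the summed
# intensity of its subtree and a representative position, so a whole
# cluster can later be shaded at the cost of one light.

import math

class Node:
    def __init__(self, pos, intensity, left=None, right=None):
        self.pos = pos                # representative position
        self.intensity = intensity    # sum of intensities below
        self.left, self.right = left, right

def build_light_tree(lights):
    """lights: list of (position, intensity) tuples."""
    nodes = [Node(p, i) for p, i in lights]
    while len(nodes) > 1:
        # Find the closest pair (O(n^2); fine for a sketch).
        best = None
        for a in range(len(nodes)):
            for b in range(a + 1, len(nodes)):
                d = math.dist(nodes[a].pos, nodes[b].pos)
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        na, nb = nodes[a], nodes[b]
        w = na.intensity + nb.intensity
        # Intensity-weighted representative position (a simplification).
        pos = tuple((na.intensity * pa + nb.intensity * pb) / w
                    for pa, pb in zip(na.pos, nb.pos))
        nodes = [n for k, n in enumerate(nodes) if k not in (a, b)]
        nodes.append(Node(pos, w, na, nb))
    return nodes[0]
```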
\includegraphics[width=0.90\textwidth]{light_tree.PNG}
\parbox{0.75\textwidth}{\caption{A light tree. The leaves are individual lights; the other nodes are clusters.}}
A (horizontal) cut in the tree defines a partition of the lights into clusters: the cut is a set of nodes such that every path from the root to a leaf contains exactly one node from the cut. The more nodes the cut contains, the higher the quality of the illumination approximation, at the cost of more computation time. \\
\includegraphics{light_cut.PNG}
\parbox{0.70\textwidth}{\caption{Light cuts and the error they produce. The colored regions represent areas where the error is low.}}
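The cut selection described above can be sketched with a priority queue: start with the root in the cut and repeatedly split the cluster with the largest error bound until every cluster's bound is below a small fraction of the total estimate. The `Cluster` class and the `estimate`/`error_bound` callbacks are illustrative assumptions standing in for the paper's radiometric bounds.

```python
# Sketch of cut refinement (hypothetical fields): keep refining the
# worst cluster until all error bounds fall below a perceptual ratio
# of the current total illumination estimate.

import heapq

class Cluster:
    def __init__(self, intensity, children=()):
        self.intensity = intensity
        self.children = list(children)

def select_cut(root, estimate, error_bound, ratio=0.02):
    """estimate(node): approximate contribution via the cluster's
    representative light; error_bound(node): upper bound on the error
    of that approximation (zero at leaves). Returns the cut nodes."""
    total = estimate(root)
    count = 0  # tie-breaker so the heap never compares Cluster objects
    heap = [(-error_bound(root), count, root, total)]
    while True:
        neg_err, _, node, est = heap[0]
        # Worst cluster already good enough, or unrefinable leaf: done.
        if -neg_err <= ratio * total or not node.children:
            break
        heapq.heappop(heap)
        total -= est
        for child in node.children:
            e = estimate(child)
            total += e
            count += 1
            heapq.heappush(heap, (-error_bound(child), count, child, e))
    return [entry[2] for entry in heap]
```

Because only the worst node is ever refined, cheap conservative bounds are what keep the cut, and hence the cost, small.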
These were area lights, HDR (High Dynamic Range) environment maps and indirect illumination.\\
 \item \textbf{Area Lights}
Unlike point lights, area lights are difficult to compute. The most common solution is to approximate the area light's contribution with many point lights. The number required varies with the configuration: shading points near an area light need more point lights, while those farther away need fewer. Lightcuts creates many point lights, and the system chooses the number of samples to use automatically and adaptively.
 \item \textbf{HDR Environment Maps}
HDR relies on creating multiple directional lights from the environment map. Computing accurate illumination can be quite expensive: few lights create artifacts, while many lights increase the resources needed. Through lightcuts, this method becomes more efficient.
 \item \textbf{Indirect Illumination}
Although it produces realistic, high-quality results, indirect illumination is quite expensive. Instant radiosity is one approach to reducing its cost while keeping the results free of artifacts, and it can benefit from lightcuts. It works by creating virtual point lights at the locations where light particles would scatter. Previous attempts were limited to tens or hundreds of virtual point lights, but through lightcuts it is possible to use thousands or even millions of virtual lights. To reduce noise artifacts, the error-ratio parameter for the lightcut selection must be half of the value used elsewhere.
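The virtual-point-light generation step can be sketched as follows. Everything here is a simplification under stated assumptions: the `trace` callback, the Gaussian direction sampling, and the constant reflectance are placeholders for a real scene intersection routine and BRDF sampling.

```python
# Sketch of instant-radiosity-style VPL generation (hypothetical scene
# API): particles are traced from the light, and every surface hit
# deposits a virtual point light carrying the particle's remaining
# power. With lightcuts, these VPLs become leaves of the light tree.

import random

def generate_vpls(light_pos, light_power, trace, n_paths=1000,
                  max_bounces=3, reflectance=0.5, rng=None):
    """trace(origin, direction) -> surface hit position, or None."""
    rng = rng or random.Random(1)
    vpls = []
    for _ in range(n_paths):
        origin, power = light_pos, light_power / n_paths
        for _bounce in range(max_bounces):
            # Crude isotropic direction sample (stand-in for BRDF sampling).
            direction = tuple(rng.gauss(0.0, 1.0) for _ in range(3))
            hit = trace(origin, direction)
            if hit is None:
                break
            power *= reflectance          # energy lost at the bounce
            vpls.append((hit, power))     # deposit a virtual point light
            origin = hit
    return vpls
```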
The image cost is mostly correlated with the amount of occlusion in the scene: the dominant cost comes from shadow rays (50\%), computing the error bounds accounts for 20\%, shading for 10\%, and the remaining time is spent on various smaller operations. Scalability is superior (sublinear) to that of other algorithms such as Ward's \cite{WardG94}, as shown in figure \ref{fig:scalability}, largely because of the use of cheap bounds on the maximum contribution (and error) from a cluster. \\
\includegraphics{scalability.PNG}
\parbox{0.70\textwidth}{\caption{Lightcut performance scales fundamentally better (i.e. sublinearly) as the number of point lights increase.}}
%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%
Lightcuts is a new scalable approach to illumination. Through this algorithm the cost of illumination computation is greatly reduced, and it is not limited to point lights: it can be used with area lights, HDR environment maps and indirect illumination.\\
The cost is reduced by approximating the contribution of a group of lights: a cluster is represented by a single light whose emitted radiance is the sum of the contributions of the lights in the cluster.\\
These clusters are organized in a light tree to reduce the cost of finding new clusters. The tree is binary: each leaf is an individual light, and the interior nodes are light clusters.\\
%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{plain}