# Commits

committed a18d87d

review the full text, and minor changes/corrections

• Parent commits 7ef8056

# File lightcuts.tex

 \section{The Lightcuts Approach}
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 \np
-Calculating the radiance \texttt{L} at a surface point \texttt{x} can be a costly operation as the number of lights increases. The lightcuts approach attempts to reduce the cost of this operation by approximation the contribution of a group of lights. It is defined a cluster of lights, with one of the lights in the cluster being the representative light, keeping its position but the emitted radiance is now a sum of the contribution of the other lights in that cluster. \\
+Calculating the radiance $L$ at a surface point $x$ can be a costly operation as the number of lights increases. The lightcuts approach attempts to reduce the cost of this operation by approximating the contribution of a group of lights. A cluster of lights is defined, with one of the lights in the cluster acting as the representative light; the cluster takes the position of the representative light, and its emitted intensity is the sum of the intensities of all lights in the cluster. \\
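In the notation of the original lightcuts paper (paraphrased here), the contribution of a cluster $C$ with representative light $j$ is approximated as
\[
L_C \;=\; \sum_{i \in C} M_i\, G_i\, V_i\, I_i \;\approx\; M_j\, G_j\, V_j \sum_{i \in C} I_i ,
\]
where $M_i$, $G_i$, $V_i$ and $I_i$ are the material, geometric, visibility and intensity terms for light $i$. Only the intensity sum $\sum_{i \in C} I_i$ needs to be stored with the cluster, which is why a cluster costs about as much to evaluate as a single light. \\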

-The cluster intensity can be precomputed and stored, thus turning the cost of evaluating each light to the cost of evaluating a single one. This approximation leads to some error, which must be relatively low in order to produce an image with no visible artifacts. This challenge is to group lights in a way that the error is sufficiently low. \\
+The cluster intensity can be precomputed and stored, thus reducing the cost of evaluating every light in the cluster to that of evaluating a single one. This approximation introduces some error, which must be kept low enough to produce an image with no visible artifacts. The challenge is to group lights such that the error is sufficiently low. \\

 \begin{figure}[h]
 	\centering
 		\includegraphics[width=0.90\textwidth]{light_tree.PNG}
-	\caption{A light tree. The leafs are single lights the other nodes are the clusters. }
+	\parbox{0.75\textwidth}{\caption{A light tree. The leaves are individual lights; the interior nodes are clusters.}}
 	\label{fig:light_tree}
 \end{figure}

-Another challenge that the authors dealt with is the fact that no single cluster of lights would work well (maintain a low error) over the entire image. Dynamically finding a new cluster could easily prove prohibitively expensive, so it was implemented a light tree to rapidly compute locally adaptive cluster partitions. A light tree is a binary tree where the leaves are individual lights and the interior nodes are light clusters containing other clusters (or eventually individual lights) below them in the tree. \\
+Another challenge the authors dealt with is that no single cluster of lights works well (i.e. maintains a low error) over the entire image. Dynamically finding new clusters could easily prove prohibitively expensive, so the authors implemented a light tree to rapidly compute locally adaptive cluster partitions. A light tree is a binary tree whose leaves are individual lights and whose interior nodes are light clusters containing the clusters (or, at the bottom, individual lights) below them in the tree. Figure \ref{fig:light_tree} illustrates a simple light tree. \\

-A (horizontal) cut in that tree defines a partition of the lights into clusters. That cut is a set such that every path from the root to a leaf will contain exactly one node from the cut. The more nodes the cut contains the more quality the illumination approximation will have whilst requiring more computation time. \\
+A (horizontal) cut in that tree defines a partition of the lights into clusters. A cut is a set of nodes such that every path from the root to a leaf contains exactly one node from the cut. The more nodes the cut contains, the higher the quality of the illumination approximation, at the cost of more computation time. \\
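As a concrete illustration, the cut property can be checked mechanically. The following sketch uses a hypothetical nested-tuple tree; the labels and structure are illustrative, not from the paper's implementation:

```python
# Hypothetical sketch: a light tree as nested (label, children) tuples, and a
# check that a "cut" is a valid partition, i.e. every root-to-leaf path
# crosses the cut exactly once. Names are illustrative only.

def paths_to_leaves(node, path=()):
    """Yield every root-to-leaf path as a tuple of node labels."""
    label, children = node
    path = path + (label,)
    if not children:
        yield path
    else:
        for child in children:
            yield from paths_to_leaves(child, path)

def is_valid_cut(tree, cut):
    """A cut is valid iff each root-to-leaf path contains exactly one cut node."""
    return all(sum(1 for n in path if n in cut) == 1
               for path in paths_to_leaves(tree))

# A four-light tree: leaves 1..4, clusters A and B, root R.
tree = ("R", [("A", [("1", []), ("2", [])]),
              ("B", [("3", []), ("4", [])])])

print(is_valid_cut(tree, {"A", "B"}))       # True: {A, B} partitions the lights
print(is_valid_cut(tree, {"A", "3", "4"}))  # True: B was refined into its leaves
print(is_valid_cut(tree, {"R", "A"}))       # False: paths through A cross twice
```

Note that refining a cut (replacing a node by its two children) always preserves this property, which is what lets the algorithm refine greedily.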

 \begin{figure}[h]
 	\centering
 		\includegraphics{light_cut.PNG}
-	\caption{A light cut and the error they produce. The colored regions represent areas where the error is low. }
+	\parbox{0.70\textwidth}{\caption{Example light cuts and the error they produce. The colored regions represent areas where the error is low.}}
 	\label{fig:light_cut}
 \end{figure}

-Because the cuts can vary from point to point, visual artifacts can occur. A lightcut is chosen when the relative error for that cut is below a threshold. The program starts with a coarse cut and then progressively refines it until the threshold is reached. For each node in the cut both its cluster contribution and an upper bound on its error are estimated. An additional stopping criteria was made, a maximum cut size, so that the algorithm would stop, eventually. \\
+Because the cuts can vary from point to point, visual artifacts can occur. Figure \ref{fig:light_cut} depicts some cuts and the visual error they produce. A lightcut is chosen when the relative error for that cut is below a threshold. The algorithm starts with a coarse cut and progressively refines it until the error threshold is reached. For each node in the cut, both its cluster contribution and an upper bound on its error are estimated. An additional stopping criterion, a maximum cut size, guarantees that the refinement eventually terminates. \\
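The refinement loop can be sketched as a greedy descent driven by the error bounds. This is a simplified sketch, assuming each node already carries, for the point being shaded, a cluster estimate and an error upper bound; all names (`Node`, `estimate`, `bound`) are illustrative and the numbers in the example are made up:

```python
import heapq

class Node:
    def __init__(self, estimate, bound, children=()):
        self.estimate = estimate    # cluster contribution via the representative
        self.bound = bound          # upper bound on that cluster's error
        self.children = children    # empty tuple for an individual light

def refine_cut(root, rel_threshold=0.02, max_cut=1000):
    """Start from the coarsest cut (the root) and repeatedly split the node
    with the worst error bound until every bound falls below rel_threshold
    times the current total estimate, or the maximum cut size is reached."""
    cut = [(-root.bound, 0, root)]      # max-heap keyed on the error bound
    total = root.estimate
    tie = 1                             # tie-breaker so nodes never compare
    while len(cut) < max_cut:
        neg_bound, _, worst = cut[0]
        if -neg_bound <= rel_threshold * total or not worst.children:
            break                       # error acceptable (or cut is all leaves)
        heapq.heappop(cut)
        total -= worst.estimate         # replace the cluster by its children
        for child in worst.children:
            total += child.estimate
            heapq.heappush(cut, (-child.bound, tie, child))
            tie += 1
    return total, [node for _, _, node in cut]

# A four-light example tree (individual lights have zero error bound).
leaves = [Node(1.0, 0.0), Node(0.5, 0.0), Node(0.25, 0.0), Node(0.25, 0.0)]
a = Node(1.6, 0.4, tuple(leaves[:2]))   # cluster of the first two lights
b = Node(0.5, 0.05, tuple(leaves[2:]))  # cluster of the last two lights
root = Node(2.2, 1.0, (a, b))

total, cut = refine_cut(root)
print(len(cut))   # the 2% threshold forces refinement down to all 4 lights
```

With a looser threshold the loop stops earlier and the cut stays coarser, which is exactly the quality-versus-cost trade-off described above.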


-The implementation supports three types of point lights: omni, oriented, and directional, each having its own light tree. Ideally the light tree would group the point lights that have similar orientation and spacial proximity in order to improve the groups quality. The cluster error bounds (difference between the exact and approximate representations) is calculated by multiplying the upper bounds on the material, geometric and visibility terms. [maybe explain the equations?] \\
+The implementation supports three types of point lights: omni, oriented, and directional, each having its own light tree. Ideally the light tree groups point lights that have similar orientation and spatial proximity, in order to improve the quality of the clusters. The cluster error bound (the difference between the exact and approximate representations) is calculated by multiplying upper bounds on the material, geometric and visibility terms. \\
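In the paper's notation (again paraphrased), the error of approximating the exact cluster radiance $L_C$ by the representative-based estimate $\tilde{L}_C$ is bounded by
\[
\left| L_C - \tilde{L}_C \right| \;\le\; M_C^{ub}\, G_C^{ub}\, V_C^{ub} \sum_{i \in C} I_i ,
\]
where the superscript $ub$ denotes an upper bound of the corresponding term over all lights in the cluster, and the visibility bound $V_C^{ub}$ is conservatively taken to be one. \\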

-The reconstruction cuts attempts to further reduce computational costs when going down the light tree (to reduce the amount of error). Given a set of nearby samples (locations where lightcuts have already been calculated), if all of them agree that a node is occluded, it is discarded. If a node's illumination is very similar across those samples, then that node is cheaply interpolated using impostor lights. Otherwise the normal lightcuts algorithm goes on. There are a few exceptions, for example, no interpolation is allowed inside glossy highlights as it could lead to visible artifacts. Interpolating or discarding nodes, especially if high up in the tree, provides great cost savings. \\
+The reconstruction cuts technique attempts to further reduce the computational cost of descending the light tree. Given a set of nearby samples (locations where lightcuts have already been computed), if all of them agree that a node is occluded, it is discarded. If a node's illumination is very similar across those samples, that node is cheaply interpolated using impostor lights. Otherwise the normal lightcuts algorithm proceeds. There are a few exceptions; for example, no interpolation is allowed inside glossy highlights, as it could lead to visible artifacts. Interpolating or discarding nodes, especially high up in the tree, provides great cost savings. By exploiting spatial coherence, reconstruction cuts can shade points using far fewer shadow rays than lightcuts; this allows the generation of much higher quality images, with anti-aliasing, at a much lower cost than with lightcuts alone. \\
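The per-node decision can be sketched as follows; the function name and the ratio-based similarity test are placeholders for illustration, not the paper's exact criteria:

```python
# Hedged sketch of the decision reconstruction cuts make for each tree node,
# given the values nearby samples computed for that node.

def classify_node(sample_values, similarity=2.0):
    """Decide how to treat a node at a new shading point:
    'discard' if every nearby sample found it occluded (zero contribution),
    'interpolate' if the samples' values are similar enough to reuse cheaply,
    'refine' otherwise (fall back to the normal lightcuts descent)."""
    if all(v == 0.0 for v in sample_values):
        return "discard"
    lo, hi = min(sample_values), max(sample_values)
    if lo > 0.0 and hi / lo <= similarity:
        return "interpolate"
    return "refine"

print(classify_node([0.0, 0.0, 0.0]))   # discard
print(classify_node([0.9, 1.0, 1.1]))   # interpolate
print(classify_node([0.0, 1.0, 5.0]))   # refine
```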



 \section{Lightcut Results}
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 \np
-The lightcuts implementation was tested in 5 scenes. All results use a 2\% error ratio and a maximum cut size of 1000 nodes. All images in this section have a resolution of 640x480 with one eye ray per pixel. \\
+The lightcuts implementation was tested on five scenes. All results use a 2\% error threshold and a maximum cut size of 1000 nodes. All images in this section have a resolution of $640 \times 480$ with one eye ray per pixel. \\

 \begin{figure}[h]
 	\centering
 		\includegraphics[width=0.50\textwidth]{kitchen.PNG}
-	\caption{Kitchen scene with direct light from 72 area sources. }
+	\parbox{0.70\textwidth}{\caption{Kitchen scene with direct light from 72 area sources. }}
 	\label{fig:kitchen}
 \end{figure}

-In all tests there are no visual differences compared to the tests considering all lights (no clustering). The first scene (figure \ref{fig:kitchen}) contain several area lights that are approximated by a few point lights, in the second (figure \ref{fig:tableu_hdr}), the HDR environment map is approximated by a few thousand directional lights. The other scenes mix illumination sources and have a more complex geometry, in order to lower aliasing, the reconstruction cuts technique was used. Reconstruction cuts allows to generate higher quality images, with anti-aliasing, at much lower cost than with lightcuts alone.  \\
+In all tests there were no visual differences compared to renders that evaluate all lights (no clustering). The first scene (figure \ref{fig:kitchen}) contains several area lights that are approximated by a few point lights; in the second (figure \ref{fig:tableu_hdr}), the HDR environment map is approximated by a few thousand directional lights. The other scenes mix illumination sources and have more complex geometry; for them, the reconstruction cuts technique was used to reduce aliasing without increasing the computation time too much. \\

 \begin{figure}[h]
 	\centering
 		\includegraphics[width=0.50\textwidth]{tableu_hdr.PNG}
-	\caption{Tableau scene illuminated by an HDR environment map. In parentheses are averages over only pixels containing geometry.}
+	\parbox{0.70\textwidth}{\caption{Tableau scene illuminated by an HDR environment map. In parentheses are averages over only pixels containing geometry.}}
 	\label{fig:tableu_hdr}
 \end{figure}

 \begin{figure}[h]
 	\centering
 		\includegraphics{scalability.PNG}
-	\caption{Lightcut performance scales fundamentally better (i.e. sublinearly) as the number of point lights increase.}
+	\parbox{0.70\textwidth}{\caption{Lightcuts performance scales fundamentally better (i.e. sublinearly) as the number of point lights increases.}}
 	\label{fig:scalability}
 \end{figure}