Commits

Nuno Silva committed 03f0bd7

reviewed related work
added more to conclusions

Files changed (1)

 \subsection{Related Work}
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 
-A lot of work has been done in order to achieve better results with a lower computational time in illumination, although these are focused on individual lights. In order to deal with a larger amount of lights some techniques where developed.\\
+Much work has been done to achieve better illumination results at a lower computational cost. Most techniques are focused on individual lights and scale linearly with the number of lights; to deal with larger numbers of lights, other techniques were developed.\\
 
-Ward \cite{WardG94} presented an approach which trades accuracy (as opposed to storage) for speed. This method provides an increase of speed ranging from 20\% to 80\%. He also allows the user to control the reliability and accuracy of the technique with the use of an error factor. This method is not based in testing sources for probability of visibility, but instead uses the probability of untested sources to estimate a contribution, thus allowing for smooth shading and no apparent compromise in accuracy.\\
-When testing, Ward realized that the more lights there are in a scene, the more efficient the algorithm becomes. This is because of more important lights being tested first, and less important being tested only if their visibility is considered important for the calculation.\\
-Ward's algorithm avoids stochastic sampling, therefore reducing noise, in order to create a more pleasing and fast result.\\
+Ward \cite{WardG94} presented an approach which trades accuracy (rather than storage) for speed, providing speedups of 20\% to 80\%, with an error factor that lets the user control its reliability and accuracy. Instead of testing every source for visibility, it tests the most important sources first and estimates the contribution of untested sources from their probability of being visible, yielding smooth shading with no apparent loss of accuracy. The more lights a scene contains, the more efficient the algorithm becomes, since less important lights are tested only when their visibility matters for the result. By avoiding stochastic sampling it also reduces noise, producing a fast and pleasing result.\\
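+
+The following is a minimal sketch of the idea behind this adaptive shadow testing; the helper names (\texttt{unoccluded\_contribution}, \texttt{trace\_shadow\_ray}) are assumptions for illustration, not the paper's code:
+
+\begin{verbatim}
+def shade(point, lights, trace_shadow_ray,
+          tolerance=0.05, hit_rate=0.5):
+    # Sort lights by their potential (unoccluded)
+    # contribution, most significant first.
+    ranked = sorted(lights, reverse=True,
+                    key=lambda l: l.unoccluded_contribution(point))
+    total = sum(l.unoccluded_contribution(point) for l in ranked)
+    result, remaining = 0.0, total
+    for light in ranked:
+        potential = light.unoccluded_contribution(point)
+        if remaining <= tolerance * total:
+            # Untested tail: estimate it from the observed
+            # visibility probability instead of tracing rays.
+            result += hit_rate * remaining
+            break
+        if trace_shadow_ray(point, light):  # the expensive test
+            result += potential
+            visible = 1.0
+        else:
+            visible = 0.0
+        # Running estimate of how often lights are visible.
+        hit_rate = 0.9 * hit_rate + 0.1 * visible
+        remaining -= potential
+    return result
+\end{verbatim}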
 
-Paquette \cite{PPD98} presents an hierarchical approximation, with the creation of an octree of point lights in a scene. Each node is a virtual light source. Bounds are determined when shading a point, thus guiding a hierarchical shading algorithm. If the result is satisfactory, no further descent in the tree is needed, thus lowering the costs needed, else it goes down the tree until the required results are found.\\
-Provides error limits and good scalability, yet it is incapable of calculating shadows, which affects its application.\\
+Paquette \cite{PPD98} presents a hierarchical approximation that builds an octree over the point lights in a scene, where each node acts as a virtual light source. When shading a point, error bounds guide a hierarchical shading algorithm: if the approximation at a node is accurate enough, no further descent into the tree is needed, which lowers the cost; otherwise the algorithm descends until the required accuracy is reached. This technique provides error bounds and good scalability, yet it is incapable of computing shadows, which limits its applicability.\\
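+
+A minimal sketch of this descent, with hypothetical node fields (\texttt{children}, \texttt{virtual\_light}) and assumed helpers for the error bound and light evaluation:
+
+\begin{verbatim}
+def shade_hierarchical(node, point, eval_light, error_bound,
+                       eps=0.01):
+    # Leaf, or the node's virtual light already approximates
+    # its subtree well enough: stop descending.
+    if not node.children or error_bound(node, point) <= eps:
+        return eval_light(node.virtual_light, point)
+    # Otherwise descend and accumulate the children.
+    return sum(shade_hierarchical(c, point, eval_light,
+                                  error_bound, eps)
+               for c in node.children)
+\end{verbatim}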
 
 Agarwal et al.\cite{Agarwal03structuredimportance}
 %along with Kollig and Keller 
- managed to convert HDR environment maps to directional light points, yet many lights are needed for a quality result.\\ 
-This approach is based in stratifying and sampling of an environment map, thus allowing for pre-integration of the illumination within each stratum to eliminate the noise. This is done at the cost of additional bias. This reduces the number of samples in one or two orders of magnitude for an image with the same quality.\\
+ managed to convert HDR environment maps into directional point lights, although many lights are still needed for a quality result. The approach partitions the environment map into regions (strata) and samples each one; because the illumination within each stratum is integrated in advance (pre-integration), the random variation (noise) that sampling inside the stratum would otherwise produce is eliminated. This comes at the cost of a small systematic error (bias), but it reduces the number of samples needed for an image of the same quality by one to two orders of magnitude.\\
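+
+A toy sketch of the stratify-and-pre-integrate step (uniform tiling is used here for brevity; the actual paper stratifies adaptively by brightness and solid angle):
+
+\begin{verbatim}
+import numpy as np
+
+def env_map_to_lights(env, tiles=8):
+    """env: (H, W, 3) array of HDR radiance values."""
+    h, w, _ = env.shape
+    sh, sw = h // tiles, w // tiles
+    lights = []
+    for i in range(0, h, sh):
+        for j in range(0, w, sw):
+            stratum = env[i:i+sh, j:j+sw]
+            # Pre-integration: the stratum's total energy
+            # becomes one light, removing the noise of
+            # sampling inside the stratum (at the cost of
+            # a small bias).
+            intensity = stratum.sum(axis=(0, 1))
+            # Place the light at the brightness-weighted
+            # centroid of the stratum.
+            wgt = stratum.sum(axis=2)
+            ys, xs = np.indices(wgt.shape)
+            norm = max(wgt.sum(), 1e-9)
+            cy = (wgt * ys).sum() / norm
+            cx = (wgt * xs).sum() / norm
+            lights.append(((i + cy, j + cx), intensity))
+    return lights
+\end{verbatim}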
 
-Keller \cite{Keller97instantradiosity} uses Instant radiosity. This algorithm is based in light particle tracing through a stochastic method and virtual lights. It's a good candidate for lightcuts, despite previously being restricted to approximations. It is used by Wald in an interactive system, and he added techniques to increase resolution.
-Photon Mapping is another approach, and requires and semi spheric gathering for good results (200 to 5000 rays), yet lightcuts uses less rays.\\
+Keller \cite{Keller97instantradiosity} uses instant radiosity. This algorithm traces light particles stochastically through the scene and deposits virtual lights along their paths. It is a good candidate for lightcuts, despite previously being restricted to coarse approximations; Wald used it in an interactive system and added techniques to increase its resolution. Photon mapping is another approach, but it requires a hemispherical final gather (200 to 5000 rays) for good results, whereas lightcuts uses fewer rays.\\
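+
+A minimal sketch of the particle-tracing step that instant radiosity uses to create virtual point lights; \texttt{trace}, \texttt{sample\_emission} and \texttt{sample\_brdf} are assumed scene hooks, not part of any specific system:
+
+\begin{verbatim}
+def generate_vpls(light, trace, n_paths=100, bounces=3):
+    vpls = []
+    for _ in range(n_paths):
+        origin, direction, power = light.sample_emission()
+        power = power / n_paths
+        for _ in range(bounces):
+            hit = trace(origin, direction)  # first surface hit
+            if hit is None:
+                break
+            # Every hit deposits a virtual point light that
+            # later stands in for indirect illumination.
+            vpls.append((hit.position, hit.normal, power))
+            # Continue the particle with BRDF-scaled power.
+            direction, scale = hit.sample_brdf()
+            power = power * scale
+            origin = hit.position
+    return vpls
+\end{verbatim}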
 
-Hierarchical structures and clusters have been used (radiosity techniques) yet they calculate independently of vision, which increases computational time and are prone to geometrical failures, like coincident or intersecting polygons.\\
+Hierarchical structures and clusters have also been used in radiosity techniques, yet these compute the solution independently of the viewpoint, which increases computational time, and they are prone to geometric failures such as coincident or intersecting polygons.\\
 
-
+Another way of reducing computational cost is to interpolate illumination; this paper presents a novel interpolation technique that integrates with the lightcuts framework the authors propose.\\
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{The Lightcuts Approach}
 	\label{fig:scalability}
 \end{figure}
 
+
+
+
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Conclusions}
 %%%%%%%%%%%%%%%%%%%%%%%%%%
-Lightcuts are a new scalable approach to illumination. Through this algorithm for illumination computation the cost is greatly reduced, and is not limited to point lights only; it can be used in area lights, HDR environment maps and indirect illumination.\\
-The cost is reduced by approximation of the contribution of a group of lights. A cluster of lights is represented by a light whose emitted radiance is the sum of the contribution of the other lights in the cluster.\\These clusters are implemented in light trees, to reduce costs of finding new clusters. The trees are binary trees, in which a leaf is an individual light, and the interior nodes are light clusters.\\
+Lightcuts are a new scalable approach to illumination. With this algorithm the computational cost grows sublinearly with the number of lights, and it is not limited to point lights: it can also be used with area lights, HDR environment maps and indirect illumination.\\
+
+The cost is reduced by approximating the contribution of a group of lights. A cluster of lights is replaced by a single representative light whose intensity is the sum of the intensities of the lights in the cluster, which can greatly reduce the number of shadow rays required to shade a point. Clusters are built so as to minimize the error they introduce: the grouping metric favors merging lights that are spatially close and similarly oriented, since those are the merges that change the result the least. The clusters are organized into a light tree, a binary tree whose leaves are the individual lights and whose interior nodes are light clusters, built according to this error metric.\\
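+
+A minimal sketch of building such a light tree by greedy pairwise merging, assuming lights expose \texttt{intensity} and \texttt{position} fields. The midpoint representative used here is a simplification (the paper keeps one of the cluster's actual lights as representative, and its metric also weights orientation), and real builders avoid this quadratic search:
+
+\begin{verbatim}
+from math import dist
+
+class Node:
+    def __init__(self, intensity, position,
+                 left=None, right=None):
+        self.intensity, self.position = intensity, position
+        self.left, self.right = left, right
+
+def merge_cost(a, b):
+    # Merging nearby lights introduces less error, so a
+    # pair's cost grows with distance and combined intensity.
+    return dist(a.position, b.position) * (a.intensity
+                                           + b.intensity)
+
+def build_light_tree(lights):
+    nodes = [Node(l.intensity, l.position) for l in lights]
+    while len(nodes) > 1:
+        # Merge the cheapest pair into a cluster whose
+        # intensity is the sum of its children's.
+        a, b = min(((x, y) for i, x in enumerate(nodes)
+                    for y in nodes[i + 1:]),
+                   key=lambda p: merge_cost(*p))
+        nodes.remove(a)
+        nodes.remove(b)
+        mid = tuple((pa + pb) / 2
+                    for pa, pb in zip(a.position, b.position))
+        nodes.append(Node(a.intensity + b.intensity,
+                          mid, a, b))
+    return nodes[0]
+\end{verbatim}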
+
+When shading a point, a set of clusters that together cover every light is selected; this set is called a cut in the light tree. A cut higher in the tree has fewer clusters, so it requires less computational time but produces more visual error. Area lights, HDR environment maps and indirect illumination are first approximated by many point lights, after which the normal lightcuts approach is followed.\\
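+
+A minimal sketch of selecting a cut by iterative refinement; \texttt{estimate} and \texttt{error\_bound} stand in for the paper's per-cluster bounds, and leaves are assumed to have a zero bound:
+
+\begin{verbatim}
+def select_cut(root, point, estimate, error_bound,
+               threshold=0.02):
+    cut = [root]
+    while True:
+        total = sum(estimate(n, point) for n in cut)
+        worst = max(cut, key=lambda n: error_bound(n, point))
+        if error_bound(worst, point) <= threshold * total:
+            # Every cluster's error is below ~2% of the
+            # total, roughly the perceptual threshold the
+            # paper uses.
+            return cut
+        # Refine: replace the worst cluster by its children,
+        # moving the cut lower in the tree.
+        cut.remove(worst)
+        cut.extend([worst.left, worst.right])
+\end{verbatim}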
+
+In the interior of a surface, a shading point and its neighbors often have very similar illumination; this is particularly true for diffuse surfaces. The reconstruction cuts technique exploits this coherence in the illumination by interpolating it across shading points wherever it can do so safely. The authors state that only highly glossy surfaces cannot benefit from this technique.\\
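+
+A cartoon of the coherence idea on one scanline (this only illustrates interpolation with a fallback, not the paper's actual reconstruction cuts algorithm):
+
+\begin{verbatim}
+def shade_scanline(points, full_cut_shade,
+                   spacing=4, tol=0.1):
+    # Shade every `spacing`-th point exactly.
+    anchors = {i: full_cut_shade(points[i])
+               for i in range(0, len(points), spacing)}
+    result = []
+    for i, p in enumerate(points):
+        lo = (i // spacing) * spacing
+        hi = lo + spacing
+        if lo in anchors and hi in anchors:
+            a, b = anchors[lo], anchors[hi]
+            if abs(a - b) <= tol * max(a, b, 1e-9):
+                # Neighbors agree: interpolate instead of
+                # evaluating a full cut at this point.
+                t = (i - lo) / spacing
+                result.append((1 - t) * a + t * b)
+                continue
+        # Discontinuity (e.g. a shadow edge): fall back to
+        # a full evaluation.
+        result.append(full_cut_shade(p))
+    return result
+\end{verbatim}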
+
+
+
+
 
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%
 %\bibliographystyle{plain}
 %\end{thebibliography}
 %%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{References}
-%%%%%%%%%%%%%%%%%%%%%%%%%%
 \bibliographystyle{plain}
 \bibliography{mybib}