Commits

Bruno Gustavo Costa committed 9b4a176

Thursday, CG room. Summarized the Related Work.

  • Participants
  • Parent commits 2630610

Comments (0)

Files changed (1)

 
 A lot of work has been done to achieve better illumination results at a lower computational cost. Most techniques handle lights individually and therefore scale linearly with the number of lights; to deal with larger numbers of lights, other techniques were developed.\\
 
-\textbf{[rewrite this better, make it more succinct]}
-Ward \cite{WardG94} presented an approach which trades accuracy (as opposed to storage) for speed. This method provides an increase of speed ranging from 20\% to 80\%. He also allows the user to control the reliability and accuracy of the technique with the use of an error factor. This method is not based in testing sources for probability of visibility, but instead uses the probability of untested sources to estimate a contribution, thus allowing for smooth shading and no apparent compromise in accuracy. When testing, Ward realized that the more lights there are in a scene, the more efficient the algorithm becomes. This is because of more important lights being tested first, and less important being tested only if their visibility is considered important for the calculation. Ward's algorithm avoids stochastic sampling, therefore reducing noise, in order to create a more pleasing and fast result.\\
+Ward \cite{WardG94} presented an approach that trades accuracy (rather than storage) for speed, and lets the user control the reliability and accuracy of the technique through an error tolerance. Instead of testing every source for visibility, the method tests the most important sources and uses the probability that the untested sources are visible to estimate their contribution. In his tests, Ward observed that the more lights a scene contains, the more efficient the algorithm becomes: the more important lights are tested first, and the less important ones are tested only if their visibility matters for the result.\\
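+
+To make this strategy concrete, the sketch below gives our reading of the control flow; it is an illustration, not Ward's implementation, and \texttt{potential} and \texttt{visible} are hypothetical helpers supplied by the renderer:
+\begin{verbatim}
+def shade(p, lights, potential, visible, tol=0.1):
+    # Rank lights by their potential (unoccluded) contribution at p.
+    ranked = sorted(lights, key=lambda l: potential(l, p), reverse=True)
+    remaining = sum(potential(l, p) for l in ranked)
+    total = 0.0        # contribution of lights whose visibility was tested
+    tested = hits = 0  # visibility statistics for the untested remainder
+    for l in ranked:
+        pot = potential(l, p)
+        remaining -= pot
+        tested += 1
+        if visible(l, p):  # shadow ray
+            hits += 1
+            total += pot
+        # Stop once the untested lights can no longer change the result
+        # by more than the user's error tolerance.
+        if remaining <= tol * total:
+            break
+    # Estimate the untested lights from the observed visibility ratio
+    # instead of tracing shadow rays for them.
+    ratio = hits / tested if tested else 1.0
+    return total + ratio * remaining
+\end{verbatim}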
 
 Paquette \cite{PPD98} presents a hierarchical approximation that builds an octree over the point lights of a scene, where each node acts as a virtual light source. When shading a point, error bounds are computed to guide a hierarchical shading algorithm: if the approximation at a node is accurate enough, no further descent into the tree is needed, which lowers the cost; otherwise the algorithm descends until the required accuracy is reached. This technique provides error bounds and good scalability, yet it cannot compute shadows, which limits its applicability.\\
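+
+A minimal sketch of this descent follows; the node fields and the \texttt{error\_bound} callback are our assumptions for illustration, not the paper's exact formulation:
+\begin{verbatim}
+class LightNode:
+    def __init__(self, virtual_light, children=()):
+        self.virtual_light = virtual_light  # cluster approximated as one light
+        self.children = list(children)      # empty for a leaf (real light)
+
+def shade_hierarchical(p, node, contribution, error_bound, eps=0.01):
+    # Accept the node's virtual light when the bound on the shading
+    # error at p is small enough, avoiding any further descent.
+    if not node.children or error_bound(node, p) <= eps:
+        return contribution(node.virtual_light, p)
+    # Otherwise descend until the required accuracy is reached.
+    return sum(shade_hierarchical(p, c, contribution, error_bound, eps)
+               for c in node.children)
+\end{verbatim}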
 
-\textbf{[stratum? explain this better and more simply; many terms appear without introduction: pre-integration, noise, bias...]}
-Agarwal et al.\cite{Agarwal03structuredimportance}
-%along with Kollig and Keller 
- managed to convert HDR environment maps to directional light points, yet many lights are needed for a quality result. This approach is based in stratifying and sampling of an environment map, thus allowing for pre-integration of the illumination within each stratum to eliminate the noise. This is done at the cost of additional bias. This reduces the number of samples in one or two orders of magnitude for an image with the same quality. \\
-
+Agarwal et al. \cite{Agarwal03structuredimportance} convert HDR environment maps into directional point lights, although many lights are still needed for a high-quality result. Their approach is deterministic: the environment map is stratified, and each stratum is converted into a directional light at its center.\\
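+
+As a rough illustration, the sketch below stratifies a lat-long environment map into uniform blocks (Agarwal et al. use an importance-based stratification instead) and emits one directional light per stratum, placed at its luminance-weighted center:
+\begin{verbatim}
+import numpy as np
+
+def envmap_to_lights(env, block=32):
+    # env: float HDR image of shape (height, width, 3), lat-long layout.
+    h, w, _ = env.shape
+    lights = []
+    for y0 in range(0, h, block):
+        for x0 in range(0, w, block):
+            stratum = env[y0:y0 + block, x0:x0 + block]
+            lum = stratum.sum(axis=2)   # per-pixel luminance proxy
+            power = lum.sum()
+            if power <= 0.0:
+                continue                # skip empty strata
+            # Luminance-weighted centroid of the stratum (pixel coords).
+            ys, xs = np.mgrid[0:stratum.shape[0], 0:stratum.shape[1]]
+            cy = y0 + (ys * lum).sum() / power
+            cx = x0 + (xs * lum).sum() / power
+            # Map the centroid to a direction on the unit sphere.
+            theta = np.pi * (cy + 0.5) / h
+            phi = 2.0 * np.pi * (cx + 0.5) / w
+            d = (np.sin(theta) * np.cos(phi),
+                 np.cos(theta),
+                 np.sin(theta) * np.sin(phi))
+            # The stratum's total radiance becomes the light's color.
+            lights.append((d, stratum.sum(axis=(0, 1))))
+    return lights
+\end{verbatim}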
 Keller \cite{Keller97instantradiosity} introduced instant radiosity. The algorithm stochastically traces light particles through the scene and deposits virtual point lights along their paths, which are then used to shade the image. These virtual lights make it a good candidate for lightcuts, although it was previously restricted to coarse approximations. Wald used it in an interactive system and added techniques to increase resolution. Photon mapping is another approach, but it requires hemispherical final gathering (200 to 5000 rays) for good results, whereas lightcuts uses fewer rays.\\
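+
+A minimal sketch of the particle-tracing phase follows, assuming hypothetical scene hooks (\texttt{sample\_emission}, \texttt{trace}, \texttt{sample\_direction}); it illustrates the idea rather than Keller's implementation:
+\begin{verbatim}
+import random
+
+def make_vpls(scene, n_paths=256, max_bounces=3, rho=0.5):
+    # Trace particles from the lights; each surface hit deposits a
+    # virtual point light (VPL) carrying a share of the path's power.
+    vpls = []
+    for _ in range(n_paths):
+        origin, direction, power = scene.sample_emission()  # scalar flux
+        for _ in range(max_bounces):
+            hit = scene.trace(origin, direction)  # None if the ray escapes
+            if hit is None:
+                break
+            vpls.append((hit.position, hit.normal, power / n_paths))
+            # Russian roulette terminates most particles stochastically
+            # while keeping the expected deposited power unbiased.
+            if random.random() > rho:
+                break
+            power *= hit.albedo / rho
+            origin, direction = hit.position, hit.sample_direction()
+    return vpls
+\end{verbatim}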
 
 Hierarchical structures and clustering have also been used in radiosity techniques, yet these compute illumination independently of the viewpoint, which increases computational time, and they are prone to geometric failures, such as coincident or intersecting polygons. \\