\documentclass[a4paper]{article}
%\documentclass[twocolumn]{article}
\setlength{\columnsep}{5mm}

%\usepackage[portuges]{babel}
\usepackage[latin1]{inputenc}

\usepackage{url}
\usepackage{alltt}
\usepackage{listings}
\usepackage{fancyvrb}
\usepackage{graphicx}
\usepackage{algorithmic}
\usepackage[lined,algonl,boxed]{algorithm2e}
\usepackage{aeguill}  % useful for pdflatex
\usepackage[a4paper,left=20mm,right=20mm,bottom=20mm,top=15mm]{geometry}

\lstset{
	language=C,
	basicstyle=\footnotesize,
	%numbers=left,
	%stepnumber=2,
	%numberstyle=\footnotesize, 
	showtabs=false,
	tabsize=2,
	breaklines=true,
	morecomment=[l]{//}
	}

\usepackage{hyperref}
\hypersetup{colorlinks,
            citecolor=black,
            filecolor=black,
            linkcolor=black,
            urlcolor=black,
            pdftex}

\title{ Synthesis on \\Lightcuts: A Scalable Approach to Illumination \\ 
	\small MSc in Informatics Engineering \\
	\small University of Minho - Portugal \\ \vspace{0.8cm}-- DRAFT --}

\author{
Bruno Gustavo Costa \\ \small pg17778@alunos.uminho.pt
\and
Nuno A. Silva\\ \small pg17455@alunos.uminho.pt
}

\newcommand{\np}{\indent \par}
%\newcommmand{\i}{\emph}
\begin{document}

\maketitle

%\abstract

%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%

\np Lightcuts is a scalable approach to computing illumination: a new set of algorithms that approximates the illumination from many point lights in order to reduce computational cost.\\

Using a binary tree and a perceptual metric, it partitions the lights into groups so as to control the trade-off between approximation error and computational cost. It handles non-diffuse materials and any geometry that can be ray traced.
Unifying different kinds of light sources into lightcuts also has benefits: the error made when approximating one source can be masked by the contributions of the others.\\

The authors also demonstrate reconstruction cuts, a technique that further reduces computational cost by sparsely computing the illumination over the image and then interpolating it while preserving high-frequency details (shadow boundaries and glossy highlights). \\

%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Related Work}
%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% ***!! CHECK AUTHORS !!*** %%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\np Much work has been done on computing illumination with better results at a lower computational cost.
Ward sorts the lights by their maximum possible contribution and evaluates their visibility in descending order until an error bound is reached. \\

Shirley divides the scene into cells and, for each cell, splits the lights into important and unimportant lists. Monte Carlo techniques can perform well if a good probability density function is supplied, but they have robustness and efficiency problems. \\

Paquette presents a hierarchical approximation with trees of similar lights. It provides error bounds and good scalability, yet it cannot compute shadows, which limits its applicability. \\

Fernandez accelerates the process by caching, for each light, visibility and blocker information in the scene, yet this leads to excessive memory consumption. \\

Wald manages to use many lights, but assumes the scene is heavily occluded and that only a small set of lights contributes to each image. \\

Agarwal et al., as well as Kollig and Keller, convert HDR environment maps into directional point lights, yet many lights are needed for a quality result. \\

Keller uses instant radiosity, an algorithm based on stochastic light particle tracing and virtual point lights. It is a good match for lightcuts, despite previously being restricted to coarse approximations; Wald used it in an interactive system and added techniques to increase its resolution.
Photon mapping is another approach, but it requires a hemispherical final gathering step (200 to 5000 rays) for good results, whereas lightcuts uses fewer rays. \\

Hierarchical structures and clusters have also been used in radiosity techniques, yet these compute the solution independently of the view, which increases computational time, and they are prone to failures on complex geometry. \\



%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Lightcuts Approach}
%%%%%%%%%%%%%%%%%%%%%%%%%%
\np
Computing the radiance $L$ at a surface point $x$ becomes a costly operation as the number of lights increases. The lightcuts approach reduces this cost by approximating the contribution of a group of lights at once: a cluster of lights is defined, one of its lights is chosen as the representative, the cluster is positioned at the representative light, and its emitted intensity is the sum of the intensities of all the lights in the cluster. \\
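
Written out (our transcription of the paper's formulation), with a material term $M_i$, a geometric term $G_i$, a visibility term $V_i$ and an intensity $I_i$ for each light $i$, the exact radiance and the approximation for a cluster $C$ with representative light $j$ are:
\[
L(x,\omega) = \sum_{i} M_i(x,\omega)\, G_i(x)\, V_i(x)\, I_i
\]
\[
L_C(x,\omega) \approx M_j(x,\omega)\, G_j(x)\, V_j(x) \sum_{i \in C} I_i
\]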

The cluster intensity can be precomputed and stored, reducing the cost of evaluating all the lights in a cluster to the cost of evaluating a single one. This approximation introduces some error, which must be kept low enough to produce an image with no visible artifacts. The challenge is to group the lights so that the error stays sufficiently low. \\

\begin{figure}[h]
	\centering
		\includegraphics[width=0.90\textwidth]{light_tree.PNG}
	\parbox{0.75\textwidth}{\caption{A light tree. The leaves are individual lights; the other nodes are clusters. }}
	\label{fig:light_tree}
\end{figure}

Another challenge the authors dealt with is that no single partition of the lights into clusters keeps the error low over the entire image. Dynamically finding a new partition for every point could easily prove prohibitively expensive, so a light tree is used to rapidly compute locally adaptive cluster partitions. A light tree is a binary tree whose leaves are individual lights and whose interior nodes are light clusters containing the clusters (or, eventually, individual lights) below them in the tree. Figure \ref{fig:light_tree} illustrates a simple light tree. \\
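
As a concrete illustration, a node of such a tree could be represented in C as follows (a minimal sketch; the field names are ours, not taken from the authors' implementation):

\begin{lstlisting}
/* A node of the light tree: a leaf holds a single light, while an
   interior node represents the cluster of all lights below it. */
typedef struct LightTreeNode {
    struct LightTreeNode *left, *right;  /* NULL for leaves */
    int   representative;                /* index of the representative light */
    float intensity;                     /* precomputed sum of the intensities
                                            of all lights in the cluster */
    float bbox_min[3], bbox_max[3];      /* spatial bounds, used later for
                                            the cluster error bounds */
} LightTreeNode;
\end{lstlisting}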

A (horizontal) cut through that tree defines a partition of the lights into clusters. A cut is a set of nodes such that every path from the root to a leaf contains exactly one node of the cut. The more nodes the cut contains, the higher the quality of the illumination approximation, at the price of more computation time. \\

\begin{figure}[h]
	\centering
		\includegraphics{light_cut.PNG}
	\parbox{0.70\textwidth}{\caption{Example light cuts and the errors they produce. The colored regions represent areas where the error is low. }}
	\label{fig:light_cut}
\end{figure}

Because the cuts can vary from point to point, visual artifacts can occur. Figure \ref{fig:light_cut} depicts some cuts and the visual error they produce. A lightcut is accepted when its relative error is below a threshold. The algorithm starts with a coarse cut and progressively refines it until the error threshold is met: for each node in the cut, both its cluster contribution and an upper bound on its error are estimated, and the node with the largest error bound is refined next. An additional stopping criterion, a maximum cut size, guarantees that the algorithm always terminates. \\
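
Below is a minimal C sketch of this cut selection loop, assuming the \texttt{LightTreeNode} above. The helpers \texttt{cluster\_estimate} (evaluate a cluster through its representative light) and \texttt{cluster\_error\_bound}, as well as the \texttt{ShadingPoint} type, are hypothetical, and a real implementation would use a heap instead of the linear scan used here for brevity:

\begin{lstlisting}
typedef struct ShadingPoint ShadingPoint;  /* surface point being shaded */

/* Hypothetical helpers: radiance of a cluster evaluated through its
   representative light, and an upper bound on the cluster's error
   (zero for leaves, since they are evaluated exactly). */
float cluster_estimate(const LightTreeNode *n, const ShadingPoint *sp);
float cluster_error_bound(const LightTreeNode *n, const ShadingPoint *sp);

#define MAX_CUT   1000    /* maximum cut size, as in the paper */
#define REL_ERROR 0.02f   /* 2% relative error threshold */

float lightcut_radiance(const LightTreeNode *root, const ShadingPoint *sp)
{
    const LightTreeNode *cut[MAX_CUT];
    float estimate[MAX_CUT], bound[MAX_CUT];
    int size = 1;
    cut[0] = root;
    estimate[0] = cluster_estimate(root, sp);
    bound[0] = cluster_error_bound(root, sp);
    float total = estimate[0];

    for (;;) {
        /* find the cut node with the largest error bound */
        int worst = 0;
        for (int i = 1; i < size; i++)
            if (bound[i] > bound[worst]) worst = i;

        /* stop when the worst error is within the threshold, when the
           worst node is a leaf, or when the cut is full */
        if (bound[worst] <= REL_ERROR * total) break;
        const LightTreeNode *n = cut[worst];
        if (n->left == NULL || size >= MAX_CUT) break;

        /* refine: replace the node by its two children */
        total -= estimate[worst];
        cut[worst] = n->left;
        estimate[worst] = cluster_estimate(n->left, sp);
        bound[worst]    = cluster_error_bound(n->left, sp);
        cut[size] = n->right;
        estimate[size] = cluster_estimate(n->right, sp);
        bound[size]    = cluster_error_bound(n->right, sp);
        total += estimate[worst] + estimate[size];
        size++;
    }
    return total;  /* approximated radiance at sp */
}
\end{lstlisting}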


The implementation supports three types of point lights (omni, oriented, and directional), each having its own light tree. Ideally, the light tree groups point lights with similar orientation and spatial proximity, in order to improve cluster quality. The cluster error bound (on the difference between the exact and approximate contributions) is calculated by multiplying upper bounds on the material, geometric and visibility terms by the cluster intensity. \\
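
In symbols (our transcription of this bound), the error introduced by approximating a cluster $C$ satisfies
\[
\mathrm{error}(C) \;\leq\; \hat{M}_C \, \hat{G}_C \, \hat{V}_C \sum_{i \in C} I_i ,
\]
where $\hat{M}_C$, $\hat{G}_C$ and $\hat{V}_C$ are upper bounds of the material, geometric and visibility terms over the cluster ($\hat{V}_C = 1$ is always a valid visibility bound).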

The reconstruction cuts technique attempts to further reduce the cost of descending the light tree (which is done to reduce error). Given a set of nearby samples (locations where lightcuts have already been computed), if all of them agree that a node is occluded, the node is discarded. If a node's illumination is very similar across those samples, the node is cheaply interpolated using impostor lights. Otherwise the normal lightcuts algorithm proceeds. There are a few exceptions; for example, no interpolation is allowed inside glossy highlights, as it could lead to visible artifacts. Interpolating or discarding nodes, especially high up in the tree, provides large cost savings. By exploiting spatial coherence, reconstruction cuts can shade points using far fewer shadow rays than lightcuts, which allows the generation of much higher quality images, with anti-aliasing, at a much lower cost than with lightcuts alone. \\
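
The per-node decision could be sketched in C as follows (highly schematic: the \texttt{Sample} type and the \texttt{node\_radiance} accessor are hypothetical, and the actual interpolation with impostor lights is omitted):

\begin{lstlisting}
typedef struct Sample Sample;  /* a nearby point with a computed lightcut */

/* Hypothetical accessor: the radiance that tree node `node' contributed
   at sample `s' (zero if the node was found to be occluded there). */
float node_radiance(const Sample *s, int node);

typedef enum { DISCARD, INTERPOLATE, RECURSE } NodeAction;

NodeAction classify_node(const Sample *samples, int n, int node, float tol)
{
    float lo = 1e30f, hi = 0.0f;
    for (int i = 0; i < n; i++) {
        float r = node_radiance(&samples[i], node);
        if (r < lo) lo = r;
        if (r > hi) hi = r;
    }
    if (hi == 0.0f)          return DISCARD;      /* occluded at all samples */
    if (hi - lo <= tol * hi) return INTERPOLATE;  /* illumination is smooth */
    return RECURSE;          /* samples disagree: fall back to lightcuts */
}
\end{lstlisting}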




%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{More Illumination Applications}
%%%%%%%%%%%%%%%%%%%%%%%%%%
\np
Once point lights were handled, the authors applied the method to other light types by converting them into point lights:
\begin{itemize}

	\item Area Lights

		These are difficult to compute. The most common solution is to approximate the area with many point lights, whose number varies with the configuration. With lightcuts, many point lights are created and the system automatically and adaptively chooses the number of samples to use (see the C sketch after this list).

	\item HDR
	
		Since this technique relies on converting the HDR environment map into multiple directional lights, few lights create artifacts while many lights increase the resources needed. Through lightcuts, this method becomes more efficient.

	\item Indirect Illumination
		
		Indirect illumination gives quite realistic, high quality results, but it is also quite expensive. Instant radiosity was implemented as an approach to reduce this cost while keeping the results free of artifacts, and it can benefit directly from lightcuts.
		
\end{itemize}
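
As an example of the conversion mentioned for area lights above, the following C sketch approximates a rectangular (parallelogram) area light by a uniform grid of oriented point lights, which the light tree then clusters. The \texttt{AreaLight} and \texttt{PointLight} types are illustrative, not the paper's data structures, and the per-light intensity split is simplified:

\begin{lstlisting}
typedef struct {
    float origin[3], edge_u[3], edge_v[3];  /* parallelogram geometry */
    float normal[3];
    float power;                            /* total emitted power */
} AreaLight;

typedef struct { float pos[3], dir[3], intensity; } PointLight;

/* Place an nu x nv grid of oriented point lights on the area light,
   each carrying an equal share of the total power. Returns the number
   of lights written to `out' (which must hold nu * nv entries). */
int sample_area_light(const AreaLight *al, int nu, int nv, PointLight *out)
{
    int count = 0;
    for (int i = 0; i < nu; i++)
        for (int j = 0; j < nv; j++) {
            float u = (i + 0.5f) / nu, v = (j + 0.5f) / nv;
            for (int k = 0; k < 3; k++) {
                out[count].pos[k] = al->origin[k]
                                  + u * al->edge_u[k] + v * al->edge_v[k];
                out[count].dir[k] = al->normal[k];
            }
            out[count].intensity = al->power / (float)(nu * nv);
            count++;
        }
    return count;
}
\end{lstlisting}

With lightcuts, \texttt{nu} and \texttt{nv} can be generous: the cut only descends into this set of lights where the shading point actually needs the extra resolution.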



%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Lightcut Results}
%%%%%%%%%%%%%%%%%%%%%%%%%%
\np
The lightcuts implementation was tested on five scenes. All results use a 2\% error threshold and a maximum cut size of 1000 nodes. All images in this section have a resolution of $640 \times 480$ with one eye ray per pixel. \\

\begin{figure}[h]
	\centering
		\includegraphics[width=0.50\textwidth]{kitchen.PNG}
	\parbox{0.70\textwidth}{\caption{Kitchen scene with direct light from 72 area sources. }}
	\label{fig:kitchen}
\end{figure}

In all tests there are no visible differences compared to renderings that consider all lights individually (no clustering). The first scene (figure \ref{fig:kitchen}) contains several area lights that are approximated by a few point lights; in the second (figure \ref{fig:tableu_hdr}), the HDR environment map is approximated by a few thousand directional lights. The other scenes mix illumination sources and have more complex geometry; there, the reconstruction cuts technique was used to reduce aliasing without increasing the computation time too much, generating higher quality, anti-aliased images at a much lower cost than with lightcuts alone. \\

\begin{figure}[h]
	\centering
		\includegraphics[width=0.50\textwidth]{tableu_hdr.PNG}
	\parbox{0.70\textwidth}{\caption{Tableau scene illuminated by an HDR environment map. In parentheses are averages over only pixels containing geometry.}}
	\label{fig:tableu_hdr}
\end{figure}


The image cost is mostly correlated with the amount of occlusion in the scene: the dominant cost comes from shadow rays (50\%), computing the error bounds accounts for 20\%, shading for 10\%, and the remaining time is spent on various smaller operations. Scalability is superior (sublinear) to that of other algorithms such as [Ward 1994], as shown in figure \ref{fig:scalability}, largely because of the cheap bounds on the maximum contribution (and error) of a cluster. \\

\begin{figure}[h]
	\centering
		\includegraphics{scalability.PNG}
	\parbox{0.70\textwidth}{\caption{Lightcut performance scales fundamentally better (i.e. sublinearly) as the number of point lights increases.}}
	\label{fig:scalability}
\end{figure}



%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions}
%%%%%%%%%%%%%%%%%%%%%%%%%%

\np Lightcuts provide a scalable solution to illumination with many lights: by clustering point lights in a light tree and choosing, per shading point, a cut with a bounded perceptual error, the cost of an image grows sublinearly with the number of lights. The same point-light framework unifies area lights, HDR environment maps and indirect illumination, and reconstruction cuts exploit spatial coherence to produce anti-aliased images at an even lower cost. \\


%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{thebibliography}{}

%\end{thebibliography} 


\end{document}