Even Wiik Thomassen  committed a9e917d

Finished ghc chapter.

  • Parent commits 99ac395


Files changed (3)

File acronyms.tex

 \newacronym{php}{PHP}{PHP: Hypertext Preprocessor}
 \newacronym{ssa}{SSA}{static single assignment}
 \newacronym{stg}{STG}{Spineless Tagless G-machine}
 \newacronym{vm}{VM}{virtual machine}
 The next step is the actual \textit{code generation}, which converts \gls{stg}
 into another intermediate representation, \gls{cmm}. \gls{cmm} is a variant of
-the C-- language, and is almost a subset of C that support tail recursive
-calls~\cite{terei, ghc}. \gls{cmm} is finally converted to
-object code by one of three low-level code generating backends:
+the C-{}- language, and is almost a subset of C that supports proper tail
+calls~\cite{terei, ghc}. All remaining non-strict and functional aspects of
+Haskell are removed during code generation, which allows \gls{cmm} to be a
+simple intermediate language. \gls{cmm} is finally the input to one of three
+object code generating backends:
     \item The C code generator, which pretty-prints \gls{cmm} to C code. The C
         code is then compiled with \gls{gcc}. This backend is very portable,
         as it can be used on most architectures that support \gls{gcc}, but
         the produced code is not as fast as that of the other two backends
         and the compilation process is significantly slower.
-    \item The native code generator that only support a few architectures, but
+    \item The \gls{ncg}, which only supports a few architectures but
         produces faster code than the C backend.
     \item The LLVM code generator, which produces LLVM IR that is compiled with
         LLVM\@. This backend is described in more detail in \mypref{sec:llvm}.
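As a concrete illustration (the module below is an assumed example, not from the thesis), the backend used for a given compilation is selected with a GHC flag: \mycode{-fasm} picks the native code generator and \mycode{-fllvm} the LLVM backend.

```haskell
-- Main.hs: a trivial program that can be compiled with either backend.
--   Native code generator:  ghc -fasm  Main.hs
--   LLVM backend:           ghc -fllvm Main.hs
main :: IO ()
main = print (sum [1 .. 1000 :: Int])
```

Older GHC releases also accepted a flag for the C backend (historically \mycode{-fvia-C}), which was later removed along with that backend.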
 enables specialization based on call-patterns, which specializes recursive
 functions according to their argument shapes~\cite{jones07}.
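To illustrate the idea of call-pattern specialization, consider a recursive function whose recursive call only ever occurs after a successful pattern match (a sketch assumed for illustration, not an example taken from~\cite{jones07}):

```haskell
-- With -fspec-constr (enabled by -O2), GHC can generate a variant of
-- this function specialized for the cons case, so the constructor of
-- the argument need not be re-examined on every iteration.
lastElem :: [a] -> a
lastElem []     = error "lastElem: empty list"
lastElem [x]    = x
lastElem (_:xs) = lastElem xs
```

In the third equation the argument to the recursive call is known to have come from a list, and specializing on that shape removes redundant constructor tests from the hot loop.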
-\mytodo{Describe some of the optimizations done at STG and Cmm level, such as
+\gls{ghc} also performs some optimizations at later stages of the compilation
+process. For example, \textit{code generation} includes the \gls{tntc}
+optimization, which places the meta-data of a closure directly before the code
+for that closure. \gls{tntc} allows both a closure's meta-data and its code
+to be accessed through a single pointer~\cite{terei, peixotto}.
 Htrace is built with \gls{ghc} and LLVM --- traces are optimized with compiler
 optimizations provided by LLVM. \gls{ghc} required some small changes to
 enable dynamic linking of some C functions into LLVM\@. Htrace disables
-\gls{ghc}'s ``tables next to code'' optimization and uses a pure
-Haskell library for integer arithmetic, \mycode{integer-simple}. Two new
-passes were added to LLVM: inserting trace instrumentation and building
-traces. The LLVM bitcode interpreter, \mycode{lli}, was changed to add
-callbacks to the trace runtime. Htrace performs four main tasks:
+\gls{ghc}'s \gls{tntc} optimization\footnote{\gls{tntc} is described further
+in \mypref{sec:ghc-optimizations}.} and uses a pure Haskell library for integer
+arithmetic, \mycode{integer-simple}. Two new passes were added to LLVM:
+inserting trace instrumentation and building traces. The LLVM bitcode
+interpreter, \mycode{lli}, was changed to add callbacks to the trace runtime.
+Htrace performs four main tasks:
     \item Create LLVM bitcode from program source.
     \item Create LLVM bitcode from Haskell libraries.
 \begin{table}[tbp] \footnotesize \center
-\caption{Overview over related work at optimizating Haskell\label{tab:related}}
+\caption{Overview of related work on optimizing Haskell\label{tab:related}}
 \begin{tabular}{l c c c l}
     Name & Trace-based & \gls{jit} & \gls{ghc} version & Creator \\