Anton Golov  committed f5509fe

Worked more.

  • Parent commits 88e305a
  • Branches default


Files changed (1)

     computers today.  As examples on the RISC side, we will use ARM, an
     architecture commonly used in smartphones.
+    % Are we still going to use this?
     \chapter{The Origin of CISC}
     The first computer that stored its program in memory was the Manchester
     available in order to do complex instruction decoding. % CITE!
     % I'm not even sure this is true, going by pdp-11-cisc
+    \chapter{RISC}
+    % ...
+    % ...
+    % ...
+    \chapter{Differences}
+    % My part: differences in software, hardware, memory usage
+    We shall now give a more in-depth comparison of RISC and CISC, starting at
+    a fairly high level and moving down towards the CPU internals, and finally
+    demonstrating some performance implications of these differences.
+
+    Some differences between RISC and CISC are already faintly visible at the
+    level of C code.  RISC architectures are more commonly based on the Harvard
+    Architecture than CISC systems are\cite{arxiv-cisc-risc}.  As a result,
+    certain operations that would work on a Von Neumann Architecture become
+    unavailable: for example, if function pointers and data pointers differ in
+    size, or refer to separate address spaces, then a conversion from
+    \texttt{void(*)()} to \texttt{void*} and back may not give the original
+    value.
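+    The following C fragment is a minimal sketch of this pitfall (the function
+    name \texttt{f} is purely illustrative); ISO C does not define the
+    conversion at all, so the round trip is only dependable on platforms that
+    document it.
+    \begin{verbatim}
+    #include <stdio.h>
+
+    static void f(void) { puts("called through a converted pointer"); }
+
+    int main(void) {
+        void (*fp)(void) = f;
+        /* Not sanctioned by ISO C: on a Harvard-style machine with
+           separate code and data spaces, p may not even be able to
+           represent fp. */
+        void *p = (void *)fp;
+        void (*back)(void) = (void (*)(void))p;
+        back();   /* may or may not recover the original function */
+        return 0;
+    }
+    \end{verbatim}
+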
+    The main difference in software, however, is at the compiler level.  CISC
+    architectures, while complex at the hardware level, allow for significantly
+    simpler compiler design: C code often maps fairly closely to the resulting
+    assembly thanks to the rich addressing modes and direct operations on
+    memory.  For instance, the C statement \texttt{a = a * b;} would likely map
+    to a single CISC instruction, as opposed to roughly four RISC
+    instructions\cite{arxiv-cisc-risc}.
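+    As a sketch, assuming \texttt{a} and \texttt{b} live in memory and their
+    addresses are already held in registers \texttt{r1} and \texttt{r3}, the
+    RISC side could look like the following ARM sequence, while a CISC machine
+    in the VAX tradition offers a single memory-to-memory multiply:
+    \begin{verbatim}
+    @ ARM (RISC): separate loads, a register-only multiply, and a store
+    LDR  r0, [r1]        @ load a
+    LDR  r2, [r3]        @ load b
+    MUL  r4, r0, r2      @ multiply in registers
+    STR  r4, [r1]        @ store the product back into a
+
+    ; VAX-style CISC: one instruction reads and writes memory directly
+    MULL2  b, a          ; a = a * b
+    \end{verbatim}
+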
+    On the other hand, RISC systems typically offer significantly more
+    registers.  This, combined with the smaller set of addressing modes, leads
+    to a focus on optimal register allocation and on minimising load/store
+    operations.  RISC CPUs are also less likely to have features such as
+    out-of-order execution, meaning these optimisations have to be performed at
+    compile time.
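+    As an illustration (a sketch only, assuming \texttt{r0} holds an array
+    pointer, \texttt{r1} a positive element count, and \texttt{r4} the address
+    of the result), a compiler summing an array on ARM can keep the running
+    total and the counter in registers, touching memory once per element and
+    once at the very end:
+    \begin{verbatim}
+            MOV   r2, #0          @ running sum lives in a register
+    loop:   LDR   r3, [r0], #4    @ load next element, advance the pointer
+            ADD   r2, r2, r3      @ accumulate without touching memory
+            SUBS  r1, r1, #1      @ decrement the element count, set flags
+            BNE   loop
+            STR   r2, [r4]        @ a single store writes out the final sum
+    \end{verbatim}
+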
+    The benefit of assembly languages that provide high-level constructs also
+    became less significant as languages with a non-imperative style gained
+    prominence.  The translation step required by RISC architectures became
+    small compared to that required to convert a functional program into an
+    imperative one, and advances in compiler design solved many of these
+    problems\cite{the-post-risc-era}.
+
+    Continuing down to the assembly level, we come across what could be called
+    the defining difference between CISC and RISC: a RISC instruction either
+    performs a computation or performs a memory access, but not both.  This is
+    generally called a load/store architecture, and many other differences in
+    convention follow from it.  A load/store architecture does not support many
+    of the addressing modes that are generally present on a CISC system, and
+    with far fewer instructions accessing memory, it is easier to make the
+    duration of instructions match more closely.  Fewer addressing modes also
+    mean simpler instructions, which makes it practical to encode all
+    instructions with a uniform size\cite{the-post-risc-era}.
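+    As a simplified illustration (taking x86 as the CISC example; register
+    choices are arbitrary), a single x86 load that combines a base register, a
+    scaled index and a displacement must be split in two on ARM, which offers
+    no combined scaled-index-plus-displacement form:
+    \begin{verbatim}
+    ; x86 (CISC): one instruction, one addressing mode
+    mov  eax, [ebx + esi*4 + 8]
+
+    @ ARM (RISC): the address arithmetic is explicit
+    ADD  r2, r0, r1, LSL #2    @ base + index*4
+    LDR  r3, [r2, #8]          @ load from (base + index*4) + 8
+    \end{verbatim}
+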
+    % Tjark: Programmers
+
+    Finally, we can compare RISC and CISC at the hardware level.  The most
+    obvious difference between the two is the size of the decoder: the
+    numerous, variable-length instructions of a typical CISC architecture
+    require significantly more work to decode than the few fixed-format
+    instructions of a RISC one.
+
+    As we have seen before, RISC systems often use the chip area freed by the
+    smaller decoder to fit more registers.  Reducing power usage and heat
+    production are also common goals, leading to a preference for RISC in
+    mobile devices.  We have also mentioned the preference of RISC for a
+    Harvard Architecture; this allows for separate instruction and data caches,
+    permitting simultaneous access to both.
+
+    As RISC instructions tend to be simpler and, more importantly, have fewer
+    side effects, they are substantially easier to pipeline.  Superscalar
+    execution, the execution of multiple instructions at once, was initially a
+    RISC feature; however, the performance benefits it provided led to it being
+    adopted across the board.
+
+    Such cross-adoption of CISC and RISC features is common.  While RISC and
+    CISC started out as significantly different systems from a hardware point
+    of view, advances in CPU design led to the initial differences becoming
+    less and less noticeable.  The smaller decoder of RISC systems was a
+    significant distinction in the late seventies and early eighties, but new
+    components such as caches and floating point units have made that
+    difference far less important.
+
+    In section~\ref{combination-cisc-risc} we will see the result of these
+    changes, and how they have led to the architectures of today, which
+    generally contain both RISC and CISC features.
+    \section{Performance}
+    % Tjark: Performance
+    Another key factor for performance is memory usage.  At first sight, CISC
+    seems the clear winner here: doing more with fewer instructions implies a
+    smaller code footprint and thus less memory usage, with all the benefits
+    that follow from it.  The more advanced addressing modes can also achieve
+    the same result in fewer operations; for instance, a memory-indirect
+    (pointer-to-pointer) addressing mode makes it unnecessary to load the
+    intermediate pointer into a register first.
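+    As a sketch (VAX chosen as the CISC example; register choices are
+    arbitrary), dereferencing a pointer stored in memory at offset 4 from
+    \texttt{R1} takes one instruction with a displacement-deferred mode, versus
+    two plain loads on ARM:
+    \begin{verbatim}
+    ; VAX: displacement-deferred addressing follows the pointer in memory
+    MOVL  @4(R1), R0        ; R0 = *(*(R1 + 4))
+
+    @ ARM: the intermediate pointer must pass through a register
+    LDR   r2, [r1, #4]      @ r2 = the pointer stored at r1 + 4
+    LDR   r0, [r2]          @ r0 = the value it points to
+    \end{verbatim}
+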
+    Initially this difference was significant, but the benefit diminished due
+    to a lack of compiler support: many compilers were not advanced enough to
+    take advantage of CISC features, and thus compiled code for CISC machines
+    was not significantly smaller.  CPU developers reacted by optimising for
+    the instructions compilers actually emitted, and so while the CISC parts
+    were kept for backwards compatibility, the fastest code was often similar
+    to RISC code\cite{dynamic-recompilation}.
+
+    With the advance of caching technology, the extra transistors that CISC
+    required became a liability, as they meant that a CISC CPU could not also
+    fit an on-chip cache.  This gave RISC architectures a performance edge;
+    however, the edge was not directly caused by any feature of RISC itself,
+    only by the cache\cite{myth-and-reality}.  Once transistor counts increased
+    and CISC architectures could also support an on-chip cache, this
+    performance factor disappeared.
+    \chapter{Recent and Future Developments}
+    % ...
+    % ...
+    % ...
+    \section{Combination of CISC and RISC}
+    \label{combination-cisc-risc}
+    % ...
+    % ...
+    % ...
             \emph{The Manchester Small Scale Experimental Machine}.\\
             \emph{The Anatomy of Modern Processors}\\
             University of Auckland, 1999
+        \bibitem{dynamic-recompilation}
+            Michael Steil,\\
+            \emph{Dynamic Re-compilation of Binary RISC Code for CISC Architectures}\\
+            \url{}\\
+            Technische Universit\"at M\"unchen, 2004
+        \bibitem{myth-and-reality}
+            Yuan Weit et al.,\\
+            \emph{RISC vs CISC}\\
+            \url{}\\
+            University of Virginia
+        \bibitem{arxiv-cisc-risc}
+            Farhat Masood,\\
+            \emph{RISC and CISC}\\
+            \url{}\\
+            National University of Sciences and Technology
+        \bibitem{the-post-risc-era}
+            Hannibal,\\
+            \emph{RISC vs. CISC: The Post-RISC Era}\\
+            \url{}\\
+            Ars Technica