\usepackage{amssymb, amsmath}

\title{RISC vs CISC}


    In assembly code, certain combinations of operations occur fairly often.
    For instance, the result of a binary operation must often immediately be
    written back to memory, and returning from a function call is generally a
    standardised procedure.  The designer of a CPU instruction set can decide to
    provide assembly instructions that perform such a compound action in a
    single step.  Alternatively, they can require that the individual
    instructions be written out.

    When an implementer chooses to provide instructions that do multiple things,
    we speak of a \textbf{complex instruction set computer}.  Instruction sets
    based on this tend to support high-level programming constructs by providing
    more control flow features.  This makes it easier to write assembly code by
    hand; however, around 1970 it became clear that most compilers only utilised
    a small subset of the provided features.

    In the opposite case, when an instruction set provides a small set of core
    features, we speak of a \textbf{reduced instruction set computer}.  Due to
    the smaller number of instructions, the resulting CPU design is simpler and
    the layout of the instructions can be more uniform.  However, an assembly
    programmer working with such an instruction set would have to spend
    considerable time writing out sequences that would be a single instruction
    on a CISC.

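    The contrast can be illustrated with a simple memory increment.  The
    following is an illustrative sketch (mnemonics and operand syntax vary by
    assembler; the label and register choices are hypothetical):

\begin{verbatim}
; x86 (CISC): one instruction reads, adds and writes back
add dword [counter], 1

; ARM (RISC): separate load, add and store,
; assuming the address of counter is already in r0
ldr r1, [r0]
add r1, r1, #1
str r1, [r0]
\end{verbatim}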
    In this paper, we shall look in more depth at the patterns generally
    followed by RISCs and CISCs.  As an example on the CISC side we will use the
    x86 architecture, which is the most common architecture used in personal
    computers today.  As an example on the RISC side, we will use ARM, an
    architecture commonly used in smartphones.

    \chapter{The Origin of CISC}

    The first computer that stored its program in memory was the Manchester
    Small Scale Experimental Machine\cite{mssem}, which was made in 1948.  The
    first compiler appeared four years later\cite{first-compiler}, but the use
    of assembly language for programming persisted for significantly longer.

    With a significant portion of assembly code being written directly by
    programmers, as opposed to being autogenerated, architecture designers
    would often optimise for the ease of the programmer at the expense of the
    complexity of the CPU\@.  This was often done by combining arithmetic and
    load/store operations, providing advanced forms of control flow, and
    supporting more complex addressing modes.
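    As an illustration of the last point, x86 allows a single memory operand to
    combine a base register, a scaled index and a constant displacement.  The
    sketch below uses NASM-style syntax; the register choices are illustrative:

\begin{verbatim}
; load the i-th 32-bit element of an array: base address
; in ebx, index in esi, plus a fixed 8-byte header offset
mov eax, [ebx + esi*4 + 8]
\end{verbatim}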

    Apart from making the job of the assembly programmer easier, the smaller
    number of instructions necessary to achieve the desired result also led to
    a smaller code footprint.  As cache technology was still rather limited and
    main memory access was a bottleneck for execution speed, this was a more
    significant factor than the time required to decode an instruction.

    The 8086 processor provides examples of all three of the above methods.
    The addition, subtraction and comparison operations could take an immediate
    value and a value stored in memory as their operands, and apart from the
    standard conditional and unconditional jump instructions, the 8086 had
    instructions for looping on a condition.  Furthermore, the 8086 supported
    memory segmentation, with jump and return instructions having different
    forms for operating directly or indirectly, and within a segment or
    between segments.
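    The first two of these features can be sketched as follows (NASM-style
    syntax; the label and values are hypothetical):

\begin{verbatim}
; add an immediate directly to a value in memory,
; in a single instruction
add word [total], 42

; the loop instruction decrements cx and jumps back
; as long as it has not reached zero
next:
    add word [total], 42
    loop next
\end{verbatim}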

    Due to the late arrival of the term RISC, early architectures were not
    categorised as either CISC or RISC\@.  In the case of the earliest
    computers, the limited opcode size led to computers having a fairly limited
    number of instructions; furthermore, the technology needed for complex
    instruction decoding was simply not available. % CITE!
    % I'm not even sure this is true, going by pdp-11-cisc

    \begin{thebibliography}{9}
        \bibitem{mssem}
            \emph{The Manchester Small Scale Experimental Machine}.\\
            University of Manchester, 1998.
        \bibitem{first-compiler}
            Harold ``Bud'' Lawson and Howard Bromberg,\\
            \emph{The World's First COBOL Compilers}.\\
            Stanford University, 1997.
        \bibitem{8086-manual}
            \emph{8086 16-BIT HMOS Microprocessor}.\\
            Intel, 1990.
        \bibitem{modern-processors}
            John Morris,\\
            \emph{The Anatomy of Modern Processors}.\\
            University of Auckland, 1999.
    \end{thebibliography}