2.2. Machine Instructions

Machine instructions are classified into the following three categories:
  1. data transfer operations (memory ⇔ register, register ⇔ register)
  2. arithmetic and logic operations (add, sub, and, or, xor, shift, etc.)
  3. program control operations (branch, call, interrupt)

The way an instruction specifies its operands is called its addressing mode. We will discuss addressing modes in more detail later.
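
As a rough illustration (a sketch only; the exact instructions emitted depend on the compiler and the target architecture), the small C function below is annotated with the category of machine instruction each part would typically compile to. The operands involved are constants, registers, and memory locations; how each operand is encoded in an instruction is its addressing mode.

    int sum(const int *a, int n)
    {
        int total = 0;                /* data transfer: load the constant 0 into a register           */
        for (int i = 0; i < n; i++) { /* program control: compare i with n, branch when the loop ends */
            total += a[i];            /* data transfer: load a[i] from memory into a register;        */
                                      /* arithmetic/logic: add it to the running total                */
        }
        return total;                 /* data transfer and program control: move the result,          */
                                      /* then branch back to the caller                               */
    }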

2.2.1. Complex Instruction Sets and Reduced Instruction Sets

Another important classification of computer architectures is based on the set of instructions available to the processor. Here we discuss the historical background and the technical differences between two types of processors.

If memory is an expensive and limited resource, there is a large benefit in reducing the size of a program. During the 1960s and 1970s, memory was at a premium. Therefore, much effort was expended on minimizing the size of individual instructions and minimizing the number of instructions necessary to implement a program. During this time period, almost all computer designers believed that rich instruction sets would simplify compiler design and improve the quality of computer architecture.

New instructions were developed to replace frequently used sequences of instructions. For example, a loop variable is often decremented, followed by a branch operation if the result is positive. New architectures therefore introduced a single instruction to decrement a variable and branch conditionally based on the result. Some instructions came to resemble procedures more than simple operations; some of these powerful single instructions required four or more parameters. As an example, the IBM System/370 has a single instruction that copies a character string of arbitrary length from any location in memory to any other location in memory, while translating characters according to a table stored in memory.
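
The decrement-and-branch pattern is easy to see in source code. In the hypothetical C routine below (a sketch, not actual System/370 or VAX code), the last two steps of each iteration are exactly the pair of operations such an instruction replaces: a CISC compiler can fold them into one decrement-and-branch instruction, while a simpler machine needs a separate subtract, test, and branch.

    /* Zero out n bytes (assumes n >= 1). */
    void zero_fill(char *buf, int n)
    {
        do {
            *buf++ = 0;      /* loop body                                           */
            n = n - 1;       /* decrement the loop variable ...                     */
        } while (n > 0);     /* ... and branch back if the result is still positive */
    }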

Computers that feature a large number of complex instructions are classified as complex instruction set computers (CISC). Other examples of CISC computers include the Digital Equipment VAX and the Intel x86 line of processors. The DEC VAX has more than 200 instructions, dozens of distinct addressing modes, and instructions with as many as six operands.

The complexity of CISC was accommodated by the introduction of microprogramming, or microcode. Microcode consists of low-level hardware instructions that implement the high-level instructions required by an architecture. Microcode was placed in ROM or in control-store RAM (which is more expensive, but faster, than the ferrite-core memory used in many computers).

However, not all computer designers fell in line with the CISC philosophy. Seymour Cray, for one, believed that complexity was bad, and continued to build the fastest computers in the world by using simple, register-oriented instruction sets. Cray was a proponent of the Reduced Instruction Set Computer (RISC), which is the antidote to CISC. The CDC 6600 and the Cray-1 supercomputer were the precursors of modern RISC architectures. In 1975, Cray made the following remarks about his computer design:

[Registers] made the instructions very simple. ... That is somewhat unique. Most machines have rather elaborate instruction sets involving many more memory references in the instructions than the machines I have designed. Simplicity, I guess, is a way of saying it. I am all for simplicity. If it’s very complicated, I cannot understand it.

Various technological changes in the 1980s made the architectural assumptions of the 1970s no longer valid.

  • Faster (10 times or more) and cheaper semiconductor memory and integrated circuits began to replace ferrite-core memory and discrete transistor-based circuits.
  • The invention of cache memories substantially improved the speed of non-microcoded programs.
  • Compiler technology had progressed rapidly; optimizing compilers generated code that used only a small subset of most instruction sets.

A new set of simplified design criteria emerged:

  • Instructions should be simple unless there is a good reason for complexity. To be worthwhile, a new instruction that increases cycle time by 10% must reduce the total number of cycles executed by at least 10% (a back-of-the-envelope check of this figure follows this list).

  • Microcode is generally no faster than sequences of hardwired instructions. Moving software into microcode does not make it better. It just makes it harder to modify.

  • Fixed-format instructions and pipelined execution are more important than program size. As memory becomes cheaper and faster, the space/time trade-off is resolved in favor of time: reducing space no longer decreases time.

    Pipelining overlaps the steps of the instruction-execution cycle: the next instruction is fetched and decoded while the current instruction is still executing.

  • Compiler technology should be used to simplify instructions rather than to generate more complex ones. Instead of relying on a complicated microcoded instruction, an optimizing compiler can generate a sequence of simple, fast instructions to do the same job, and it can keep operands in registers to increase speed even further.
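
To see where the 10% break-even figure in the first criterion comes from (a back-of-the-envelope check, not part of the original argument): execution time is the product of the number of cycles executed and the cycle time,

    T = N_cycles * t_cycle.

If a new instruction stretches the cycle time to 1.1 * t_cycle, the redesigned machine wins only when

    N'_cycles * (1.1 * t_cycle) < N_cycles * t_cycle,   i.e.   N'_cycles < N_cycles / 1.1 ≈ 0.91 * N_cycles,

so the new instruction must eliminate roughly 10% of all executed cycles just to break even.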