Assessment item 3 – Assignment 1: Computers, data and programming
Value:
15%
Due Date: 26-Apr-2020
Return Date: 19-May-2020
Length:
Submission method options: Alternative submission method
TASK
Overview
This assessment consists of three questions. The first one looks into the overview of Computer Systems. The second one looks into how computers represent data and how we can interconvert these representations. The last question will give you experience with writing MARIE programs. The lecturer may ask you to explain the answers to Q3 in the classroom.
Question 1 – Historical Overview of Computer Systems [10 marks]
(a) Under the von Neumann architecture, a program and its data are both stored in memory. It is, therefore, possible for a program, thinking a memory location holds a piece of data when it contains a program instruction, to accidentally (or on purpose) modify itself. What implications does this present to you as a programmer? [4 marks]
(b) Reflecting on the technical implications of Moore’s Law, discuss the following two cases: [6 marks]
(i) After successfully completing your IT Fundamentals class, you have a brilliant idea for a new chip design that would make a processor six times faster than the fastest ones on the market today. Unfortunately, it will take you four and a half years to save the money, create the prototype, and build a finished product. If Moore’s Law holds, should you spend your money developing and producing your chip or invest in some other venture?
(ii) Suppose we have a problem that currently takes 100,000 hours of computer time using current technology to solve. Which of the following would give us the solution first: (1) Replace the algorithm used in the current solution by one that runs twice as fast and run it on the current technology, or (2) Wait 3 years, assuming Moore’s law doubles the performance of a computer every 18 months, and find the solution using the current algorithm with the new technology?
Question 2 – Data Representation in Computers [20 marks]
(a) You have stumbled on an unknown civilization while sailing around the world. The people, who call themselves Zebronians, do math using 40 separate characters (probably because there are 40 stripes on a zebra). They would very much like to use computers but would need a computer to do Zebronian math, which would mean a computer that could represent all 40 characters. You are a computer designer and decide to help them. You decide the best thing is to use BCZ, Binary Coded Zebronian (which is like BCD except it codes Zebronian, not Decimal). How many bits will you need to represent each character if you want to use the minimum number of bits? [5 marks]
(b) If the floating-point number representation on a certain system has a sign bit, a 3-bit exponent and a 4-bit significand:
(i) What is the largest positive and the smallest positive number that can be stored on this system if the storage is normalized? (Assume no bits are implied, there is no bias, exponents use two’s complement notation, and exponents of all zeros and all ones are allowed.) [5 marks]
(ii) What bias should be used in the exponent if we prefer all exponents to be non-negative? [2 marks]
(c) Hex value 0xC29C3480 represents an IEEE-754 single-precision (32 bit) floating-point number. Work out the equivalent decimal number. Show all workings (e.g. converting exponent and mantissa). [8 marks]
Question 3 – MARIE Assembly [20 marks]
(a) Consider the following MARIE Code:
100 If, Load X /Load the first value
101 Subt Y /Subtract the value of Y, store result in AC
102 Skipcond 400 /If AC=0 (X=Y), skip the next instruction
103 Jump Else /Jump to Else part if AC is not equal to 0
104 Then, Load X /Reload X so it can be doubled
105 Add X /Double X
106 Store X /Store the new value
107 Jump Endif /Skip over the false, or else, part to the end of if
108 Else, Load Y /Start the else part by loading Y
109 Subt X /Subtract X from Y
10A Store Y /Store Y-X in Y
10B Endif, Halt /Terminate program (it doesn’t do much!)
10C X, Dec 12 /Assume these values for X and Y
10D Y, Dec 20
Find the values stored in the following registers after the execution of the "Skipcond 400" instruction: [5 marks]
(i) PC
(ii) IR
(iii) MAR
(iv) MBR
(v) AC
(b) Suppose we add the following instruction to MARIE’s ISA:
JumpIOffset X
This instruction will jump to the address calculated by going to address X, then adding the value found there to the value in the AC. Show how this instruction would be executed using RTN. [5 marks]
(c) Write a MARIE program to allow the user to input 8 integers (positive, negative, or zero) and then find the smallest and the largest and print each of these out.
As an example, if the user enters the following decimal numbers as input (one after the other)
23, -6, 78, 0, 36, 3, -250, -5
the program would output the following values as the maximum and minimum, respectively:
78
-250
Assume that the user will always provide valid numbers as input; that is, do not worry about dealing with invalid input data.
Write comments within your program so that a reader can understand it easily. [10 marks]
RATIONALE
SUBJECT LEARNING OUTCOMES
This assessment task will assess the following learning outcome/s:
· be able to investigate and describe the essential elements of a computer and their functionalities.
· be able to apply an understanding of data representations and calculations to practical situations.
· be able to develop an elementary computer program.
GRADUATE LEARNING OUTCOMES
This task also contributes to the assessment of the following
CSU Graduate Learning Outcome/s
:
· Academic Literacy and Numeracy (Knowledge) – Charles Sturt Graduates understand the use and structure of appropriate language in written, oral, visual, mathematical, and multi-modal communication.
· Academic Literacy and Numeracy (Skill) – Charles Sturt Graduates demonstrate the literacy and numeracy skills necessary to understand and interpret information and communicate effectively according to the context.
· Information and Research Literacies (Knowledge) – Charles Sturt Graduates demonstrate that disciplinary knowledge is developed through research and evidence.
Chapter 4
MARIE: An Introduction to a Simple Computer
4.8 MARIE
• We can now bring together many of the ideas that
we have discussed to this point using a very simple
model computer.
• Our model computer, the Machine Architecture that
is Really Intuitive and Easy, MARIE, was designed
for the singular purpose of illustrating basic computer
system concepts.
• While this system is too simple to do anything useful
in the real world, a deep understanding of its
functions will enable you to comprehend system
architectures that are much more complex.
The MARIE architecture has the following
characteristics:
• Binary, two’s complement data representation.
• Stored program, fixed word length data and
instructions.
• 4K words of word-addressable main memory.
• 16-bit data words.
• 16-bit instructions, 4 for the opcode and 12 for the
address.
• A 16-bit arithmetic logic unit (ALU).
• Seven registers for control and data movement.
MARIE’s seven registers are:
• Accumulator, AC, a 16-bit register that holds a
conditional operator (e.g., “less than”) or one operand
of a two-operand instruction.
• Memory address register, MAR, a 12-bit register that
holds the memory address of an instruction or the
operand of an instruction.
• Memory buffer register, MBR, a 16-bit register that
holds the data after its retrieval from, or before its
placement in memory.
MARIE’s seven registers are:
• Program counter, PC, a 12-bit register that holds the
address of the next program instruction to be
executed.
• Instruction register, IR, which holds an instruction
immediately preceding its execution.
• Input register, InREG, an 8-bit register that holds data
read from an input device.
• Output register, OutREG, an 8-bit register, that holds
data that is ready for the output device.
This is the MARIE architecture shown graphically.
• The registers are interconnected, and connected
with main memory through a common data bus.
• Each device on the bus is identified by a unique
number that is set on the control lines whenever that
device is required to carry out an operation.
• Separate connections are also provided between the
accumulator and the memory buffer register, and the
ALU and the accumulator and memory buffer
register.
• This permits data transfer between these devices
without use of the main data bus.
This is the MARIE data
path shown graphically.
• A computer’s instruction set architecture (ISA)
specifies the format of its instructions and the
primitive operations that the machine can perform.
• The ISA is an interface between a computer’s
hardware and its software.
• Some ISAs include hundreds of different instructions
for processing data and controlling program
execution.
• The MARIE ISA consists of only thirteen instructions.
• This is the format
of a MARIE instruction:
• The fundamental MARIE instructions are:
• This is a bit pattern for a LOAD instruction as it would
appear in the IR:
• We see that the opcode is 1 and the address from
which to load the data is 3.
• This is a bit pattern for a SKIPCOND instruction as it
would appear in the IR:
• We see that the opcode is 8 and bits 11 and 10 are
10, meaning that the next instruction will be skipped if
the value in the AC is greater than zero.
What is the hexadecimal representation of this instruction?
• Each of our instructions actually consists of a
sequence of smaller instructions called
microoperations.
• The exact sequence of microoperations that are
carried out by an instruction can be specified using
register transfer language (RTL).
• In the MARIE RTL, we use the notation M[X] to
indicate the actual data value stored in memory
location X, and the left arrow (←) to indicate the
transfer of bytes to a register or memory location.
• The RTL for the LOAD instruction is:
MAR ← X
MBR ← M[MAR]
AC ← MBR
• Similarly, the RTL for the ADD instruction is:
MAR ← X
MBR ← M[MAR]
AC ← AC + MBR
• Recall that SKIPCOND skips the next instruction
according to the value of the AC.
• The RTL for this instruction is the most complex
in our instruction set:
If IR[11 - 10] = 00 then
If AC < 0 then PC ← PC + 1
else If IR[11 - 10] = 01 then
If AC = 0 then PC ← PC + 1
else If IR[11 - 10] = 10 then
If AC > 0 then PC ← PC + 1
4.9 Instruction Processing
• The fetch-decode-execute cycle is the series of steps
that a computer carries out when it runs a program.
• We first have to fetch an instruction from memory,
and place it into the IR.
• Once in the IR, it is decoded to determine what
needs to be done next.
• If a memory value (operand) is involved in the
operation, it is retrieved and placed into the MBR.
• With everything in place, the instruction is executed.
The next slide shows a flowchart of this process.
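The fetch-decode-execute steps above can be sketched in Python for a tiny two-instruction subset of a MARIE-like machine. Everything here (the `run` function, the dictionary memory, the chosen program) is invented for illustration; it is not the real MARIE simulator.

```python
# Minimal fetch-decode-execute sketch for a MARIE-like machine with a
# 4-bit opcode and 12-bit address (Load = 1, Add = 3, Halt = 7).
def run(memory, start=0):
    pc, ac = start, 0
    while True:
        ir = memory[pc]                      # fetch: IR <- M[PC]
        pc += 1                              # PC <- PC + 1
        opcode, addr = ir >> 12, ir & 0xFFF  # decode opcode and address
        if opcode == 0x1:                    # Load X: AC <- M[X]
            ac = memory[addr]
        elif opcode == 0x3:                  # Add X: AC <- AC + M[X]
            ac = ac + memory[addr]
        elif opcode == 0x7:                  # Halt: stop and return AC
            return ac

# Load 3; Add 4; Halt; data values stored at addresses 3 and 4
prog = {0: 0x1003, 1: 0x3004, 2: 0x7000, 3: 35, 4: -23}
print(run(prog))  # 35 + (-23) = 12
```

The decode step mirrors the MARIE instruction format: the top 4 bits select the operation, and the low 12 bits name the operand's address.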
• All computers provide a way of interrupting the
fetch-decode-execute cycle.
• Interrupts occur when:
– A user break (e.g., Control+C) is issued
– I/O is requested by the user or a program
– A critical error occurs
• Interrupts can be caused by hardware or
software.
– Software interrupts are also called traps.
• Interrupt processing involves adding another step to
the fetch-decode-execute cycle as shown below.
The next slide shows a flowchart of “Process the interrupt.”
• For general-purpose systems, it is common to
disable all interrupts during the time in which an
interrupt is being processed.
– Typically, this is achieved by setting a bit in the flags
register.
• Interrupts that are ignored in this case are called
maskable.
• Nonmaskable interrupts are those interrupts that
must be processed in order to keep the system in
a stable condition.
• Interrupts are very useful in processing I/O.
• However, interrupt-driven I/O is complicated, and
is beyond the scope of our present discussion.
– We will look into this idea in greater detail in Chapter 7.
• MARIE, being the simplest of simple systems,
uses a modified form of programmed I/O.
• All output is placed in an output register, OutREG,
and the CPU polls the input register, InREG, until
input is sensed, at which time the value is copied
into the accumulator.
• Consider the simple MARIE program given below.
We show a set of mnemonic instructions stored at
addresses 0x100 – 0x106 (hex):
4.10 A Simple Program
• Let’s look at what happens inside the computer when
our program runs.
• This is the LOAD 104 instruction:
• Our second instruction is ADD 105:
Chapter 2
Data Representation in Computer Systems
Chapter 2 Objectives
• Understand the fundamentals of numerical data
representation and manipulation in digital
computers.
• Master the skill of converting between various
radix systems.
• Understand how errors can occur in computations
because of overflow and truncation.
• Understand the fundamental concepts of floating-
point representation.
• Gain familiarity with the most popular character
codes.
2.1 Introduction
• A bit is the most basic unit of information in a
computer.
– It is a state of “on” or “off” in a digital circuit.
– Sometimes these states are “high” or “low” voltage
instead of “on” or “off.”
• A byte is a group of eight
bits.
– A byte is the smallest possible addressable unit of
computer storage.
– The term, “addressable,” means that a particular byte can
be retrieved according to its location in memory.
• A word is a contiguous group of bytes.
– Words can be any number of bits or bytes.
– Word sizes of 16, 32, or 64 bits are most common.
– In a word-addressable system, a word is the smallest
addressable unit of storage.
• A group of four bits is called a nibble.
– Bytes, therefore, consist of two nibbles: a “high-order
nibble,” and a “low-order” nibble.
2.2 Positional Numbering Systems
• Bytes store numbers using the position of each
bit to represent a power of 2.
– The binary system is also called the base-2
system.
– Our decimal system is the base-10 system. It uses
powers of 10 for each position in a
number.
– Any integer quantity can be represented exactly using
any base (or radix).
• The decimal number 947 in powers of 10 is:
9 × 10² + 4 × 10¹ + 7 × 10⁰
• The decimal number 5836.47 in powers of 10 is:
5 × 10³ + 8 × 10² + 3 × 10¹ + 6 × 10⁰ + 4 × 10⁻¹ + 7 × 10⁻²
• The binary number 11001 in powers of 2 is:
1 × 2⁴ + 1 × 2³ + 0 × 2² + 0 × 2¹ + 1 × 2⁰ = 16 + 8 + 0 + 0 + 1 = 25
• When the radix of a number is something other
than 10, the base is denoted by a subscript.
– Sometimes, the subscript 10 is added for emphasis:
11001₂ = 25₁₀
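The positional expansion above can be sketched in Python. The helper name `from_base` is invented for this sketch; it simply accumulates digit × base per position, leftmost digit first.

```python
# Evaluate a digit string in any base by accumulating value*base + digit,
# which is equivalent to summing digit * base**position.
def from_base(digits, base):
    value = 0
    for d in digits:                  # process the leftmost digit first
        value = value * base + int(d, 36)   # int(d, 36) maps '0'-'9', 'A'-'Z'
    return value

print(from_base("11001", 2))   # 16 + 8 + 0 + 0 + 1 = 25
print(from_base("947", 10))    # 9*100 + 4*10 + 7 = 947
```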
• Because binary numbers are the basis for all data
representation in digital computer systems, it is
important that you become proficient with this radix
system.
• Your knowledge of the binary numbering system
will enable you to understand the operation of all
computer components as well as the design of
instruction set architectures.
2.3 Converting Between Bases
• In an earlier slide, we said that every integer value
can be represented exactly using any radix
system.
• There are two methods for radix conversion: the
subtraction method and the division remainder
method.
• The subtraction method is more intuitive, but
cumbersome. It does, however reinforce the ideas
behind radix mathematics.
• Suppose we want to convert the decimal
number 190 to base 3.
– We know that 3⁵ = 243, so
our result will be less than
six digits wide. The largest
power of 3 that we need is
therefore 3⁴ = 81, and
81 × 2 = 162.
– Write down the 2 and
subtract 162 from 190,
giving 28.
• Converting 190 to base 3…
– The next power of 3 is
3³ = 27. We’ll need one
of these, so we subtract 27
and write down the numeral
1 in our result.
– The next power of 3, 3² = 9,
is too large, but we have
to assign a placeholder of
zero and carry down the 1.
• Converting 190 to base 3…
– 3¹ = 3 is again too large,
so we assign a zero
placeholder.
– The last power of 3, 3⁰ = 1,
is our last choice, and it
gives us a difference of
zero.
– Our result, reading from
top to bottom, is:
190₁₀ = 21001₃
• Another method of converting integers from
decimal to some other radix uses division.
• This method is mechanical and easy.
• It employs the idea that successive division by a
base is equivalent to successive subtraction by
powers of the base.
• Let’s use the division remainder method to again
convert 190 in decimal to base 3.
• Converting 190 to base 3…
– First we take the number
that we wish to convert and
divide it by the radix in
which we want to express
our result.
– In this case, 3 divides 190
63 times, with a remainder
of 1.
– Record the quotient and the
remainder.
• Converting 190 to base 3…
– 63 is evenly divisible by 3.
– Our remainder is zero, and
the quotient is 21.
• Converting 190 to base 3…
– Continue in this way until
the quotient is zero.
– In the final calculation, we
note that 3 divides 2 zero
times with a remainder of 2.
– Our result, reading from
bottom to top is:
190₁₀ = 21001₃
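The division-remainder method is mechanical enough to sketch directly in Python. The helper name `to_base` is invented here; it divides by the radix until the quotient is zero and reads the remainders bottom to top.

```python
# Division-remainder radix conversion: repeatedly divide by the target
# base, collect the remainders, then read them bottom-to-top.
def to_base(n, base):
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, base)                 # quotient and remainder
        digits.append("0123456789ABCDEF"[r])   # record the remainder
    return "".join(reversed(digits))           # bottom-to-top order

print(to_base(190, 3))   # "21001", matching the worked example
```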
• Fractional values can be approximated in all
base systems.
• Unlike integer values, fractions do not
necessarily have exact representations under all
radices.
• The quantity ½ is exactly representable in the
binary and decimal systems, but is not in the
ternary (base 3) numbering system.
• Fractional decimal values have nonzero digits to
the right of the decimal point.
• Fractional values of other radix systems have
nonzero digits to the right of the radix point.
• Numerals to the right of a radix point represent
negative powers of the radix:
0.47₁₀ = 4 × 10⁻¹ + 7 × 10⁻²
0.11₂ = 1 × 2⁻¹ + 1 × 2⁻²
= ½ + ¼
= 0.5 + 0.25 = 0.75
• As with whole-number conversions, you can use
either of two methods: a subtraction method or an
easy multiplication method.
• The subtraction method for fractions is identical to
the subtraction method for whole numbers.
Instead of subtracting positive powers of the target
radix, we subtract negative powers of the radix.
• We always start with the largest value first, r⁻¹,
where r is our radix, and work our way along
using larger negative exponents.
• The calculation to the
right is an example of
using the subtraction
method to convert the
decimal 0.8125 to
binary.
– Our result, reading from
top to bottom is:
0.8125₁₀ = 0.1101₂
– Of course, this method
works with any base,
not just binary.
• Using the multiplication
method to convert the
decimal 0.8125 to binary,
we multiply by the radix 2.
– The first product carries
into the units place.
• Converting 0.8125 to
binary . . .
– Ignoring the value in
the units place at each
step, continue
multiplying each
fractional part by the
radix.
• Converting 0.8125 to binary . . .
– You are finished when the
product is zero, or until you
have reached the desired
number of binary places.
– Our result, reading from top to
bottom is:
0.8125₁₀ = 0.1101₂
– This method also works with
any base. Just use the target
radix as the multiplier.
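The multiplication method above can be sketched in Python. The helper name `frac_to_base` and the `max_digits` cutoff are assumptions for the sketch; the cutoff matters because many fractions never reach a product of zero.

```python
# Multiplication method for fractions: multiply by the radix, peel off
# the digit that carries into the units place, repeat with the rest.
def frac_to_base(frac, base, max_digits=12):
    digits = []
    while frac and len(digits) < max_digits:
        frac *= base
        whole = int(frac)                        # digit in the units place
        digits.append("0123456789ABCDEF"[whole])
        frac -= whole                            # keep only the fraction
    return "0." + "".join(digits)

print(frac_to_base(0.8125, 2))   # 0.1101, matching the worked example
```

Changing `base` to the target radix makes the same loop work for any base, as the slide notes.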
• The binary numbering system is the most
important radix system for digital computers.
• However, it is difficult to read long strings of binary
numbers — and even a modestly-sized decimal
number becomes a very long binary number.
– For example: 11010100011011₂ = 13595₁₀
• For compactness and ease of reading, binary
values are usually expressed using the
hexadecimal, or base-16, numbering system.
• The hexadecimal numbering system uses the
numerals 0 through 9 and the letters A through F.
– The decimal number 12 is C₁₆.
– The decimal number 26 is 1A₁₆.
• It is easy to convert between base 16 and base 2,
because 16 = 2⁴.
• Thus, to convert from binary to hexadecimal, all
we need to do is group the binary digits into
groups of four.
A group of four binary digits is called a hextet.
• Using groups of hextets, the binary number
11010100011011₂ (= 13595₁₀) in hexadecimal is 351B₁₆.
If the number of bits is not a multiple of 4, pad on
the left with zeros.
• Octal (base 8) values are derived from binary by
using groups of three bits (8 = 2³): 32433₈.
Octal was very useful when computers used six-bit words.
2.4 Signed Integer Representation
• The conversions we have so far presented have
involved only unsigned numbers.
• To represent signed integers, computer systems
allocate the high-order bit to indicate the sign of a
number.
– The high-order bit is the leftmost bit. It is also called the
most significant bit.
– 0 is used to indicate a positive number; 1 indicates a
negative number.
• The remaining bits contain the value of the number
(but this can be interpreted different ways)
• There are three ways in which signed binary
integers may be expressed:
– Signed magnitude
– One’s complement
– Two’s complement
• In an 8-bit word, signed magnitude
representation places the absolute value of
the number in the 7 bits to the right of the
sign bit.
• For example, in 8-bit signed magnitude
representation:
+3 is: 00000011
– 3 is: 10000011
• Computers perform arithmetic operations on
signed magnitude numbers in much the same
way as humans carry out pencil and paper
arithmetic.
– Humans often ignore the signs of the operands
while performing a calculation, applying the
appropriate sign after the calculation is complete.
• Binary addition is as easy as it gets. You need
to know only four rules:
0 + 0 = 0 0 + 1 = 1
1 + 0 = 1 1 + 1 = 10
• The simplicity of this system makes it possible
for digital circuits to carry out arithmetic
operations.
– We will describe these circuits in Chapter 3.
Let’s see how the addition rules work with signed
magnitude numbers . . .
• Example:
– Using signed magnitude
binary arithmetic, find the
sum of 75 and 46.
• First, convert 75 and 46 to
binary, and arrange as a sum,
but separate the (positive)
sign bits from the magnitude
bits.
• Example:
– Using signed magnitude
binary arithmetic, find the
sum of 75 and 46.
• Just as in decimal arithmetic,
we find the sum starting with
the rightmost bit and work left.
• Example:
– Using signed magnitude
binary arithmetic, find the
sum of 75 and 46.
• In the second bit, we have a
carry, so we note it above the
third bit.
• Example:
– Using signed magnitude
binary arithmetic, find the
sum of 75 and 46.
• The third and fourth bits also
give us carries.
• Example:
– Using signed magnitude binary
arithmetic, find the sum of 75
and 46.
• Once we have worked our way
through all eight bits, we are
done.
In this example, we were careful to pick two values whose
sum would fit into seven bits. If that is not the case, we
have a
problem.
• Example:
– Using signed magnitude binary
arithmetic, find the sum of 107
and 46.
• We see that the carry from the
seventh bit overflows and is
discarded, giving us the
erroneous result: 107 + 46 = 25.
• The signs in signed
magnitude representation
work just like the signs in
pencil and paper arithmetic.
– Example: Using signed
magnitude binary arithmetic,
find the sum of – 46 and – 25.
• Because the signs are the same, all we do is
add the numbers and supply the negative sign
when we are done.
• Mixed sign addition (or
subtraction) is done the
same way.
– Example: Using signed
magnitude binary arithmetic,
find the sum of 46 and – 25.
• The sign of the result gets the sign of the number
that is larger.
– Note the “borrows” from the second and sixth bits.
• Signed magnitude representation is easy for
people to understand, but it requires
complicated computer hardware.
• Another disadvantage of signed magnitude is
that it allows two different representations for
zero: positive zero and negative zero.
• For these reasons (among others) computer
systems employ complement systems for
numeric value representation.
• In complement systems, negative values are
represented by some difference between a
number and its base.
• The diminished radix complement of a non-zero
number N in base r with d digits is (rᵈ – 1) – N.
• In the binary system, this gives us one’s
complement. It amounts to little more than flipping
the bits of a binary number.
• For example, using 8-bit one’s complement
representation:
+3 is: 00000011
–3 is: 11111100
• In one’s complement representation, as with
signed magnitude, negative values are
indicated by a 1 in the high order bit.
• Complement systems are useful because they
eliminate the need for subtraction. The
difference of two values is found by adding the
minuend to the complement of the subtrahend.
• With one’s complement
addition, the carry bit is
“carried around” and added
to the
sum.
– Example: Using one’s
complement binary arithmetic,
find the sum of 48 and – 19
We note that 19 in binary is 00010011,
so -19 in one’s complement is: 11101100.
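The end-carry-around step can be sketched in Python. The 8-bit width and the helper names are assumptions for the sketch; the point is that a carry out of the high bit is wrapped around and added back into the sum.

```python
# 8-bit one's complement addition with the "end carry around" step.
BITS, MASK = 8, 0xFF

def ones_complement(n):
    # Negative values are encoded by flipping the bits of the magnitude.
    return (~(-n)) & MASK if n < 0 else n & MASK

def ones_add(a, b):
    total = ones_complement(a) + ones_complement(b)
    if total > MASK:                   # carry out of the high bit...
        total = (total & MASK) + 1     # ...is carried around and added
    return total

print(bin(ones_add(48, -19)))   # 48 + (-19) = 29 -> 0b11101
```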
• Although the “end carry around” adds some
complexity, one’s complement is simpler to
implement than signed magnitude.
• But it still has the disadvantage of having two
different representations for zero: positive zero
and negative zero.
• Two’s complement solves this problem.
• Two’s complement is the radix complement of the
binary numbering system; the radix complement
of a non-zero number N in base r with d digits is
rᵈ – N.
• To express a value in two’s complement
representation:
– If the number is positive, just convert it to binary and
you’re done.
– If the number is negative, find the one’s complement of
the number and then add 1.
• Example:
– In 8-bit binary, 3 is:
00000011
– -3 using one’s complement representation is:
11111100
– Adding 1 gives us -3 in two’s complement form:
11111101.
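The two steps above (flip the bits, then add 1) can be sketched in Python. The 8-bit mask is an assumption of the sketch, needed because Python integers are unbounded.

```python
# Two's complement of an 8-bit value: one's complement, then add 1.
MASK = 0xFF

def twos_complement(n):
    return ((~n) + 1) & MASK   # flip the bits, add 1, keep 8 bits

print(format(twos_complement(3), "08b"))    # 11111101, i.e. -3
print(format(twos_complement(19), "08b"))   # 11101101, i.e. -19
```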
• With two’s complement arithmetic, all we do is add
our two binary numbers. Just discard any carries
emitting from the high order bit.
We note that 19 in binary is: 00010011,
so -19 using one’s complement is: 11101100,
and -19 using two’s complement is: 11101101.
– Example: Using two’s
complement binary
arithmetic, find the sum of
48 and – 19.
• Excess-M representation (also called offset binary
representation) is another way for unsigned binary
values to represent signed integers.
– Excess-M representation is intuitive because the binary
string with all 0s represents the smallest number, whereas
the binary string with all 1s represents the largest value.
• An unsigned binary integer M (called the bias)
represents the value 0, whereas all zeroes in the bit
pattern represents the integer –M.
• The integer is interpreted as positive or negative
depending on where it falls in the range.
• If n bits are used for the binary representation, we
select the bias in such a manner that we split the
range equally.
• Typically we choose a bias of 2ⁿ⁻¹ – 1.
– For example, if we were using 4-bit representation, the
bias should be 2⁴⁻¹ – 1 = 7.
• Just as with signed magnitude, one’s complement,
and two’s complement, there is a specific range of
values that can be expressed in n bits.
• The unsigned binary value for a signed integer using
excess-M representation is determined simply by
adding M to that integer.
– For example, assuming that we are using excess-7
representation, the integer 0₁₀ is represented as 0 + 7 = 7₁₀
= 0111₂.
– The integer 3₁₀ is represented as 3 + 7 = 10₁₀ = 1010₂.
– The integer -7 is represented as -7 + 7 = 0₁₀ = 0000₂.
– To find the decimal value of the excess-7 binary number
1111₂, subtract 7: 1111₂ = 15₁₀ and 15 – 7 = 8; thus 1111₂,
in excess-7, is +8₁₀.
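The excess-M rule (add the bias to encode, subtract it to decode) can be sketched in Python. The helper names `encode` and `decode` are invented for this 4-bit, excess-7 sketch.

```python
# Excess-7 (bias) representation for a 4-bit field: the stored pattern
# is the signed value plus the bias.
BIAS = 7   # 2**(4-1) - 1 for a 4-bit field

def encode(n):
    return n + BIAS        # signed value -> unsigned bit pattern

def decode(pattern):
    return pattern - BIAS  # unsigned bit pattern -> signed value

print(format(encode(0), "04b"))    # 0111
print(format(encode(3), "04b"))    # 1010
print(format(encode(-7), "04b"))   # 0000
print(decode(0b1111))              # 8
```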
• Let’s compare our representations:
• When we use any finite number of bits to
represent a number, we always run the risk of
the result of our calculations becoming too large
or too small to be stored in the computer.
• While we can’t always prevent overflow, we can
always detect overflow.
• In complement arithmetic, an overflow condition
is easy to detect.
• Example:
– Using two’s complement binary
arithmetic, find the sum of 107
and 46.
• We see that the nonzero carry
from the seventh bit overflows into
the sign bit, giving us the
erroneous result: 107 + 46 = -103.
But overflow into the sign bit does not
always mean that we have an error.
• Example:
– Using two’s complement binary
arithmetic, find the sum of 23 and
-9.
– We see that there is carry into the
sign bit and carry out. The final
result is correct: 23 + (-9) = 14.
Rule for detecting signed two’s complement overflow: When
the “carry in” and the “carry out” of the sign bit differ,
overflow has occurred. If the carry into the sign bit equals the
carry out of the sign bit, no overflow has occurred.
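The overflow rule above can be sketched in Python for 8-bit values; the function name and the tuple return are assumptions of the sketch. It computes the carry into the sign bit (from adding the low 7 bits) and the carry out, and flags overflow when they differ.

```python
# Detect two's complement overflow by comparing the carry into and
# the carry out of the sign bit (8-bit values by default).
def add_with_overflow(a, b, bits=8):
    mask = (1 << bits) - 1
    low = (a & (mask >> 1)) + (b & (mask >> 1))  # sum of the low bits
    carry_in = low >> (bits - 1)                 # carry into the sign bit
    total = (a & mask) + (b & mask)
    carry_out = total >> bits                    # carry out of the sign bit
    return total & mask, carry_in != carry_out   # (result, overflow?)

print(add_with_overflow(107, 46))    # (153, True): 153 is -103 signed
print(add_with_overflow(23, 0xF7))   # (14, False): 23 + (-9) = 14
```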
• Signed and unsigned numbers are both useful.
– For example, memory addresses are always unsigned.
• Using the same number of bits, unsigned integers
can express twice as many “positive” values as
signed numbers.
• Trouble arises if an unsigned value “wraps around.”
– In four bits: 1111 + 1 = 0000.
• Good programmers stay alert for this kind of
problem.
• Research into finding better arithmetic algorithms
has continued for over 50 years.
• One of the many interesting products of this work
is Booth’s algorithm.
• In most cases, Booth’s algorithm carries out
multiplication faster and more accurately than
pencil-and-paper methods.
• The general idea is to replace arithmetic
operations with bit shifting to the extent possible.
In Booth’s algorithm:
• If the current multiplier bit is
1 and the preceding bit was
0, subtract the multiplicand
from the product
• If the current multiplier bit is
0 and the preceding bit was
1, we add the multiplicand to
the product
• If we have a 00 or 11 pair,
we simply shift.
• Assume a mythical “0”
starting bit
• Shift after each step
0011
× 0110
+ 0000 (shift)
– 0011 (subtract)
+ 0000 (shift)
+ 0011 (add)
00010010
We see that 3 × 6 = 18!
• Here is a larger example.
00110101
× 01111110
+ 0000000000000000
+ 111111111001011
+ 00000000000000
+ 0000000000000
+ 000000000000
+ 00000000000
+ 0000000000
+ 000110101_______
10001101000010110
Ignore all bits over 2n.
53 × 126 = 6678!
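The bit-pair rules of Booth's algorithm can be sketched in Python. The function name, width, and loop shape are assumptions of the sketch; it handles the non-negative multipliers used in the examples, while negative multipliers would need an explicit fixed-width two's complement treatment.

```python
# Booth's algorithm sketch: scan (current, previous) multiplier bit pairs;
# 10 -> subtract the shifted multiplicand, 01 -> add it, 00/11 -> shift only.
def booth_multiply(multiplicand, multiplier, bits=8):
    product = 0
    prev = 0                       # the mythical starting "0" bit
    for i in range(bits):
        cur = (multiplier >> i) & 1
        if (cur, prev) == (1, 0):
            product -= multiplicand << i   # 10 pair: subtract
        elif (cur, prev) == (0, 1):
            product += multiplicand << i   # 01 pair: add
        prev = cur                         # 00 or 11 pair: just shift
    return product

print(booth_multiply(3, 6))      # 18, matching the small example
print(booth_multiply(53, 126))   # 6678, matching the larger example
```

Notice how a run of 1s in the multiplier costs only one subtraction and one addition, which is the point of replacing arithmetic with shifting.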
• Overflow and carry are tricky ideas.
• Signed number overflow means nothing in the
context of unsigned numbers, which set a carry
flag instead of an overflow flag.
• If a carry out of the leftmost bit occurs with an
unsigned number, overflow has occurred.
• Carry and overflow occur independently of each
other.
The table on the next slide summarizes these ideas.
2.4 Signed Integer Representation
59
2.4 Signed Integer Representation
60
• We can do binary multiplication and division by 2
very easily using an arithmetic shift operation.
• A left arithmetic shift inserts a 0 for the
rightmost bit and shifts everything else left one
bit; in effect, it multiplies by 2.
• A right arithmetic shift shifts everything one bit to
the right, but copies the sign bit; it divides by 2.
• Let’s look at some examples.
2.4 Signed Integer Representation
61
Example:
Multiply the value 11 (expressed using 8-bit signed two’s
complement representation) by 2.
We start with the binary value for 11:
00001011 (+11)
We shift left one place, resulting in:
00010110 (+22)
The sign bit has not changed, so the value is valid.
To multiply 11 by 4, we simply perform a left shift twice.
2.4 Signed Integer Representation
62
Example:
Divide the value 12 (expressed using 8-bit signed two’s
complement representation) by 2.
We start with the binary value for 12:
00001100 (+12)
We shift right one place, resulting in:
00000110 (+6)
(Remember, we carry the sign bit to the left as we shift.)
To divide 12 by 4, we right shift twice.
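Both examples can be sketched in a few lines of Python (our own illustration; the function names are invented). We keep 8 bits, flag a sign-bit change after a left shift, and copy the sign bit during a right shift:

```python
def arithmetic_shift_left(value, bits=8):
    """Multiply by 2: shift left, keeping `bits` bits; flag sign-bit changes."""
    result = (value << 1) & ((1 << bits) - 1)
    overflow = (result >> (bits - 1)) != ((value >> (bits - 1)) & 1)
    return result, overflow

def arithmetic_shift_right(value, bits=8):
    """Divide by 2: shift right, copying (preserving) the sign bit."""
    sign = value & (1 << (bits - 1))
    return (value >> 1) | sign

print(format(arithmetic_shift_left(0b00001011)[0], '08b'))   # 00010110 (+22)
print(format(arithmetic_shift_right(0b00001100), '08b'))     # 00000110 (+6)
print(format(arithmetic_shift_right(0b11110100), '08b'))     # 11111010 (-12 to -6)
```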
2.4 Signed Integer Representation
63
• The signed magnitude, one’s complement,
and two’s complement representation that we
have just presented deal with signed integer
values only.
• Without modification, these formats are not
useful in scientific or business applications
that deal with real number values.
• Floating-point representation solves this
problem.
2.5 Floating-Point Representation
64
2.5 Floating-Point Representation
• If we are clever programmers, we can perform
floating-point calculations using any integer format.
• This is called floating-point emulation, because
floating point values aren’t stored as such; we just
create programs that make it seem as if floating-
point values are being used.
• Most of today’s computers are equipped with
specialized hardware that performs floating-point
arithmetic with no special programming required.
65
• Floating-point numbers allow an arbitrary
number of decimal places to the right of the
decimal point.
– For example: 0.5 × 0.25 = 0.125
• They are often expressed in scientific notation.
– For example:
0.125 = 1.25 × 10⁻¹
5,000,000 = 5.0 × 10⁶
2.5 Floating-Point Representation
66
• Computers use a form of scientific notation for
floating-point representation
• Numbers written in scientific notation have three
components:
2.5 Floating-Point Representation
67
• Computer representation of a floating-point
number consists of three fixed-size fields:
• This is the standard arrangement of these fields.
Note: Although “significand” and “mantissa” do not technically mean the same
thing, many people use these terms interchangeably. We use the term “significand”
to refer to the fractional part of a floating point number.
2.5 Floating-Point Representation
68
• The one-bit sign field is the sign of the stored value.
• The size of the exponent field determines the range
of values that can be represented.
• The size of the significand determines the precision
of the representation.
2.5 Floating-Point Representation
69
• We introduce a hypothetical “Simple Model” to
explain the concepts
• In this model:
– A floating-point number is 14 bits in length
– The exponent field is 5 bits
– The significand field is 8 bits
2.5 Floating-Point Representation
70
• The significand is always preceded by an implied
binary point.
• Thus, the significand always contains a fractional
binary value.
• The exponent indicates the power of 2 by which the
significand is multiplied.
2.5 Floating-Point Representation
71
• Example:
– Express 32₁₀ in the simplified 14-bit floating-point
model.
• We know that 32 is 2⁵. So in (binary) scientific
notation 32 = 1.0 × 2⁵ = 0.1 × 2⁶.
– In a moment, we’ll explain why we prefer the second
notation versus the first.
• Using this information, we put 110₂ (= 6₁₀) in the
exponent field and 1 in the significand as shown.
2.5 Floating-Point Representation
72
• The illustrations shown at
the right are all equivalent
representations for 32
using our simplified
model.
• Not only do these
synonymous
representations waste
space, but they can also
cause confusion.
2.5 Floating-Point Representation
73
• Another problem with our system is that we have
made no allowances for negative exponents. We
have no way to express 0.5 (= 2⁻¹)! (Notice that
there is no sign in the exponent field.)
All of these problems can be fixed with no
changes to our basic model.
2.5 Floating-Point Representation
74
• To resolve the problem of synonymous forms,
we establish a rule that the first digit of the
significand must be 1, with no ones to the left of
the radix point.
• This process, called normalization, results in a
unique pattern for each floating-point number.
– In our simple model, all significands must have the
form 0.1xxxxxxxx
– For example, 4.5 = 100.1 × 2⁰ = 1.001 × 2² = 0.1001 ×
2³. The last expression is correctly normalized.
In our simple instructional model, we use no implied bits.
2.5 Floating-Point Representation
75
• To provide for negative exponents, we will use a
biased exponent.
– In our case, we have a 5-bit exponent.
– 2⁵⁻¹ – 1 = 2⁴ – 1 = 15
– Thus we will use 15 for our bias: our exponent will use
excess-15 representation.
• In our model, exponent values less than 15 are
negative, representing fractional numbers.
2.5 Floating-Point Representation
76
• Example:
– Express 3210 in the revised 14-bit
floating-point model.
• We know that 32 = 1.0 × 2⁵ = 0.1 × 2⁶.
• To use our excess-15 biased exponent, we add 15 to
6, giving 21₁₀ (= 10101₂).
• So we have:
2.5 Floating-Point Representation
77
• Example:
– Express 0.0625₁₀ in the revised 14-bit floating-point
model.
• We know that 0.0625 is 2⁻⁴. So in (binary) scientific
notation 0.0625 = 1.0 × 2⁻⁴ = 0.1 × 2⁻³.
• To use our excess-15 biased exponent, we add 15 to
-3, giving 12₁₀ (= 01100₂).
2.5 Floating-Point Representation
78
• Example:
– Express -26.625₁₀ in the revised 14-bit floating-point
model.
• We find 26.625₁₀ = 11010.101₂. Normalizing, we
have: 26.625₁₀ = 0.11010101 × 2⁵.
• To use our excess-15 biased exponent, we add 15 to
5, giving 20₁₀ (= 10100₂). We also need a 1 in the sign
bit.
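The three examples above can be reproduced with a small encoder for this hypothetical model (our own sketch under the model’s assumptions: 1 sign bit, a 5-bit excess-15 exponent, an 8-bit significand of the form .1xxxxxxx, and a nonzero value that the model can represent exactly):

```python
def encode_simple_model(value):
    """Encode a nonzero value in the 14-bit 'Simple Model': 1 sign bit,
    a 5-bit excess-15 exponent, and an 8-bit significand .1xxxxxxx."""
    sign = 1 if value < 0 else 0
    mag = abs(value)
    exponent = 0
    while mag >= 1:                   # normalize so that 0.5 <= mag < 1
        mag /= 2
        exponent += 1
    while mag < 0.5:
        mag *= 2
        exponent -= 1
    significand = int(mag * 256)      # 8 fractional bits
    return (sign << 13) | ((exponent + 15) << 8) | significand

print(format(encode_simple_model(32), '014b'))        # 01010110000000
print(format(encode_simple_model(0.0625), '014b'))    # 00110010000000
print(format(encode_simple_model(-26.625), '014b'))   # 11010011010101
```

Reading each result as sign | exponent | significand reproduces the slides’ answers, e.g. 1 | 10100 | 11010101 for -26.625.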
2.5 Floating-Point Representation
79
• The IEEE has established a standard for
floating-point numbers
• The IEEE-754 single precision floating point
standard uses an 8-bit exponent (with a bias of
127) and a 23-bit significand.
• The IEEE-754 double precision standard uses
an 11-bit exponent (with a bias of 1023) and a
52-bit significand.
2.5 Floating-Point Representation
80
• In both the IEEE single-precision and double-
precision floating-point standards, the significand has
an implied 1 to the LEFT of the radix point.
– The format for a significand using the IEEE format is:
1.xxx…
– For example, 4.5 = 0.1001 × 2³ in our simple model is 4.5 =
1.001 × 2² in IEEE format. The 1 is implied, which means it
does not need to be listed in the significand (the significand
would include only 001).
2.5 Floating-Point Representation
81
• Example: Express -3.75 as a floating point number
using IEEE single precision.
• First, let’s normalize according to IEEE rules:
– -3.75 = -11.11₂ = -1.111 × 2¹
– The bias is 127, so we add 127 + 1 = 128 (this is our
exponent).
– The first 1 in the significand is implied, so we have
sign = 1, exponent = 10000000, and significand =
1110000 00000000 00000000 (the leading 1 is implied).
– Since we have an implied 1 in the significand, this equates
to
-(1).111₂ × 2¹²⁸⁻¹²⁷ = -1.111₂ × 2¹ = -11.11₂ = -3.75.
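Python’s struct module lets us confirm this bit pattern directly (a small check of our own, not part of the text):

```python
import struct

def float_bits(x):
    """Return the 32 bits of x in IEEE-754 single precision,
    grouped as sign | exponent | significand."""
    (raw,) = struct.unpack('>I', struct.pack('>f', x))  # reinterpret the 4 bytes
    s = format(raw, '032b')
    return f'{s[0]} {s[1:9]} {s[9:]}'

print(float_bits(-3.75))   # 1 10000000 11100000000000000000000
```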
2.5 Floating-Point Representation
82
• Using the IEEE-754 single precision floating point
standard:
– An exponent of 255 indicates a special value.
• If the significand is zero, the value is infinity.
• If the significand is nonzero, the value is NaN, “not a
number,” often used to flag an error condition.
• Using the double precision standard:
– The “special” exponent value for a double precision number
is 2047, instead of the 255 used by the single precision
standard.
2.5 Floating-Point Representation
83
• Both the 14-bit model that we have presented
and the IEEE-754 floating point standard allow
two representations for zero.
– Zero is indicated by all zeros in the exponent and the
significand, but the sign bit can be either 0 or 1.
• This is why programmers should avoid testing a
floating-point value for equality to zero.
– Negative zero does not equal positive zero.
2.5 Floating-Point Representation
84
• Floating-point addition and subtraction are done
using methods analogous to how we perform
calculations
using pencil and paper.
• The first thing that we do is express both
operands in the same exponential power, then
add the numbers, preserving the exponent in the
sum.
• If the exponent requires adjustment, we do so at
the end of the calculation.
2.5 Floating-Point Representation
85
• Example:
– Find the sum of 12₁₀ and 1.25₁₀ using the 14-bit “simple”
floating-point model.
• We find 12₁₀ = 0.1100 × 2⁴, and 1.25₁₀ = 0.101 × 2¹ =
0.000101 × 2⁴.
• Thus, our sum is 0.110101 × 2⁴ (= 13.25₁₀).
2.5 Floating-Point Representation
86
• Floating-point multiplication is also carried out in
a manner akin to how we perform multiplication
using pencil and paper.
• We multiply the two operands and add their
exponents.
• If the exponent requires adjustment, we do so at
the end of the calculation.
2.5 Floating-Point Representation
87
• Example:
– Find the product of 12₁₀ and 1.25₁₀ using the 14-bit
floating-point model.
• We find 12₁₀ = 0.1100 × 2⁴, and 1.25₁₀ = 0.101 × 2¹.
• Thus, our product is 0.0111100 × 2⁵ = 0.1111 × 2⁴.
• The normalized product requires a biased exponent of
19₁₀ = 10011₂ (4 + 15).
2.5 Floating-Point Representation
88
• No matter how many bits we use in a floating-point
representation, our model must be finite.
• The real number system is, of course, infinite, so our
models can give nothing more than an approximation
of a real value.
• At some point, every model breaks down, introducing
errors into
our calculations.
• By using a greater number of bits in our model, we
can reduce these errors, but we can never totally
eliminate them.
2.5 Floating-Point Representation
89
• Our job becomes one of reducing error, or at least
being aware of the possible magnitude of error in
our calculations.
• We must also be aware that errors can compound
through repetitive arithmetic operations.
• For example, our 14-bit model cannot exactly
represent the decimal value 128.5. In binary, it is 9
bits wide:
10000000.1₂ = 128.5₁₀
2.5 Floating-Point Representation
90
• When we try to express 128.5₁₀ in our 14-bit model,
we lose the low-order bit, giving a relative error of:
(128.5 – 128) / 128.5 ≈ 0.39%
• If we had a procedure that repetitively added 0.5 to
128.5, we would have an error of nearly 2% after only
four iterations.
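We can reproduce both error figures with a short simulation (our own sketch; `round_to_8_bits` is an invented helper that truncates a positive value to an 8-bit significand, as the simple model does):

```python
import math

def round_to_8_bits(x):
    """Truncate a positive value to an 8-bit significand (0.1xxxxxxx × 2^e)."""
    e = math.floor(math.log2(x)) + 1          # exponent that normalizes x
    return math.floor(x / 2**e * 256) / 256 * 2**e

stored = round_to_8_bits(128.5)               # the low-order bit is lost
print(stored, f'{(128.5 - stored) / 128.5:.2%}')     # 128.0 0.39%

total = stored
for _ in range(4):                            # repetitively add 0.5
    total = round_to_8_bits(total + 0.5)      # each 0.5 is truncated away
print(total, f'{(130.5 - total) / 130.5:.2%}')       # 128.0 1.92%
```

Each added 0.5 vanishes in truncation, so after four iterations the stored value is still 128 while the true value is 130.5, an error of nearly 2%.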
2.5 Floating-Point Representation
91
• Floating-point errors can be reduced when we use
operands that are similar in magnitude.
• If we were repetitively adding 0.5 to 128.5, it
would have been better to iteratively add 0.5 to
itself and then add 128.5 to this sum.
• In this example, the error was caused by loss of
the low-order bit.
• Loss of the high-order bit is more problematic.
2.5 Floating-Point Representation
92
• Floating-point overflow and underflow can cause
programs to crash.
• Overflow occurs when there is no room to store
the high-order bits resulting from a calculation.
• Underflow occurs when a value is too small to
store, possibly resulting in division by zero.
Experienced programmers know that it’s better for a
program to crash than to have it produce incorrect, but
plausible, results.
2.5 Floating-Point Representation
93
• When discussing floating-point numbers, it is
important to understand the terms range,
precision, and accuracy.
• The range of a numeric format is the
difference between the largest and smallest
values that can be expressed.
• Accuracy refers to how closely a numeric
representation approximates a true value.
• The precision of a number indicates how much
information we have about a value
2.5 Floating-Point Representation
94
• Most of the time, greater precision leads to better
accuracy, but this is not always true.
– For example, 3.1333 is a value of pi that is accurate to
two digits, but has 5 digits of precision.
• There are other problems with floating point
numbers.
• Because of truncated bits, you cannot always
assume that a particular floating point operation is
commutative or distributive.
2.5 Floating-Point Representation
95
• This means that we cannot assume:
(a + b) + c = a + (b + c) or
a*(b + c) = ab + ac
• Moreover, to test a floating point value for equality to
some other number, it is best to declare a “nearness to x”
epsilon value. For example, instead of checking to see if
floating point x is equal to 2 as follows:
if x = 2 then …
it is better to use:
if (abs(x – 2) < epsilon) then ...
(assuming we have epsilon defined correctly!)
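In Python the same idea looks like this (our own illustration; the classic 0.1 + 0.2 case stands in for the slide’s hypothetical comparison with 2):

```python
import math

epsilon = 1e-9
x = 0.1 + 0.2                       # stored as 0.30000000000000004...

print(x == 0.3)                     # False: direct equality fails
print(abs(x - 0.3) < epsilon)       # True: the "nearness to x" test succeeds
print(math.isclose(x, 0.3))         # True: the standard library's version
```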
2.5 Floating-Point Representation
96
• Calculations aren’t useful until their results can
be displayed in a manner that is meaningful to
people.
• We also need to store the results of calculations,
and provide a means for data input.
• Thus, human-understandable characters must be
converted to computer-understandable bit
patterns using some sort of character encoding
scheme.
2.6 Character Codes
97
• As computers have evolved, character codes
have evolved.
• Larger computer memories and storage
devices permit richer character codes.
• The earliest computer coding systems used six
bits.
• Binary-coded decimal (BCD) was one of these
early codes. It was used by IBM mainframes in
the 1950s and 1960s.
2.6 Character Codes
98
• In 1964, BCD was extended to an 8-bit code,
Extended Binary-Coded Decimal Interchange
Code (EBCDIC).
• EBCDIC was one of the first widely-used
computer codes that supported upper and
lowercase alphabetic characters, in addition to
special characters, such as punctuation and
control characters.
• EBCDIC and BCD are still in use by IBM
mainframes today.
2.6 Character Codes
99
• Other computer manufacturers chose the 7-bit
ASCII (American Standard Code for Information
Interchange) as a replacement for 6-bit codes.
• While BCD and EBCDIC were based upon
punched card codes, ASCII was based upon
telecommunications (Telex) codes.
• Until recently, ASCII was the dominant
character code outside the IBM mainframe
world.
2.6 Character Codes
100
• Many of today’s systems embrace Unicode, a 16-
bit system that can encode the characters of
every language in the world.
– The Java programming language, and some operating
systems now use Unicode as their default character
code.
• The Unicode codespace is divided into six parts.
The first part is for Western alphabet codes,
including English, Greek, and Russian.
2.6 Character Codes
101
• The Unicode codespace
allocation is shown at
the right.
• The lowest-numbered
Unicode characters
comprise the ASCII
code.
• The highest provide for
user-defined codes.
2.6 Character Codes
102
• Computers store data in the form of bits, bytes,
and words using the binary numbering system.
• Hexadecimal numbers are formed using four-bit
groups called nibbles.
• Signed integers can be stored in one’s
complement, two’s complement, or signed
magnitude representation.
• Floating-point numbers are usually coded using
the IEEE 754 floating-point standard.
Chapter 2 Conclusion
103
• Floating-point operations are not necessarily
commutative or distributive.
• Character data is stored using ASCII, EBCDIC,
or Unicode.
Chapter 2 Conclusion
Chapter 4
MARIE: An Introduction
to a Simple Computer
2
Chapter 4 Objectives
• Learn the components common to every modern
computer system.
• Be able to explain how each component
contributes to program execution.
• Understand a simple architecture invented to
illuminate these basic concepts, and how it relates
to some real architectures.
• Know how the program assembly process works.
3
4.1 Introduction
• Chapter 1 presented a general overview of
computer systems.
• In Chapter 2, we discussed how data is stored
and manipulated by various computer system
components.
• Chapter 3 described the fundamental
components of digital circuits.
• Having this background, we can now understand
how computer components work, and how they
fit together to create useful computer systems.
4
4.2 CPU Basics
• The computer’s CPU fetches, decodes, and
executes program instructions.
• The two principal parts of the CPU are the datapath
and the control unit.
– The datapath consists of an arithmetic-logic unit and
storage units (registers) that are interconnected by a data
bus that is also connected to main memory.
– Various CPU components perform sequenced operations
according to signals provided by its control unit.
5
• Registers hold data that can be readily accessed by
the CPU.
• They can be implemented using D flip-flops.
– A 32-bit register requires 32 D flip-flops.
• The arithmetic-logic unit (ALU) carries out logical and
arithmetic operations as directed by the control unit.
• The control unit determines which actions to carry out
according to the values in a program counter register
and a status register.
4.2 CPU Basics
6
4.3 The Bus
• The CPU shares data with other system components
by way of a data bus.
– A bus is a set of wires that simultaneously convey a single
bit along each line.
• Two types of buses are commonly found in computer
systems: point-to-point, and multipoint buses.
These are point-to-
point buses:
7
• Buses consist of data lines, control lines, and
address lines.
• While the data lines convey bits from one device to
another, control lines determine the direction of data
flow, and when each device can access the bus.
• Address lines determine the location of the source
or destination of the data.
The next slide shows a model bus configuration.
4.3 The Bus
8
4.3 The Bus
9
• A multipoint bus is shown below.
• Because a multipoint bus is a shared resource,
access to it is controlled through protocols, which
are built into the hardware.
4.3 The Bus
10
• In a master-slave configuration, where more than
one device can be the bus master, concurrent
bus master requests must be arbitrated.
• Four categories of bus arbitration are:
– Daisy chain: Permissions are passed from the highest-
priority device to the lowest.
– Centralized parallel: Each device is directly connected
to an arbitration circuit.
– Distributed using self-selection: Devices decide which
gets the bus among themselves.
– Distributed using collision detection: Any device can try
to use the bus. If its data collides with the data of another
device, it tries again.
4.3 The Bus
11
4.4 Clocks
• Every computer contains at least one clock that
synchronizes the activities of its components.
• A fixed number of clock cycles are required to carry
out each data movement or computational operation.
• The clock frequency, measured in megahertz or
gigahertz, determines the speed with which all
operations are carried out.
• Clock cycle time is the reciprocal of clock frequency.
– An 800 MHz clock has a cycle time of 1.25 ns.
12
• Clock speed should not be confused with CPU
performance.
• The CPU time required to run a program is given by
the general performance equation:
– We see that we can improve CPU throughput when we
reduce the number of instructions in a program, reduce the
number of cycles per instruction, or reduce the number of
nanoseconds per clock cycle.
We will return to this important equation in later chapters.
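The general performance equation is CPU time = instruction count × cycles per instruction (CPI) × clock cycle time. A quick sketch with made-up numbers (not from the slides):

```python
def cpu_time(instruction_count, cpi, clock_hz):
    """seconds = instructions × (cycles/instruction) × (seconds/cycle)"""
    return instruction_count * cpi / clock_hz

# Hypothetical program: one billion instructions averaging 2 cycles each,
# on an 800 MHz clock (cycle time 1.25 ns).
print(cpu_time(1_000_000_000, 2, 800e6), 'seconds')   # 2.5 seconds
```

Halving any one factor, fewer instructions, fewer cycles per instruction, or a shorter cycle time, halves the CPU time.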
4.4 Clocks
13
4.5 The Input/Output Subsystem
• A computer communicates with the outside world
through its input/output (I/O) subsystem.
• I/O devices connect to the CPU through various
interfaces.
• I/O can be memory-mapped, where the I/O device
behaves like main memory from the CPU’s point of
view.
• Or I/O can be instruction-based, where the CPU has
a specialized I/O instruction set.
We study I/O in detail in chapter 7.
14
4.6 Memory Organization
• Computer memory consists of a linear array of
addressable storage cells that are similar to registers.
• Memory can be byte-addressable, or word-
addressable, where a word typically consists of two or
more bytes.
• Memory is constructed of RAM chips, often referred to
in terms of length × width.
• If the memory word size of the machine is 16 bits,
then a 4M × 16 RAM chip gives us 4M memory
locations, each 16 bits wide.
15
• How does the computer access a memory location
corresponding to a particular address?
• We observe that 4M can be expressed as 2² × 2²⁰ =
2²² words.
• The memory locations for this memory are numbered
0 through 2²² – 1.
• Thus, the memory bus of this system requires at
least 22 address lines.
– The address lines “count” from 0 to 2²² – 1 in binary. Each
line is either “on” or “off” indicating the location of the
desired memory element.
4.6 Memory Organization
16
• Physical memory usually consists of more than one
RAM chip.
• Access is more efficient when memory is organized
into banks of chips with the addresses interleaved
across the chips
• With low-order interleaving, the low order bits of the
address specify which memory bank contains the
address of interest.
• Accordingly, in high-order interleaving, the high order
address bits specify the memory bank.
The next two slides illustrate these two ideas.
4.6 Memory Organization
17
4.6 Memory Organization
• Example: Suppose we have a memory consisting of
16 2K × 8-bit chips.
– Memory is 32K = 2⁵ × 2¹⁰ = 2¹⁵ bytes.
– 15 bits are needed for each
address.
– We need 4 bits to select the chip,
and 11 bits for the offset into the
chip that selects the byte.
18
4.6 Memory Organization
• In high-order interleaving the high-order
4 bits select the chip.
• In low-order interleaving the low-order
4 bits select the chip.
19
4.6 Memory Organization
20
4.6 Memory Organization
21
4.6 Memory Organization
• EXAMPLE 4.1 Suppose we have a 128-word
memory that is 8-way low-order interleaved
– which means it uses 8 memory banks; 8 = 2³.
• So we use the low-order 3 bits to identify the bank.
• Because we have 128 words, we need 7 bits for
each address (128 = 2⁷).
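EXAMPLE 4.1 can be sketched in code (our own helper names; 7-bit addresses with 3 bank-select bits are the example’s assumptions):

```python
def low_order_bank(address, bank_bits=3):
    """Low-order interleaving: the low-order bits select the bank."""
    return address & ((1 << bank_bits) - 1), address >> bank_bits

def high_order_bank(address, addr_bits=7, bank_bits=3):
    """High-order interleaving: the high-order bits select the bank."""
    offset_bits = addr_bits - bank_bits
    return address >> offset_bits, address & ((1 << offset_bits) - 1)

# Address 0100101 in a 128-word, 8-bank memory:
print(low_order_bank(0b0100101))     # (5, 4): bank 101, word 0100 in the bank
print(high_order_bank(0b0100101))    # (2, 5): bank 010, word 0101 in the bank
```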
22
4.7 Interrupts
• The normal execution of a program is altered when
an event of higher-priority occurs. The CPU is
alerted to such an event through an interrupt.
• Interrupts can be triggered by I/O requests,
arithmetic errors (such as division by zero), or when
an invalid instruction is encountered.
• Each interrupt is associated with a procedure that
directs the actions of the CPU when an interrupt
occurs.
– Nonmaskable interrupts are high-priority interrupts that
cannot be ignored.
Chapter 1
Introduction
2
Chapter 1 Objectives
• Know the difference between computer organization
and computer architecture.
• Understand units of measure common to computer
systems.
• Appreciate the evolution of computers.
• Understand the computer as a layered system.
• Be able to explain the von Neumann architecture and
the function of basic computer components.
3
Why study computer organization and
architecture?
– Design better programs, including system
software such as compilers, operating systems,
and device drivers.
– Optimize program behavior.
– Evaluate (benchmark) computer system
performance.
– Understand time, space, and price tradeoffs.
1.1 Overview
4
1.1 Overview
• Computer organization
– Encompasses all physical aspects of computer
systems.
– E.g., circuit design, control signals, memory types.
– How does a computer work?
• Computer architecture
– Logical aspects of system implementation as seen by the
programmer.
– E.g., instruction sets, instruction formats, data types,
addressing modes.
– How do I design a computer?
5
1.2 Computer Components
• There is no clear distinction between matters
related to computer organization and matters
relevant to computer architecture.
• Principle of Equivalence of Hardware and
Software:
– Any task done by software can also be done using
hardware, and any operation performed directly by
hardware can be done using software.*
* Assuming speed is not a concern.
6
• At the most basic level, a computer is a
device consisting of three pieces:
– A processor to interpret and execute programs
– A memory to store both data and programs
– A mechanism for transferring data to and from
the outside world.
1.2 Computer Components
7
1.3 An Example System
What does it all mean??
Consider this advertisement:
8
Measures of capacity and speed:
• Kilo- (K) = 1 thousand = 10³ and 2¹⁰
• Mega- (M) = 1 million = 10⁶ and 2²⁰
• Giga- (G) = 1 billion = 10⁹ and 2³⁰
• Tera- (T) = 1 trillion = 10¹² and 2⁴⁰
• Peta- (P) = 1 quadrillion = 10¹⁵ and 2⁵⁰
• Exa- (E) = 1 quintillion = 10¹⁸ and 2⁶⁰
• Zetta- (Z) = 1 sextillion = 10²¹ and 2⁷⁰
• Yotta- (Y) = 1 septillion = 10²⁴ and 2⁸⁰
1.3 An Example System
Whether a metric refers to a power of ten or a power of
two typically depends upon what is being measured.
9
• Hertz = clock cycles per second (frequency)
– 1MHz = 1,000,000Hz
– Processor speeds are measured in MHz or GHz.
• Byte = a unit of storage
– 1KB = 2¹⁰ = 1,024 Bytes
– 1MB = 2²⁰ = 1,048,576 Bytes
– 1GB = 2³⁰ = 1,073,741,824 Bytes
– Main memory (RAM) is measured in GB.
– Disk storage is measured in GB for small systems, TB
(2⁴⁰) for large systems.
1.3 An Example System
10
1.3 An Example System
Measures of time and space:
• Milli- (m) = 1 thousandth = 10⁻³
• Micro- (μ) = 1 millionth = 10⁻⁶
• Nano- (n) = 1 billionth = 10⁻⁹
• Pico- (p) = 1 trillionth = 10⁻¹²
• Femto- (f) = 1 quadrillionth = 10⁻¹⁵
• Atto- (a) = 1 quintillionth = 10⁻¹⁸
• Zepto- (z) = 1 sextillionth = 10⁻²¹
• Yocto- (y) = 1 septillionth = 10⁻²⁴
11
• Millisecond = 1 thousandth of a second
– Hard disk drive access times are often 10 to 20 milliseconds.
• Nanosecond = 1 billionth of a second
– Main memory access times are often 50 to 70 nanoseconds.
• Micron (micrometer) = 1 millionth of a meter
– Circuits on computer chips are measured in microns.
1.3 An Example System
12
• We note that cycle time is the reciprocal of clock
frequency.
• A bus operating at 133MHz has a cycle time of
7.52 nanoseconds:
1 ÷ 133,000,000 cycles/second ≈ 7.52 ns/cycle
1.3 An Example System
Now back to the advertisement …
13
1.3 An Example System
The microprocessor is the
“brain” of the system. It
executes program instructions.
This one is an Intel i7 running at
3.9GHz.
14
1.3 An Example System
• Computers with large main memory capacity can
run larger programs with greater speed than
computers having small memories.
• RAM is an acronym for random access memory.
Random access means that memory contents
can be accessed directly if you know their location.
• Cache is a type of temporary memory that can be
accessed faster than RAM.
15
1.3 An Example System
… and two levels of cache memory, the level 1 (L1)
cache is smaller and (probably) faster than the L2 cache.
Note that these cache sizes are measured in KB and MB.
This system has 32GB of (fast)
synchronous dynamic RAM
(SDRAM) . . .
16
1.3 An Example System
This one can store 1TB. 7200 RPM is the rotational
speed of the disk. Generally, the faster a disk rotates,
the faster it can deliver data to RAM. (There are
many other factors involved.)
Hard disk capacity determines
the amount of data and size of
programs you can store.
17
1.3 An Example System
ATA stands for advanced technology attachment, which
describes how the hard disk interfaces with (or
connects to) other system components.
A DVD can store about
4.7GB of data. This drive
supports rewritable
DVDs, +/-RW, that can be
written to many times.
16x describes its speed.
18
1.3 An Example System
This system has
ten ports.
Ports allow movement of data
between a system and its external
devices.
19
1.3 An Example System
• Serial ports send data as a series of pulses along
one or two data lines.
• Parallel ports send data as a single pulse along
at least eight data lines.
• USB, Universal Serial Bus, is an intelligent serial
interface that is self-configuring. (It supports
“plug and play.”)
20
1.3 An Example System
System buses can be augmented by
dedicated I/O buses. PCI, Peripheral
Component Interconnect, is one such bus.
This system has two PCIe
(PCI express) devices: a video
card and a sound card.
21
1.3 An Example System
Active matrix technology uses one transistor per picture
element (pixel). The resolution of a monitor determines the
amount of text and graphics that the monitor can display.
Super VGA (SVGA) tells us
this monitor has a resolution
of 1280 × 1024 pixels.
The video card contains memory
and programs that support the
monitor.
22
Throughout the remainder of the book you will
see how these components work and how they
interact with software to make complete
computer systems.
This statement raises two important questions:
What assurance do we have that computer
components will operate as we expect?
And what assurance do we have that
computer components will operate together?
1.3 An Example System
23
• There are many organizations that set
computer hardware standards– to include
the interoperability of computer components.
• Throughout this book, and in your career,
you will encounter many of them.
• Some of the most important standards-
setting groups are . . .
1.4 Standards Organizations
24
• The Institute of Electrical and Electronics
Engineers (IEEE)
– Promotes the interests of the worldwide
electrical engineering community.
– Establishes standards for computer components,
data representation, and signaling protocols,
among many other things.
1.4 Standards Organizations
25
• The International Telecommunications Union
(ITU)
– Concerns itself with the interoperability of
telecommunications systems, including data
communications and telephony.
• National groups establish standards within their
respective countries:
– The American National Standards Institute (ANSI)
– The British Standards Institution (BSI)
1.4 Standards Organizations
26
• The International Organization for
Standardization (ISO)
– Establishes worldwide standards for everything
from screw threads to photographic film.
– Is influential in formulating standards for
computer hardware and software, including their
methods of manufacture.
Note: ISO is not an acronym. ISO comes from the Greek,
isos, meaning “equal.”
1.4 Standards Organizations
27
• To fully appreciate the computers of today, it is
helpful to understand how things got the way they
are.
• The evolution of computing machinery has taken
place over several centuries.
• In modern times computer evolution is usually
classified into four generations according to the
salient technology of the era.
We note that many of the following dates are approximate.
1.5 Historical Development
28
• Generation Zero: Mechanical Calculating Machines
(1642 – 1945)
– Calculating Clock – Wilhelm Schickard (1592 – 1635).
– Pascaline – Blaise Pascal (1623 – 1662).
– Difference Engine – Charles Babbage (1791 – 1871), also
designed but never built the Analytical Engine.
– Punched card tabulating machines – Herman Hollerith
(1860 – 1929).
Hollerith cards were commonly used for
computer input well into the 1970s.
1.5 Historical Development
29
• The First Generation: Vacuum Tube Computers (1945 –
1953)
– Atanasoff Berry Computer (1937 –
1938) solved systems of linear
equations.
– John Atanasoff and Clifford Berry of
Iowa State University.
1.5 Historical Development
30
• The First Generation: Vacuum Tube Computers
(1945 – 1953)
– Electronic Numerical Integrator and
Computer (ENIAC)
– John Mauchly and J. Presper Eckert
– University of Pennsylvania, 1946
• The ENIAC was the first general-purpose
computer.
1.5 Historical Development
31
• The First Generation: Vacuum Tube Computers
(1945 – 1953)
– The IBM 650 was the first mass-produced computer (1955).
° It was phased out in 1969.
– Other major computer manufacturers of this period
include UNIVAC, Engineering Research Associates
(ERA), and Computer Research Corporation (CRC).
° UNIVAC and ERA were bought by Remington Rand, the
ancestor of the Unisys Corporation.
° CRC was bought by the Underwood (typewriter)
Corporation, which left the computer business.
• The Second Generation: Transistorized
Computers (1954 – 1965)
– IBM 7094 (scientific) and 1401 (business)
– Digital Equipment Corporation (DEC) PDP-1
– Univac 1100
– Control Data Corporation 1604.
– . . . and many others.
These systems had few architectural similarities.
• The Third Generation: Integrated Circuit
Computers (1965 – 1980)
– IBM 360
– DEC PDP-8 and PDP-11
– Cray-1 supercomputer
– . . . and many others.
• By this time, IBM had gained overwhelming
dominance in the industry.
– Computer manufacturers of this era were characterized
as IBM and the BUNCH (Burroughs, UNIVAC, NCR,
Control Data, and Honeywell).
• The Fourth Generation: VLSI Computers
(1980 – ????)
– Very large scale integrated circuits (VLSI) have more
than 10,000 components per chip.
– Enabled the creation of microprocessors.
– The first was the 4-bit Intel 4004.
– Later versions, such as the 8080, 8086, and 8088
spawned the idea of “personal computing.”
• Moore’s Law (1965)
– Gordon Moore, Intel co-founder
– “The density of transistors in an integrated circuit will
double every year.”
• Contemporary version:
– “The density of silicon chips doubles every 18 months.”
But this “law” cannot hold forever …
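The doubling arithmetic behind the contemporary statement of Moore's Law can be made concrete with a short sketch (my illustration, not part of the slides): growth after a given time is simply 2 raised to the number of 18-month doubling periods that have elapsed.

```python
def density_factor(months, doubling_period_months=18):
    """Growth factor for chip density after `months` months,
    assuming density doubles every `doubling_period_months` months."""
    return 2 ** (months / doubling_period_months)

# After 3 years (36 months = two doubling periods), density is 4x.
print(density_factor(36))   # 4.0

# After 4.5 years (54 months = three doubling periods), density is 8x.
print(density_factor(54))   # 8.0
```

This same compound-doubling calculation is what the Moore's Law questions in the assignment ask you to reason about.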
• Rock’s Law
– Arthur Rock, Intel financier
– “The cost of capital equipment to build semiconductors
will double every four years.”
– In 1968, a new chip plant cost about $12,000.
At the time, $12,000 would buy a nice home in
the suburbs.
An executive earning $12,000 per year was
“making a very comfortable living.”
• Rock’s Law
– In 2012, chip plants under construction cost well
over $5 billion.
– For Moore’s Law to hold, Rock’s Law must fall, or
vice versa. But no one can say which will give out
first.
$5 billion is more than the gross domestic
product of some small countries, including
Barbados, Mauritania, and Rwanda.
• Computers consist of many things besides
chips.
• Before a computer can do anything worthwhile,
it must also use software.
• Writing complex programs requires a “divide
and conquer” approach, where each program
module solves a smaller problem.
• Complex computer systems employ a similar
technique through a series of virtual machine
layers.
1.6 The Computer Level Hierarchy
• Each virtual machine layer
is an abstraction of the level
below it.
• The machines at each level
execute their own particular
instructions, calling upon
machines at lower levels to
perform tasks as required.
• Computer circuits ultimately
carry out the work.
• Level 6: The User Level
– Program execution and user interface level.
– The level with which we are most familiar.
• Level 5: High-Level Language Level
– The level with which we interact when we write
programs in languages such as C, Pascal, Lisp, and
Java.
• Level 4: Assembly Language Level
– Acts upon assembly language produced from
Level 5, as well as instructions programmed
directly at this level.
• Level 3: System Software Level
– Controls executing processes on the system.
– Protects system resources.
– Assembly language instructions often pass
through Level 3 without modification.
• Level 2: Machine Level
– Also known as the Instruction Set Architecture
(ISA) Level.
– Consists of instructions that are particular to the
architecture of the machine.
– Programs written in machine language need no
compilers, interpreters, or assemblers.
• Level 1: Control Level
– A control unit decodes and executes instructions
and moves data through the system.
– Control units can be microprogrammed or
hardwired.
– A microprogram is a program written in a low-
level language that is implemented by the
hardware.
– Hardwired control units consist of hardware that
directly executes machine instructions.
• Level 0: Digital Logic Level
– This level is where we find digital circuits (the
chips).
– Digital circuits consist of gates and wires.
– These components implement the mathematical
logic of all other levels.
• On the ENIAC, all programming was done at
the digital logic level.
• Programming the computer involved moving
plugs and wires.
• A different hardware configuration was needed
to solve every unique problem type.
1.8 The von Neumann Model
Configuring the ENIAC to solve a “simple” problem
required many days labor by skilled technicians.
• Inventors of the ENIAC, John Mauchly and
J. Presper Eckert, conceived of a computer
that could store instructions in memory.
• The invention of this idea has since been
ascribed to a mathematician, John von
Neumann, who was a contemporary of
Mauchly and Eckert.
• Stored-program computers have become
known as von Neumann Architecture systems.
• Today’s stored-program computers have the
following characteristics:
– Three hardware systems:
• A central processing unit (CPU)
• A main memory system
• An I/O system
– The capacity to carry out sequential instruction
processing.
– A single data path between the CPU and main memory.
• This single path is known as the von Neumann
bottleneck.
• A general depiction of a von Neumann system
connects the CPU, main memory, and the I/O
system through a single bus.
• These computers
employ a fetch-
decode-execute
cycle to run
programs as
follows . . .
• The control unit fetches the next instruction from memory using
the program counter to determine where the instruction is
located.
• The instruction is decoded into a language that the ALU
can understand.
• Any data operands required to execute the instruction
are fetched from memory and placed into registers
within the CPU.
• The ALU executes the instruction and places results in
registers or memory.
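The four steps above can be sketched as a toy simulator (my illustration, not from the slides) for a hypothetical accumulator machine with three made-up instructions, LOAD, ADD, and HALT, each stored in memory alongside its data:

```python
# Memory holds both instructions and data (the von Neumann principle).
# Instructions are (opcode, operand_address) pairs at low addresses.
memory = {
    0: ("LOAD", 10),    # AC <- memory[10]
    1: ("ADD", 11),     # AC <- AC + memory[11]
    2: ("HALT", None),
    10: 5,              # data operand
    11: 7,              # data operand
}

pc = 0   # program counter: address of the next instruction
ac = 0   # accumulator register inside the CPU

while True:
    opcode, addr = memory[pc]   # FETCH via pc, then DECODE into fields
    pc += 1                     # advance the program counter
    if opcode == "LOAD":
        ac = memory[addr]       # operand fetch: memory -> CPU register
    elif opcode == "ADD":
        ac = ac + memory[addr]  # EXECUTE: ALU result kept in a register
    elif opcode == "HALT":
        break

print(ac)   # 12
```

Because the same dictionary stores instructions and data, a buggy store to address 0–2 would overwrite the program itself, which is exactly the self-modification risk Question 1(a) asks about.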
• This chapter has given you an overview of the
subject of computer architecture.
• You should now be sufficiently familiar with
general system structure to guide your studies
throughout the remainder of this course.
• Subsequent chapters will explore many of these
topics in great detail.
Conclusion