Computer Organisation and Architecture: An Introduction


Basic functional blocks of a computer: CPU, memory, input-output subsystems, control unit. Data representation: signed number representation, fixed- and floating-point representations, character representation. Computer arithmetic: integer addition and subtraction, ripple-carry adder, carry look-ahead adder, etc.


Division: restoring and non-restoring techniques; floating-point arithmetic. Instruction set architecture of a CPU: registers, instruction execution cycle, RTL interpretation of instructions, addressing modes, instruction set. Case study: instruction sets of a generic CPU. CPU control unit design: hardwired and micro-programmed design approaches; case study: design of a control unit of a simple hypothetical CPU. Memory system design: semiconductor memory technologies, memory organization, memory interleaving, the concept of hierarchical memory organization, cache memory, cache size vs. block size.

Introduction to concepts focusing on enhancing the performance of processors.


Local area networks (LANs) of powerful personal computers and workstations began to replace mainframes and minis by 1990. These individual desktop computers were soon to be connected into larger complexes of computing by wide area networks (WANs). These networks connect inexpensive, powerful desktop machines to form unequaled computing power.

The pervasiveness of the Internet created interest in network computing and, more recently, in grid computing. Grids are geographically distributed platforms of computation. They should provide dependable, consistent, pervasive, and inexpensive access to high-end computational facilities. Table 1.1 associates the major characteristics of the different computing paradigms with each decade of computing, starting from the 1960s. Instruction set design philosophies have likewise evolved over these decades, and this evolution has taken a number of forms. Among these is the philosophy that by doing more in a single instruction, one can use a smaller number of instructions to perform the same job.


A single machine instruction to convert several binary coded decimal (BCD) numbers to binary is an example of how complex some instructions were intended to be. The huge number of addressing modes considered (more than 20 in the VAX machine) further adds to the complexity of instructions. Machines following this philosophy have been referred to as complex instruction set computers (CISCs). An opposing philosophy promotes the optimization of architectures by speeding up those operations that are most frequently used while reducing the instruction complexities and the number of addressing modes.

Machines following this philosophy have been referred to as reduced instruction set computers (RISCs). The majority of contemporary microprocessor chips seem to follow the RISC paradigm. Alongside these architectural advances, rapid progress has also been made in the underlying technology. This includes the development of processors and memories.


Indeed, it is the advances in technology that have fueled the computer industry. The number of transistors that can be integrated on a single chip has grown steadily, roughly doubling every 18 to 24 months. This impressive increase has been made possible by the advances in the fabrication technology of transistors. It should be mentioned that the continuous decrease in the minimum device feature size has led to a continuous increase in the number of devices per chip. Among the consequences is the increase in the number of devices in RAM memories, which in turn helps designers to trade off memory size for speed.
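As a rough illustration of this growth, the following sketch projects device counts under the simplifying assumptions of a fixed 18-month doubling period and an arbitrary starting count of one million transistors; both figures are illustrative, not data from the text.

def projected_transistors(start_count, years, doubling_period_years=1.5):
    # Exponential growth: the count doubles once per doubling period.
    return start_count * 2 ** (years / doubling_period_years)

# Starting from a hypothetical 1 million transistors, project 15 years ahead.
for year in (0, 5, 10, 15):
    print(year, int(projected_transistors(1_000_000, year)))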

The improvement in the feature size provides golden opportunities for introducing improved design styles. We turn now to the assessment of computer performance. In particular, we focus our discussion on a number of performance measures that are used to assess computers. Let us admit at the outset that there are various facets to the performance of a computer. For example, a user of a computer measures its performance based on the time taken to execute a given job (program). On the other hand, a laboratory engineer measures the performance of a system by the total amount of work done in a given time.

While the user considers the program execution time a measure of performance, the laboratory engineer considers the throughput a more important measure of performance. A metric for assessing the performance of a computer helps in comparing alternative designs.

Performance analysis should help answer questions such as: how fast can a program be executed on a given computer? In order to answer such a question, we need to determine the time taken by the computer to execute the job. Clock cycles allow counting unit computations, because the storage of computation results is synchronized with rising (trailing) clock edges. The time required to execute a job by a computer is often expressed in terms of clock cycles; the execution time of a program is the product of its instruction count, the average number of clock cycles per instruction, and the clock cycle time. The average number of clock cycles per instruction (CPI) has therefore been used as an alternate performance measure.
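A minimal sketch of this relationship, using made-up values for the instruction count, CPI, and clock rate:

def cpu_time_seconds(instruction_count, cpi, clock_rate_hz):
    # Total cycles = instructions executed * average cycles per instruction;
    # dividing by the clock rate converts cycles to seconds.
    return instruction_count * cpi / clock_rate_hz

# Hypothetical program: 5 million instructions, CPI of 2.0, 500 MHz clock.
print(cpu_time_seconds(5_000_000, 2.0, 500e6))  # 0.02 s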


The following equation shows how to compute the CPI:

CPI = (sum over instruction types i of CPI_i x I_i) / instruction count,

where I_i is the number of executed instructions of type i and CPI_i is the number of clock cycles needed to execute one instruction of that type. Example: consider computing the overall CPI for a machine A for which the percentage of occurrence and the number of cycles per instruction were recorded for each instruction category when executing a set of benchmark programs. The overall CPI is the sum, over all categories, of each category's cycle count weighted by its frequency of occurrence. This shows the degree of interdependence between the two performance parameters. It is interesting to note here that although MIPS (million instructions per second) has been used as a performance measure for machines, one has to be careful in using it to compare machines having different instruction sets.
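Since the original table's figures are not preserved here, the sketch below applies the equation to a hypothetical instruction mix:

def overall_cpi(mix):
    # mix: (fraction of instruction count, cycles per instruction) per category.
    return sum(fraction * cycles for fraction, cycles in mix)

# Hypothetical mix: ALU 40% at 1 cycle, load/store 30% at 3,
# branch 20% at 4, other 10% at 5 cycles.
print(overall_cpi([(0.40, 1), (0.30, 3), (0.20, 4), (0.10, 5)]))  # 2.6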

This is because MIPS does not track execution time. Consider, for example, measurements made on two different machines running a given set of benchmark programs: a machine that executes fewer but more complex instructions can finish the job sooner while still reporting a lower MIPS rating. Yet another argument is the fact that the performance of a machine for a given program as measured by MFLOPS (million floating-point operations per second) cannot be generalized to provide a single performance metric for that machine.
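The pitfall can be demonstrated with made-up numbers: in the sketch below, machine B finishes the program sooner yet reports a far lower MIPS rating, because it executes fewer but more expensive instructions.

def exec_time_s(instr_count, cpi, clock_hz):
    return instr_count * cpi / clock_hz

def mips(instr_count, time_s):
    # MIPS = instruction count / (execution time * 10^6)
    return instr_count / (time_s * 1e6)

# Machine A: many simple instructions; machine B: fewer, complex ones (hypothetical).
t_a = exec_time_s(10_000_000, 1, 100e6)  # 0.10 s
t_b = exec_time_s(2_000_000, 4, 100e6)   # 0.08 s, i.e. B is faster
print(mips(10_000_000, t_a), mips(2_000_000, t_b))  # 100.0 vs 25.0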

The performance of a machine regarding one particular program might not be interesting to a broad audience. The arithmetic and geometric means are the most popular ways to summarize performance regarding larger sets of programs, e.g., benchmark suites. The arithmetic mean of n execution times is their sum divided by n, while the geometric mean is the nth root of their product; the geometric mean is the usual choice when the individual times have been normalized with respect to a reference machine. Speedup is another widely used metric. In this case, we consider speedup as a measure of how a machine performs after some enhancement relative to its original performance.
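A minimal sketch of both summary statistics over a set of hypothetical execution times:

import math

def arithmetic_mean(times):
    return sum(times) / len(times)

def geometric_mean(times):
    # nth root of the product of n values.
    return math.prod(times) ** (1 / len(times))

times = [2.0, 4.0, 8.0]  # hypothetical benchmark execution times in seconds
print(arithmetic_mean(times))  # 4.666...
print(geometric_mean(times))   # 4.0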

However, sometimes it may be possible to achieve a performance enhancement for only a fraction of the time, D. In this case a formula is needed to relate the speedup SU_D, due to an enhancement applied for a fraction of the time D, to the speedup SU_o due to an overall enhancement:

SU_D = 1 / ((1 - D) + D / SU_o).

Consider, for example, a machine for which a speedup of 30 is possible after applying an enhancement. If, say, the enhancement is applicable for only 30% of the time, then SU_D = 1 / (0.7 + 0.3/30), which is approximately 1.41, far below the full speedup of 30.
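The formula is straightforward to evaluate; the sketch below uses the speedup of 30 from the example together with a few illustrative fractions D:

def overall_speedup(d, su_o):
    # The enhancement (speedup su_o) applies for a fraction d of the time;
    # the remaining (1 - d) of the time runs unenhanced.
    return 1.0 / ((1.0 - d) + d / su_o)

for d in (0.3, 0.5, 0.9):
    print(d, round(overall_speedup(d, 30), 2))  # 1.41, 1.94, 7.69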

The chapter opened with a brief historical background on the development of computer systems. This was followed by a brief discussion of the technological development and its impact on computing performance. Our coverage in this chapter was concluded with a detailed treatment of the issues involved in assessing the performance of computers.

Exercises

1. What has been the trend in computing from the following points of view?

2. Given the trend in computing in the last 20 years, what are your predictions for the future of computing?

3. Find the meaning of the following: (a) Cluster computing (b) Grid computing (c) Quantum computing (d) Nanotechnology

4. Assume that a switching component such as a transistor can switch in zero time. We propose to construct a disk-shaped computer chip with such a component. The only limitation is the time it takes to send electronic signals from one edge of the chip to the other. Make the simplifying assumption that electronic signals can travel at 300,000 kilometers per second. What is the limitation on the diameter of a round chip so that any computation result can be used anywhere on the chip at a clock rate of 1 GHz? Is such a chip feasible?

5. Compare uniprocessor systems with multiprocessor systems in the following aspects: (a) Ease of programming (b) The need for synchronization (c) Performance evaluation (d) Run time system
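As a sketch of the reasoning behind exercise 4: at 1 GHz the clock period is 1 ns, and a signal must be able to cross the full diameter within one period.

signal_speed_m_s = 3.0e8   # 300,000 km/s, as assumed in the problem statement
clock_rate_hz = 1e9        # 1 GHz clock
clock_period_s = 1 / clock_rate_hz

# The diameter is bounded by how far a signal travels in one clock period.
max_diameter_m = signal_speed_m_s * clock_period_s
print(max_diameter_m)  # 0.3 m, i.e. a diameter of at most 30 cm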


6. Consider having a program that runs in 50 s on computer A, which has a MHz clock. We would like to run the same program on another machine, B, in 20 s. If machine B requires 2.5 times as many clock cycles as machine A to execute the program, what clock rate should machine B have?

7. Suppose that we have two implementations of the same instruction set architecture. Machine A has a clock cycle time of 50 ns and a CPI of 4. Which machine is faster and by how much?

8. A compiler designer is trying to decide between two code sequences for a particular machine. The hardware designers have supplied the following facts:

Instruction class    CPI of the instruction class
A                    1
B                    3
C                    4

For a particular high-level language, the compiler writer is considering two sequences that require the following instruction counts (in millions):

Code sequence    A    B    C
1                2    1    2
2                4    3    1

What is the CPI for each sequence?

Which code sequence is faster? By how much?
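As a sketch of how such a comparison proceeds (using the counts from the tables in exercise 8; both sequences run on the same machine and clock, so total cycles decide which is faster):

cpi_by_class = {"A": 1, "B": 3, "C": 4}
counts_millions = {  # instruction counts per class, in millions
    "sequence 1": {"A": 2, "B": 1, "C": 2},
    "sequence 2": {"A": 4, "B": 3, "C": 1},
}

for name, counts in counts_millions.items():
    cycles = sum(cpi_by_class[c] * n for c, n in counts.items())
    instructions = sum(counts.values())
    print(name, cycles, instructions, round(cycles / instructions, 3))
# sequence 1: 13 M cycles over 5 M instructions (CPI 2.6);
# sequence 2: 17 M cycles over 8 M instructions (CPI 2.125).
# Sequence 1 needs fewer total cycles, so it is faster despite its higher CPI.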


9. Which code sequence will execute faster according to MIPS? And according to execution time?

10. If only one enhancement can be implemented, which should be chosen to maximize the speedup? If two enhancements can be implemented, which should be chosen to maximize the speedup?


Our discussion starts with a consideration of memory locations and addresses.