
Computer Organization and Design: The Hardware/Software Interface

Electronic book. Access details are sent to your email address after purchase.
Ebook to own. An ebook to own must be downloaded to the devices you want to use it on within one year of purchase. Edition: 5

Table of Contents

  • Computer Organization and Design
  • Copyright Page
  • Acknowledgments
  • Contents
  • Preface
    • About This Book
    • About the Other Book
    • Changes for the Fifth Edition
    • Concluding Remarks
    • Acknowledgments for the Fifth Edition
  • 1 Computer Abstractions and Technology
    • 1.1 Introduction
      • Classes of Computing Applications and Their Characteristics
      • Welcome to the PostPC Era
      • What You Can Learn in This Book
    • 1.2 Eight Great Ideas in Computer Architecture
      • Design for Moore’s Law
      • Use Abstraction to Simplify Design
      • Make the Common Case Fast
      • Performance via Parallelism
      • Performance via Pipelining
      • Performance via Prediction
      • Hierarchy of Memories
      • Dependability via Redundancy
    • 1.3 Below Your Program
      • From a High-Level Language to the Language of Hardware
    • 1.4 Under the Covers
      • Through the Looking Glass
      • Touchscreen
      • Opening the Box
      • A Safe Place for Data
      • Communicating with Other Computers
    • 1.5 Technologies for Building Processors and Memory
    • 1.6 Performance
      • Defining Performance
      • Measuring Performance
      • CPU Performance and Its Factors
      • Instruction Performance
      • The Classic CPU Performance Equation
    • 1.7 The Power Wall
    • 1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors
    • 1.9 Real Stuff: Benchmarking the Intel Core i7
      • SPEC CPU Benchmark
      • SPEC Power Benchmark
    • 1.10 Fallacies and Pitfalls
    • 1.11 Concluding Remarks
      • Road Map for This Book
    • 1.12 Historical Perspective and Further Reading
    • 1.13 Exercises
  • 2 Instructions: Language of the Computer
    • 2.1 Introduction
    • 2.2 Operations of the Computer Hardware
    • 2.3 Operands of the Computer Hardware
      • Memory Operands
      • Constant or Immediate Operands
    • 2.4 Signed and Unsigned Numbers
      • Summary
    • 2.5 Representing Instructions in the Computer
      • MIPS Fields
    • 2.6 Logical Operations
    • 2.7 Instructions for Making Decisions
      • Loops
      • Case/Switch Statement
    • 2.8 Supporting Procedures in Computer Hardware
      • Using More Registers
      • Nested Procedures
      • Allocating Space for New Data on the Stack
      • Allocating Space for New Data on the Heap
    • 2.9 Communicating with People
      • Characters and Strings in Java
    • 2.10 MIPS Addressing for 32-bit Immediates and Addresses
      • 32-Bit Immediate Operands
      • Addressing in Branches and Jumps
      • MIPS Addressing Mode Summary
      • Decoding Machine Language
    • 2.11 Parallelism and Instructions: Synchronization
    • 2.12 Translating and Starting a Program
      • Compiler
      • Assembler
      • Linker
      • Loader
      • Dynamically Linked Libraries
      • Starting a Java Program
    • 2.13 A C Sort Example to Put It All Together
      • The Procedure swap
      • Register Allocation for swap
      • Code for the Body of the Procedure swap
      • The Full swap Procedure
      • The Procedure sort
      • Register Allocation for sort
      • Code for the Body of the Procedure sort
      • The Procedure Call in sort
      • Passing Parameters in sort
      • Preserving Registers in sort
      • The Full Procedure sort
    • 2.14 Arrays versus Pointers
      • Array Version of Clear
      • Pointer Version of Clear
      • Comparing the Two Versions of Clear
    • 2.15 Advanced Material: Compiling C and Interpreting Java
    • 2.16 Real Stuff: ARMv7 (32-bit) Instructions
      • Addressing Modes
      • Compare and Conditional Branch
      • Unique Features of ARM
    • 2.17 Real Stuff: x86 Instructions
      • Evolution of the Intel x86
      • x86 Registers and Data Addressing Modes
      • x86 Integer Operations
      • x86 Instruction Encoding
      • x86 Conclusion
    • 2.18 Real Stuff: ARMv8 (64-bit) Instructions
    • 2.19 Fallacies and Pitfalls
    • 2.20 Concluding Remarks
    • 2.21 Historical Perspective and Further Reading
    • 2.22 Exercises
  • 3 Arithmetic for Computers
    • 3.1 Introduction
    • 3.2 Addition and Subtraction
      • Summary
    • 3.3 Multiplication
      • Sequential Version of the Multiplication Algorithm and Hardware
      • Signed Multiplication
      • Faster Multiplication
      • Multiply in MIPS
      • Summary
    • 3.4 Division
      • A Division Algorithm and Hardware
      • Signed Division
      • Faster Division
      • Divide in MIPS
      • Summary
    • 3.5 Floating Point
      • Floating-Point Representation
      • Floating-Point Addition
      • Floating-Point Multiplication
      • Floating-Point Instructions in MIPS
      • Accurate Arithmetic
      • Summary
    • 3.6 Parallelism and Computer Arithmetic: Subword Parallelism
    • 3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86
    • 3.8 Going Faster: Subword Parallelism and Matrix Multiply
    • 3.9 Fallacies and Pitfalls
    • 3.10 Concluding Remarks
    • 3.11 Historical Perspective and Further Reading
    • 3.12 Exercises
  • 4 The Processor
    • 4.1 Introduction
      • A Basic MIPS Implementation
        • An Overview of the Implementation
        • Clocking Methodology
    • 4.2 Logic Design Conventions
    • 4.3 Building a Datapath
      • Creating a Single Datapath
    • 4.4 A Simple Implementation Scheme
      • The ALU Control
      • Designing the Main Control Unit
        • Operation of the Datapath
        • Finalizing Control
      • Why a Single-Cycle Implementation Is Not Used Today
    • 4.5 An Overview of Pipelining
      • Designing Instruction Sets for Pipelining
      • Pipeline Hazards
        • Hazards
        • Data Hazards
        • Control Hazards
      • Pipeline Overview Summary
    • 4.6 Pipelined Datapath and Control
      • Graphically Representing Pipelines
      • Pipelined Control
    • 4.7 Data Hazards: Forwarding versus Stalling
      • Data Hazards and Stalls
    • 4.8 Control Hazards
      • Assume Branch Not Taken
      • Reducing the Delay of Branches
      • Dynamic Branch Prediction
      • Pipeline Summary
    • 4.9 Exceptions
      • How Exceptions Are Handled in the MIPS Architecture
      • Exceptions in a Pipelined Implementation
    • 4.10 Parallelism via Instructions
      • The Concept of Speculation
      • Static Multiple Issue
        • An Example: Static Multiple Issue with the MIPS ISA
      • Dynamic Multiple-Issue Processors
        • Dynamic Pipeline Scheduling
      • Energy Efficiency and Advanced Pipelining
    • 4.11 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Pipelines
      • The ARM Cortex-A8
      • The Intel Core i7 920
      • Performance of the Intel Core i7 920
    • 4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply
    • 4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations
    • 4.14 Fallacies and Pitfalls
    • 4.15 Concluding Remarks
    • 4.16 Historical Perspective and Further Reading
    • 4.17 Exercises
  • 5 Large and Fast: Exploiting Memory Hierarchy
    • 5.1 Introduction
    • 5.2 Memory Technologies
      • SRAM Technology
      • DRAM Technology
      • Flash Memory
      • Disk Memory
    • 5.3 The Basics of Caches
      • Accessing a Cache
      • Handling Cache Misses
      • Handling Writes
      • An Example Cache: The Intrinsity FastMATH Processor
      • Summary
    • 5.4 Measuring and Improving Cache Performance
      • Reducing Cache Misses by More Flexible Placement of Blocks
      • Locating a Block in the Cache
      • Choosing Which Block to Replace
      • Reducing the Miss Penalty Using Multilevel Caches
      • Software Optimization via Blocking
      • Summary
    • 5.5 Dependable Memory Hierarchy
      • Defining Failure
      • The Hamming Single Error Correcting, Double Error Detecting Code (SEC/DED)
    • 5.6 Virtual Machines
      • Requirements of a Virtual Machine Monitor
      • (Lack of) Instruction Set Architecture Support for Virtual Machines
      • Protection and Instruction Set Architecture
    • 5.7 Virtual Memory
      • Placing a Page and Finding it Again
      • Page Faults
      • What about Writes?
      • Making Address Translation Fast: the TLB
        • The Intrinsity FastMATH TLB
      • Integrating Virtual Memory, TLBs, and Caches
      • Implementing Protection with Virtual Memory
      • Handling TLB Misses and Page Faults
      • Summary
    • 5.8 A Common Framework for Memory Hierarchy
      • Question 1: Where Can a Block Be Placed?
      • Question 2: How is a Block Found?
      • Question 3: Which Block Should Be Replaced on a Cache Miss?
      • Question 4: What Happens on a Write?
      • The Three Cs: An Intuitive Model for Understanding the Behavior of Memory Hierarchies
    • 5.9 Using a Finite-State Machine to Control a Simple Cache
      • A Simple Cache
      • Finite-State Machines
      • FSM for a Simple Cache Controller
    • 5.10 Parallelism and Memory Hierarchy: Cache Coherence
      • Basic Schemes for Enforcing Coherence
      • Snooping Protocols
    • 5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks
    • 5.12 Advanced Material: Implementing Cache Controllers
    • 5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies
      • Performance of the A8 and Core i7 Memory Hierarchies
    • 5.14 Going Faster: Cache Blocking and Matrix Multiply
    • 5.15 Fallacies and Pitfalls
    • 5.16 Concluding Remarks
    • 5.17 Historical Perspective and Further Reading
    • 5.18 Exercises
  • 6 Parallel Processors from Client to Cloud
    • 6.1 Introduction
    • 6.2 The Difficulty of Creating Parallel Processing Programs
    • 6.3 SISD, MIMD, SIMD, SPMD, and Vector
      • SIMD in x86: Multimedia Extensions
      • Vector
      • Vector versus Scalar
      • Vector versus Multimedia Extensions
    • 6.4 Hardware Multithreading
    • 6.5 Multicore and Other Shared Memory Multiprocessors
    • 6.6 Introduction to Graphics Processing Units
      • An Introduction to the NVIDIA GPU Architecture
      • NVIDIA GPU Memory Structures
      • Putting GPUs into Perspective
    • 6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors
      • Warehouse-Scale Computers
    • 6.8 Introduction to Multiprocessor Network Topologies
      • Implementing Network Topologies
    • 6.9 Communicating to the Outside World: Cluster Networking
    • 6.10 Multiprocessor Benchmarks and Performance Models
      • Performance Models
      • The Roofline Model
      • Comparing Two Generations of Opterons
    • 6.11 Real Stuff: Benchmarking and Rooflines of the Intel Core i7 960 and the NVIDIA Tesla GPU
    • 6.12 Going Faster: Multiple Processors and Matrix Multiply
    • 6.13 Fallacies and Pitfalls
    • 6.14 Concluding Remarks
    • 6.15 Historical Perspective and Further Reading
    • 6.16 Exercises
  • Appendix A: Assemblers, Linkers, and the SPIM Simulator
    • A.1 Introduction
    • A.2 Assemblers
    • A.3 Linkers
    • A.4 Loading
    • A.5 Memory Usage
    • A.6 Procedure Call Convention
    • A.7 Exceptions and Interrupts
    • A.8 Input and Output
    • A.9 SPIM
    • A.10 MIPS R2000 Assembly Language
    • A.11 Concluding Remarks
    • A.12 Exercises
  • Appendix B: The Basics of Logic Design
    • B.1 Introduction
    • B.2 Gates, Truth Tables, and Logic Equations
    • B.3 Combinational Logic
    • B.4 Using a Hardware Description Language
    • B.5 Constructing a Basic Arithmetic Logic Unit
    • B.6 Faster Addition: Carry Lookahead
    • B.7 Clocks
    • B.8 Memory Elements: Flip-Flops, Latches, and Registers
    • B.9 Memory Elements: SRAMs and DRAMs
    • B.10 Finite-State Machines
    • B.11 Timing Methodologies
    • B.12 Field Programmable Devices
    • B.13 Concluding Remarks
    • B.14 Exercises
  • Index

ABOUT EBOOKS ON HEIMKAUP.IS

Your bookshelf is your own space, and that is where your books are kept. You can access your bookshelf anywhere, anytime, on a computer or smart device. Simple and convenient!

Access your books anywhere
You can reach all your (school) ebooks in an instant, anywhere and anytime, from your bookshelf. No bag, no e-reader, no hassle (let alone excess baggage).

Easy to browse and search
You can flip between pages and chapters however suits you best, and jump straight to specific chapters from the table of contents. The search finds words, chapters, or pages in a single click.

Notes and highlights
You can highlight passages in different colors and write notes in the ebook as you wish. You can even see your classmates' and teacher's notes and highlights if they allow it. Everything in one place.

You decide how the page looks
Adjust the page to your needs. Enlarge or shrink images and text with multi-level zoom to view the page the way that best suits your studies.



More benefits
- You can print pages from the book (within the limits set by the publisher)
- Option to link to other digital and interactive content, such as videos or questions about the material
- Easy to copy and paste content/text for e.g. homework or essays
- Supports assistive technology for students with visual or hearing impairments
Details
Brand: Elsevier
Item number: 9780124077263

Reviews

No reviews

Price: 9.990 kr.