Computer Architecture Class Review – Virtual Memory

This is the first part of the review for the Final Exam in COMP 425, Computer Architecture. This review post covers (1) virtual memory.

Lecture 24 Virtual Memory

Why do we need virtual memory?

  1. Caching Problem: Main memory is too small for a single program's address space (use DRAM as a cache for the virtual address space stored on disk). Disk is ~10,000x slower than main memory
  2. Memory Management Problem
    1. Efficient sharing of main memory among multiple processes (each process gets the same uniform linear address space, giving it the illusion that it owns the memory exclusively)
    2. Multiple programs might not fit in main memory at once (the caching problem)
  3. Memory Protection Problem
    1. Provide isolation between different running tasks
      1. One process can’t interfere with another’s memory
      2. User program cannot access privileged kernel information

DRAM Cache Organization

  • Features
    • DRAM is ~10x slower than SRAM, but ~10,000x faster than disk
  • Design choices
    • Large page size: typically 4-8 KB, sometimes 4 MB
    • Fully associative
      • Any VP (virtual page) can be placed in any physical page
      • Requires a large mapping function (the page table)
    • Sophisticated replacement algorithms (a miss costs a disk access, so the OS can afford careful victim selection)
    • Write-back rather than write-through
      • i.e., write a page back to disk only when it is evicted (and has been modified)
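The write-back policy above can be sketched as follows. This is a minimal toy model, not any real OS code: a dirty bit is set on each store, and the page is written to the backing store only at eviction time, so many stores cost at most one disk write (the `PhysicalPage` name and the dict-as-disk are assumptions for illustration).

```python
# Toy sketch of write-back caching with a dirty bit: the "disk" is only
# updated when a modified page is evicted, never on every store.

class PhysicalPage:
    def __init__(self, vpn, data):
        self.vpn = vpn
        self.data = data
        self.dirty = False        # set on write, cleared by write-back

disk = {0: "aaaa", 1: "bbbb"}     # backing store: vpn -> page contents
frame = PhysicalPage(0, disk[0])
disk_writes = 0

def store(page, data):
    """Write to the cached copy only; mark the frame dirty."""
    page.data = data
    page.dirty = True

def evict(page):
    """Write back to disk only if the page was modified."""
    global disk_writes
    if page.dirty:
        disk[page.vpn] = page.data
        disk_writes += 1

store(frame, "AAAA")
store(frame, "XXXX")   # two stores, still zero disk writes
evict(frame)           # a single write-back covers both stores
print(disk_writes, disk[0])   # 1 XXXX
```

Write-through would instead pay a disk write on every store, which is why it is a poor fit when the lower level is ~10,000x slower.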

Address Translation (Virtual to Physical) and Page Table working protocols

  • Address Composition (example: 14-bit virtual address, 12-bit physical address, 64-byte pages)
    • Virtual address
      • virtual page number (VPN) [high-order 8 bits]
      • virtual page offset (VPO) [low-order 6 bits]
    • Physical address
      • physical page number (PPN) [high-order 6 bits]
      • physical page offset (PPO) [low-order 6 bits]
    • Note: the virtual address space is larger than the physical address space
  • VPO == PPO; only the page number is translated (VPN -> PPN)
  • Page Hit: reference to a VM word that is in physical memory
  • Page Miss (page fault): reference to a VM word not in physical memory
    • Choose a victim page and evict it (writing it back to disk if dirty)
    • Load the requested page from disk into the freed physical page
    • Restart the offending instruction
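The translation steps above can be sketched in a few lines, using the 14-bit VA / 12-bit PA / 64-byte-page example: split the virtual address into VPN and VPO, look up the PPN, and splice the unchanged offset back on. The dict-based page table and the specific VPN/PPN values are assumptions for illustration.

```python
# Minimal sketch of virtual-to-physical address translation
# (64-byte pages -> 6 offset bits; VPO is copied through as PPO).

VPO_BITS = 6

page_table = {0x28: 0x17}              # example mapping: VPN 0x28 -> PPN 0x17

def translate(va):
    vpn = va >> VPO_BITS               # high-order bits select the page
    vpo = va & ((1 << VPO_BITS) - 1)   # low-order bits are the offset
    if vpn not in page_table:
        raise KeyError(f"page fault on VPN {vpn:#x}")   # miss: OS would load the page
    ppn = page_table[vpn]
    return (ppn << VPO_BITS) | vpo     # PPO == VPO, unchanged

print(hex(translate(0x0A1F)))          # VPN 0x28, offset 0x1F -> 0x5df
```

On a real machine the MMU does this split in hardware; only the page-number bits pass through the page table.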

Why Does Virtual Memory Work? (As a Cache)

  • Programs tend to have locality (temporal and spatial)
    • At any point in time, programs tend to access a set of active virtual pages called the working set
    • If the working set size < main memory size -> good performance (after the initial cold misses)
    • If the sum of the working set sizes > main memory size -> thrashing, a performance meltdown where pages are swapped in and out of main memory continuously
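The working-set effect can be demonstrated with a toy fault counter. This is a sketch under simplifying assumptions (LRU replacement, a cyclic reference pattern): when the working set fits in memory, only the cold misses fault; shrink memory by a single page and every reference faults.

```python
# Count page faults under LRU replacement for a given reference string.
from collections import OrderedDict

def count_faults(refs, num_frames):
    frames = OrderedDict()                   # vpn -> None, ordered by recency
    faults = 0
    for vpn in refs:
        if vpn in frames:
            frames.move_to_end(vpn)          # hit: refresh recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)   # evict the LRU victim
            frames[vpn] = None
    return faults

refs = [0, 1, 2, 3] * 10            # working set of 4 pages, 40 references
print(count_faults(refs, 4))        # 4: only the cold misses
print(count_faults(refs, 3))        # 40: thrashing, every reference faults
```

The cyclic scan is LRU's worst case, which makes the cliff especially sharp here; the qualitative point (fits -> cheap, doesn't fit -> continuous swapping) holds regardless.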

VM as a Tool for Memory Management and Protection

  • Memory allocation
    • Each virtual page can be mapped to any physical page
    • A virtual page can be stored in different physical pages at different times
  • Sharing code and data among processes
    • Map multiple virtual pages to the same physical page, such as read-only library code
  • Extend the Page Table Entries with permission bits: prevents user code from accessing kernel code
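The permission-bit check can be sketched as follows. The SUP/READ/WRITE fields are illustrative (modeled on a typical textbook PTE, not a specific hardware format): a user-mode access to a supervisor page, or a write to a read-only page, raises a protection fault before any memory access happens.

```python
# Hypothetical sketch of a page-table entry with permission bits.
from dataclasses import dataclass

@dataclass
class PTE:
    ppn: int
    sup: bool      # page accessible only in kernel (supervisor) mode
    read: bool
    write: bool

def check_access(pte, is_kernel, is_write):
    """Return the PPN if the access is allowed, else fault."""
    if pte.sup and not is_kernel:
        raise PermissionError("protection fault: user access to kernel page")
    if is_write and not pte.write:
        raise PermissionError("protection fault: write to read-only page")
    return pte.ppn

# Read-only shared library page: user reads succeed, writes fault.
shared_lib = PTE(ppn=0x2F, sup=False, read=True, write=False)
print(hex(check_access(shared_lib, is_kernel=False, is_write=False)))  # 0x2f
```

Because every access already goes through the page table, this is a convenient single point to enforce protection, which is exactly the observation in the bullet above.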


To speed up page table queries, the TLB (Translation Lookaside Buffer) was introduced: a small, fast hardware cache of recent page table entries. The diagram below demonstrates the workflow of virtual memory.
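The TLB fast path can be sketched as a cache in front of the page table. This is a toy model (an unbounded dict stands in for the TLB, whereas real TLBs are small and set-associative): a hit skips the page-table walk entirely, and a miss walks the table and caches the translation.

```python
# Sketch of TLB lookup: consult the small translation cache first,
# fall back to the in-memory page table only on a miss.

page_table = {5: 9, 6: 2}   # vpn -> ppn (the slow, in-memory table)
tlb = {}                    # cache of recent translations
stats = {"hit": 0, "miss": 0}

def lookup(vpn):
    if vpn in tlb:
        stats["hit"] += 1
        return tlb[vpn]            # fast path: no extra memory access
    stats["miss"] += 1
    ppn = page_table[vpn]          # slow path: walk the page table
    tlb[vpn] = ppn                 # cache it for future references
    return ppn

for vpn in [5, 5, 6, 5]:
    lookup(vpn)
print(stats)                       # {'hit': 2, 'miss': 2}
```

Because of locality, most references hit in the TLB, so the common case costs no extra memory access for translation.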



  • Programmer’s view of virtual memory
    • Each process has its own private linear address space
    • Cannot be corrupted by other processes
  • System view of virtual memory
    • Uses memory efficiently by caching virtual memory pages
      • Efficient only because of locality
    • Simplifies memory management and programming
    • Simplifies protection by providing a convenient point to check permissions
