8. Virtual Memory

8.1 Principles of Virtual Memory
8.2 Implementations of Virtual Memory
  – Paging
  – Segmentation
  – Paging With Segmentation
  – Paging of System Tables
  – Translation Look-aside Buffers
8.3 Memory Allocation in Paged Systems
  – Global Page Replacement Algorithms
  – Local Page Replacement Algorithms
  – Load Control and Thrashing
  – Evaluation of Paging
Operating Systems
1
Principles of Virtual Memory
• For each process, the system creates the illusion of large contiguous memory space(s)
• Relevant portions of Virtual Memory (VM) are loaded automatically and transparently
• Address Map translates Virtual Addresses to Physical Addresses
Principles of Virtual Memory
• Every process has its own VM
• Single-segment VM:
  – One area of 0..n-1 words
  – Divided into fixed-size pages
• Multiple-segment VM:
  – Multiple areas of up to 0..n-1 words
  – Each holds a logical segment (e.g., function, data structure)
  – Each logical segment
    • may be contiguous, or
    • may be divided into pages
Main Issues in VM Design
• Address mapping
– How to translate virtual addresses to physical addresses
• Placement
– Where to place a portion of VM needed by a process
• Replacement
– Which portion of VM to remove when space is needed
• Load control
– How much of VM to load at any one time
• Sharing
– How can processes share portions of their VMs
Paged Virtual Memory
• VM is divided into fixed-size pages: page_size = 2^|w|
• PM (physical memory) is divided into 2^|f| page frames:
  frame_size = page_size = 2^|w|
• Virtual address: va = (p,w)
• Physical address: pa = (f,w)
  – |p| determines the number of pages in VM, 2^|p|
  – |f| determines the number of frames in PM, 2^|f|
  – |w| determines the page/frame size, 2^|w|
• System loads pages into frames and translates addresses
Paged VM Address Translation
• How to keep track of loaded pages and translate addresses?
Paged VM Address Translation
• One solution: Frame Table:
– One entry, FT[i], for each frame
– records which page it contains
FT[i].pid records process ID
FT[i].page records page number p
– Address translation:
  • Given (id,p,w), search for FT[f] where (id,p) matches
  • If a matching entry is found, its index f is the frame number
  • pa = f·2^|w| + w (concatenate frame number f and offset w)
Address Translation via Frame Table
• Drawbacks
  – Costly: search must be done in parallel in hardware
  – Sharing of pages: not possible
Address Translation via Page Table
• Page Table (PT): one for each VM (not PM)
• Think of PT as an array: PT[p] = f means page p resides in frame f
• Implementation: Page Table Register (PTR) points at PT at run time
• Address translation: pa = *(PTR+p)+w;
• Drawback: extra memory access
Example: address translation
• Assume:
  – page size = 512 words
  – |VA| = 32 bits
  – |PA| = 24 bits
• Page table (partial):
  Page#  Frame#
  0      8
  1      4
  2      0
  …      …
• Translate the following virtual addresses to physical: 512, 30, 1024
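The translations can be checked mechanically. A minimal C sketch (the `translate` helper and the three-entry table are mine, not from the slides; with a 512-word page, |w| = 9 bits):

```c
#include <assert.h>
#include <stdint.h>

/* Toy parameters from the example: page size 512 words -> |w| = 9 bits. */
#define W_BITS 9
#define PAGE_SIZE (1u << W_BITS)   /* 512 */

/* Page table from the slide: page 0 -> frame 8, 1 -> 4, 2 -> 0. */
static const uint32_t PT[3] = { 8, 4, 0 };

/* Split va = (p,w), look up the frame, rebuild pa = (f,w). */
uint32_t translate(uint32_t va) {
    uint32_t p = va >> W_BITS;          /* page number        */
    uint32_t w = va & (PAGE_SIZE - 1);  /* offset within page */
    uint32_t f = PT[p];                 /* frame number       */
    return (f << W_BITS) | w;           /* physical address   */
}
```

This gives translate(512) = 2048 (page 1, frame 4), translate(30) = 4126 (page 0, frame 8), and translate(1024) = 0 (page 2, frame 0).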
Demand Paging
• Alternative: all pages of VM loaded initially
  – Simple, but maximum size of VM = size of PM
• Demand paging: pages are loaded as needed, on demand
  – An additional bit in the PT shows presence/absence in memory
• A page fault occurs when an absent page is referenced
  – OS is invoked:
    • If there is a free frame: load page, update PT
    • If none is available:
      – replace one (which one? replacement policy)
      – save the replaced page (if modified)
      – load the new page, update PTs
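The fault-handling steps above can be sketched as a toy simulation (all names and sizes are assumed; the victim choice is a stand-in for a real replacement policy, and saving a modified page is omitted):

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES  8   /* toy VM size (assumed) */
#define NFRAMES 4   /* toy PM size (assumed) */

typedef struct { bool present; int frame; } pte_t;

static pte_t pt[NPAGES];           /* all pages initially absent  */
static int   frame_page[NFRAMES];  /* which page each frame holds */
static int   nfree = NFRAMES;      /* frames not yet used         */
static int   victim = 0;           /* trivial rotating "policy"   */
static int   faults = 0;

/* Placeholder victim choice; a real kernel would run its
 * replacement policy (FIFO, LRU, clock, ...) here. */
static int pick_victim(void) { return victim++ % NFRAMES; }

/* Return the frame holding page p, faulting it in if necessary. */
int access_page(int p) {
    if (pt[p].present) return pt[p].frame;    /* present bit set: no fault */
    faults++;
    int f;
    if (nfree > 0) {
        f = NFRAMES - nfree--;                /* use a free frame */
    } else {
        f = pick_victim();                    /* replace one */
        pt[frame_page[f]].present = false;    /* update PT of the evicted page */
    }
    frame_page[f] = p;                        /* load the new page */
    pt[p].present = true;
    pt[p].frame   = f;
    return f;
}

int fault_count(void) { return faults; }
```

Touching pages 0..3 fills the four frames with four faults; re-touching page 0 is a hit; touching page 4 forces a replacement.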
VM using Segmentation
• Multiple contiguous spaces: segments
– More natural match to program/data structure
– Easier sharing (Chapter 9)
• Virtual addresses (s,w) must be translated to physical
addresses pa (but no frames)
• Where/how are segments placed in physical memory?
– Contiguous
– Paged
Contiguous Allocation
• Each segment is contiguous in physical memory
• Segment Table (ST) records starting locations
• Segment Table Register (STR) points to ST
• Address translation for (s,w):
  pa = *(STR+s)+w;
• Drawbacks:
  – Extra memory reference
  – Must also check the segment length:
    |w| gives only the maximum segment length
  – External fragmentation (variable partitions)
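A sketch of the translation with the length check (the ST contents and the `translate_seg` name are illustrative, not from the slides):

```c
#include <assert.h>
#include <stdint.h>

/* One ST entry per segment: start address in PM plus the segment length,
 * so the offset can be bounds-checked (toy values, assumed). */
typedef struct { uint32_t start, length; } st_entry;

static const st_entry ST[2] = {
    { 4096, 100 },   /* segment 0 */
    { 8192, 512 },   /* segment 1 */
};

/* Translate (s,w); returns -1 on a segment-length violation. */
int64_t translate_seg(uint32_t s, uint32_t w) {
    if (w >= ST[s].length) return -1;   /* offset past end of segment */
    return (int64_t)ST[s].start + w;    /* pa = *(STR+s) + w */
}
```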
Paging with segmentation
• To have multiple segments and fixed partitions, each segment is divided into pages
• va = (s,p,w)
  |s| determines the number of segments (size of ST)
  |p| determines the number of pages per segment (size of PT)
  |w| determines the page size
• pa = *(*(STR+s)+p)+w
• Drawback: 2 extra memory references
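The double indirection can be sketched in C (toy tables, names mine; frame numbers are scaled by the page size, as in the earlier example):

```c
#include <assert.h>
#include <stdint.h>

#define W_BITS 9   /* page size 512, as in the earlier example */

/* One page table per segment; entries are frame numbers (toy values). */
static const uint32_t PT0[2] = { 8, 4 };       /* segment 0's pages */
static const uint32_t PT1[1] = { 3 };          /* segment 1's pages */
static const uint32_t *ST[2] = { PT0, PT1 };   /* segment table     */

/* pa = *(*(STR+s)+p)+w, with the frame number scaled by the page size. */
uint32_t translate2(uint32_t s, uint32_t p, uint32_t w) {
    const uint32_t *pt = ST[s];   /* first extra memory reference  */
    uint32_t f = pt[p];           /* second extra memory reference */
    return (f << W_BITS) | w;
}
```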
Paging of System Tables
• ST or PT may be too large to keep in PM
  – Divide ST or PT itself into pages
  – Keep track of them with an additional table
• Paging of ST:
  – ST divided into pages
  – A segment directory keeps track of the ST pages
  – va = (s1,s2,p,w)
  – pa = *(*(*(STR+s1)+s2)+p)+w
• Drawback: 3 extra memory references
Translation Look-aside Buffer
• TLB avoids some memory accesses
  – Keep the most recent address translations in associative memory: (s,p) → f
  – Bypass translation if there is a match on (s,p)
  – If no match, replace one entry
• TLB ≠ cache
  – TLB keeps frame numbers
  – Cache keeps data values
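A minimal software model of the lookup (a real TLB does this search in parallel in hardware; the round-robin replacement here is an assumption):

```c
#include <assert.h>
#include <stdbool.h>

#define TLB_SIZE 4

/* Associative entries: (s,p) -> f, with a valid bit. */
typedef struct { bool valid; int s, p, f; } tlb_entry;
static tlb_entry tlb[TLB_SIZE];
static int next_victim = 0;   /* simple round-robin replacement (assumed) */

/* Return the frame number if (s,p) hits, else -1
 * (on a miss the caller must walk the ST/PT). */
int tlb_lookup(int s, int p) {
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].s == s && tlb[i].p == p)
            return tlb[i].f;    /* hit: bypass ST/PT accesses */
    return -1;                  /* miss */
}

/* After a full table walk, cache the translation, replacing one entry. */
void tlb_insert(int s, int p, int f) {
    tlb[next_victim] = (tlb_entry){ true, s, p, f };
    next_victim = (next_victim + 1) % TLB_SIZE;
}

/* Demo: a cold miss, then a hit after inserting the translation. */
int tlb_demo(void) {
    if (tlb_lookup(0, 1) != -1) return -100;   /* cold miss expected */
    tlb_insert(0, 1, 4);
    return tlb_lookup(0, 1);
}
```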
Memory Allocation with Paging
• Placement policy: Any free frame is OK
• Replacement:
– There are many algorithms
– Main goal: minimize data movement between physical
memory and secondary storage
• How to compare different algorithms:
– Count number of page faults
– Use Reference String (RS) : r0 r1 ... rt …
rt is the (number of the) page referenced at time t
• Two types of replacement strategies:
– Global replacement: Consider all resident pages, regardless
of owner
– Local replacement: Consider only pages of faulting process
Global page replacement
• Optimal (MIN): Replace page that will not be referenced
for the longest time in the future
Time t | 0| 1  2  3  4  5  6  7  8  9 10
RS     |  | c  a  d  b  e  b  a  b  c  d
Frame 0| a| a  a  a  a  a  a  a  a  a  d
Frame 1| b| b  b  b  b  b  b  b  b  b  b
Frame 2| c| c  c  c  c  c  c  c  c  c  c
Frame 3| d| d  d  d  d  e  e  e  e  e  e
IN     |  |              e              d
OUT    |  |              d              a
• Problem: Need entire reference string (i.e., need to know
the future)
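MIN can be simulated directly when the whole reference string is available. A sketch (function names are mine) that reproduces the example's two replacements:

```c
#include <assert.h>
#include <string.h>

/* Replace the resident page whose next use lies farthest in the future.
 * Returns the number of page faults. */
int opt_faults(char frames[4], const char *rs, int n) {
    int faults = 0;
    for (int t = 0; t < n; t++) {
        if (memchr(frames, rs[t], 4)) continue;   /* hit */
        faults++;
        int victim = 0, best = -1;
        for (int i = 0; i < 4; i++) {             /* find farthest next use */
            int next = n;                         /* n = never used again */
            for (int u = t + 1; u < n; u++)
                if (rs[u] == frames[i]) { next = u; break; }
            if (next > best) { best = next; victim = i; }
        }
        frames[victim] = rs[t];                   /* replace it */
    }
    return faults;
}

/* Demo on the example: RS = c a d b e b a b c d, frames a b c d. */
int opt_demo(void) {
    char frames[4] = { 'a', 'b', 'c', 'd' };
    return opt_faults(frames, "cadbebabcd", 10);
}
```

On the example it faults only at t = 5 (replacing d) and t = 10, i.e., twice.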
Global Page Replacement
• Random Replacement: Replace a randomly chosen page
– Simple but
– Does not exploit locality of reference
• Most instructions are sequential
• Most loops are short
• Many data structures are accessed sequentially
Global page replacement
• First-In First-Out (FIFO): Replace oldest page
Time t | 0| 1  2  3  4  5  6  7  8  9 10
RS     |  | c  a  d  b  e  b  a  b  c  d
Frame 0|>a|>a >a >a >a  e  e  e  e >e  d
Frame 1| b| b  b  b  b >b >b  a  a  a >a
Frame 2| c| c  c  c  c  c  c >c  b  b  b
Frame 3| d| d  d  d  d  d  d  d >d  c  c
IN     |  |              e     a  b  c  d
OUT    |  |              a     b  c  d  e
(">" marks the oldest resident page, the next to be replaced)
• Problem:
– Favors recently accessed pages, but
– Ignores when program returns to old pages
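A FIFO sketch over the same reference string (names mine); it reproduces the example's five faults:

```c
#include <assert.h>
#include <string.h>

/* FIFO: the victim pointer cycles through frames in load order. */
int fifo_faults(const char *rs, int n) {
    char frames[4] = { 'a', 'b', 'c', 'd' };   /* initial contents, oldest first */
    int oldest = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        if (memchr(frames, rs[t], 4)) continue;   /* hit: FIFO ignores it */
        frames[oldest] = rs[t];                   /* replace the oldest page */
        oldest = (oldest + 1) % 4;
        faults++;
    }
    return faults;
}
```

On RS = c a d b e b a b c d this evicts a, b, c, d, e in turn: five faults, versus two for MIN.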
Global Page Replacement
• LRU: Replace Least Recently Used page
Time t | 0| 1  2  3  4  5  6  7  8  9 10
RS     |  | c  a  d  b  e  b  a  b  c  d
Frame 0| a| a  a  a  a  a  a  a  a  a  a
Frame 1| b| b  b  b  b  b  b  b  b  b  b
Frame 2| c| c  c  c  c  e  e  e  e  e  d
Frame 3| d| d  d  d  d  d  d  d  d  c  c
IN     |  |              e           c  d
OUT    |  |              c           d  e
Q.end  | d| c  a  d  b  e  b  a  b  c  d
       | c| d  c  a  d  b  e  b  a  b  c
       | b| b  d  c  a  d  d  e  e  a  b
Q.head | a| a  b  b  c  a  a  d  d  e  a
(The queue orders resident pages from most recently used (Q.end) to least
recently used (Q.head); on a fault, the page at Q.head is replaced.)
Global page replacement
• LRU implementation
– Software queue: too expensive
– Time-stamping
• Stamp each referenced page with current time
• Replace page with oldest stamp
– Hardware capacitor with each frame
• Charge at reference
• Charge decays exponentially
• Replace page with smallest charge
– n-bit aging register with each frame
• Shift all registers to right periodically
• Set left-most bit of referenced page to 1
• Replace page with smallest value
– Simpler algorithms that approximate LRU algorithm
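The aging-register scheme can be sketched as follows (8-bit registers and four frames are assumed; names are mine):

```c
#include <assert.h>
#include <stdint.h>

#define NFRAMES 4

static uint8_t age[NFRAMES];    /* one n-bit (here 8-bit) aging register per frame */
static uint8_t used[NFRAMES];   /* use bits set by references in this period */

void reference(int f) { used[f] = 1; }

/* Periodic tick: shift every register right, moving the use bit
 * into the left-most position. */
void age_tick(void) {
    for (int f = 0; f < NFRAMES; f++) {
        age[f] = (uint8_t)((age[f] >> 1) | (used[f] << 7));
        used[f] = 0;
    }
}

/* LRU approximation: the frame with the smallest value
 * was used least recently. */
int lru_victim(void) {
    int victim = 0;
    for (int f = 1; f < NFRAMES; f++)
        if (age[f] < age[victim]) victim = f;
    return victim;
}

/* Demo: frames 0 and 2 used in period 1, only frame 0 in period 2;
 * frames 1 and 3 are never used, so one of them is the victim. */
int aging_demo(void) {
    reference(0); reference(2); age_tick();
    reference(0); age_tick();
    return lru_victim();
}
```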
Global Page Replacement
• Second-chance algorithm
– Approximates LRU
– Implement use-bit u with each frame
– Set u=1 when page referenced
– To select a page:
if u==0, select page
else, set u=0 and consider next frame
– Used page gets a second chance to stay in memory
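A sketch of the clock scan (state and names are mine). Seeded with the example's state at t = 4 (frames a b c d, all use bits set, pointer at a), replaying the references e b a b c d evicts a, c, d, e, matching the table on the next slide:

```c
#include <assert.h>
#include <string.h>

#define NFRAMES 4
static char frames[NFRAMES] = { 'a', 'b', 'c', 'd' };
static int  u[NFRAMES]      = { 1, 1, 1, 1 };   /* use bits, as at t = 4 */
static int  hand = 0;                           /* pointer starts at frame 0 */

/* Reference a page; on a fault, run the clock: skip u==1 frames
 * (clearing their bit), evict the first u==0 frame found.
 * Returns the evicted page, or 0 on a hit. */
char second_chance(char page) {
    char *hit = memchr(frames, page, NFRAMES);
    if (hit) { u[hit - frames] = 1; return 0; }          /* no fault */
    while (u[hand]) { u[hand] = 0; hand = (hand + 1) % NFRAMES; }
    char evicted = frames[hand];
    frames[hand] = page;  u[hand] = 1;                   /* load, set use bit */
    hand = (hand + 1) % NFRAMES;
    return evicted;
}
```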
Global page replacement
• Second-chance algorithm
Time t | … 4    5    6    7    8    9    10
RS     | … b    e    b    a    b    c    d
Frame 0| …>a/1  e/1  e/1  e/1  e/1 >e/1  d/1
Frame 1| … b/1 >b/0 >b/1  b/0  b/1  b/1 >b/0
Frame 2| … c/1  c/0  c/0  a/1  a/1  a/1  a/0
Frame 3| … d/1  d/0  d/0 >d/0 >d/0  c/1  c/0
IN     | …      e         a         c    d
OUT    | …      a         c         d    e
(Notation: page/u, where u is the use bit; ">" marks the pointer position)
Global Page Replacement
• Third-chance algorithm
– Second chance algorithm does not distinguish between read
and write access
• write access is more expensive
– Give modified pages a third chance:
• use-bit u set at every reference (read and write)
• write-bit w set at write reference
• to select a page, cycle through frames, resetting bits, until uw == 00:
    uw  →  uw
    11  →  01
    01  →  00*  (remember modification)
    10  →  00
    00  →  (select page for replacement)
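These transitions can be captured in a small function (the enum encoding is mine; UW00M stands for the 00* state):

```c
#include <assert.h>

/* States encode (u,w) plus the "modification remembered" mark:
 * 11, 01, 10, 00* (UW00M), 00. */
typedef enum { UW11, UW10, UW01, UW00M, UW00 } uw_t;

/* One scan step over a frame; returns 1 when the page
 * may be selected for replacement. */
int third_chance_step(uw_t *s) {
    switch (*s) {
    case UW11:  *s = UW01;  return 0;
    case UW01:  *s = UW00M; return 0;   /* remember the modification */
    case UW10:  *s = UW00;  return 0;
    case UW00M:                          /* selected: must be written back */
    case UW00:  return 1;                /* select page for replacement */
    }
    return 1;
}

/* How many scan passes a page survives before being selectable. */
int survives_passes(uw_t start) {
    uw_t s = start;
    int n = 0;
    while (!third_chance_step(&s)) n++;
    return n;
}
```

A read-only page survives one pass (10 → 00), a modified page two (11 → 01 → 00*), as stated on the next slide.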
Global Page Replacement
• Third-chance algorithm
Read → 10 → 00 → Select
Write → 11 → 01 → 00* → Select
Time t | 0    | 1     2     3     4     5     6     7     8     9     10
RS     |      | c     a(w)  d     b(w)  e     b     a(w)  b     c     d
Frame 0|>a/10 |>a/10 >a/11 >a/11 >a/11  a/00* a/00* a/11  a/11 >a/11  a/00*
Frame 1| b/10 | b/10  b/10  b/10  b/11  b/00* b/10* b/10* b/10* b/10* d/10
Frame 2| c/10 | c/10  c/10  c/10  c/10  e/10  e/10  e/10  e/10  e/10 >e/00
Frame 3| d/10 | d/10  d/10  d/10  d/10 >d/00 >d/00 >d/00 >d/00  c/10  c/00
IN     |      |                         e                       c     d
OUT    |      |                         c                       d     b
(Notation: page/uw; "(w)" marks a write reference; "*" remembers a pending
write-back; ">" marks the pointer position)
Local Page Replacement
• Measurements indicate that every program needs a
minimum set of pages to be resident in memory
– If too few, thrashing occurs (program spends most of
its time loading/saving pages)
– If too many, page frames are wasted
• Goal: maintain an optimal set of pages for each process in
memory
– Size of optimal set changes over time
– Optimal set must be resident
Local Page Replacement
• Optimal (VMIN)
  – Define a sliding window (t, t+τ)
  – τ is a parameter (constant)
  – At any time t, maintain as resident all pages visible in the window
• Guaranteed to generate the minimum number of page faults
• Requires knowledge of the future
Local page replacement
• Optimal (VMIN) with τ = 3

Time t | 0| 1  2  3  4  5  6  7  8  9 10
RS     | d| c  c  d  b  c  e  c  e  a  d
Page a | -| -  -  -  -  -  -  -  -  x  -
Page b | -| -  -  -  x  -  -  -  -  -  -
Page c | -| x  x  x  x  x  x  x  -  -  -
Page d | x| x  x  x  -  -  -  -  -  -  x
Page e | -| -  -  -  -  -  x  x  x  -  -
IN     |  | c        b     e        a  d
OUT    |  |          d  b        c  e  a
• Unrealizable without entire reference string (knowledge of
future)
Local Page Replacement
• Working Set Model:
  – Uses the principle of locality:
    near-future requirements are approximated by recent-past requirements
  – Use a trailing window (instead of a future window)
  – Working set W(t,τ) consists of all pages referenced during the interval (t−τ, t)
  – At time t:
    • Remove all pages not in W(t,τ)
    • Process may run only if its entire W(t,τ) is resident
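A sketch computing W(t,τ), taking the window to be the last τ+1 references (one common reading of the interval (t−τ, t); function names are mine):

```c
#include <assert.h>
#include <string.h>

/* W(t,tau): pages referenced at times t-tau .. t.
 * Returns the working-set size; 'ws' receives the member pages. */
int working_set(const char *rs, int t, int tau, char *ws) {
    int n = 0;
    int start = (t - tau < 0) ? 0 : t - tau;
    for (int i = start; i <= t; i++)
        if (!memchr(ws, rs[i], n))   /* add each page only once */
            ws[n++] = rs[i];
    return n;
}

/* Demo on the example's references a c c d b c e c e a d (t = 0..10). */
int ws_size(int t) {
    char buf[8];
    return working_set("accdbcecead", t, 3, buf);
}
```

With τ = 3 this gives |W(4,3)| = 3 ({b,c,d}) and |W(10,3)| = 4 ({a,c,d,e}), consistent with the resident sets in the example that follows.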
Local Page Replacement
• Working Set Model with τ = 3 (pages e and d were also referenced just
  before t = 0, so they start out resident)

Time t | 0| 1  2  3  4  5  6  7  8  9 10
RS     | a| c  c  d  b  c  e  c  e  a  d
Page a | x| x  x  x  -  -  -  -  -  x  x
Page b | -| -  -  -  x  x  x  x  -  -  -
Page c | -| x  x  x  x  x  x  x  x  x  x
Page d | x| x  x  x  x  x  x  -  -  -  x
Page e | x| x  -  -  -  -  x  x  x  x  x
IN     |  | c        b     e        a  d
OUT    |  |    e     a        d  b
• Drawback: costly to implement
• Approximate (aging registers, time stamps)
Local Page Replacement
• Page fault frequency (PFF)
• Conflicting goals of any page replacement policy
– Keep frequency of page faults acceptably low
– Keep resident page set from growing unnecessarily large
• PFF: measure and use frequency of page faults directly
• Uses a parameter τ
• Only adjust the resident set when a page fault occurs:
  – If time between page faults ≤ τ (too many faults):
    • Add the new page to the resident set
  – If time between page faults > τ (could run with fewer pages):
    • Add the new page to the resident set
    • Remove all pages not referenced since the last page fault
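The PFF rule can be sketched as a small simulation (names mine), seeded with the example's initial resident set {a, d, e}; on the example's reference string with τ = 2 it generates five faults:

```c
#include <assert.h>
#include <string.h>

#define MAXP 8
static char resident[MAXP] = { 'a', 'd', 'e' };  /* resident set at t = 0 */
static int  nres = 3;
static int  last_fault = 0;                      /* time of previous fault */
static char since[MAXP];                         /* referenced since then */
static int  nsince = 0;

static int member(const char *set, int n, char p) {
    return memchr(set, p, (size_t)n) != 0;
}

/* One reference at time t; tau is the PFF parameter.
 * Returns 1 if the reference faulted. */
int pff_reference(char page, int t, int tau) {
    if (!member(since, nsince, page)) since[nsince++] = page;
    if (member(resident, nres, page)) return 0;      /* no fault */
    if (t - last_fault > tau) {                      /* faults are rare: shrink */
        int keep = 0;                                /* drop pages unused since last fault */
        for (int i = 0; i < nres; i++)
            if (member(since, nsince, resident[i]))
                resident[keep++] = resident[i];
        nres = keep;
    }
    resident[nres++] = page;                         /* always add the new page */
    last_fault = t;
    nsince = 0;
    since[nsince++] = page;
    return 1;
}

int resident_count(void) { return nres; }

/* Demo: references c c d b c e c e a d at t = 1..10, tau = 2. */
int pff_demo(void) {
    const char *rs = "ccdbcecead";
    int faults = 0;
    for (int t = 1; t <= 10; t++)
        faults += pff_reference(rs[t - 1], t, 2);
    return faults;
}
```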
Local Page Replacement
• Page Fault Frequency with τ = 2 (pages a, d, e resident at t = 0)

Time t | 0| 1  2  3  4  5  6  7  8  9 10
RS     |  | c  c  d  b  c  e  c  e  a  d
Page a | x| x  x  x  -  -  -  -  -  x  x
Page b | -| -  -  -  x  x  x  x  x  -  -
Page c | -| x  x  x  x  x  x  x  x  x  x
Page d | x| x  x  x  x  x  x  x  x  -  x
Page e | x| x  x  x  -  -  x  x  x  x  x
IN     |  | c        b     e        a  d
OUT    |  |          ae             bd
Load Control and Thrashing
• Main issues:
  – How to choose the degree of multiprogramming?
    • Each process needs its working set (WS) resident to avoid thrashing
  – When the degree must decrease, which process should be deactivated?
  – When a process is reactivated, which of its pages should be loaded?
• Load Control: policy that sets the number and type of concurrent processes
Load Control and Thrashing
• Choosing the degree of multiprogramming
• Local replacement:
  – WS of any active process must be resident
  – Automatic control
• Global replacement:
  – No WS concept
  – Use CPU utilization as a criterion:
    • more processes increase utilization, but
    • with too many processes, thrashing occurs
  [Figure: CPU utilization vs. degree of multiprogramming N;
   L = mean time between faults, S = mean page fault service time]
Load Control and Thrashing
• How to find Nmax?
  – L = S criterion:
    • Page fault service time S needs to keep up with the mean time
      between page faults L
    • S is known
    • Measure L:
      – if L is greater than S, increase N
      – if L is less than S, decrease N
• This effectively implements a PFF policy
Load Control and Thrashing
• Which process to deactivate?
  – Lowest-priority process
  – Faulting process
  – Last process activated
  – Smallest process
  – Largest process
• Which pages to load when a process is activated?
  – Prepage the last resident set
Evaluation of Paging
• A process references a large fraction of its pages at the start
• Prepaging is important
  – The initial set can be loaded more efficiently than by individual
    page faults
How large should a page be?
• Page size should be small because
  – Only a few instructions are typically executed per page
  – The page fault rate is better
• Page size should be large because large pages need
  – Smaller page tables
  – Less hardware
  – Less I/O overhead
Evaluation of Paging
• Load control is important
  – W: minimum amount of memory needed to avoid thrashing
