Lec-23
CS 253: Algorithms
Chapter 23: Minimum Spanning Trees
Credit: Dr. George Bebis

Spanning Tree
◦ A tree (i.e., connected, acyclic graph) which contains all the
vertices of the graph

Minimum Spanning Tree
◦ Spanning tree with the minimum sum of weights
[Figure: weighted undirected graph on vertices a–i; the highlighted edges form a minimum spanning tree.]
Spanning forest
◦ If a graph is not connected, then there is a spanning tree for each
connected component of the graph
Sample applications of MST
◦ Find the least expensive way to connect
a set of cities, terminals, computers, etc.
A town has a set of houses. A road connects exactly 2 houses, and the cost of repairing road (u, v) is w(u, v).
Problem: Repair enough roads such that:
1. Everyone stays connected
2. Total repair cost is minimum

[Figure: weighted graph on the houses a–i with road repair costs as edge weights.]
Properties of Minimum Spanning Trees

A minimum spanning tree is not unique

An MST has no cycles (by definition)

Number of edges in an MST: |V| - 1
Prim’s Algorithm

Starts from an arbitrary “root”: VA = {a}

At each step:
◦ Find a light edge crossing (VA, V - VA)
◦ Add this edge to set A (The edges in set A always form a single tree)
◦ Repeat until the tree spans all vertices
[Figure: the example graph; Prim’s algorithm grows a single tree outward from a.]
How to Find Light Edges Quickly?

Use a priority queue Q that contains the vertices not yet included in the tree, i.e., V - VA
VA = {a}, Q = {b, c, d, e, f, g, h, i}

We associate a key with each vertex v in Q:
key[v] = minimum weight of any edge (u, v) connecting v to VA

After adding a new node to VA, we update the keys of all the nodes adjacent to it.
e.g., after adding a to the tree, key[b] = 4 and key[h] = 8
[Figure: the example graph with key values shown next to the vertices in Q.]
Example

[Figure: snapshots of the graph as the tree grows from vertex a.]

Initially: key[a] = 0, all other keys = ∞
Q = {a, b, c, d, e, f, g, h, i}, VA = ∅
Extract-MIN(Q) → a

key[b] = 4, π[b] = a
key[h] = 8, π[h] = a
Q = {b, c, d, e, f, g, h, i}, VA = {a}
Extract-MIN(Q) → b
Example

Q = {c, d, e, f, g, h, i}, VA = {a, b}
key[c] = 8, π[c] = b
key[h] = 8, π[h] = a (unchanged)
Extract-MIN(Q) → c

Q = {d, e, f, g, h, i}, VA = {a, b, c}
key[d] = 7, π[d] = c
key[f] = 4, π[f] = c
key[i] = 2, π[i] = c
Extract-MIN(Q) → i
Example

Q = {d, e, f, g, h}, VA = {a, b, c, i}
key[h] = 7, π[h] = i
key[g] = 6, π[g] = i
Extract-MIN(Q) → f

Q = {d, e, g, h}, VA = {a, b, c, i, f}
key[g] = 2, π[g] = f
key[d] = 7, π[d] = c (unchanged)
key[e] = 10, π[e] = f
Extract-MIN(Q) → g
Example

Q = {d, e, h}, VA = {a, b, c, i, f, g}
key[h] = 1, π[h] = g
Extract-MIN(Q) → h

Q = {d, e}, VA = {a, b, c, i, f, g, h}
Extract-MIN(Q) → d
Example

Q = {e}, VA = {a, b, c, i, f, g, h, d}
key[e] = 9, π[e] = d
Extract-MIN(Q) → e

Q = ∅, VA = {a, b, c, i, f, g, h, d, e}
PRIM(V, E, w, r)                                % r: starting vertex
1.  Q ← ∅
2.  for each u ∈ V                              | O(V) if Q is implemented
3.      do key[u] ← ∞                           | as a min-heap
4.         π[u] ← NIL                           |
5.         INSERT(Q, u)                         |
6.  DECREASE-KEY(Q, r, 0)                       % key[r] ← 0; takes O(lgV)
7.  while Q ≠ ∅                                 executed |V| times
8.      do u ← EXTRACT-MIN(Q)                   takes O(lgV); min-heap operations: O(VlgV)
9.         for each v ∈ Adj[u]                  executed O(E) times total
10.            do if v ∈ Q and w(u, v) < key[v] constant
11.               then π[v] ← u
12.                    DECREASE-KEY(Q, v, w(u, v))   takes O(lgV); total O(ElgV)

Total time: O(VlgV + ElgV) = O(ElgV)
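The pseudocode above can be sketched in Python. Note that Python’s `heapq` has no DECREASE-KEY, so this sketch simulates it by pushing a fresh entry and skipping stale ones on extraction (a standard, equivalent workaround); the `prim` helper and vertex names are illustrative, and the graph is the lecture’s example.

```python
import heapq

def prim(adj, r):
    """Prim's MST. adj: {u: [(v, w), ...]}. Returns (total weight, parent map)."""
    key = {u: float("inf") for u in adj}
    pi = {u: None for u in adj}
    key[r] = 0
    pq = [(0, r)]                      # min-heap of (key, vertex)
    in_tree = set()                    # the set VA
    while pq:
        k, u = heapq.heappop(pq)       # EXTRACT-MIN
        if u in in_tree:
            continue                   # stale entry: key[u] was decreased later
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w             # DECREASE-KEY, simulated by re-pushing
                pi[v] = u
                heapq.heappush(pq, (w, v))
    return sum(key.values()), pi       # key[v] ends as the tree-edge weight into v

# The lecture's example graph
edges = [("a","b",4),("b","c",8),("c","d",7),("d","e",9),("e","f",10),
         ("f","g",2),("g","h",1),("h","a",8),("b","h",11),("c","i",2),
         ("i","h",7),("i","g",6),("c","f",4),("d","f",14)]
adj = {v: [] for v in "abcdefghi"}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

total, pi = prim(adj, "a")
print(total)        # 37
print(pi["h"])      # g, as in the example walkthrough
```

Re-pushing instead of decreasing keeps each heap operation O(lgE) = O(lgV²) = O(lgV), so the overall bound O(ElgV) is unchanged.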
Prim’s Algorithm

Total time: O(ElgV)

Prim’s algorithm is a “greedy” algorithm
◦ Greedy algorithms find solutions based on a sequence of choices which are “locally” optimal at each step.

Nevertheless, Prim’s greedy strategy produces a globally optimal solution!
Kruskal’s Algorithm

Start with each vertex being its own component

Repeatedly merge two components into one by choosing the lightest edge that connects them

Which components to consider at each iteration?
◦ Scan the set of edges in monotonically increasing order by weight and choose the smallest edge that connects two different components.

[Figure: the example graph with components {g, h, f}, {c, i}, {a, b}, {d}, {e}; at this point we would add edge (c, f).]
Example

[Figure: the example graph on vertices a–i.]

Edges in increasing order of weight:
1:  (h, g)
2:  (c, i), (g, f)
4:  (a, b), (c, f)
6:  (i, g)
7:  (c, d), (i, h)
8:  (a, h), (b, c)
9:  (d, e)
10: (e, f)
11: (b, h)
14: (d, f)

Initially: {a}, {b}, {c}, {d}, {e}, {f}, {g}, {h}, {i}
1.  Add (h, g)      {g, h}, {a}, {b}, {c}, {d}, {e}, {f}, {i}
2.  Add (c, i)      {g, h}, {c, i}, {a}, {b}, {d}, {e}, {f}
3.  Add (g, f)      {g, h, f}, {c, i}, {a}, {b}, {d}, {e}
4.  Add (a, b)      {g, h, f}, {c, i}, {a, b}, {d}, {e}
5.  Add (c, f)      {g, h, f, c, i}, {a, b}, {d}, {e}
6.  Ignore (i, g)   {g, h, f, c, i}, {a, b}, {d}, {e}
7.  Add (c, d)      {g, h, f, c, i, d}, {a, b}, {e}
8.  Ignore (i, h)   {g, h, f, c, i, d}, {a, b}, {e}
9.  Add (a, h)      {g, h, f, c, i, d, a, b}, {e}
10. Ignore (b, c)   {g, h, f, c, i, d, a, b}, {e}
11. Add (d, e)      {g, h, f, c, i, d, a, b, e}
12. Ignore (e, f)   {g, h, f, c, i, d, a, b, e}
13. Ignore (b, h)   {g, h, f, c, i, d, a, b, e}
14. Ignore (d, f)   {g, h, f, c, i, d, a, b, e}
Operations on Disjoint Data Sets

Kruskal’s algorithm uses disjoint sets (UNION-FIND: Chapter 21) to determine whether an edge connects vertices in different components

MAKE-SET(u) – creates a new set whose only member is u
FIND-SET(u) – returns a representative element from the set that contains u. It returns the same value for any element in the set
UNION(u, v) – unites the sets that contain u and v, say Su and Sv
◦ E.g.: Su = {r, s, t, u}, Sv = {v, x, y}
       UNION(u, v) = {r, s, t, u, v, x, y}

We saw earlier that FIND-SET can be done in O(lgn) or O(1) time and UNION in O(1) time (see Chapter 21)
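A compact Python sketch of these three operations, assuming the path-compression and union-by-rank heuristics from Chapter 21 (the class and method names are illustrative):

```python
class DisjointSet:
    """Union-find with path compression and union by rank."""
    def __init__(self, elems):
        self.parent = {x: x for x in elems}   # MAKE-SET for every element
        self.rank = {x: 0 for x in elems}

    def find_set(self, u):
        # Path compression: point u (and its ancestors) directly at the root.
        if self.parent[u] != u:
            self.parent[u] = self.find_set(self.parent[u])
        return self.parent[u]

    def union(self, u, v):
        ru, rv = self.find_set(u), self.find_set(v)
        if ru == rv:
            return                            # already in the same set
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

# Su = {r, s, t, u}, Sv = {v, x, y}, then UNION(u, v)
ds = DisjointSet("rstuvxy")
for a, b in [("r", "s"), ("s", "t"), ("t", "u"), ("v", "x"), ("x", "y")]:
    ds.union(a, b)
print(ds.find_set("r") == ds.find_set("u"))   # True: same component
ds.union("u", "v")
print(ds.find_set("r") == ds.find_set("y"))   # True: one merged set
```

With both heuristics, a sequence of m operations runs in O(m α(n)) time, effectively constant per operation.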
KRUSKAL(V, E, w)
1.  A ← ∅
2.  for each vertex v ∈ V                         O(V)
3.      do MAKE-SET(v)
4.  sort E into non-decreasing order by w         O(ElgE)
5.  for each (u, v) taken from the sorted list    O(E)
6.      do if FIND-SET(u) ≠ FIND-SET(v)           O(lgV)
7.          then A ← A ∪ {(u, v)}
8.               UNION(u, v)
9.  return A

Running time: O(V + ElgE + ElgV) = O(ElgE)
Implemented using the disjoint-set data structure (UNION-FIND)

Kruskal’s algorithm is “greedy”
It produces a globally optimum solution
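The pseudocode above can be sketched in Python with a minimal inline FIND-SET/UNION (the `kruskal` helper is illustrative). On the lecture’s example graph it finds an MST of total weight 37; note that its tie-break at weight 8 may pick (b, c) instead of (a, h), which is fine and illustrates that the MST is not unique.

```python
def kruskal(vertices, edges):
    """Kruskal's MST. edges: list of (w, u, v). Returns (total weight, MST edges)."""
    parent = {v: v for v in vertices}          # MAKE-SET for every vertex

    def find(u):                               # FIND-SET with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    A, total = [], 0
    for w, u, v in sorted(edges):              # non-decreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                           # different components?
            A.append((u, v))
            total += w
            parent[ru] = rv                    # UNION
    return total, A

# The lecture's example graph
edges = [(4,"a","b"),(8,"b","c"),(7,"c","d"),(9,"d","e"),(10,"e","f"),
         (2,"f","g"),(1,"g","h"),(8,"h","a"),(11,"b","h"),(2,"c","i"),
         (7,"i","h"),(6,"i","g"),(4,"c","f"),(14,"d","f")]
total, A = kruskal("abcdefghi", edges)
print(total, len(A))   # 37 8
```

The sort dominates: O(ElgE), with |V| - 1 = 8 edges in the resulting tree.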
Problem 1

Compare Prim’s algorithm with Kruskal’s algorithm assuming:

(a) Sparse graphs: E = O(V)
Kruskal (UNION-FIND): O(ElgE) = O(VlgV)
Prim (binary heap):   O(ElgV) = O(VlgV)

(b) Dense graphs: E = O(V²)
Kruskal: O(ElgE) = O(V²lgV²) = O(2V²lgV) = O(V²lgV)
Prim (binary heap): O(ElgV) = O(V²lgV)
Problem 2

Analyze the running time of Kruskal’s algorithm when the weights are in the range [1 … |V|]

ANSWER:
• Sorting can be done in O(E) time (e.g., using counting sort)
• However, the overall running time will not change, i.e., it remains O(ElgV) because of the FIND-SET/UNION operations
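The O(E)-time sorting step can be sketched as a counting (bucket) sort, under the stated assumption that the weights are integers in [1 … |V|] (the function name is illustrative):

```python
def sort_edges_by_weight(edges, max_w):
    """Counting/bucket sort of (w, u, v) edges with integer w in [1..max_w].
    Runs in O(E + max_w) time instead of the comparison-sort O(E lg E)."""
    buckets = [[] for _ in range(max_w + 1)]   # one bucket per possible weight
    for w, u, v in edges:
        buckets[w].append((w, u, v))
    # Concatenate buckets in increasing weight order.
    return [e for bucket in buckets for e in bucket]

edges = [(3, "a", "b"), (1, "b", "c"), (3, "a", "c"), (2, "c", "d")]
print(sort_edges_by_weight(edges, 4))
# [(1, 'b', 'c'), (2, 'c', 'd'), (3, 'a', 'b'), (3, 'a', 'c')]
```

Plugging this into Kruskal removes the O(ElgE) sort, but lines 5–8 still cost O(ElgV), which then dominates.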
Problem 3

Suppose that some of the weights in a connected graph G are negative.
Will Prim’s algorithm still work? What about Kruskal’s algorithm?
Justify your answers.
ANSWER:
Yes, both algorithms will work with negative weights; nothing in either algorithm assumes that the weights are positive.
Problem 4
Analyze Prim’s algorithm assuming:
(a) an adjacency-list representation of G
O(ElgV)
(b) an adjacency-matrix representation of G
O(ElgV+V2)
(see next slide)
(a) With an adjacency list, the analysis is exactly the one given earlier for PRIM: total time O(VlgV + ElgV) = O(ElgV).
PRIM(V, E, w, r)                                % adjacency-matrix version
1.  Q ← ∅
2.  for each u ∈ V                              | O(V) if Q is implemented
3.      do key[u] ← ∞                           | as a min-heap
4.         π[u] ← NIL                           |
5.         INSERT(Q, u)                         |
6.  DECREASE-KEY(Q, r, 0)                       % key[r] ← 0; takes O(lgV)
7.  while Q ≠ ∅                                 executed |V| times; min-heap operations: O(VlgV)
8.      do u ← EXTRACT-MIN(Q)                   takes O(lgV)
9.         for (j = 0; j < |V|; j++)            executed O(V²) times total
10.            do if (A[u][j] = 1) and (v ∉ Q) and (w(u, v) < key[v])
11.               then π[v] ← u
12.                    DECREASE-KEY(Q, v, w(u, v))   takes O(lgV); total O(ElgV)

Total time: O(ElgV + V²)
Problem 5

Find an algorithm for the “maximum” spanning tree. That is,
given an undirected weighted graph G, find a spanning tree of G
of maximum cost. Prove the correctness of your algorithm.
◦ Consider choosing the “heaviest” edge (i.e., the edge associated
with the largest weight) in a cut. The generic proof can be modified
easily to show that this approach will work.
◦ Alternatively, multiply the weights by -1 and apply either Prim’s or Kruskal’s algorithm without any modification at all!
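The second approach can be sketched in a few lines: negate every weight, run an unmodified Kruskal, and negate back when summing (helper names are illustrative):

```python
def max_spanning_tree_weight(vertices, edges):
    """Maximum spanning tree weight: negate weights, run Kruskal unchanged."""
    parent = {v: v for v in vertices}          # MAKE-SET

    def find(u):                               # FIND-SET with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    total = 0
    # Sorting the negated weights in increasing order scans the original
    # weights in decreasing order, so Kruskal picks the heaviest safe edge.
    for w, u, v in sorted((-w, u, v) for w, u, v in edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            total += -w                        # undo the negation when summing
            parent[ru] = rv                    # UNION
    return total

edges = [(4, "a", "b"), (1, "b", "c"), (3, "a", "c")]
print(max_spanning_tree_weight("abc", edges))   # 7 = 4 + 3
```

This works because a minimum spanning tree of the negated graph is exactly a maximum spanning tree of the original.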
Problem 6

Let T be an MST of a graph G, and let L be the sorted list of the edge weights of T. Show that for any other MST T’ of G, the list L is also the sorted list of the edge weights of T’.

[Figure: two distinct MSTs T and T’ of the same graph, both with sorted weight list L = {1, 2}.]

Proof:
Kruskal’s algorithm will find T in the order specified by L.
Similarly, if T’ is also an MST, Kruskal’s algorithm should be able to find it in its sorted order L’.
If L’ ≠ L, we get a contradiction: at the first point where L and L’ differ, Kruskal’s algorithm would have picked the smaller of the two weights, so L’ is impossible to obtain.