Chapter 7

Report
Princess Nora University
Modeling and Simulation
Markov Chains
Arwa Ibrahim Ahmed
Markov chain
2
A Markov chain, named after Andrey Markov, is a mathematical system that undergoes transitions from one state to another among a finite or countable number of possible states. It is a random process usually characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of "memorylessness" is called the Markov property.
Formally, a Markov chain is a random process with the Markov property.
Often, the term "Markov chain" is used to mean a Markov process which
has a discrete (finite or countable) state-space. Usually a Markov chain is
defined for a discrete set of times (i.e., a discrete-time Markov chain)
although some authors use the same terminology where "time" can take
continuous values.
A discrete-time random process involves a system which is in a certain state at
each step, with the state changing randomly between steps. The steps are
often thought of as moments in time, but they can equally well refer to
physical distance or any other discrete measurement; formally, the steps are
the integers or natural numbers, and the random process is a mapping of
these to states. The Markov property states that the conditional probability
distribution for the system at the next step (and in fact at all future steps)
depends only on the current state of the system, and not additionally on the
state of the system at previous steps.
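The memoryless evolution described above can be sketched in a few lines of Python. The two states and the transition probabilities below are hypothetical, chosen only to illustrate that each step consults the current state alone:

```python
import random

# A minimal simulation of a memoryless process. The two states and their
# transition probabilities are hypothetical, for illustration only.
TRANSITION = [
    [0.9, 0.1],  # from state 0: Pr(next = 0), Pr(next = 1)
    [0.5, 0.5],  # from state 1: Pr(next = 0), Pr(next = 1)
]

def simulate(start, steps, rng=random):
    """Generate a trajectory; the next state is drawn from the current
    state's row only, never from earlier history (the Markov property)."""
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices([0, 1], weights=TRANSITION[state])[0]
        path.append(state)
    return path
```

Note that `simulate` never inspects `path` when choosing the next state; that is exactly the conditional-independence statement above.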
Since the system changes randomly, it is generally impossible to predict with
certainty the state of a Markov chain at a given point in the future.
However, the statistical properties of the system's future can be predicted.
In many applications, it is these statistical properties that are important.
The changes of state of the system are called transitions, and the probabilities
associated with various state-changes are called transition probabilities.
The set of all states and transition probabilities completely characterizes a
Markov chain. By convention, we assume all possible states and transitions
have been included in the definition of the process, so there is always a
next state and the process goes on forever.
Applications of Markov chains:
The following examples illustrate the application and usefulness of Markov chains.
Information sciences:
Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper, A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language.
Queuing theory:
Markov chains are the basis for the analytical treatment of queues (queuing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources.
Internet applications:
The PageRank of a webpage, as used by Google, is defined by a Markov chain. It is the probability of being at page j in the stationary distribution of the following Markov chain on all (known) web pages.
Economics and finance:
Markov chains are used in finance and economics to model a variety of different phenomena, including asset prices and market crashes. The first financial model to use a Markov chain was from Prasad et al. in 1974.
Social sciences:
Markov chains are generally used to describe path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how a country's current level of economic development conditions the outcomes it can reach in the future.
Games:
Markov chains can be used to model many games of chance. The children's
games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are
represented exactly by Markov chains.
DISCRETE-TIME FINITE-STATE MARKOV CHAINS
8
Discrete-time Markov chains:
Let X be a discrete random variable, indexed by time t as X(t), that evolves in time as follows:
X(t) ∈ X for all t = 0, 1, 2, . . .
State transitions can occur only at the discrete times t = 0, 1, 2, . . . , and at these times the random variable X(t) shifts from its current state x ∈ X to another state, say x' ∈ X, with fixed probability
p(x, x') = Pr(X(t + 1) = x' | X(t) = x) ≥ 0.
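As a sketch, the transition probabilities p(x, x') can be stored per current state and used to draw X(t + 1). The three-state kernel below is hypothetical, for illustration only:

```python
import random

# Hypothetical kernel p(x, x') for state space X = {0, 1, 2}.
# p[x][x'] = Pr(X(t+1) = x' | X(t) = x); each row must sum to 1.
p = {
    0: [0.2, 0.5, 0.3],
    1: [0.1, 0.6, 0.3],
    2: [0.4, 0.4, 0.2],
}

def step(x, rng=random):
    """Draw the next state x' with probability p(x, x'), given X(t) = x."""
    return rng.choices(range(len(p[x])), weights=p[x])[0]
```

Because the probabilities are fixed (independent of t), repeated calls to `step` realize a homogeneous chain.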

If |X| < ∞, the stochastic process defined by X(t) is called a discrete-time, finite-state Markov chain.
Without loss of generality, throughout this section we assume that the finite state space is X = {0, 1, 2, . . . , k}, where k is a finite, but perhaps quite large, integer.
A discrete-time, finite-state Markov chain is completely characterized by the initial state X(0) at t = 0 and the function p(x, x') defined for all (x, x') ∈ X × X. When the stochastic process leaves the state x, the transition must be either to state x' = 0 with probability p(x, 0), or to state x' = 1 with probability p(x, 1), . . . , or to state x' = k with probability p(x, k), and the sum of these probabilities must be 1. That is,

    Σ_{x'=0}^{k} p(x, x') = 1,    x = 0, 1, . . . , k.

Because p(x, x') is independent of t for all (x, x'), the Markov chain is said to be homogeneous or stationary.
The state transition probability p(x, x') represents the probability of a transition from state x to state x'. The corresponding (k + 1) × (k + 1) matrix

    P = | p(0,0)  p(0,1)  . . .  p(0,k) |
        | p(1,0)  p(1,1)  . . .  p(1,k) |
        |   .       .     . . .    .    |
        | p(k,0)  p(k,1)  . . .  p(k,k) |

with elements p(x, x') is called the state transition matrix.
The elements of the state transition matrix P are non-negative, and the elements of each row sum to 1.0. (A matrix with these properties is said to be a stochastic matrix.)
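These two properties are easy to check mechanically. A small sketch, using the income-class transition matrix that appears in the example later in this chapter:

```python
def is_stochastic(matrix, tol=1e-9):
    """A stochastic matrix has non-negative entries and rows summing to 1."""
    return all(
        all(entry >= 0.0 for entry in row) and abs(sum(row) - 1.0) <= tol
        for row in matrix
    )

# The income-class transition matrix used in the example.
P = [
    [0.65, 0.28, 0.07],
    [0.15, 0.67, 0.18],
    [0.12, 0.36, 0.52],
]
```

The tolerance argument allows for floating-point rounding when the row entries do not sum to exactly 1.0 in binary arithmetic.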
EXAMPLE:
13
If we know the probability that the child of a lower-class parent becomes middle-class or upper-class, and we know similar information for the child of a middle-class or upper-class parent, what is the probability that the grandchild of a lower-class parent is middle- or upper-class?


In sociology, it is convenient to classify people by income as lower-class, middle-class, and upper-class. Sociologists have found that the strongest determinant of the income class of an individual is the income class of the individual's parents. For example, if an individual in the lower-income class is said to be in state 1, an individual in the middle-income class is in state 2, and an individual in the upper-income class is in state 3, then the following probabilities of change in income class from one generation to the next might apply.
Table 1 shows that if an individual is in state 1 (lower-income class), then there is a probability of 0.65 that any offspring will be in the lower-income class, a probability of 0.28 that offspring will be in the middle-income class, and a probability of 0.07 that offspring will be in the upper-income class.
Table 1:

    state    1      2      3
    1        0.65   0.28   0.07
    2        0.15   0.67   0.18
    3        0.12   0.36   0.52

The symbol Pij will be used for the probability of transition from state i to state j in one generation. For example, p23 represents the probability that a person in state 2 will have offspring in state 3; from the table above, p23 = 0.18. Also from the table, p31 = 0.12, p22 = 0.67, and so on.
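The table lookups above can be mirrored directly in code. The helper `p(i, j)` below simply translates the 1-based state labels used in the text into 0-based row and column indices:

```python
# Transition matrix from Table 1; state i corresponds to row/column i - 1.
P = [
    [0.65, 0.28, 0.07],
    [0.15, 0.67, 0.18],
    [0.12, 0.36, 0.52],
]

def p(i, j):
    """One-generation transition probability Pij from state i to state j."""
    return P[i - 1][j - 1]
```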
The information from the table can be written in other forms. Figure 1 is a transition diagram that shows the three states and the probabilities of going from one state to another.
0.28
0.65
1
2
0.15
0.67
0.12
0.18
0.07
0.52
3
0.36
18
In a transition matrix, the states are indicated at the side and the top. If P represents the transition matrix for the table above, then

    P = | 0.65   0.28   0.07 |
        | 0.15   0.67   0.18 |
        | 0.12   0.36   0.52 |
A transition matrix has several features:
1. It is square, since all possible states must be used both as rows and as columns.
2. All entries are between 0 and 1, inclusive; this is because all entries represent probabilities.
3. The sum of the entries in any row must be 1.
The transition matrix P shows the probability of change in income class from one generation to the next. Now let us investigate the probability of a change in income class over two generations. For example, if a parent is in state 3 (the upper-income class), what is the probability that a grandchild will be in state 2?
To find out, start with a tree diagram, as shown in Figure 2. The various probabilities come from the transition matrix P. The arrows point to the outcomes "grandchild in state 2"; the probability that a grandchild is in state 2 is given by the sum of the probabilities indicated with arrows, or
0.0336 + 0.2412 + 0.1872 = 0.4620.
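The same two-generation probability can be obtained without a tree diagram: it is an entry of P², the matrix product of P with itself. A plain-Python sketch:

```python
# Income-class transition matrix from the example.
P = [
    [0.65, 0.28, 0.07],
    [0.15, 0.67, 0.18],
    [0.12, 0.36, 0.52],
]

def matmul(A, B):
    """Matrix product of two equally sized square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
# Parent in state 3, grandchild in state 2 -> row 2, column 1 (0-based):
# 0.12*0.28 + 0.36*0.67 + 0.52*0.36 = 0.4620
```

This matches the 0.4620 obtained from the tree diagram; in general, the n-generation transition probabilities are the entries of the n-th power of P.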
1
1
2
O.21
3
O.36
3
1
2
O.52
3
1
2
3
(0.12) (0.65)=0.078
(0.12) (0.28)=0.0336
(0.12) (0.07)=0.0084
(0.36) (0.15)=0.054
2
(0.36) (067)=0.02412
3
(0.36) (0.18)=0.0648
(0.52) (012)=0.0624
(0.52) (0.36)=0.1872
(0.52) (0.52)=0.2704
22
