### Statistical Decision Theory

*Statistics for …*, 7th Edition
Chapter 18: Statistical Decision Theory
#### Chapter Goals

After completing this chapter, you should be able to:

- Describe the basic features of decision making
- Construct a payoff table and an opportunity-loss table
- Define and apply the expected monetary value criterion for decision making
- Compute the value of sample information
- Describe utility and attitudes toward risk
#### 18.1 Steps in Decision Making

1. List alternative courses of action (choices or actions)
2. List states of nature (possible events or outcomes)
3. Determine “payoffs”: associate a payoff with each action/state-of-nature combination
4. Evaluate criteria for selecting the best course of action
There are two methods of listing the possible actions and events:

- Payoff table
- Decision tree
#### Payoff Table

Form of a payoff table, where Mij is the payoff that corresponds to action ai and state of nature sj:

| Actions | s1 | s2 | ... | sH |
|---------|-----|-----|-----|-----|
| a1 | M11 | M12 | ... | M1H |
| a2 | M21 | M22 | ... | M2H |
| ... | ... | ... | ... | ... |
| aK | MK1 | MK2 | ... | MKH |
#### Payoff Table Example

A payoff table shows actions (alternatives), states of nature, and payoffs. Profit in \$1,000’s (states of nature):

| Investment Choice (Action) | Strong Economy | Stable Economy | Weak Economy |
|---|---|---|---|
| Large factory | 200 | 50 | -120 |
| Average factory | 90 | 120 | -30 |
| Small factory | 40 | 30 | 20 |
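For readers who want to experiment, the example payoff table translates directly into code. A minimal sketch (the dictionary layout is a convenience, not anything prescribed by the text):

```python
# Payoff table from the example: profit in $1,000's.
# Outer keys are actions, inner keys are states of nature.
payoffs = {
    "Large factory":   {"Strong": 200, "Stable": 50,  "Weak": -120},
    "Average factory": {"Strong": 90,  "Stable": 120, "Weak": -30},
    "Small factory":   {"Strong": 40,  "Stable": 30,  "Weak": 20},
}

# M_ij lookup: payoff for action "Average factory" in a "Stable" economy.
print(payoffs["Average factory"]["Stable"])  # 120
```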
#### Decision Tree Example

The same problem as a decision tree, with payoffs at the ends of the branches:

- Large factory: Strong Economy → 200, Stable Economy → 50, Weak Economy → -120
- Average factory: Strong Economy → 90, Stable Economy → 120, Weak Economy → -30
- Small factory: Strong Economy → 40, Stable Economy → 30, Weak Economy → 20
#### 18.2 Decision Making Overview

Decision criteria split into two cases: the probabilities of the states of nature are not known, or they are known. This section covers the first case.

Nonprobabilistic decision criteria are decision rules that can be applied when the probabilities of uncertain events are not known:

- the maximin criterion
- the minimax regret criterion
#### The Maximin Criterion

- Consider K actions a1, a2, ..., aK and H possible states of nature s1, s2, ..., sH.
- Let Mij denote the payoff corresponding to the ith action and jth state of nature.
- For each action, find the smallest possible payoff. For action a1 this minimum is

  M1* = min(M11, M12, ..., M1H)

- More generally, the smallest possible payoff for action ai is

  Mi* = min(Mi1, Mi2, ..., MiH)

- Maximin criterion: select the action ai for which the corresponding Mi* is largest (that is, the action with the greatest minimum payoff).
#### Maximin Example

The maximin criterion:

1. For each option, find the minimum payoff.
2. Choose the option with the greatest minimum payoff.

Profit in \$1,000’s (states of nature):

| Investment Choice (Alternatives) | Strong Economy | Stable Economy | Weak Economy | Minimum Profit |
|---|---|---|---|---|
| Large factory | 200 | 50 | -120 | -120 |
| Average factory | 90 | 120 | -30 | -30 |
| Small factory | 40 | 30 | 20 | 20 |

The greatest minimum payoff is 20, so the maximin choice is the Small factory.
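The two steps above can be sketched in code, reusing the example’s payoff numbers:

```python
payoffs = {
    "Large factory":   {"Strong": 200, "Stable": 50,  "Weak": -120},
    "Average factory": {"Strong": 90,  "Stable": 120, "Weak": -30},
    "Small factory":   {"Strong": 40,  "Stable": 30,  "Weak": 20},
}

# Step 1: for each action a_i, find the minimum payoff M_i*.
minimums = {action: min(row.values()) for action, row in payoffs.items()}

# Step 2: pick the action whose minimum payoff is greatest.
maximin_choice = max(minimums, key=minimums.get)
print(minimums)        # minimums are -120, -30, and 20
print(maximin_choice)  # Small factory
```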
#### Regret or Opportunity Loss

- Suppose that a payoff table is arranged as a rectangular array, with rows corresponding to actions and columns to states of nature.
- If each payoff in the table is subtracted from the largest payoff in its column . . .
- . . . the resulting array is called a regret table, or opportunity-loss table.
#### Minimax Regret Criterion

- Consider the regret table.
- For each row (action), find the maximum regret.
- Minimax regret criterion: choose the action corresponding to the minimum of the maximum regrets (i.e., the action that produces the smallest possible opportunity loss).
#### Opportunity Loss Example

Opportunity loss (regret) is the difference between an actual payoff for a decision and the optimal payoff for that state of nature.

Payoff table, profit in \$1,000’s (states of nature):

| Investment Choice (Alternatives) | Strong Economy | Stable Economy | Weak Economy |
|---|---|---|---|
| Large factory | 200 | 50 | -120 |
| Average factory | 90 | 120 | -30 |
| Small factory | 40 | 30 | 20 |

The choice “Average factory” has payoff 90 for “Strong Economy”. Given “Strong Economy”, the choice of “Large factory” would have given a payoff of 200, or 110 higher. Opportunity loss = 110 for this cell.
#### Opportunity Loss (continued)

Subtracting every payoff from the largest payoff in its column turns the payoff table into the opportunity-loss table (in \$1,000’s):

| Investment Choice (Alternatives) | Strong Economy | Stable Economy | Weak Economy |
|---|---|---|---|
| Large factory | 0 | 70 | 140 |
| Average factory | 110 | 0 | 50 |
| Small factory | 160 | 90 | 0 |
#### Minimax Regret Example

The minimax regret criterion:

1. For each alternative, find the maximum opportunity loss (or “regret”).
2. Choose the option with the smallest maximum loss.

Opportunity loss in \$1,000’s (states of nature):

| Investment Choice (Alternatives) | Strong Economy | Stable Economy | Weak Economy | Maximum Op. Loss |
|---|---|---|---|---|
| Large factory | 0 | 70 | 140 | 140 |
| Average factory | 110 | 0 | 50 | 110 |
| Small factory | 160 | 90 | 0 | 160 |

The smallest maximum loss is 110, so the minimax regret choice is the Average factory.
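Both the opportunity-loss table and the minimax regret choice can be reproduced in a few lines. A sketch using the same payoffs:

```python
payoffs = {
    "Large factory":   {"Strong": 200, "Stable": 50,  "Weak": -120},
    "Average factory": {"Strong": 90,  "Stable": 120, "Weak": -30},
    "Small factory":   {"Strong": 40,  "Stable": 30,  "Weak": 20},
}
states = ["Strong", "Stable", "Weak"]

# Regret table: subtract each payoff from the largest payoff in its column.
col_best = {s: max(row[s] for row in payoffs.values()) for s in states}
regret = {a: {s: col_best[s] - row[s] for s in states}
          for a, row in payoffs.items()}

# Step 1: maximum regret per action.  Step 2: smallest maximum regret.
max_regret = {a: max(r.values()) for a, r in regret.items()}
choice = min(max_regret, key=max_regret.get)
print(max_regret)  # 140, 110, 160 for Large, Average, Small
print(choice)      # Average factory
```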
#### 18.3 Decision Making Overview (continued)

When the probabilities of the states of nature are known, probabilistic decision criteria apply: consider the probabilities of uncertain events and select an alternative to maximize the expected payoff or minimize the expected loss:

- maximize expected monetary value
#### Payoff Table with Probabilities

Form of a payoff table with probabilities: each state of nature sj has an associated probability Pj.

| Actions | s1 (P1) | s2 (P2) | ... | sH (PH) |
|---|---|---|---|---|
| a1 | M11 | M12 | ... | M1H |
| a2 | M21 | M22 | ... | M2H |
| ... | ... | ... | ... | ... |
| aK | MK1 | MK2 | ... | MKH |
#### Expected Monetary Value (EMV) Criterion

- Consider possible actions a1, a2, ..., aK and H states of nature.
- Let Mij denote the payoff corresponding to the ith action and jth state, and Pj the probability of occurrence of the jth state of nature, with P1 + P2 + ... + PH = 1.
- The expected monetary value of action ai is

  EMV(ai) = P1Mi1 + P2Mi2 + ... + PHMiH

- The Expected Monetary Value Criterion: adopt the action with the largest expected monetary value.
#### Expected Monetary Value Example

The expected monetary value is the weighted average payoff, given specified probabilities for each state of nature. Suppose these probabilities have been assessed: Strong Economy .3, Stable Economy .5, Weak Economy .2.

Goal: maximize expected monetary value. Profit in \$1,000’s (states of nature):

| Investment Choice (Action) | Strong Economy (.3) | Stable Economy (.5) | Weak Economy (.2) | EMV |
|---|---|---|---|---|
| Large factory | 200 | 50 | -120 | 61 |
| Average factory | 90 | 120 | -30 | 81 |
| Small factory | 40 | 30 | 20 | 31 |

Example: EMV(Average factory) = 90(.3) + 120(.5) + (-30)(.2) = 81.

Maximize expected value by choosing the Average factory.
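The EMV column can be verified programmatically. A sketch with the example’s payoffs and probabilities:

```python
payoffs = {
    "Large factory":   {"Strong": 200, "Stable": 50,  "Weak": -120},
    "Average factory": {"Strong": 90,  "Stable": 120, "Weak": -30},
    "Small factory":   {"Strong": 40,  "Stable": 30,  "Weak": 20},
}
probs = {"Strong": 0.3, "Stable": 0.5, "Weak": 0.2}

# EMV(a_i) = sum over states j of P_j * M_ij
emv = {a: sum(probs[s] * m for s, m in row.items())
       for a, row in payoffs.items()}
best = max(emv, key=emv.get)
print(emv)   # EMVs of 61, 81, and 31
print(best)  # Average factory
```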
#### Decision Tree Analysis

- A decision tree shows a decision problem, beginning with the initial decision and ending with all possible outcomes and payoffs.
- Use a square to denote decision nodes.
- Use a circle to denote uncertain events.
#### Decision Tree Analysis (continued)

The decision node branches into the three actions; each action leads to a chance node over the states of nature, with probabilities and payoffs:

- Large factory: Strong Economy (.3) → 200, Stable Economy (.5) → 50, Weak Economy (.2) → -120
- Average factory: Strong Economy (.3) → 90, Stable Economy (.5) → 120, Weak Economy (.2) → -30
- Small factory: Strong Economy (.3) → 40, Stable Economy (.5) → 30, Weak Economy (.2) → 20
#### Fold Back the Tree

Working backward, replace each chance node by its expected monetary value:

- Large factory: EMV = 200(.3) + 50(.5) + (-120)(.2) = 61
- Average factory: EMV = 90(.3) + 120(.5) + (-30)(.2) = 81
- Small factory: EMV = 40(.3) + 30(.5) + 20(.2) = 31
#### Make the Decision

With EV = 61 (Large factory), EV = 81 (Average factory), and EV = 31 (Small factory), the maximum is EMV = 81: build the Average factory.
#### 18.4 Bayes’ Theorem

- Let s1, s2, ..., sH be H mutually exclusive and collectively exhaustive events, corresponding to the H states of nature of a decision problem.
- Let A be some other event. Denote the conditional probability that si will occur, given that A occurs, by P(si|A), and the probability of A, given si, by P(A|si).
- Bayes’ Theorem states that the conditional probability of si, given A, can be expressed as

  P(si|A) = P(A|si)P(si) / P(A) = P(A|si)P(si) / [P(A|s1)P(s1) + P(A|s2)P(s2) + ... + P(A|sH)P(sH)]

- In the terminology of this section, P(si) is the prior probability of si; it is modified into the posterior probability, P(si|A), given the sample information that event A has occurred.
#### Bayes’ Theorem Example

Consider the choice of Stock A vs. Stock B. Percent return by event:

| Stock Choice (Action) | Strong Economy (.7) | Weak Economy (.3) | Expected Return |
|---|---|---|---|
| Stock A | 30 | -10 | 18.0 |
| Stock B | 14 | 8 | 12.2 |

Stock A has the higher EMV.
#### Bayes’ Theorem Example (continued)

Bayes’ theorem permits revising old probabilities based on new information:

Prior Probability → New Information → Revised Probability
#### Bayes’ Theorem Example (continued)

Additional information: the economic forecast is for a strong economy.

- When the economy was strong, the forecaster was correct 90% of the time.
- When the economy was weak, the forecaster was correct 70% of the time.

Let F1 = strong forecast, F2 = weak forecast, E1 = strong economy, E2 = weak economy.

Prior probabilities from the stock choice example: P(E1) = 0.70, P(E2) = 0.30.
Forecast reliability: P(F1 | E1) = 0.90, P(F1 | E2) = 0.30.
#### Bayes’ Theorem Example (continued)

Given P(F1 | E1) = .9, P(F1 | E2) = .3, P(E1) = .7, and P(E2) = .3, Bayes’ theorem gives the revised probabilities:

P(E1 | F1) = P(E1)P(F1 | E1) / P(F1) = (.7)(.9) / [(.7)(.9) + (.3)(.3)] = .63 / .72 = .875

P(E2 | F1) = P(E2)P(F1 | E2) / P(F1) = (.3)(.3) / .72 = .125
#### EMV with Revised Probabilities

| Pi (revised) | Event | Stock A (xij) | xij Pi | Stock B (xij) | xij Pi |
|---|---|---|---|---|---|
| .875 | strong | 30 | 26.25 | 14 | 12.25 |
| .125 | weak | -10 | -1.25 | 8 | 1.00 |
|  |  |  | Σ = 25.0 |  | Σ = 13.25 |

EMV(Stock A) = 25.0 is the maximum EMV; EMV(Stock B) = 12.25 + 1.00 = 13.25.
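The revised probabilities and EMVs can be checked in code. A sketch of the stock example (the variable names are mine, not the text’s):

```python
prior = {"E1": 0.7, "E2": 0.3}           # strong / weak economy
like_F1 = {"E1": 0.9, "E2": 0.3}         # P(F1 | Ei), F1 = strong forecast
returns = {"Stock A": {"E1": 30, "E2": -10},
           "Stock B": {"E1": 14, "E2": 8}}

# Bayes' theorem: P(Ei | F1) = P(F1 | Ei) P(Ei) / P(F1)
p_f1 = sum(like_F1[e] * prior[e] for e in prior)         # .72
post = {e: like_F1[e] * prior[e] / p_f1 for e in prior}  # .875 and .125

# EMVs recomputed with the posterior probabilities
emv = {stock: sum(post[e] * r for e, r in row.items())
       for stock, row in returns.items()}
print(post)  # E1: .875, E2: .125
print(emv)   # Stock A: 25.0, Stock B: 13.25
```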
#### Expected Value of Sample Information, EVSI

Suppose there are K possible actions and H states of nature, s1, s2, ..., sH. The decision-maker may obtain sample information; let there be M possible sample results, A1, A2, ..., AM. The expected value of sample information is obtained as follows:

- Determine which action would be chosen if only the prior probabilities were used.
- Determine the probabilities of obtaining each sample result:

  P(Ai) = P(Ai | s1)P(s1) + P(Ai | s2)P(s2) + ... + P(Ai | sH)P(sH)
#### Expected Value of Sample Information, EVSI (continued)

- For each possible sample result, Ai, find the difference, Vi, between the expected monetary value for the optimal action and that for the action chosen if only the prior probabilities are used. This is the value of the sample information, given that Ai was observed.
- Then

  EVSI = P(A1)V1 + P(A2)V2 + ... + P(AM)VM
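Applied to the stock-forecast example, the recipe above gives a concrete EVSI. A sketch, assuming (as the stated forecaster reliabilities imply) P(F2 | E1) = 0.1 and P(F2 | E2) = 0.7:

```python
prior = {"E1": 0.7, "E2": 0.3}
like = {"F1": {"E1": 0.9, "E2": 0.3},    # P(forecast | economy)
        "F2": {"E1": 0.1, "E2": 0.7}}
returns = {"Stock A": {"E1": 30, "E2": -10},
           "Stock B": {"E1": 14, "E2": 8}}

def emv(probs):
    return {a: sum(probs[e] * r for e, r in row.items())
            for a, row in returns.items()}

# Action chosen on the priors alone (Stock A, EMV = 18.0).
prior_emv = emv(prior)
prior_action = max(prior_emv, key=prior_emv.get)

evsi = 0.0
for f, lf in like.items():
    p_f = sum(lf[e] * prior[e] for e in prior)           # P(A_i)
    post = {e: lf[e] * prior[e] / p_f for e in prior}    # posterior given A_i
    post_emv = emv(post)
    v = max(post_emv.values()) - post_emv[prior_action]  # V_i
    evsi += p_f * v                                      # accumulate P(A_i) V_i
print(round(evsi, 4))  # 2.66
```

Only the weak forecast changes the decision (to Stock B), so all of the value comes from that branch: EVSI = P(F2) · V2 = (.28)(9.5) = 2.66.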
#### Expected Value of Perfect Information, EVPI

Perfect information corresponds to knowledge of which state of nature will arise. To determine the expected value of perfect information:

- Determine which action would be chosen if only the prior probabilities P(s1), P(s2), ..., P(sH) were used.
- For each possible state of nature, si, find the difference, Wi, between the payoff for the best choice of action if it were known that state would arise, and the payoff for the action chosen if only prior probabilities are used. This is the value of perfect information, when it is known that si will occur.
#### Expected Value of Perfect Information, EVPI (continued)

- The expected value of perfect information (EVPI) is

  EVPI = P(s1)W1 + P(s2)W2 + ... + P(sH)WH

- Equivalently:

  EVPI = expected monetary value under certainty - expected monetary value of the best alternative
#### Expected Value Under Certainty

Expected value under certainty = expected value of the best decision, given perfect information. Profit in \$1,000’s (events):

| Investment Choice (Action) | Strong Economy (.3) | Stable Economy (.5) | Weak Economy (.2) |
|---|---|---|---|
| Large factory | 200 | 50 | -120 |
| Average factory | 90 | 120 | -30 |
| Small factory | 40 | 30 | 20 |
| Value of best decision for each event | 200 | 120 | 20 |

Example: the best decision given “Strong Economy” is “Large factory”.

Now weight these best outcomes with their probabilities to find the expected value under certainty:

200(.3) + 120(.5) + 20(.2) = 124
#### Expected Value of Perfect Information

EVPI = expected profit under certainty - expected monetary value of the best decision.

Recall: expected profit under certainty = 124, and EMV is maximized by choosing “Average factory”, where EMV = 81. So:

EVPI = 124 - 81 = 43

(EVPI is the maximum you would be willing to spend to obtain perfect information.)
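The same EVPI arithmetic, as a quick sketch over the investment example:

```python
payoffs = {
    "Large factory":   {"Strong": 200, "Stable": 50,  "Weak": -120},
    "Average factory": {"Strong": 90,  "Stable": 120, "Weak": -30},
    "Small factory":   {"Strong": 40,  "Stable": 30,  "Weak": 20},
}
probs = {"Strong": 0.3, "Stable": 0.5, "Weak": 0.2}

# Expected profit under certainty: best payoff per state, probability-weighted.
best_per_state = {s: max(row[s] for row in payoffs.values()) for s in probs}
ev_certainty = sum(probs[s] * best_per_state[s] for s in probs)  # 124

# Best EMV acting on the prior probabilities alone (81, Average factory).
emv = {a: sum(probs[s] * m for s, m in row.items()) for a, row in payoffs.items()}
evpi = ev_certainty - max(emv.values())
print(round(evpi, 2))  # 43.0
```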
#### 18.5 Utility Analysis

- Utility is the pleasure or satisfaction obtained from an action.
- The utility of an outcome may not be the same for each individual.
- Utility units are arbitrary.
#### Utility Analysis (continued)

Example: each incremental \$1 of profit does not have the same value to every individual:

- A risk-averse person, once reaching a goal, assigns less utility to each incremental \$1.
- A risk seeker assigns more utility to each incremental \$1.
- A risk-neutral person assigns the same utility to each extra \$1.
#### Three Types of Utility Curves

[Figure: utility plotted against \$ for the three attitudes toward risk: risk aversion (concave curve), risk seeker (convex curve), and risk-neutral (straight line).]
#### Maximizing Expected Utility

Making decisions in terms of utility, not \$:

1. Translate \$ outcomes into utility outcomes.
2. Calculate expected utilities for each action.
3. Choose the action to maximize expected utility.
#### The Expected Utility Criterion

- Consider K possible actions, a1, a2, ..., aK, and H states of nature.
- Let Uij denote the utility corresponding to the ith action and jth state, and Pj the probability of occurrence of the jth state of nature.
- Then the expected utility, EU(ai), of the action ai is

  EU(ai) = P1Ui1 + P2Ui2 + ... + PHUiH

- The expected utility criterion: choose the action to maximize expected utility.
- If the decision-maker is indifferent to risk, the expected utility criterion and the expected monetary value criterion are equivalent.
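As an illustration only: the sketch below reruns the factory example with a hypothetical risk-averse utility curve u(x) = 1 - exp(-x/100). This particular curve is my choice, not the text’s; with a risk-neutral u(x) = x the ranking would match the EMV analysis exactly.

```python
import math

payoffs = {
    "Large factory":   {"Strong": 200, "Stable": 50,  "Weak": -120},
    "Average factory": {"Strong": 90,  "Stable": 120, "Weak": -30},
    "Small factory":   {"Strong": 40,  "Stable": 30,  "Weak": 20},
}
probs = {"Strong": 0.3, "Stable": 0.5, "Weak": 0.2}

def u(x):
    # Hypothetical concave (risk-averse) utility; utility units are arbitrary.
    return 1 - math.exp(-x / 100)

# EU(a_i) = sum over states j of P_j * U_ij
eu = {a: sum(probs[s] * u(m) for s, m in row.items()) for a, row in payoffs.items()}
best = max(eu, key=eu.get)
print(best)  # Average factory (for this particular curve)
```

Note how risk aversion reorders the extremes: under this curve the Small factory’s guaranteed modest payoffs score above the Large factory’s gamble, even though the Large factory’s EMV (61) was nearly double the Small factory’s (31).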
#### Chapter Summary

- Described the payoff table and decision trees
- Defined opportunity loss (regret)
- Provided criteria for decision making:
  - if no probabilities are known: maximin, minimax regret
  - when probabilities are known: expected monetary value
- Introduced expected profit under certainty and the value of perfect information
- Discussed decision making with sample information and Bayes’ theorem