Report
Virtual Session #2 Track 2:
Better Quality Through Better Measurement, Part 1
Robert Lloyd, PhD; Dave Williams, PhD;
Rebecca Steinfield, MA; Kim Werkmeister; Steve Tremain, MD
Wednesday, September 11, 2013
Session #2 Faculty
Robert Lloyd, PhD, IHI
Dave Williams, PhD, IHI
Rebecca Steinfield, MA, IHI
Kim Werkmeister, RN, Cynosure
Steve Tremain, MD, Cynosure
Robert Lloyd
Robert Lloyd, PhD is Executive Director of Performance Improvement for the
Institute for Healthcare Improvement (IHI). Dr. Lloyd provides leadership in
the areas of performance improvement strategies, statistical process control
methods, development of strategic dashboards and building capacity and
capability for quality improvement. He also serves as lead faculty for various
IHI initiatives and demonstration projects in the US, the UK, Sweden,
Denmark, New Zealand and Africa. Before joining the IHI, Dr. Lloyd served as
the Corporate Director of Quality Resource Services for Advocate Health Care
(Oak Brook, IL). He also served as Senior Director of Quality Measurement for
Lutheran General Health System (Park Ridge, IL), directed the American
Hospital Association's Quality Measurement and Management Project
(QMMP) and served in various leadership roles at the Hospital Association of
Pennsylvania. The Pennsylvania State University awarded all three of Dr.
Lloyd’s degrees. His doctorate is in agricultural economics and rural sociology.
Dr. Lloyd has written many articles and book chapters. He is also the co-author of the
internationally acclaimed book Measuring Quality Improvement in Healthcare: A Guide
to Statistical Process Control Applications (American Society for Quality Press, 2001,
5th printing) and the author of Quality Health Care: A Guide to Developing and Using
Indicators (Jones and Bartlett, Sudbury, MA, 2004).
Dave Williams
David M. Williams, PhD, Improvement Advisor, truesimple
Consulting, is also on the teaching faculty of The George
Washington University School of Medicine and Health Sciences.
He is also a Six Sigma Black Belt and serves as faculty and an
Improvement Advisor (IA) for the Institute for Healthcare
Improvement, supporting teams through leading system changes
using the Model for Improvement. Dr. Williams started his career
as an urban street paramedic. For the last decade, he has acted as
an internal and external IA to governmental agencies, hospitals,
and for-profit and not-for-profit organizations. He works with
clients to improve their organizations by enabling appreciation of
systems, understanding of data and variation, testing changes,
and recognizing the influence of psychology. He has published
nearly 100 articles, led intensive workshops, presented at major
conferences to thousands of attendees, and writes a popular blog.
Rebecca Steinfield
Rebecca Steinfield earned her MA in Applied
Psychology at Boston University and has been
with IHI since 1996 in numerous capacities.
She currently serves as an Improvement
Advisor teaching courses and coaching teams
on the Science of Improvement. Rebecca
serves on the faculty for IHI's work on
Improving Transitions from Hospital to Home,
Kaiser Permanente's Performance
Improvement Institute, and the Harvard
Medical School Center for Primary Care
Academic Innovations Collaborative.
Kim Werkmeister
Kim Werkmeister, RN, BA, CPHQ is a National Improvement Advisor
working with the American Hospital Association/Health Research &
Educational Trust Hospital Engagement Network (HEN). As an
Improvement Advisor, she is responsible for working directly with State
Hospital Associations to drive improvement in hospital acquired
conditions and patient harm in hospitals across the country. In addition,
she is the lead Improvement Advisor for the Hospital Engagement
Network for the Perinatal Harm initiative, the VTE Reduction initiative and
the Psychiatric Affinity Group initiative.
Prior to this, Ms. Werkmeister worked with hospitals across California to
improve patient outcomes, implement best practices, set up Quality/Risk
programs, and prepare for and respond to licensing and accreditation
activities. She served as an improvement advisor for both the California
Public Hospital Improvement Collaborative and the California Partnership
for Health/Patient Safety First Collaborative. She also served as the lead
improvement advisor for the ICU Mortality Reduction Collaborative, a
project focusing on reduction of mortality and morbidity in ICU care
across the state of California.
Ms. Werkmeister is a Registered Nurse and graduated with a Bachelor of
Arts degree from California State University Fullerton.
Steve Tremain
Steve Tremain, MD, has championed quality and safety in various leadership positions for 30
years. He served as both Chief Medical Officer and Chief Medical Information Officer for IHI
mentor hospital Contra Costa Regional Medical Center and Health Centers in California.
Under his executive sponsorship, Contra Costa achieved mentor status for its work in
Medication Reconciliation, VAP, and Rapid Response Teams. Since then, Dr. Tremain has co-led
several improvement collaboratives. He was the sole recipient of the National Association of
Public Hospitals Patient Safety Award for 2007.
In 2012 Dr. Tremain joined the HEN initiative as one of just two Physician Improvement
Advisors. His focus areas are Adverse Drug Events, Culture of Safety, and Messaging. He also
supports VTE and OB Adverse Event prevention. He has worked on site in several states and
presented plenaries and workshops for the HRET-HEN national and regional meetings.
Dr. Tremain has served as guest faculty for the IHI on several topics including physician
engagement and whole system change. He currently serves as physician faculty for the
Physician Communication Pilot of IHI-VHA West Coast Patient Experience Collaborative.
He is a graduate of the UCLA School of Medicine, is board certified in Family Medicine, and is
a Certified Physician Executive.
Chat Time!
Please type the name of your organization
and the number of people sitting in on the call
with you into the Chat Box.
Example: “Institute for Healthcare Improvement - 2”
Remember to chat to ALL PARTICIPANTS!
Where are you today?
Reflections on
Virtual Session #1 August 21, 2013
• How have you used your understanding
of the human side of change
(motivation, resistance to change,
adoption of new ideas) in the past few
weeks to help with your improvement
efforts?
• What help do you need, or advice might
you offer your colleagues?
General Objectives
for Virtual Session #2
• To revisit Question 2 in the Model for Improvement (How will you know
that a change is an improvement?)
• To be clear on why you are measuring (improvement, judgment, or research)
• To understand the differences between types of measures
• To identify the key elements of a data collection plan
• To build increased knowledge and comfort with understanding variation
statistically with run and control charts
• To have teams report out on their progress, successes, and barriers
Session #2 Prework
(Track 1 & 2)
IHI Open School Modules
QI 103 Lesson 1: Measurement Fundamentals
QI 103 Lesson 2: Displaying Data
IHI On Demand video by Dr. R. Lloyd
Building Skills in Data Collection and Understanding
Variation
http://www.ihi.org/offerings/virtualprograms/ondemand/datacollection_variation/Pages/default.aspx
The Lens of Profound Knowledge
“The system of profound knowledge provides a lens. It provides a new map of
theory by which to understand and optimize our organizations.”
(Deming, Out of the Crisis)
It provides an opportunity for dialogue and learning!
[Diagram: QI at the center of the lens, surrounded by Appreciation of a System,
Understanding Variation, Theory of Knowledge, and Human Behavior]
A Model for Learning and Change
When you combine the 3 questions with the PDSA cycle, you get the
Model for Improvement. (Our focus today.)
The Improvement Guide, API, 2009
How will we know that a change
is an improvement?
1. By understanding the variation that lives
within your data
2. By making good management decisions based on
this variation (i.e., don’t overreact to a
special cause and don’t think that random
movement of your data up and down is a
signal of improvement).
QI 103: Lessons 1 and 2
Content
QI 103 Lesson #1 Topics (Measurement Fundamentals)
1. Project-level measures versus PDSA-level measures
2. Why are you measuring?
3. Key elements of a data collection plan
4. Building a family of measures (FOM)
QI 103 Lesson #2 Topics (Displaying Data)
1. The value of plotting data over time
2. The basic elements of a run chart
3. Elements of a run chart and how to interpret a run chart
4. Detecting random and nonrandom patterns in the data
5. The difference between common cause and special cause variation
6. The difference between a run chart and a Shewhart chart
Questions? Concerns? Observations?
QI 103 Tips for Building
Effective Measurement Systems
Measurement for improvement should be used to
speed things up, not slow them down. The following
tips are meant to help teams use measurement to
accelerate improvement.
1. Plot data over time.
2. Seek usefulness, not perfection.
3. Use sampling appropriately.
4. Integrate measurement into the daily routine.
5. Use qualitative and quantitative data.
QI 103: Lessons 1 and 2
Content for Session #2
QI 103 : Lesson #1 Topics (Measurement Fundamentals)
1. Project-level measures versus PDSA-level measures
2. Why are you measuring?
3. Key elements of a data collection plan
4. Building a family of measures (FOM)
QI 103 Lesson #2 Topics (Displaying Data)
1. The value of plotting data over time
2. The basic elements of a run chart
3. Elements of a run chart and how to interpret a run chart
4. Detecting random and nonrandom patterns in the data
5. The difference between common cause and special cause variation
6. The difference between a run chart and a Shewhart chart
Why are you measuring?
Improvement?
The answer to this question will guide your entire quality
measurement journey!
The Three Faces of Performance Measurement
by Leif Solberg, Gordon Mosser and Sharon McDonald
Joint Commission Journal on Quality Improvement, vol. 23, no. 3 (March 1997), 135-147.

Aim
• Improvement: improvement of care (efficiency & effectiveness)
• Accountability: comparison, choice, reassurance, motivation for change
• Research: new knowledge (efficacy)

Methods: Test Observability
• Improvement: test observable
• Accountability: no test, evaluate current performance
• Research: test blinded or controlled

Methods: Bias
• Improvement: accept consistent bias
• Accountability: measure and adjust to reduce bias
• Research: design to eliminate bias

Methods: Sample Size
• Improvement: “just enough” data, small sequential samples
• Accountability: obtain 100% of available, relevant data
• Research: “just in case” data

Methods: Flexibility of Hypothesis
• Improvement: flexible hypotheses, changing as learning takes place
• Accountability: no hypothesis
• Research: fixed hypothesis (null hypothesis)

Methods: Testing Strategy
• Improvement: sequential tests
• Accountability: no tests
• Research: one large test

Methods: Determining if a change is an improvement
• Improvement: run charts or Shewhart control charts (statistical process control)
• Accountability: no change focus (maybe compute a percent change or rank order the results)
• Research: hypothesis, statistical tests (t-test, F-test, chi square), p-values

Methods: Confidentiality of the data
• Improvement: data used only by those involved with improvement
• Accountability: data available for public consumption and review
• Research: research subjects’ identities protected
Integrating the Three Faces of
Performance Measurement
The three faces of performance measurement should not be seen as
mutually exclusive silos. This is not an either/or situation.
All three areas must be understood as a system. Individuals need to
build skills in all three areas. Organizations need translators who are
able to speak the language of each approach.
The problem is that individuals identify with one of the approaches
and dismiss the value of the other two.
Poll #1
Why are you measuring?
Question (response options for each: Strongly Agree / Agree / Not Sure / Disagree / Strongly Disagree)
1. In my organization we claim to be measuring for improvement, but most of
our time is actually spent measuring for judgment.
2. In my organization there are very few people conducting classic research studies.
3. In my organization we typically use bar graphs, pie charts and aggregated
summary statistics (e.g., the ALOS for the month) to display our data.
4. In my organization we collect data and measure things all the time, but I am
not sure what happens to the data once we submit it for tabulation.
If you responded Disagree or Strongly Disagree to any question, what criteria are
used to determine if the data are improving or getting worse?
Let’s hear from a team working
on VTEs
How has your Quality Measurement Journey
been going?
• Have you hit roadblocks? If so, what are they?
• Do you feel you are measuring for improvement, judgment or research?
• Are you collecting too much data? Not enough?
• Are you helping your teams use their data to become more efficient and effective?
ILF Team Presentation #1
Pursuing Perfection for VTE
Jane Northcutt, RN
Chief Quality Officer
Hospital Overview
• Located in Birmingham, Alabama
• One of 15 General Acute Care Hospital Providers
in the Birmingham MSA (4-County)
• Licensed Beds - 534
  – 17 Rehabilitation Beds
  – 64 Psychiatric Beds
• Employees - 1,686
• Physicians & Allied Health on Staff: 698
• Population for Primary Service Area - 437,957
• Population for Secondary Service Area - 608,771
VTE - HEN Specific Run Charts
Project Title: Reducing VTE by Improved Compliance with VTE Measures
Date: 8/10/2013 | Hospital Name: Trinity Medical Center | State: Alabama
Self-Assessment Score: 5

Aim Statement: By December 2013, 100% of patients will receive VTE prophylaxis
by defined protocols or patient assessments, or have documentation of
contraindications.

Why is this important? VTE is the #1 preventable cause of death in
hospitalized patients.

[Run charts shown: Medical VTE Prophylaxis]

Changes Being Tested, Implemented or Spread:
• Assessment of patients for VTE prophylaxis is completed for inpatients and ICU patients.
• Protocols are implemented in a timely manner.
• VTE Discharge Instructions are completed.
• Patients requiring overlap therapy are identified, with measures implemented.
• Surgery patients are assessed for appropriate prophylaxis.

Lessons Learned:
• Build discharge documentation tools into the electronic system to populate for all patients.
• Identify patients from radiological studies who need overlap therapy and have a PharmD review for appropriate coverage.
• Set required timeline actions before the timeline actually ends.
• Design the order set with physicians, with physician-driven education to the medical staff.
• Use the PI Referral Form for involved staff to document why a variance occurred and the lesson learned.

Recommendations and Next Steps:
• Re-assess protocols and current actions with identified variances.
• Evaluate the effect of new electronic clinical documentation on compliance.

Team Members: CQO, Core Measure Analyst, PharmD, Physician Liaison,
Director of Surgical Services, CNO, ICU Nursing Director, Med-Surg Nursing Director
© 2012 Institute for Healthcare Improvement
The Quality Measurement Journey
Source: R. Lloyd. Quality Health Care: A Guide to Developing and Using Indicators. Jones and Bartlett Publishers, 2004.
AIM (How good? By when?) → Concept → Measure → Operational Definitions →
Data Collection Plan → Data Collection → Analysis → ACTION
Dr. Walter A. Shewhart
Shewhart - Economic Control of
Quality of Manufactured Product,
1931
“A phenomenon will
be said to be
controlled when,
through the use of
past experience, we
can predict, at least
within limits, how
the phenomenon
may be expected to
vary in the future.”
“What is the variation in one system over time?”
Walter A. Shewhart, early 1920s, Bell Laboratories

[Diagram: a static view (a single distribution) versus a dynamic view (data
plotted over time between the UCL and LCL)]

Every process displays variation:
• Controlled variation: a stable, consistent pattern of variation; “chance,” constant causes
• Special cause variation: “assignable”; the pattern changes over time
Types of Variation
Common Cause Variation
• Is inherent in the design of the process
• Is due to regular, natural or ordinary causes
• Affects all the outcomes of a process
• Results in a “stable” process that is predictable
• Also known as random or unassignable variation

Special Cause Variation
• Is due to irregular or unnatural causes that are not inherent in the design of the process
• Affects some, but not necessarily all, aspects of the process
• Results in an “unstable” process that is not predictable
• Also known as non-random or assignable variation
Point …
Common Cause does not mean “Good Variation.” It only
means that the process is stable and predictable. For
example, if a patient’s systolic blood pressure averaged around
165 and was usually between 160 and 170 mmHg, this might be
stable and predictable but it is unacceptable against the goal.
Similarly, Special Cause variation should not be viewed as “Bad
Variation.” You could have a special cause that represents a
very good result (e.g., a low turnaround time), which you would
want to emulate. Special Cause merely means that the process
is unstable and unpredictable.
You have to decide if the output of the
process is acceptable!
There are many examples of Common and Special Causes of
Variation in healthcare. Find ones that work for you.
Common Cause Variation:
[Control chart: Percent of Cesarean Sections Performed, Dec 95 to Jun 99, by month;
CL = 18.02, UCL = 27.70, LCL = 8.35. A stable pattern, likened to normal sinus rhythm
(a.k.a. common cause variation)]

Special Cause Variation:
[Control chart: Number of Medication Errors per 1000 Patient Days, by week;
CL = 4.42, UCL = 13.39, LCL = 0. An unstable pattern, likened to atrial flutter
rhythm (a.k.a. special cause variation)]
Appropriate Management Response to
Common & Special Causes of Variation
Is the process stable?
• YES (only common cause variation)
  Right choice: change the process if performance is unacceptable
  Wrong choice: treat normal variation as a special cause (tampering)
  Consequence of the wrong choice: increased variation!
• NO (special + common cause variation)
  Right choice: investigate the origin of the special cause
  Wrong choice: change the process
  Consequence of the wrong choice: wasted resources!
Attributes of a Leader Who
Understands Variation
Leaders understand the different ways that variation is viewed.
They explain changes in terms of common causes and special
causes.
They use graphical methods to learn from data and expect
others to consider variation in their decisions and actions.
They understand the concept of stable and unstable processes
and the potential losses due to tampering.
They understand the capability of a process or system before changes
are attempted.
Poll #2
Attributes of a Leader Who
Understands Variation
Question (response options for each: Strongly Agree / Agree / Not Sure / Disagree / Strongly Disagree)
1. The Board evaluates our data using criteria for common and special cause variation.
2. Senior Management evaluates our data using criteria for common and special cause variation.
3. Front-line Managers evaluate our data using criteria for common and special cause variation.
4. Staff Members evaluate our data using criteria for common and special cause variation.
If you responded Disagree or Strongly Disagree to any question, what criteria are
used by this group to determine if the data are improving or getting worse?
How can I depict variation?
Unplanned Returns to ED w/in 72 Hours

Month:   M     A     M     J     J     A     S     O     N     D     J     F     M     A     M     J     J     A     S
ED/100:  41.78 43.89 39.86 40.03 38.01 43.43 39.21 41.90 41.78 43.00 39.66 40.03 48.21 43.89 39.86 36.21 41.78 43.89 31.45
Returns: 17    26    13    16    24    27    19    14    33    20    17    22    29    17    36    19    22    24    22

[u chart: Rate per 100 ED Patients for the 19 months above; Mean = 0.54,
UCL = 0.88, LCL = 0.19]

STATIC VIEW: descriptive statistics (mean, median & mode; minimum/maximum/range;
standard deviation; bar graphs/pie charts)
DYNAMIC VIEW: run chart; control chart (plot data over time);
Statistical Process Control (SPC)
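The u chart's centerline and limits can be reproduced from the table above. This is a sketch, not the tool the faculty used: the centerline is total events over total exposure, and for simplicity the limits here are computed at the average subgroup size, whereas exact u chart limits vary month to month with each subgroup's size.

```python
import math

# Data from the "Unplanned Returns to ED w/in 72 Hours" table
returns = [17, 26, 13, 16, 24, 27, 19, 14, 33, 20, 17, 22, 29, 17, 36, 19, 22, 24, 22]
ed_per_100 = [41.78, 43.89, 39.86, 40.03, 38.01, 43.43, 39.21, 41.90, 41.78, 43.00,
              39.66, 40.03, 48.21, 43.89, 39.86, 36.21, 41.78, 43.89, 31.45]

# Centerline: total events divided by total exposure
u_bar = sum(returns) / sum(ed_per_100)

# Limits at the average subgroup size (a simplification; exact u chart
# limits are UCL_i = u_bar + 3*sqrt(u_bar/n_i) for each month's n_i)
n_avg = sum(ed_per_100) / len(ed_per_100)
half_width = 3 * math.sqrt(u_bar / n_avg)
ucl = u_bar + half_width
lcl = max(0.0, u_bar - half_width)

print(f"u-bar = {u_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```

Computed this way, the values come out at roughly the mean of 0.54 and limits of 0.88 and 0.19 shown on the chart.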
How do we analyze variation
for quality improvement?
Run and Control Charts are
the best tools to determine
if our improvement
strategies have had the
desired effect.
Three Primary Uses of SPC Charts
1. Make process performance visible
2. Determine if a change is an improvement
3. Determine if we are holding the gains

[Three charts of Minutes from ED to OR per Patient for isolated femur fractures,
each plotted over 64 sequential patients: Current Process Performance,
Process Improvement, and Holding the Gain]
Elements of a Run Chart

[Run chart: Pounds of Red Bag Waste over 29 sequential points; the centerline (CL)
on a run chart is the median, here Median = 4.610]

Four simple run rules are used to determine if non-random variation is present.
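As a small illustration of the centerline, the median is straightforward to compute. This is a generic Python sketch with made-up values, not the red bag waste data from the chart:

```python
import statistics

# Hypothetical weekly measurements; illustrative values only
data = [5.1, 4.8, 4.6, 5.3, 4.4, 4.7, 4.9, 4.2, 4.6, 5.0, 4.5, 4.8]

# The centerline (CL) of a run chart is the median of the plotted values
centerline = statistics.median(data)

# Points that land exactly on the median are excluded when counting runs
above = sum(1 for x in data if x > centerline)
below = sum(1 for x in data if x < centerline)
print(f"Median centerline: {centerline:.2f} ({above} above, {below} below)")
```

By construction, roughly half the points fall on each side of the median, which is what makes the probability-based run rules work.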
Non-Random Rules for Run Charts
Source: The Data Guide by L. Provost and S. Murray, Austin, Texas, February 2007: p. 3-10.
• A shift: 6 or more consecutive points above or below the median
• A trend: 5 or more consecutive points all increasing or decreasing
• Too many or too few runs
• An astronomical data point
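The shift and trend rules can be sketched in Python, following the tie- and median-handling conventions described later in the session. The function names and structure are illustrative; the "too many or too few runs" rule needs a table of critical values and the astronomical point rule involves judgment, so both are omitted here.

```python
def detect_shift(points, median, length=6):
    """Shift: `length` or more consecutive points all above or all below
    the median. Points exactly on the median neither break nor add to
    the run, so they are skipped."""
    run, side = 0, 0
    for x in points:
        if x == median:
            continue  # skip points on the median
        s = 1 if x > median else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= length:
            return True
    return False


def detect_trend(points, length=5):
    """Trend: `length` or more consecutive points steadily increasing or
    decreasing. Ties neither cancel nor add, so tied points are skipped
    and the last distinct value is kept for comparison."""
    run, direction = 1, 0
    prev = points[0]
    for x in points[1:]:
        if x == prev:
            continue  # skip ties
        d = 1 if x > prev else -1
        run = run + 1 if d == direction else 2  # a new pair restarts at 2 points
        direction, prev = d, x
        if run >= length:
            return True
    return False
```

For example, six points in a row above the median trigger `detect_shift`, while a sawtooth pattern triggers neither rule.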
Elements of a Control (Shewhart) Chart

[Control chart: Number of Complaints by month, Jan 01 through Nov 02;
CL (mean) = 29.250, UCL = 44.855 (Upper Control Limit), LCL = 13.645
(Lower Control Limit). The band between the centerline and each limit is
divided into zones C, B and A; a point outside a control limit is an
indication of a special cause]
Rules for Detecting Special Causes on Shewhart Charts
1. A single point outside the control limits
2. Eight or more consecutive points above or below the centerline
3. Six consecutive points increasing (trend up) or decreasing (trend down)
4. Two out of three consecutive points near a control limit (outer one-third)
5. Fifteen consecutive points close to the centerline (inner one-third)
Notes on Special Cause Rules
Rule #1: 1 point outside the +/- 3 sigma limits
A point exactly on a control limit is not considered outside the limit. When there is
no lower or upper control limit, Rule 1 does not apply to the side missing the limit.
Rule #2: 8 consecutive points above (or below) the centerline
A point exactly on the centerline neither cancels nor counts towards a shift.
Rule #3: 6 or more consecutive points steadily increasing or decreasing
Ties between two consecutive points neither cancel nor add to a trend. When control
charts have varying limits due to varying numbers of measurements within subgroups,
Rule #3 should not be applied.
Rule #4: 2 out of 3 successive points in Zone A or beyond
When there is no lower or upper control limit, Rule 4 does not apply to the side
missing a limit.
Rule #5: 15 consecutive points in Zone C on either side of the centerline
This is known as “hugging the centerline.”
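Two of these rules can be sketched directly; the snippet below is illustrative only and assumes limits symmetric about the centerline (real Shewhart limits may be asymmetric or vary by subgroup, in which case the zone widths differ per point).

```python
def rule1_outside_limits(points, ucl, lcl):
    """Rule 1: a single point beyond a control limit. A point exactly
    on a limit is not considered outside it. Returns flagged indices."""
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]


def rule4_two_of_three_in_zone_a(points, cl, ucl, lcl):
    """Rule 4: two out of three successive points in Zone A or beyond
    (the outer one-third of the band) on the same side of the
    centerline. Assumes limits symmetric about the centerline."""
    sigma = (ucl - cl) / 3.0  # one-third of the band = one zone
    flags = []
    for i in range(2, len(points)):
        window = points[i - 2:i + 1]
        if sum(x > cl + 2 * sigma for x in window) >= 2:
            flags.append(i)
        elif sum(x < cl - 2 * sigma for x in window) >= 2:
            flags.append(i)
    return flags
```

With the complaints chart's values (CL = 29.25, UCL = 44.855), a point at 46 would trip Rule 1, and two of three points above roughly 39.7 would trip Rule 4.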
So Which Is More Useful…
a Run Chart or a Shewhart Chart?
What do you think?
Enter your thoughts into the
Webex Chat Area
So Which Is More Useful…
a Run Chart or a Shewhart Chart?
For many improvement projects, an annotated run chart is all that
is needed to learn whether the changes made have led to an
improvement in the measure of interest. If the changes have led
to a new level of system performance, the run chart will provide
the evidence required to claim an improvement.
In some cases, however, the run chart is not clear and a Shewhart
chart can be useful in clarifying if a meaningful change has
occurred. The limits on the Shewhart chart allow the detection of
a special cause with just one data point. While the rules on the
run chart are probability based, the limits and rules associated
with Shewhart charts are economically and empirically based.
Source: IHI Open School QI 103 Lesson 2 page 9
So Which Is More Useful…
a Run Chart or a Shewhart Chart?
Shewhart charts have the added feature of control limits, which allow us
to determine if the process is stable (common cause variation) or not
stable (special cause variation). That way we can tell if there’s some
unusual factor(s) making the data fluctuate in a non-random manner.
For this reason, Shewhart charts also allow us to more accurately predict
the future performance of the process.
Recommendations
• Start off using a line chart (just plot the dots!)
• Then move to a run chart when you have about 10-12 data points
• If the run chart does not tell a clear story, consider moving to a
Shewhart chart
• Remember you should have 15-20 data points to make a Shewhart
chart.
Source: IHI Open School QI 103 Lesson 2 page 9
So, why are Control Charts preferred
over Run Charts?
Because Control Charts…
1. Are more sensitive than run charts
   • A run chart cannot detect special causes that are due to
     point-to-point variation (median versus the mean)
   • Tests for detecting special causes can be used with control charts
2. Have the added feature of control limits and zones, which allow us
   to determine if the process is stable (common cause variation) or
   not stable (special cause variation)
3. Can be used to define process capability
4. Allow us to more accurately predict process behavior and future performance
Let’s hear from a team working
on ADEs
How has your Quality Measurement Journey
been going?
• Have you hit roadblocks? If so, what are they?
• Have you placed all your measures on run or control charts?
  If not, why is this not being done?
• Do you use common and special cause criteria to understand
  the variation in your data?
ILF Team Presentation #2
Josh Holland, PharmD.
Clinical Psychiatric Pharmacist
About Us
CoxHealth
• Four-campus system in Southwest MO: Springfield (Cox North, Cox South,
  Meyer Orthopedic and Rehab), Monett, and Branson
• Over 800 beds
Our Facility: Cox North
• Inpatient psychiatric: four units of approximately 20 beds each
  (pediatric, geriatric, and two adult units)
• Level II Trauma Center (ER)
• Majority of patients admitted through the ER
Project Title: Preventing Psychiatric Adverse Drug Events
Date: 8/31/13 | Hospital Name: CoxHealth | State: MO
Self-Assessment Score (1-5): 4

Why is this project important? ADEs are a common cause of hospital harm;
studies attribute 42-60 percent to ADEs. At CoxHealth we want to ensure
patient safety with accuracy.

Aim Statement: Decrease the medication reconciliation error rate from paper
to electronic to less than 5% by December 31, 2013.

Changes Being Tested, Implemented or Spread:
• S: Measure the error rate when transferring medication reconciliation from
  paper to electronic on admission.
• S: For medication education and engagement, started using medication
  consults as well as medication groups to try to increase patient knowledge
  and compliance. Use patient activation scores to guide the approach to
  patient counseling.
• T: New process to improve the accuracy of medication reconciliation. Call the
  patient’s pharmacy/ies for an accurate medication list and doses. New
  measurement run charts.

[Run charts shown: CoxHealth Admission Medication Reconciliation (adequate
accuracy on medication reconciliation); Cox Health Discharge Medication
Reconciliation; Accuracy of Medication on MR Form per Patient Pharmacy; and
Medication Transcription Errors from Paper to Electronic. Annotations note a
period of temporary staff and a decrease with no identified reason, to be
monitored for trend. The potential harm rate fell from 40% in Oct ’12 to 5%
in July ’13.]

Lessons Learned:
• Education and improvement of documentation decrease ADEs.
• Improving medication reconciliation helps to avoid ADEs in the hospital.
• Calling the patient’s pharmacy for the medication list is more accurate and
  takes less time.

Recommendations and Next Steps: Evaluate how the reduction occurred and
duplicate the reduction in the potential harm rate from medication
transcription errors from paper to electronic.

© 2012 Institute for Healthcare Improvement
Wordle: Measurement
Challenges and Concerns
Virtual Session #3 Pre-work (Sept. 25, 2013)
• IHI Open School QI 103, Lesson 3: Learning from Measures
• Benneyan J, Lloyd R, and Plsek P. “Statistical Process Control as a Tool for
Research and Health Care Improvement.” Quality and Safety in Health Care
(BMJ Publishing Group), 2003.
• IHI On Demand video, “Using Run and Control Charts to Understand Variation”:
http://www.ihi.org/offerings/VirtualPrograms/OnDemand/Run_ControlCharts/Pages/default.aspx
Session #3 Objectives (Sept. 25, 2013)
The focus in Track Two will be on:
• Understanding probability and non-probability sampling strategies, the
advantages and disadvantages of each, and when it is appropriate to use them
• Understanding how stratification and rational subgroups can lead to a
deeper understanding of the variation that lives in your data
• Construction and interpretation of Shewhart control charts, including
charts for rare events
• Applying these measurement principles to VTEs and ADEs
In summary, measurement is central
to a team’s ability to improve
• The purpose of measurement in QI work is for learning, not judgment!
• All measures have limitations, but the limitations do not negate their
value for learning.
• You need a balanced set of measures reported daily, weekly or monthly
to determine if the process has improved, stayed the same or become
worse.
• These measures should be linked to the team’s Aim.
• Measures should be used to guide improvement and test changes.
• Measures should be integrated into the team’s daily routine.
• Data should be plotted over time on annotated graphs.
• Focus on the Vital Few!
The Sequence of Improvement
requires Measurement
Theory and Prediction → Developing a change → Testing a change (test under a
variety of conditions) → Implementing a change (make part of routine
operations) → Sustaining improvements and spreading changes to other locations
[Each stage is driven by PDSA cycles: Plan, Do, Study, Act]
Thanks for joining us today.
Please join us again on
September 25
1:00 – 3:00 PM CT
Take care…
Bob, Dave, Rebecca, Kim and Steve
