Criticality Analysis & Risk Assessment Report
CARA Process Methodology
• Introduction
• Objectives
• Testing Intake Process
• CARA Scoring
• Risk Analysis and Testing Scope Report
• Challenges
Paul Shovlin
Checkpoint Technologies
Director of Professional Services
IV&V Functional Test Manager, Department of Veterans Affairs
• Understand the Testing Intake Assessment (TIA) process.
• Provide knowledge and understanding of Criticality Analysis and Risk Assessment (CARA) principles and practices.
• Understand the purpose and content of the Risk Analysis and Testing Scope Report (RATSR).
• Share our challenges.
• September 2010 mandate that all application release cycles complete from development to Initial Operating Capability (IOC) within 6 months, leaving little time, if any, for IV&V testing.
• 150+ major application project releases each calendar year.
• A new approach to IV&V was needed.
• CARA was being used successfully at NASA.
Process flow (roles: Project Manager, Testing Analyst):
1. Perform Intake Assessment
2. Submit Intake Assessment Form
3. Update Project Schedule
4. Perform CARA Analysis
5. Perform IDR
6. Create, Review & Deliver RATSR
7. Submit Testing Workload Forecast
Finally, update the Testing Master Workload Schedule.
CARA workflow (diagram summary):
• Inputs: Testing Intake Assessment Form, CONOPS, Requirements, System Design, Project Management Plan, Project Schedule
• Activities: Submit Testing Intake Assessment; Assign CARA Resources; Read, Review Docs & Formulate Questions; SMEs Answer Questions; Prepare CARA Worksheet; Perform CARA; Determine Services; Create & Review RATSR; Update Project Schedule; Update Testing Workload Forecast
• Outputs: CARA Worksheet, Updated CARA Worksheet, RATSR, Testing Workload Form
• Risk is the likelihood of failure.
• Criticality is the measure of the potential impact of failure.
• Risk-Based Testing is a type of software testing that prioritizes tests of features and functions based on their criticality and likelihood of failure. Risk-based testing determines which test activities will be completed for the iteration. It is a transparent testing methodology that tells the customer exactly which test activities are executed for every feature.
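The prioritization idea above can be sketched in a few lines. This is only an illustration of the concept, not the program's actual tooling; the feature names and ratings below are hypothetical.

```python
# Illustrative sketch of risk-based test prioritization (hypothetical data):
# each feature gets a criticality (impact) and a likelihood-of-failure rating,
# and test activities are ordered by their product, highest exposure first.

features = {
    # name: (criticality 1-4, likelihood 1-3) -- made-up example values
    "partner_authorization": (4, 2),
    "institution_numbering": (2, 1),
    "confidence_testing":    (3, 3),
}

def exposure(ratings):
    """Risk exposure = impact of failure x likelihood of failure."""
    criticality, likelihood = ratings
    return criticality * likelihood

# Highest-exposure features are tested first (and most thoroughly).
test_order = sorted(features, key=lambda name: exposure(features[name]),
                    reverse=True)
print(test_order)
```

Sorting by the product rather than by either factor alone is what makes the ordering transparent: the customer can see exactly why each feature received its test activities.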
• Criticality Analysis and Risk Assessment is a standardized risk assessment methodology.
• Features are consistent with PMI's Risk Management assessment philosophy (PMBOK® Guide).
• Results map onto an impact/probability matrix: probability rated Low, Moderate, or High; impact rated Low, Moderate, Critical, or Catastrophic.
• The uniqueness is in the process of determining the impact and probability.
• Standardized criteria for determining risk values.
• CARA does not tell us how to test.
For each Software Function, set IV&V Analysis Level (IAL) thresholds:

IAL                   CARA Score
Minimal:              1 ≤ CARA < 2
Limited (L):          2 ≤ CARA < 5
Focused (F):          5 ≤ CARA < 8
Comprehensive (C):    8 ≤ CARA ≤ 12

Criticality:
Categories: Performance & Operations; Safety; Cost/Schedule
Ratings: Catastrophic = 4, Critical = 3, Moderate = 2, Low = 1
→ Criticality Score

Risk:
Categories: Complexity; Technology Maturity; Requirements Definition & Stability; Testability; System Characterization
Ratings: High = 3, Moderate = 2, Low = 1
→ Risk Score

Criticality Score × Risk Score = CARA Score
• Criticality is broken down into 3 categories:
  - Performance & Operations
  - Safety
  - Cost of Failure/Impact to Schedule
• Requirements for each category are scored on a scale of 1-4:
  1 = Low, 2 = Moderate, 3 = Critical, 4 = Catastrophic
• Risk is broken down into 5 categories:
  - Complexity
  - Technology Maturity
  - Requirements Definition and Stability
  - Testability
  - System Characterization
• Requirements for each category are scored on a scale of 1-3:
  1 = Low, 2 = Moderate, 3 = High
• Scores for the Risk and Criticality categories are separately averaged to determine a weighted value.
• The weighted value scores are then multiplied to determine the IV&V Analysis Level (IAL) Threshold.
• IALs are broken down into 4 categories:
  - Minimal:           1 ≤ CARA < 2
  - Limited (L):       2 ≤ CARA < 5
  - Focused (F):       5 ≤ CARA < 8
  - Comprehensive (C): 8 ≤ CARA ≤ 12

IAL matrix (Impact × Probability):

Impact        | Low     | Moderate      | High
Catastrophic  | Limited | Comprehensive | Comprehensive
Critical      | Limited | Focused       | Comprehensive
Moderate      | Limited | Limited       | Focused
Low           | Minimal | Limited       | Limited
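The scoring arithmetic above is small enough to sketch in full. This is my illustration of the averaging-then-multiplying rule and the IAL thresholds, with made-up ratings; the real program captures this in the CARA worksheet, not in code.

```python
# Illustrative sketch (not the official CARA tool): compute a CARA score
# and map it to an IV&V Analysis Level using the thresholds above.

CRITICALITY_CATEGORIES = ("Performance & Operations", "Safety",
                          "Cost/Schedule")                     # rated 1-4
RISK_CATEGORIES = ("Complexity", "Technology Maturity",
                   "Requirements Definition & Stability",
                   "Testability", "System Characterization")   # rated 1-3

def cara_score(criticality_ratings, risk_ratings):
    """Average each area separately, then multiply the two weighted values."""
    crit = sum(criticality_ratings) / len(criticality_ratings)  # 1.0 .. 4.0
    risk = sum(risk_ratings) / len(risk_ratings)                # 1.0 .. 3.0
    return crit * risk                                          # 1.0 .. 12.0

def ial(score):
    """Map a CARA score to its IV&V Analysis Level."""
    if score < 2:
        return "Minimal"
    if score < 5:
        return "Limited"
    if score < 8:
        return "Focused"
    return "Comprehensive"

# Hypothetical requirement: Critical for safety, otherwise moderate ratings.
score = cara_score([2, 3, 2], [2, 1, 2, 2, 1])
print(round(score, 2), ial(score))  # 2.33 * 1.6 = 3.73 -> Limited
```

Note how averaging within each area keeps a single Catastrophic rating from dominating the score on its own; it takes both high criticality and high risk to reach the Comprehensive band.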
CRITICALITY CATEGORIES AND RATING CRITERIA

Performance and Operation - How would the failure of this requirement affect the Performance and Operation of the System/Application(s)?
• Catastrophic (Impact Value = 4): Failure could cause loss of use of system for extended time, loss of capability to perform all project requirements. Failure is not easily resolvable.
• Critical (Impact Value = 3): Failure could cause loss of critical function not resulting in loss of system/application use, lengthy maintenance downtime, or loss of multiple objectives. Failure is partially resolvable.
• Moderate (Impact Value = 2): Failure could cause loss of a single application/objective or reduction in operational capability. Failure is fully resolvable.
• Low (Impact Value = 1): Failure could cause inconvenience (e.g., rerun of programs, computer reset, manual intervention).

Safety - Can be patient safety or the safety of the system, for example: do no harm to VistA.
• Catastrophic (Impact Value = 4): Failure could result in loss of life or cause severe personal injury. Failure could result in severe harm to system (or data) integrity.
• Critical (Impact Value = 3): Failure could result in non-disabling personal injury, serious occupational illness, or loss of emergency procedures.
• Moderate (Impact Value = 2): Failure could result in minor physical or mental harm.
• Low (Impact Value = 1): No safety implications.

Cost of Failure/Impact to Schedule - If a defect was found in validating the requirement, how much time/cost would it take to fix it successfully? This includes developer, SQA, IV&V, docs, etc.
• Catastrophic (Impact Value = 4): Failure could result in cost overruns large enough to result in unachievable operational capability. Alternate means to implement function are not available.
• Critical (Impact Value = 3): Failure could result in large cost and schedule overruns. Alternate means to implement function are available but at reduced operational capability. Full operational capability delayed.
• Moderate (Impact Value = 2): Failure results in significant schedule delay.
• Low (Impact Value = 1): Failure results in minor impact to cost and schedule. Problems are easily corrected with insignificant impact to cost and schedule.
RISK CATEGORIES AND RATING CRITERIA

Complexity - Complexity of this requirement.
• High (Driver Value = 3): Highly complex control/logic operations; unique devices/complex interfaces.
• Moderate (Driver Value = 2): Moderately complex control/logic; may be device dependent.
• Low (Driver Value = 1): Simple control/logic; not device dependent.

Maturity of Technology - This scoring should be based on how good and stable the technology or product is within and outside of VA. If within the VA, how well has it proven to work with VA systems?
• High (Driver Value = 3): New/unproven algorithms, languages & support environments.
• Moderate (Driver Value = 2): Proven on other systems with a different application.
• Low (Driver Value = 1): Proven on other systems with the same application.

Requirements Definition & Stability - Is the requirement likely to change?
• High (Driver Value = 3): Rapidly changing, baselines not established; many organizations required to define requirements; much integration required.
• Moderate (Driver Value = 2): Potential for some changes; some integration required.
• Low (Driver Value = 1): Solid requirements with little potential for change; little to no integration required.

Testability
• High (Driver Value = 3): Difficult to test; requires much data analysis to determine acceptability of results.
• Moderate (Driver Value = 2): Requires some test data analysis to determine acceptability of results.
• Low (Driver Value = 1): Acceptability of test results easily determined.

System Characterization
• High (Driver Value = 3): Large number of systems; many components; transmitted messages contain highly sensitive data; large volume of messages transmitted.
• Moderate (Driver Value = 2): Medium number of systems; medium number of components; transmitted messages contain critical data; medium volume of messages transmitted.
• Low (Driver Value = 1): Few systems; few components; transmitted messages contain low-criticality data; small volume of messages transmitted.
• Domain architecture knowledge
• System Analysts with critical thinking skills
• System Integration and Performance Engineers
• Knowledge of Core Business components
• System Engineers, DBAs, System Architects
• Moderator
1) Evaluation scores must be slotted in a column. Scoring is not accepted if it is not a bullet within the column.
2) A maximum of 3 minutes of discussion per requirement.
3) In situations of disagreement where the variance is equal to 1, the final score is the more conservative score (e.g., if scores of "2" and "3" are in debate, the more conservative score of "3" is recorded).
4) When extreme scores exist (e.g., "1" and "4"), both scores must have proper slotting positioning (rule #1). The analyst with the higher score must explain the justification. Re-vote immediately after the explanation.
5) Group "like requirements" together. Grouping speeds up the analysis.
6) Do not repeat scoring callouts. If your value is called, state "Agreed." If not, call out "Object." The moderator will ask for your score and slotting, and your explanation for your slotting will follow. Others may agree and change their score. This technique is critical in remote sessions.
7) Silence is acceptance. It is better to "Agree" for consensus.
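Rules 3 and 4 above amount to a simple resolution procedure for a pair of disputed scores. The helper below is my own sketch of that procedure (the sessions themselves are verbal; no such tool is described in the source):

```python
# Hypothetical helper applying session rules 3 and 4 to two analysts'
# scores for one requirement. Not an official CARA tool.

def resolve_pair(a, b):
    """Resolve two analysts' scores per the scoring session rules.

    Rule 3: if the variance equals 1, record the more conservative
            (higher) score.
    Rule 4: if the scores are further apart ("extreme scoring"), the
            higher scorer must justify and the group re-votes; here we
            just signal that a re-vote is needed.
    """
    if a == b:
        return a, "consensus"
    if abs(a - b) == 1:
        return max(a, b), "conservative score recorded (rule 3)"
    return None, "justify and re-vote (rule 4)"

print(resolve_pair(2, 3))   # variance of 1 -> record the 3
print(resolve_pair(1, 4))   # extreme scoring -> re-vote
```

Taking the higher score as "more conservative" reflects the scales above, where higher numbers mean greater impact or risk, so disagreement never lowers the assessed level.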
• Requirements are extracted and logically grouped from the following required documents:
  - Requirements Specification Document (RSD)
  - Software/System Design Document (SDD)
• Other artifacts used for reference:
  - Testing Intake Assessment (TIA)
    o Identify developer/PM experience with the technology
  - Concept of Operations (ConOps)
    o Identify architecture
    o Overall concept of project
• Lists the requirements.
• Captures the scoring methodology.

Worksheet: TS - CARA Worksheet - NwHIN

Criticality & Risk Areas (one column per criticality category): Performance and Operation; Safety; Cost of Failure/Impact to Schedule. Each column is rated Catastrophic (Impact Value = 4), Critical (3), Moderate (2), or Low (1).

Worksheet shortcuts:
• CTRL+O: Puts "1" into blank columns for one or more rows.
• CTRL+H: Moves to the "Performance and Operation" column of the current row.
• CTRL+R: Copies a row if "Performance ..." has a value; if "Performance ..." is blank, pastes the copied row into one or more rows.

Sample requirements:
• RSD 2.8 - Support addition of New Exchange Partners
• BN 9.1.1 - Provide a new institution number for each new partner
• BN 9.3 - Provide the ability to authorize a new partner
• BN 9.3.1 - Provide confidence testing prior to authorization of a new partner
• Risk Analysis and Testing Scope Report contents:
  - Understanding of the project application
  - Key Observations and Findings re: TIA inputs
  - Risk Analysis Summary (see Table 1)
  - Testing Requirements and Duration per Service (see Table 2)

Table 1: Testing recommendations for requirements with notable risks
Columns: Req #, Requirement Description, Risk (sample entry: ABC Requirements)

Table 2: Risk-Based Testing Requirements
Feature/Function/Other Requirements | Service                   | Estimated Effort | Required Documentation
RSD, Sec III, Para 1.20             | Requirements Verification | 10 working days  | Updated SDD, Test Cases
• Documentation
  - Incomplete
  - Agile
  - Identifying increment scope
• Metrics - Is the process working?
• Following up
  - Obtaining Workload Forecast dates
  - Resource availability
• The CARA Analysis serves the following critical risk mitigation functions:
  - Helps identify potential patient safety issues.
  - Helps assure reliable system operation.
  - Assists in diagnosing problems with critical functionality.
  - Coding to requirements.
  - Quality assurance to the development process.
• Based on our success, the DoD/VA Interagency Program Office (IPO) has adopted CARA for all integrated Electronic Health Record (iEHR) projects.
If you have any questions, please contact me.
Paul Shovlin, Checkpoint Technologies
[email protected]
813-818-8324