Assessment Tomorrow
Robert Coe
@ProfCoe
Centre for Evaluation and Monitoring (CEM)
Durham University
Assessment Tomorrow Conference
Edinburgh, 22nd November 2012
Why are we here?
CEM aims to:
• Create the best assessments in the world
• Empower teachers with information for self-evaluation
• Promote evidence-based practices and policies, based on scientific evaluation
in order to help educators measurably improve educational outcomes.
CEM activity
• The largest educational research unit in a UK university
• 1.1 million assessments taken each year
• More than 50% of UK secondary schools use one or more CEM systems
• CEM systems used in over 50 countries
• Largest provider of computerised adaptive tests outside the US
Outline
• Assessment is the most powerful lever we have
• Quality matters
• Technology can make assessment:
  o Efficient
  o Diagnostic
  o Embedded
  o Fun
  o Valid
  o Standardised
  o Secure
  o Informative
Good Assessment
• Makes learning visible
• Makes us focus on learning
• Allows us to evaluate:
  o What students do and don’t know
  o Against appropriate norms
  o Effectiveness of teaching
• Allows us to diagnose:
  o Specific learning needs
EEF Toolkit
[Scatter chart: effect size (months gain) against cost per pupil (£0 to £1000), with regions labelled ‘Promising’, ‘May be worth it’ and ‘Not worth it’. Interventions plotted include feedback, meta-cognitive strategies, pre-school, peer tutoring, 1-1 tutoring, homework, summer schools, parental involvement, AfL, individualised learning, learning styles, sports, arts, performance pay, ability grouping, ICT, smaller classes, after-school programmes and teaching assistants.]
http://www.educationendowmentfoundation.org.uk/toolkit
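The toolkit’s vertical axis is a standardised effect size converted into months of progress. As a minimal sketch of where such a number starts, here is Cohen’s d with a pooled standard deviation, using made-up pupil scores; this illustrates the general idea of a standardised mean difference, not the EEF’s exact method:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) with a pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Illustrative (made-up) test scores for two groups of pupils
treatment = [52, 58, 61, 49, 66, 57, 63, 55]
control = [48, 50, 55, 47, 59, 51, 53, 46]
print(round(cohens_d(treatment, control), 2))
```

An effect size like this is what sits behind each point on the chart, before being translated into a months-of-gain figure.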
Definition of a grade
‘An inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite amount of material.’
Dressel (1983)
Would you let this test into your classroom?
• Does the test discriminate adequately between different levels of performance?
• How long does the test (or each element of it) take each student?
• How well do the test scores predict later performance?
• How clearly defined are the acceptable interpretations of test scores?
• Do repeated administrations of the test give consistent results?
• Do the responses have to be marked? How much time is needed for this?
• What does the test claim to measure?
• Do the test items look appropriate?
• How well do the test scores correlate with other measures of the same thing?
• How well does the measure correspond with measures of the same and related constructs, using the same and other methods of assessment?
• Do test scores reflect factors other than the intended construct (such as gender, social class, race/ethnicity)?
Computer Adaptive Testing
• Right answers → harder questions; wrong answers → easier questions
• Can give the same information in half the time
• More accurate at the extremes
• More pleasant testing experience
• Need access to computers
• Development costs higher
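The adapt-on-response loop (right answers → harder questions, wrong answers → easier questions) can be sketched as follows. This is a minimal illustration assuming items characterised only by a difficulty value and a simple shrinking-step ability update; it is not CEM’s actual algorithm, and the item bank and simulated student are made up:

```python
def run_cat(item_bank, answer, n_items=5):
    """Repeatedly pick the unseen item closest to the current ability
    estimate, then step the estimate up on a right answer and down on
    a wrong one, shrinking the step as the estimate settles."""
    ability, step = 0.0, 1.0
    remaining = list(item_bank)
    for _ in range(n_items):
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        if answer(item):       # right answer -> harder questions next
            ability += step
        else:                  # wrong answer -> easier questions next
            ability -= step
        step *= 0.7
    return ability

# Illustrative bank of item difficulties and a simulated student who
# answers correctly up to difficulty 1.0
bank = [-3, -2, -1, -0.5, 0, 0.5, 1, 2, 3]
student = lambda difficulty: difficulty <= 1.0
print(round(run_cat(bank, student), 2))
```

Because each item is chosen near the current estimate, every response is maximally informative, which is why an adaptive test can reach a given precision with roughly half the items of a fixed test and stays accurate for students at the extremes.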
PIPS Baseline: start of school

InCAS: diagnostic assessment through primary school

Computer Adaptive Baseline Test
In the future, technology allows:
• Teachers to author, share and evaluate test items
• ‘Home-made’ tests with standardised norms
• Adaptive presentation
• Automatic marking of complex responses
• Platforms for efficient and quality-controlled human judgement (marking)
• Cheat detection
• Sophisticated feedback to students and teachers
