Naresh Singh
Director-General, Partnerships with
Canadians Branch, CIDA.
Most development interventions are not only complicated; they are also complex.
Few are simple, such as digging wells in a community where the well is all you
have to account for. Most humanitarian assistance actions may be complicated
but are not complex (they become increasingly complex as we move along the
spectrum toward long-term reconstruction and development).
Building a bridge across a gorge is complicated, not complex; sending a
rocket is complicated, not complex; but bringing up a child is complex! Complex
adaptive systems (CAS) include social systems, economic systems, and ecological
systems. They are constantly evolving, changing, and adapting. Complicated (non-complex) systems include physical and service delivery systems.
Evaluation of change in complicated systems requires a narrow range of well-defined
questions and indicators capable of very precise quantitative measurement,
typically at the output level, delivered very often through a project with a linear
logic model. Development results are measured at the outcome level. The
relationship between inputs and activities and outcomes is not as tightly coupled
as that between inputs and activities and outputs in projects. No wonder the
limitations of “projects” in achieving development results are increasingly
acknowledged, and alternative approaches utilized, such as programs, sector-wide
approaches, budget support, etc. Development interventions and expected
results seek complex system changes that need to be evaluated as such.
Patton (2010) describes the following characteristics of CAS:
Nonlinearity: Sensitivity to initial conditions; small actions can stimulate large reactions,
thus the butterfly wings (Gleick, 1987) and black swans (Taleb, 2007) metaphors, in which
highly improbable, unpredictable, and unexpected events have huge impacts.
Emergence: Patterns emerge from self-organization among interacting agents. What
emerges is beyond, outside of, and oblivious to any notion of shared intentionality. Each
agent or element pursues its own path, but as paths intersect and the elements interact,
patterns of interaction emerge and the whole of the interactions becomes greater than
the separate parts.
Dynamical: Interactions within, between, and among subsystems and parts within systems
are volatile, turbulent, cascading rapidly and unpredictably.
Adaptive: Interacting elements and agents respond and adapt to each other, so that what
emerges and evolves is a function of ongoing adaptation among both interacting elements
and the responsive relationships interacting agents have with their environment.
Uncertainty: Under conditions of complexity, processes and outcomes are unpredictable,
uncontrollable, and unknowable in advance. Getting to Maybe (Westley et al., 2006) captures
the sense that interventions under conditions of complexity take place in a Maybe World.
Coevolutionary: As interacting and adaptive agents self-organize, ongoing connections
emerge that become coevolutionary as the agents evolve together (coevolve) within and as
part of the whole system, over time.
While small CSO project interventions and limited service-delivery
activities might be well defined and measured on the basis of simple linear logic
models, CSO program activities, or the collective actions of several CSOs (projects
or programs), are more likely to be better described as a complex adaptive system.
This is because we now have multiple actors and multiple interventions,
coupled in many ways, with large degrees of freedom. Many CSOs are social
innovators trying to bring about major social change: fighting poverty,
homelessness, community and family violence, HIV/AIDS, and chronic diseases, and
helping victims of natural disasters and wars.
According to Congers (2009) and Patton (2010), social entrepreneurs and
innovators have experienced evaluation methods that seem entirely unrelated to
the nature of their enterprise. “Identifying clear, specific, and measurable
outcomes at the very start of an innovative project, for example, may not only be
difficult, but counter-productive.” “Outcomes will emerge as we engage,” say
the social innovators.
“Not in my world,” respond the funders and the evaluators. Our goals have to be
established before you engage. And you need an explicit change model, a logic
model, to show how you will attain your goals.
One of the little understood but most powerful and disruptive tensions in
established aid agencies lies in the clash between the compliance side of
programs and the technical program side. The essential balance between these
two tensions in development programs (accountability and control versus good
development practice) has now been skewed to such a degree in the U.S. aid
system (and in the World Bank as well) that the imbalance threatens program
integrity. The counter-bureaucracy has become infected with a very bad case of
Obsessive Measurement Disorder (OMD), an intellectual dysfunction rooted in the
notion that counting everything in government programs (or in private industry and,
increasingly, some foundations) will produce better policy choices and improved
management.
A central principle of development theory is that those development programs
that are most precisely and easily measured are the least transformational, and
those programs that are most transformational are the least measurable.
(The above is taken from Natsios, 2010.)
So what needs to be done differently?
- See attachments 1 and 2 from Patton (2010)
These principles can be seen clearly at work when one examines a matrix of three types of
development work and the four qualitatively different types of results CSOs can
achieve in development. The types of development work are:
delivery of goods and services;
institution building and capacity development; and
policy dialogue and reform.
The four different types of results are:
additive (tangible outputs, where 1 + 1 = 2);
synergistic (for example through demonstration effects, where 1 + 1 is greater than 2);
transformative (for example through shifts in values, mobilisation, networking,
capacity building, etc., societies get transformed, where 1 + 1 is far greater than 2); and
harm (when interventions do more harm than good, e.g. eroding governments’ capacity to
deliver services by setting up parallel structures).
Government of Canada (2007) Evaluation
Policy: results, value for money, accountability,
compliance; all program spending to be
evaluated at least every 5 years; allows risk
taking for innovation, but with robust risk management.
Government–CSO partnership instruments:
grants, contributions, contracts, etc.
Tools used for program/project design: logic
models, RBM, PMF, as well as FRAU, FRET,
IMRT, PMRT, etc.
CSO concerns: administrative burden, long wait
times for decisions, transparency of process, room
for innovation
Government concerns: value for money, tangible
results, predictability and sustainability of
outcomes, dependency, sense of entitlement
Redesign of Partnerships with Canadians Branch
to a competitive calls process is ongoing:
simplification of applications, faster decision times,
objective selection criteria, etc. And now a new
approach to monitoring and evaluation…
Based on a culture of compliance and
accountability, evaluations tended to be
narrowly focused, requirement-oriented, and
operational in perspective.
Timeliness issues: results arrived too late to affect
projects. At best they influenced accountability and
funding decisions.
Limited sharing of results with partners or the
general public, and little learning in spite of best
intentions. Funded from O&M.
Funding from ODA agreed; this requires that
developing country partners be the primary beneficiaries
and is more focused on development results.
Greater attention to all dimensions of aid
effectiveness requires that we understand the big-picture
results, going beyond just project-by-project
results; hence the initiative on aggregation.
Brought together TBS, HRSDC, IDRC, CSO
partners, and various CIDA branches to see how to
move forward with developmental evaluations based
on a CAS-based theory of change.
Increased formative (mid-term) evaluations; give
money and direction to partners themselves, with
standards from us.
Theme and sector evaluations, e.g. water and
Country evaluations: all of CIDA, all donors, all
Whole-of-partner evaluations
Experiments with constituency feedback.
Pilots based on developmental evaluations.
In spite of best intentions and good TORs, evaluations
for accountability and learning tend to achieve a lot of
the former and very little of the latter. The data and mindset
requirements are very different.
Changes in rules, reporting guidelines, and contribution agreements.
Staff training and partner capacity-building workshops.
Maybe the time has come when design, monitoring,
and evaluation must speak to each other in a new and
different way for innovation and transformation to
flourish in international development.
