By Alexandra Ellinson and Jade Maloney
The word ‘theory’ is often bandied about by evaluators. But they’re not all talking about the same thing. And some authors on evaluation don’t even think theory is necessary for evaluation.
Like our Senior Manager and former evaluation lecturer, Brad Astbury, we think theory is useful and can be used in evaluation in multiple ways.
Donaldson and Lipsey, in The Handbook of Evaluation, identify three broad types of theory in evaluation: evaluation theory, social science theory, and program theory.
While it’s helpful to clarify these different uses of the term ‘theory’, at ARTD we tend to prefer the term ‘approaches’ when referring to descriptive accounts of what evaluation is or normative accounts of what it should be. This is because these ideas (e.g. participatory or utilisation-focused evaluation) do not provide causal explanations, which we take to be an essential feature of a theory. Rather, they posit assumptions about the ontology of evaluation (what it is) and its epistemology (how we can be confident that evaluative claims are accurate), and/or state principles for good evaluation practice. Even theory-based approaches to evaluation are not themselves evaluation theories; they are a way of doing evaluation, one that commences with a set of (hopefully evidence-based) assumptions about the nature of the thing being evaluated and how the intervention is expected to cause outcomes.
Terminology aside, it is very useful to articulate one’s approach to evaluation, not only to ensure that it is consistent and coherent, but also to build a shared understanding about the approach with a client or stakeholders (especially if their close engagement in the evaluation is required). It is also part of fostering a community of professional practice.
We also think that evaluations can and should draw more on social science theories. While it is common for evaluations to involve a review of literature related to the content area of the program, existing knowledge in the field is not always applied systematically to the evaluation.
Increasingly, we are looking at how social science theory can feed into program theory. We examine the evidence about programs that are built on similar bodies of psychological or social research. This helps us assess the program’s evidence base, identify adjustments, and focus our evaluative effort on the gaps in that evidence base.
We also consider ‘negative program theory’ to identify how an intervention could inadvertently produce the opposite or a negative outcome. And we draw on the research to manage expectations about the timeframe in which outcomes may be observed.
So what does this look like in practice?
In an evaluation of a peer support program for students with disability and mental health issues, we drew on the evidence base about how peer support models work to empower participants. We also considered how the program could result in social exclusion and increased anxiety rather than social inclusion, and identified how this risk would be managed in the program design.
In an evaluation of an intervention designed to reduce antisocial behaviour, we drew on criminological literature about deterrence effects as well as emerging psychological evidence about what makes people more responsive to regulation. By doing this, we could explain the pattern of outcomes and make recommendations about how to better target the policy to people for whom deterrence is most likely to be effective, while minimising potentially negative unintended consequences.
In an evaluation of an intervention to support victims of crime, we combined social science and realist theory. The diagram at the top of the page provides a simplified overview.
We are keen to hear from other evaluators about how they use ‘theory’, and we were reassured by the recent turnout to Brad Astbury’s AES session on theories of evaluation.