Making realist evaluation work

November 2017

By Andrew Hawkins, Sue Leahy, Fiona Christian, Melanie Darvodelsky

Do you want to know what happened or what works? What works on average or what works, for whom, under what circumstances, and how?

Realist evaluation moves beyond simplistic notions of programs as either working or not. It takes a more scientific approach – aiming to understand the mechanisms within programs that have causal power, and the situations in which these are active to generate (demi-)regular outcomes.

ARTD has a commitment to realist evaluation, recognising that it can help government agencies and non-government organisations to better design and target interventions and identify the mechanisms that are crucial in any further rollout of a program. This is why we sponsored and attended the 2017 International Conference for Realist Research, Evaluation and Synthesis held in Brisbane last week.

There were keynote addresses from the pioneers of realist evaluation – Nick Tilley and Ray Pawson – as well as presentations and workshops from Australian realist thinkers, including Gill Westhorp and our own Director Andrew Hawkins. As conference attendees ranged from experts to novices, there was a strong focus on how a realist approach could be applied in the real world.

So, what were the key take-outs?

  • See a program as a theory and use program theory rather than programs as the unit of analysis in evaluation. Understand that theory testing is about theory adjudication – that is, deciding between plausible explanations. Operate at the level of big ideas. Better explore program and policy history. Avoid the formulaic and focus on intellectual craft (Ray Pawson). 
  • The purpose of evaluation is to be useful – ensure that evaluation can play a role in decision-making and improve your expertise for that purpose (Nick Tilley).
  • You don’t need to produce a realist evaluation report that is full of jargon to make use of realist principles to better explain, maximise and replicate program outcomes. You just need to think more deeply about how change occurs in social settings (Andrew Hawkins).
  • Don’t get lost in finding the context for a realist evaluation – you only need to understand what context interacts with the mechanisms to produce outcomes (Leslie Johnson).
  • One diagram often won’t be enough to represent context-mechanism-outcome configurations (Patricia Rogers).
  • NVivo can be used for more than thematic analysis. It can be used for realist analysis by coding whole theories (Sonia Dalkin, Northumbria University).
  • It’s important that procurement processes enable adaptive, emergent evaluation (Penny Hawkins).
  • RBA and realism can be combined, as long as data collection is set up to capture the purpose of both approaches, and evaluators can be flexible in their work (Patrick Maher and Bronny Walsh).
  • Realist evaluation can align with best practice in working with Indigenous people and communities.

As realist theory can be heavy going and the jargon difficult for the uninitiated, we see a key challenge for realists: translating the concepts into concrete terms and convincing evaluation funders that realism can help them in practical ways. After seeing the ‘lightbulb’ moments among conference participants, we’re confident that this can happen. Let’s continue the conversation between now and the next conference.