By Gerard Atkinson and Melanie Darvodelsky
We all like to share success stories, but the fact is that we can learn just as much – and sometimes more – from talking about when things don’t go as planned.
This was the subject of the Australasian Evaluation Society NSW event on Wednesday May 30, “Learning from evaluation failures”. The event was run by two experienced evaluators who each shared a previous case of evaluation “failure”, where the client had difficulty accepting the findings of the evaluation.
Case 1: The evaluator found out that the number of participants who transitioned from institutional care into a support program was zero. When the evaluator presented this finding at the final meeting, the client questioned its accuracy.
Case 2: The evaluator worked with the client from the beginning to identify and agree on which data would be used to measure outcomes, but as the project progressed, the client seemed to value their internal data over external sources. At the close of the project, the evaluator pointed to external data to say that the program objectives were not met. However, the client disagreed and used their internal data to hold to their view.
Evaluators at the session formed small groups to discuss “What could the evaluator have done to prevent or minimise this negative result?”
Gaining acceptance of, and action on, negative findings is tough. This is unsurprising given the evidence that people tend to accept information that confirms their views and reject information that challenges them.
The key issue identified in both cases was the need to bring people along on the evaluation journey. In the first example, the evaluator appeared to operate alone, which may have exacerbated the negative reaction at the close of the project. In the second, the evaluator and client did not stay on the same journey despite their initial agreement. Building and maintaining partnership with stakeholders is an effective way to prepare them for negative findings and ease their acceptance of them, and it increases their sense of ownership of the project and of the next steps needed to create change.
Evaluators identified a range of practical ways to work in partnership with stakeholders that may have led to more positive project outcomes.
These strategies fit with the findings of ARTD Partner Jade Maloney’s research on evaluation use. However, Maloney’s research also found that these strategies can fail when working with organisations that lack a learning culture, or when findings are politically unpalatable.
The strategies also align with Michael Quinn Patton’s Utilisation-Focused Evaluation. Patton’s approach provides a framework for evaluators to maximise the intended use of evaluations by their intended users, even where the results do not match what program staff or management expected.
The candour of these evaluators in telling their stories, and in inviting others to consider how we can collectively achieve greater use of evaluations, is a positive contribution to evaluation practice. It builds on growing conversations in the field, such as those at the AES 2017 conference in Canberra and in Kylie Hutchinson’s recent book “Evaluation Failures”.
We’re keen to continue the conversation – this year’s AES Conference will be a great opportunity.
Hutchinson, K. (2018). “Evaluation Failures”.
Patton, M. Q. (2008). “Utilization-Focused Evaluation” (4th ed.).
Patton, M. Q. (2013). “Utilization-Focused Evaluation (U-FE) Checklist”.
Ramirez, R., &amp; Brodhead, D. (2013). “Utilisation Focused Evaluation: A Primer for Evaluators”.