Evaluation for the public good or as a public good?

November 2017

By Consultant Ken Fullerton and Director Jade Maloney

We’ve been thinking a lot about how evaluation can support the public good since Sandra Mathison kick-started the Australasian Evaluation Society (AES) International Evaluation Conference in Canberra by telling us that evaluation is not delivering on this promise because it is constrained by dominant ideologies, is a service for those with resources, and works in closed systems that tend to maintain the status quo. The Presidential Strand of the American Evaluation Association Conference helped us to identify some ways that evaluation can support public good.

If evaluation is built in upfront and asks the right questions (not only how is this program working, but how does it compare to alternatives?), it has the potential to support the public good. It can be used to identify improvements to a program’s structure and implementation that support better outcomes, and to inform decision-making about whether a program should be expanded to benefit new communities or discontinued, so that resources can be reallocated to public programs achieving greater impact.

For example, an evaluation of an air pollution reduction initiative might inform adjustments in program delivery that result in better outcomes from which everyone stands to benefit. According to the World Health Organization (WHO) “Outdoor air pollution is a major environmental health problem affecting everyone in developed and developing countries alike” and reductions in air pollution “can reduce the burden of disease from stroke, heart disease, lung cancer, and both chronic and acute respiratory diseases, including asthma.” This could, in turn, have other positive flow-on effects, such as reallocation of expenditure savings to other beneficial programs.

However, evaluation can only support the public good if it is useful and used. A recent study by Maloney, entitled Evaluation: what’s the use? (Evaluation Journal of Australia, in press), indicates that AES members perceive non-use of evaluations as a significant problem in the region. This finding is consistent with the broader literature from North America and Europe, which suggests that many evaluation reports are sitting on shelves gathering dust instead of being used for (public) good.

Then there’s the question of whether evaluation can be considered a public good in and of itself. (The AES Conference debate on whether we should think of evaluation in terms of capital didn’t settle this for us, as amusing as the comparisons between evaluations and washing machines were).

To get technical, a public good is one that is both non-excludable and non-rivalrous. This means that no individual can be excluded from using that good and use by one individual does not reduce the availability of the good to others. Fresh air and street lighting are common examples of public goods.

If an evaluation report identifies broad learnings about supporting a particular target group or addressing a certain policy problem, it can be used by multiple organisations. And one organisation using an evaluation report does not prevent another from also drawing on its insights to inform their work.

The hitch comes on the ‘non-excludable’ criterion. Commissioning organisations often don’t publicly release evaluation reports, which limits the capacity of other organisations to benefit from the insights gained into what works and how, and thus the potential of evaluation to be a ‘public good’. Evaluators interviewed by Maloney identified the lack of sharing of evaluation findings as a barrier to the broader use of evaluation.

In recent years, across Australia, there has been a trend among government agencies to release more evaluation reports to the public. This increased transparency may enable evaluation to be a public good, as it means researchers can access a fuller range of evidence about program models in action and, in the case of realist evaluation, learn more about what works for whom, in what circumstances and how.

On the flipside, as identified by some evaluators in Maloney's research, there's a need to ensure that the push to publication doesn't dampen willingness to have open discussions about things that have not worked as intended, because this would limit the capacity of evaluation to support improvement for good. And, when reports are not published, government agencies and evaluators could consider what learnings can be shared through conferences and online discussions. In this way, we can all support evaluation to live up to its potential.