By Jade Maloney
For evaluation to live up to its potential to improve outcomes and ensure effective targeting of limited resources, evaluation communications must cut through.
The first step is to know your audience or audiences (as you will often have more than one). Knowing your audience will inform the engagement channels you use, how you frame your message and the language you use, among other things.
There’s a large base of literature on the demand- and supply-side factors that affect evaluation use – that is, the decision setting and user characteristics on the one hand, and evaluators’ approach on the other. (For more, see my research on evaluation use in the Australian context.) The demand-side factors affecting evaluation use include the characteristics of evaluation users, their commitment or receptiveness to evaluation, and the broad characteristics of the context in which the evaluation is being conducted – particularly the political climate, social climate, language and culture, and values and interests – as well as the interactions between these.
While you can’t do much about the political climate, you can get to know your evaluation audiences. This will help you to deliver content that is of value to them, in ways that they can take on board. It can also help you to overcome a lack of commitment to, or fear of, evaluation.
Think like a market researcher. For each of your audiences, ask yourself these questions:
The questions above will give you a working profile of each of your audiences. You might like to keep track of this information in a table or matrix.
By now you may be asking yourself, ‘How could I possibly meet the needs of all of these audiences in one report?’ The answer might be that you can’t. You might need different communications tools for different audiences, such as video summaries, findings brochures, as well as slide decks and full reports.
But, in some cases, you will be able to meet the needs of various audiences by layering information in a single communication. At its most basic, this means starting with a one-pager of key findings, followed by an executive summary and a full report, with technical detail relegated to appendices. At the next level, it means using different modes of communication, recognising that the stats tables will reach some people, while the stories will reach others. You can also layer face-to-face findings discussions by providing visuals and written documents to complement your auditory communication, and using activities to enable people to engage with the implications. Stay tuned for more on structuring reports and telling evaluation stories.
The next article in our Communication for Evaluation series will cover evaluation as process rather than product.
Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364; Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377–410; Vo, A. T., & Christie, C. A. (2015). Advancing research on evaluation through the study of context. In P. R. Brandon (Ed.), Research on Evaluation. New Directions for Evaluation, 148, 43–55.