By Jade Maloney
You might have heard that we are living in a VUCA world—volatile, uncertain, complex and ambiguous.
While social policy interventions and evaluation ‘grew up in the projects’, as evaluation theorist Michael Quinn Patton has noted, there is increasing recognition of the need to think in systems. There is also a need for innovation and closer collaboration between design and evaluation (evidenced in last week’s Design and Evaluation Converge event, hosted by the Australian Evaluation Society and Clear Horizon in Melbourne).
Early Saturday morning, when we’d normally be hitting the snooze button before grabbing breakfast, we headed to Social Design Sydney’s 'How might we...support systemic social change?' workshop to consider what change makers have learned to date and what’s needed to strengthen our impact.
Dr Andrea Siodmok from the UK Policy Lab highlighted the problem with the metaphor of ‘turning a ship’ for creating change. We – or maybe it’s just me, with my lack of technical understanding of ships, born of my propensity for sea sickness – think this means change requires a major exertion of energy. But in reality, it is the trim tab on the rudder that creates a break in the water and enables the ship to turn. The equivalent question in the world of design and evaluation is: how might we identify smaller interventions that enable greater change in systems?
Another possibility Dr Siodmok identified is using change to create change: identifying liminal moments, when the system is in a state of transition, to introduce change. This brings to mind the literature on policy windows. Timing is clearly important, as all change is about people and involves politics.
Both she and her colleague, Sanjan Sabherwal, identified the need to work with policy teams as supporters. They also identified the value of bringing people with diverse perspectives into a room to create a more holistic and shared understanding of systems, break silos, and agree on actions. This requires intentionality in designing the conversation—for example, not having people sit around a boardroom table when making connections and identifying possibilities. This is something we have also found valuable in our work supporting the design and evaluation of initiatives to increase inclusion and reduce stigma and discrimination.
The “lightning round” of presentations from local change makers built on and extended these insights.
With these provocations, we identified four topics for open space conversations. Because I’ve noticed the tension between calls for innovative and evidence-based initiatives—with some grants asking applicants to demonstrate both—I joined the group discussing how we might invest more courageously in early-stage innovation. Our group identified a need to:
There were different views about whether government should require all funded initiatives to report on learnings now. From my experience in evaluation, and my reading of the evaluation use literature, evaluation anxiety is a common issue. Individual and organisational receptiveness are key to enabling evaluation use. If we don’t work on shifting the culture as well as the structures around sharing learnings, we could get less than full disclosure in public reporting or, worse, shrinking terms of reference for evaluations. Of course, the relationships between culture, structures and systems are complex and intertwined. But given the evidence that “failure” can be a step to success in innovation, we need to shift the way we conceptualise failure in this context.
I am looking forward to hearing more from these change makers as they shift systems. If you think the link between design and evaluation is key to shaping the future of evaluation, come along to our open space session at #aes19SYD in September.