News & Blog

Strengthening evaluation through a community of practice


December 2018

By Jade Maloney

Evaluation can be a tough gig. While everyone talks about evidence-based policy, no-one really wants to hear that the policy or program they designed or have been working hard to implement is not delivering what they’d hoped.

Kicking off the one-year anniversary of the Department of Finance, Services and Innovation’s (DFSI) Evaluation Community of Practice, Secretary Martin Hoffman recognised this challenge. To overcome it, he emphasised the need to build evaluation into organisational culture and systems – so it’s part of how agencies do business, not an add-on.

As a member of the NSW Australasian Evaluation Society Regional Committee and Co-convener of next year’s International Evaluation Conference in Sydney, I’m a big believer in the potential of communities of practice to strengthen the culture of evaluation – through sharing stories and building skills. This is critical given the history of evaluation reports gathering dust on shelves.

At the DFSI event on 6 December, participants worked on the rapid development of a program logic for a Service NSW initiative, and heard about how a recent evaluation had ended a policy that wasn’t achieving its objectives.

I presented with Emma Bedwin from NSW Fair Trading on the learnings from our evaluation of their Consumer Awareness Protection Initiative – a consumer engagement program with people with disability. Here’s what we found.

  1. A program can be more than a program – it can change systems – and logic models can capture this. Traditional logic models recognise external factors that can detract from or enhance a program’s outcomes. But if you think big, you can incorporate ways to transform systems within your program model.
  2. If you’re intentional, you can leverage a one-off evaluation to build monitoring and evaluation into systems. When Fair Trading asked us to evaluate the program, they asked for a transferable monitoring and evaluation framework that they could apply to other initiatives. This meant that when the evaluation ended, they had more than a report.
  3. External evaluators can complement internal capacity. With a focus on capacity building and refining data systems, core data collection can be done in-house, meaning external evaluators can focus on the data collection and analysis that requires specific skills or is better done by someone with some distance from the program.
  4. When you’re starting out, it helps to build on existing systems. They may not perfectly suit your purpose, but it can be easier to tweak something that is already there than to start everything from scratch.
  5. Pay attention to the situation analysis. We were informed by Michael Quinn Patton’s Utilisation Focused Evaluation – engaging end users in formulating the evaluation questions and design, and building methods into the program. But it takes time to ensure staff are comfortable with new outcomes data collection.
  6. Qualitative data can be powerful, but you need to persuade people of its potential. In the program, Fair Trading staff collected consumer stories through two-way conversations, which helped to identify issues to be addressed in the system. In the evaluation, we paired observations of engagement sessions with interviews across all key stakeholder groups, and triangulated findings across these sources to identify what worked well and what could be improved.
  7. Peers can strengthen evaluation. In our evaluation of the next iteration of Fair Trading engagement with people with disability, we are working with peer researchers – who can tell us what kinds of approaches and questions will work to seek feedback from program participants.

Not everything went according to plan – evaluations hardly ever do – but we learned a lot, produced information that has been used, and left behind a framework that is being transferred to other initiatives.

The DFSI Evaluation Community of Practice was a great opportunity to share these learnings and we’re looking forward to swapping more stories in future.


Growing the evidence base through evaluation


December 2018

By Georgia Marett

At a recent Australian Research Alliance for Children and Youth (ARACY) event there was much discussion about growing the evidence base for a sector that encompasses many diverse and complex programs. Government departments and other organisations are increasingly attempting to consolidate evidence for public policies and programs to make it easier to understand their effectiveness and compare them to other options.

Evaluation is key to designing and delivering evidence-based policy and programs. As Brigid van Wanrooy of the Department of Health and Human Services (DHHS) mentioned in her address, in the absence of quality evaluation, it is difficult to determine whether an intervention was well designed, implemented successfully and achieved its intended impact.

Evaluating Programs, the OPEN Way

The push to evaluate often brings up two questions for the program being evaluated. First, how should an evaluation be conducted (the approach and method of collecting and analysing data)? Second, what will be done with the results (whether and where they will be distributed)?

DHHS is aiming to consolidate and distribute evidence about programs with their Outcomes, Practice and Evidence Network (OPEN). This network will promote research and data collection as well as build a list of programs that have had positive outcomes. The evidence for each program included in the network will be rated from low to high according to a hierarchy of evidence that places Randomised Controlled Trials (RCTs) or a synthesis of multiple RCTs at the top of the hierarchy, followed by quasi-experimental designs, cohort studies, case-control studies, cross-sectional surveys and case reports.

This approach is similar to the What Works Network, which was launched in the UK in 2013 as a national approach to prioritising the use of evidence in policy decision-making. What Works now has ten ‘centres’ focused on different policy areas that together account for over £200 billion in public spending. What Works also runs an advice panel that provides free assistance to help civil servants test their policies and programs. In the last five years, the centres have produced more than 280 evidence reviews and run over 100 large-scale RCTs, which are helping to transform public services in the UK.[1] OPEN aims to replicate this model for the area of child, youth and family policy in Australia.

While experimental evaluations are certainly an important part of the evaluation toolkit, there is more to evaluation and evidence than RCTs. As Gary Banks of the Melbourne Institute of Applied Economic and Social Research recently pointed out, RCTs are not the only or ‘best’ way of measuring the effectiveness of a public policy or program.[2] At different stages of program development and in different contexts, different methodological approaches will be more appropriate.

RCTs should not necessarily be held up as the ‘gold standard’; rather, as Patton argues, methodological appropriateness (having the right design for the nature and type of intervention, existing knowledge, available resources, the intended uses of the results, and other relevant factors) is critical if evaluation is to maximise its potential for addressing questions about what works, for whom, where, when, how and why.[3]

How can evaluations help to build the evidence base?

Evaluations perform an important function in supporting the development of evidence-based policies and programs and can be used alongside other forms of research. As discussed at the ARACY event, building monitoring and evaluation into a project or program from the outset is paramount. An intuition that a program works will not suffice to demonstrate its effectiveness or prove its worth to funders.

When engaging an evaluator, careful planning is needed to ensure the methodology is tailored, the right questions are asked, data collection procedures are ethical and feasible, and the program can make best use of evaluation learnings.

Evaluators can contribute to the broader evidence base about what works through incorporating the following principles into their evaluations.

  1. Scope—understand the current state of the evidence and where there are gaps that require further research.
  2. Plan—determine ways in which the evaluation could contribute to the evidence base.
  3. Translate—translate findings into actions to help programs improve and grow.
  4. Disseminate—in agreement with program managers, devise a strategy to distribute the information gathered and the insights uncovered in the planning stage. It is important for evaluators to feed into the evidence base by publishing their results and presenting at conferences where possible, as this supports the accumulation of knowledge about effective social interventions.

Above all, the evaluation approach and method must be tailored to the nature and stage of program development and its context to ensure that it is fit-for-purpose.


[1] The What Works Network: Five Years On (2018) https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/677478/6.4154_What_works_report_Final.pdf

[2] Whatever happened to ‘evidence-based policymaking’? by Gary Banks (30/11/2018) https://www.themandarin.com.au/102083-whatever-happened-to-evidence-based-policymaking/

[3] Fools' gold: the widely touted methodological "gold standard" is neither golden nor a standard by Michael Quinn Patton (2014) https://www.betterevaluation.org/en/blog/fools_gold_widely_touted_methodological_gold_standard


Top tips for evaluating community development


November 2018

By Ruby Leahy Gatfield and Jade Maloney

Monitoring and evaluation support effective community development. Being ‘evaluation-minded’ will help you to assess your progress, understand whether you’ve achieved what you set out to achieve, and continue to learn and improve.

However, from our experience working with community organisations, we know that the terminology and technicalities of monitoring and evaluation can feel overwhelming. Community projects usually operate on small budgets with limited time and resources to conduct large-scale data collection (see our blog on the common challenges NGOs face when it comes to evaluation).

To help community organisations find a way into monitoring and evaluation that works for them, we included tips in the Ability Links NSW Community Development Resource Package for developing a monitoring system, collecting data, designing questionnaires and conducting interviews, and approaching evaluation.

Top tips

The Package includes these seven principles to help guide your thinking about monitoring and evaluation at each stage of the community development process.

  1. Build in monitoring and evaluation from the outset. Evaluation is not something that is done only at the end of a project, looking back. It can be more useful if done during implementation, so it can inform ongoing improvement. You also need to build in data collection from the start to assess outcomes at the end.
  2. Get the scale right. Scale your data collection to the scale of your initiative. For smaller projects, develop a simple monitoring system and only collect data that will help you understand how much was done, how well it was done, and if anyone is better off as a result. For larger scale projects, we suggest developing a more comprehensive monitoring system, including a program logic and outcomes matrix, which provide a framework for evaluation.
  3. Keep it simple and stage it. If you are starting a new monitoring and evaluation system, start small. Introduce new elements in stages, don’t try to do it all at once.
  4. Pilot before launch. Test data collection with partners and participants so you can ensure it is feasible and meaningful.
  5. Collect what is useful. Collect the information that is useful to inform delivery of your project. If you are seeking additional funding, you will also need to think about the data that will help you make a case for your project.
  6. Keep an eye on the benefits and the burden. Capture feedback from participants but minimise the data collection burden. Remember your participants are likely asked to complete surveys from multiple organisations.
  7. Manage the process ethically. This includes gaining informed consent and managing data securely.

For more guidance on how to start, manage and sustain a community development project to support inclusion of people with disability, including how to work effectively with specific population groups and organisation types and how to monitor and evaluate community development, download the Package here. You can also read our previous blogs on the Package.


How do you build trust in evidence in an era of public scepticism?


November 2018

By Stephanie Quail

The worrying decline in trust in public institutions – in Australia and across other western nations – has been a topic of much recent discussion. At a time when evidence and expertise are treated with increasing public scepticism, what, then, is the role of evidence-based insights?

The 2018 AMSRS Social and Government Data, Evidence, Insights & Research Conference, held in Canberra on 1 November, saw a range of speakers from government, academic and business backgrounds come together to discuss this issue. It asked, how can government and industry leverage data and evidence to design and deliver more insightful and effective policies and programs? In this context, Dr Rebecca Huntley used her keynote address to call on those who provide evidence-based insights to public institutions, such as ARTD, to consider whether the way we do our work increases or decreases public trust in institutions.

Harry Greenwell from BETA – the Behavioural Economics Team of the Australian Government, within the Department of the Prime Minister and Cabinet – continued this theme by discussing the importance of transparency and rigour in the evaluation of experimental interventions. Having a background in experimental psychology and academic research, I was keen to hear these debates and how they apply to the world of evaluation. Greenwell highlighted the problem that, even with the same dataset, different analytical approaches can reveal sometimes strikingly different conclusions.[1] So, how do we ensure that we are being rigorous in our analytic techniques, and that our clients and the public can have faith in the insights extracted from data?

To improve the quality of insights and evaluation, Greenwell suggested following the principles of Open Science. These aim to make the process of data collection and analysis more transparent through:

  1. pre-registering the design and method for data collection
  2. submitting pre-analysis plans to clearly identify the questions of interest and analytical approach before researchers see the dataset
  3. making materials, data, and code freely and publicly available, wherever possible.

These suggestions were drawn from the Open Science movement. I was first introduced to Open Science as a PhD student in Psychology, where a number of well-publicised failures to replicate high-profile studies[2] left researchers wary of spurious results being published – due to either well-intentioned but statistically misguided analyses or, more nefariously, ‘p-hacking’, where datasets and analytical approaches are manipulated to ensure a result reaches the threshold of statistical significance expected for publication[3]. Open Science advocates acknowledge that reasonable and defensible methodological choices about analytical approaches can lead to significant variation in results, and that greater clarity around analysis can address these issues.
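To make the mechanics concrete, here is a minimal simulation of the problem – our own illustration, not something from Greenwell’s talk or the cited studies, with invented trial parameters. Even when a ‘treatment’ does nothing at all, testing enough outcome measures will usually turn up at least one nominally significant result, which is exactly what pre-registration and pre-analysis plans are designed to prevent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)

# Hypothetical trial: 200 participants randomly assigned to a 'treatment' that
# has no effect, each measured on 20 unrelated outcomes drawn from the same
# null distribution. Any 'significant' difference is pure chance.
n_participants, n_outcomes = 200, 20
treated = rng.integers(0, 2, size=n_participants).astype(bool)
outcomes = rng.normal(size=(n_participants, n_outcomes))

# The p-hacking move: run a separate t-test on every outcome and report
# whichever happens to cross the significance threshold.
p_values = np.array([
    stats.ttest_ind(outcomes[treated, i], outcomes[~treated, i]).pvalue
    for i in range(n_outcomes)
])

print(f"Smallest p-value across {n_outcomes} outcomes: {p_values.min():.3f}")
print(f"Outcomes 'significant' at p < .05 by chance alone: {(p_values < 0.05).sum()}")
# With 20 independent tests at the .05 threshold, the chance of at least one
# false positive is roughly 1 - 0.95**20, or about 64 per cent -- which is why
# fixing the outcomes and tests before seeing the data matters.
```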

Greenwell explained that BETA tries to commit to the Open Science principles by pre-registering and publishing pre-analysis plans for the trials they run; however, they are not always able to make their data publicly available.

Increasingly, I’ve noticed the principles of Open Science and transparent data analysis being adopted in academic research, and I’m interested to see if and how evaluators take them up. Although an Open Science approach is not appropriate for all evaluations, considering these principles when working with experimental or quasi-experimental data could be an exciting way to help evaluators be confident in extracting valid and reliable insights from data.


[1] See Silberzahn et al. (2018). Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results – used by Greenwell to illustrate the impact of analytical choices and methods on experimental conclusions.

[2] Amy Cuddy’s research on ‘power poses’ became well known after a viral TED talk. This NYTimes article reports on the impact of the well-publicised failures to replicate her original findings https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html

[3] See https://projects.fivethirtyeight.com/p-hacking/ for a demonstration of how to ‘p-hack’ a dataset


Using developmental evaluation to support inclusion


November 2018

By Ruby Leahy Gatfield and Jade Maloney

Increasingly, we’re working with non-government organisations to develop and refine initiatives that support people with disability and increase community inclusion.

At the start, it isn’t always clear what these initiatives will look like, how they’ll be delivered, or even what outcomes they’ll aim to achieve. As with most community development, they take a co-design approach to develop activities and outcomes iteratively, in response to the needs and interests of those involved. 

In this context, a traditional approach to evaluation—in which criteria for success, a logic or a theory of change are defined at the outset and a judgement against pre-determined criteria is made at the end—isn’t appropriate. So how can evaluation help? Enter developmental evaluation.

What is developmental evaluation?

Developmental evaluation, established by Michael Quinn Patton, can support the development of social innovations in changing and complex contexts – where there are interdependencies and no central control. Evaluators work with program teams to review data in real-time and reflect on what it means for them.

We’ve found the following questions support the ‘reflection-development’ loop:

  • What? What does the data tell us? What is changing? What is remaining the same? What cues are there to emerging patterns?
  • So what? What is the value of what we are doing? What do these findings mean to us now and into the future? What effects are current changes likely to have on us, program participants and broader communities?
  • Now what? What does this mean for how we should act to optimise opportunities now? What are our options?[1]

To support the iterative process, developmental evaluators work in partnership with program teams, communicating regularly and being flexible about data collection methods to respond to emerging needs.    

What does it look like in practice?

ARTD is working with Dementia Australia and its Dementia Advisory Group on a developmental evaluation of the Dementia Friendly Communities program. The program, funded by the Department of Health, aims to support communities across Australia to become more dementia friendly.

It was clear from the start that there were different understandings about what makes a community ‘Dementia Friendly’ and what the priorities were for the program. So, we worked with Dementia Australia to source and understand input from people with dementia and their families, professionals and community members to inform the project design and activities. Three interrelated program components have been iteratively developed:

  • The Hub—an online resource centre with information about becoming dementia friendly, resources to empower people with dementia as advocates, and ways of connecting people within and across locations.
  • The training program—face-to-face and online education sessions to inform individuals, businesses and organisations about being dementia friendly.
  • Community engagement program grants—funding for 21 organisations to undertake dementia friendly initiatives in their community.

The initial request was to provide six-monthly progress reports – drawing on user surveys and administrative data about the Hub. However, we found that this wasn’t frequent enough in the early stages of rollout, so we looked at data more regularly to understand patterns of uptake and interaction with the Hub, as well as drop-off points to inform ongoing promotion and rollout.

The community engagement program attracted far more applications than originally expected, so our team became involved in refining the assessment process.

Talking to the teams, ARTD and Dementia Australia also realised that the stories coming out of the community engagement projects could be used not only for the evaluation, but to support broader public engagement with the program – by helping other communities understand what ‘Dementia Friendly’ might mean in practice. So, we updated our case study approach to include the production of videos that could capture stories of change over a 12-month period. This approach capitalised on the evaluation data collection process to support project implementation.

Our visits to five communities for the video production proved extremely valuable to project teams. They found that the reflective interview process prompted them to think critically about their goals, how they planned to meet these, what would be feasible and sustainable, and what data they should collect from the outset. During the visits, we also noticed that communities were facing some common challenges in designing and delivering their projects, and that others had found ways to overcome these. So, we hosted a webinar for all of the project teams to share their learnings and top tips, and will synthesise and circulate the findings to all.

What have we learned?

In a developmental evaluation, you become part of the team rather than an independent assessor. This doesn’t mean that you don’t offer a critical eye, but that you’re involved in working out how to apply your evaluation findings. Trusting relationships and open communication are crucial to joint critical reflection and the development of creative solutions.

While it’s important to have a plan, developmental evaluation requires flexibility – checking in regularly and evolving your data collection methods to focus on what matters most. This is a balancing act and involves regular communication and project rescoping to manage the overall budget.

Developmental evaluation can be more complicated to manage than a traditional evaluation approach, but it’s extremely rewarding to see evaluation used in real time to inform the ongoing development of an initiative so that it best meets the needs of those involved.

If you're involved in community development for inclusion, you might find these developmental evaluation resources by BetterEvaluation or our Community Development toolkit useful.


[1] Adapted from Gamble, 2008, A Developmental Evaluation Primer and McKegg and Wehipeihana, Developmental Evaluation: A Practitioner’s Introduction.


Assessing citizen engagement


October 2018

By Ruby Leahy Gatfield

We know that citizen engagement is at the heart of a strong democracy. It enables governments to deliver policies and programs that respond to citizens’ needs and helps to build trust in government systems and processes.

So, it’s important that we understand how well governments are conducting engagement activities. It’s not enough to know that more government agencies are engaging citizens on questions of policy and service delivery. We need to understand how effective their processes are, and the impact achieved.

In a recent discussion paper, developed in partnership with Amelia Loye, Director of Engage2, we draw on our experience of monitoring and evaluating initiatives across the social services to start the conversation about ways of assessing citizen engagement.

We found that while some agencies are inviting input and reporting on their processes well, it’s not always clear who was consulted, what they said, and how the findings informed the final program or policy. Transparent reporting on engagement activities means identifying the reach and demographics of people engaged, analysing the findings by cohort, and providing a line of sight between feedback and the final program or policy. Transparent reporting supports agencies to evaluate the impact of their work, ensures decision making is transparent and responsive, and builds public trust in government. It also contributes to Australia’s international commitment to more democratic and open government.

Monitoring and evaluation can improve the quality and impact of citizen engagement. Monitoring can help agencies track how well engagement is being done and support transparent and responsive processes. It can also help them refine and track their processes as they go to respond to emerging needs. Further, evaluation can help agencies demonstrate accountability, build the evidence base about how engagement works, and strengthen future engagement processes. In the discussion paper, we provide:

  • examples of positive engagement processes
  • introductory thinking about what a logic model for effective citizen engagement might look like
  • indicators for monitoring the process and impact
  • key considerations of the factors that may hinder or support quality evaluation.

If you’re keen to better understand the quality and impact of an engagement, you can read the discussion paper or contact Ruby at ruby.leahy.gatfield@artd.com.au


Top tips for inclusive community development


October 2018

By Ruby Leahy Gatfield and Jade Maloney

To support access and inclusion, Ability Links NSW works with people with disability, their families and carers, as well as with organisations and other community members, on local community development projects. To celebrate the success of these projects and share their learnings about what works and what doesn’t, we worked with Linkers to develop the Ability Links NSW Community Development Resource Package.

The Package recognises that some people with disability face additional barriers to inclusion—including people from culturally and linguistically diverse backgrounds, people from regional and remote areas, Aboriginal and Torres Strait Islander people, young people, and people identifying as LGBTQIA+.

Barriers may be:

  • personal, including limited money, mobility, time or geography
  • informational, including limited knowledge about the subject or purpose of the engagement, and consultation fatigue
  • social and cultural
  • related to language and literacy.[1]

Top tips 

The Package provides guidance on how to take an inclusive approach to community development. It stresses that the involvement of people with lived experience must be intrinsic to identifying, leading, designing, implementing, and evaluating projects. Projects should build on and value the strengths of those involved, and the cultures, skills and knowledge in local communities.

It also provides ‘top tips’ and best practice case studies for working effectively with people from specific population groups. While there are particular considerations for working with each group, some things are universally important.

  • Take the time to build trust and relationships.
  • Connect with local leaders and champions.
  • Understand the specific needs of the population group, but also recognise differences within communities.
  • Tailor communication channels and engagement methods.
  • Remember that not all people and communities will identify with the same experiences or have the same needs and interests.

For more guidance on how to start, manage and sustain a community development project to support inclusion of people with disability, including more detailed guidance on how to work effectively with specific population groups and organisation types and how to monitor and evaluate community development, download the Package here. You can also check out this video or read our last blog.


[1] Capire Consulting Group. The Inclusive Community Engagement Toolkit. 2016. http://capire.com.au/#publications-section

Photo credit: NSW Department of Family and Community Services


Program design: what's the logic?


October 2018

By Andrew Hawkins, Partner

The Evidence Based Policy Summit in Canberra last week brought together staff from federal and state government departments to discuss strategies for strengthening data collection and using evidence to improve policy, programs and practice.

At the summit I ran a workshop on using evaluative thinking for program design. This is an approach that builds on the core concepts in program logic but is more explicit about the ‘logic’.

The pitch for Program Design Logic was:

  • A program, at its core, is simply a proposition that a certain course of action will lead to a certain set of outcomes.
  • A sound, evidence-based program will make sense ‘on paper’. That is, if we achieve each condition, then we have reason to conclude that, collectively, these will be sufficient for bringing about our immediate intended impact.
  • Program Design Logic helps identify flaws in a course of action at an early stage using pre-program evaluation.
  • As with many approaches to program logic, the components in our course of action are written as conditions or propositions in the form of a subject and a condition, for example, ‘the client is more aware’ or ‘the program is well-resourced’. Thus, it’s not the actions themselves that we should focus on, but the intended outcome of each action – the ends not the means.
  • An evidence-based program must also be evaluated ‘in reality’ – empirically evaluating the extent to which our actions actually lead to each condition (how and under what circumstances) can be achieved using a monitoring and evaluation framework, strategy and series of plans.
  • Program Design Logic is the foundational stage in a Design (or Define if it’s already in place), Implement, Monitor, Evaluate, Learn & Adapt process, using lean and agile concepts for continuous improvement.

It is now common practice to develop a ‘program logic’, but it is too easy to make a fanciful program logic – one that shows we develop the policy, we implement the policy and people change their behaviour. If we think critically about whether our activities will be sufficient (given our assumptions) and, in fact, necessary, we will develop more effective programs that are more efficient in their design.

In Program Design Logic, a program, intervention or course of action can be broken down into a series of steps or components that we think are necessary, and when all achieved should be sufficient to bring about a desired result or outcome.[1] As with every action in life, we will inevitably make some assumptions that we are relying on, but about which we are not 100% sure.

At an early stage, the focus for design and initial evaluation of a course of action – or in lean terminology, a ‘minimally viable product’ – is setting out the logic of the intervention as a ‘causal package’ of necessary and sufficient conditions. This is more like a recipe than a ‘causal chain’ in which one action or component is supposed to cause the next.[2]

To build an evidence-based ‘minimally viable product’ we need reasons to think our actions will deliver a causal package, that is, bring about certain conditions that together will be sufficient to bring about an intended outcome and contribute to higher-level outcomes.[3]

It is important that we maintain a realistic focus on what we can achieve, while at the same time always checking that we are in alignment with our long-term vision and ultimate intended outcomes. Technically, this means that in addition to our course of action being sufficient to bring about some outcome, we expect the course of action will contribute to some higher-level outcome: our vision. But it will do so in combination with the effects of numerous external factors over which we have very little influence.
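To make this concrete, here is a rough sketch – our own illustration, not a tool presented at the summit, and the condition names are hypothetical – of how a causal package can be written down as explicit propositions with the evidence behind each, so that unfounded assumptions stand out before implementation.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """One proposition in the causal package: a subject plus a condition."""
    statement: str    # e.g. 'clients are more aware of their rights'
    evidence: str     # the reason we think our actions will bring it about
    assumption: bool  # True if we are relying on it without solid evidence

# Hypothetical causal package for an awareness-raising program. Together these
# conditions are proposed to be sufficient for the immediate intended impact.
causal_package = [
    Condition("the program is adequately resourced",
              "budget approved for two years", assumption=False),
    Condition("frontline staff deliver sessions as designed",
              "relies on goodwill and existing workloads", assumption=True),
    Condition("clients are more aware of their rights",
              "evidence from similar education programs", assumption=False),
    Condition("clients act on that awareness",
              "behaviour change is assumed to follow awareness", assumption=True),
]

# 'Pre-program evaluation' on paper: surface the links in the logic we have no
# good reason to believe yet, so they can be strengthened or tested first.
for condition in causal_package:
    if condition.assumption:
        print(f"Unfounded assumption: {condition.statement} ({condition.evidence})")
```

Nothing about the sketch is binding – the point is simply that once each condition and its warrant are written down, the gaps in the logic become visible before any money is spent.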

In the workshop we worked to spot the flaws and unfounded assumptions in people’s programs for the purpose of developing feasible and cost-effective ways of re-designing programs to achieve intended outcomes. From multi-pronged approaches to increase the wearing of life jackets, to augmenting a regulatory approach for rehabilitation service providers with consumer education, we had a very productive time and the feedback was very positive.

Program Design Logic provides a tool for evidence-based policy and program design (and re-design) that is much simpler than many expect but requires plenty of thinking to do well. It is really just about having good reasons for each part of your program, being ruthlessly honest and maximising what you really must achieve—rather than accepting wishful thinking about what you hope to achieve.


[1] In technical terms, each condition is an insufficient but non-redundant (i.e. needed) part of an unnecessary (i.e. there are other ways), but sufficient condition (i.e. the intervention itself). That is, each step is an INUS condition (an insufficient but non-redundant part of a condition which is itself unnecessary but sufficient for the occurrence of the effect), but the program is itself a sufficient condition.

[2] In taking this approach we are explicitly employing a configurationalist rather than successionist theory of causality that seems to be assumed in many linear program logics. It is argued that this approach is useful for program design at the macro level, but understanding micro-level causality may best be achieved using generative theories of causality or a realist approach.

[3] Reasons will include assumptions, warrants or theories – while theory of change is a crucial part of a good program, it is also just one type of evidence needed for an evidence based program and is subordinate to the overall logic of the course of action.


How can we harness systems thinking?


October 2018

By Jade Maloney, Partner

Like co-design and nudging, systems thinking is certainly one of the buzz words of our time. At the recent Australasian Evaluation Society Conference, Michael Quinn Patton challenged evaluators to move beyond programs to systems, think globally, track interconnections, crack silos and understand interdependence; while Social Design Sydney’s September event centred on how systems thinking can enable designers to identify innovative interventions to address complex social problems.

Clearly, if we’re designing or evaluating interventions to tackle complex issues like homelessness or family violence, it would help to think in terms of systems – to consider the way in which the behaviour of the elements within a system and their effects are interdependent.

Patton has given evaluators developmental evaluation to use when they encounter interventions in complex systems. Developmental evaluators facilitate a continuous development loop of framing concepts, testing quick iterations, monitoring developments, and using data and discussion to surface issues. More recently, Patton has also given us principles-focused evaluation. 

But where do designers start with systems thinking? As Mieke van der Bijl-Brouwer outlined in her presentation at Social Design Sydney, there are so many systems theories, including system dynamics, system layers, complex adaptive systems, complexity theory, organisational systems, actor network theory, ecological systems theory, evolutionary theory, game theory, social systems theory, theory U and the integral model. One could be forgiven for struggling to find a foothold.

But van der Bijl-Brouwer and co-presenter Tim Tompson opened up some pathways.

Van der Bijl-Brouwer described systems thinking as the bringing together of analysis and synthesis. She then highlighted two means of thinking about intervening in systems. Firstly, Dave Snowden’s Cynefin framework, which identifies different approaches to use with simple, complicated, complex and chaotic systems. Secondly, the Iceberg Model along with Donella Meadows’ concept of leverage points, which identifies increasing leverage as you move towards addressing the mental models that underlie structures, patterns of behaviour and events (the happenings we see on the surface). Finally, van der Bijl-Brouwer outlined several principles for systemic design:

  • opening up – unpacking the assumptions behind a brief, seeking varied perspectives, and developing a broader understanding of the problem to be addressed
  • evolving the framing – how we name and understand the problem to be addressed, drawing on a portfolio of interventions and prototypes
  • strengthening relationships – understanding end users and making use of social architecture.

Tompson told us that our traditional tools for describing the world are ill-suited to our times. We need to move beyond dichotomies like ‘mind over body’ and ‘theory over practice’. But how? He suggested philosophical pragmatism, a recognition that everything is in the making, and the use of theories like Actor Network Theory to understand how systems evolve and change. He used this kind of thinking in mapping two transport projects – recognising that while we see ourselves as the masters of technology, we are also shaped by it, and giving voice to the non-human actors within the network. He emphasised the value of drawing pictures to understand a problem, including our relation to it.

My key take-outs were the importance of recognising the type of system in which you’re operating, ensuring you open up the problem before you lock down an approach, and bringing together multiple perspectives through well-thought-out workshops when designing interventions for complex social problems.

A recent AES regional network event also tackled the topic of systems thinking in evaluation. You can read more about our take-aways from Michael Reid’s presentation here.


Transforming evaluation: what we’re taking from #aes18LST


September 2018

By the ARTD Team

This year’s Australasian Evaluation Society (AES) international conference challenged us to transform evaluation practice to address complex social and environmental issues in a changing world and to ensure cultural safety and respect in our work with Indigenous communities.

The crowd was lively – with a lot of newcomers to the AES (though just how many was a topic for debate – did the online polling really capture a representative sample of conference goers?), and Launceston provided a lovely backdrop for considered conversations about professionalisation, innovation, advocacy and the responsibilities of evaluators.

Our team enjoyed learning from peers and presenting on a range of subjects – including co-design, participatory approaches, strengthening program impacts on systems and building evaluation systems, evolving the evaluation deliverable, leveraging public datasets, and campaign evaluation.

Here’s what our team is taking from #aes18LST.

Sue Leahy, Managing Director

Penny Hagen’s keynote and workshop highlighted really different ways of working with community to achieve social change, bringing together shared skillsets of design and evaluation. We need to change the way we think about resourcing this work, taking long-term approaches – with evaluators involved at the community level as partners and critical friends or mentors. Sharon Gollan and Kathleen Stacey’s keynote also provided an important and clear call to action to name practices that exclude Aboriginal people from having voice in how evaluation is done.

Andrew Hawkins, Partner      

My highlight was meeting up with lots of old friends for our annual catch-up on new developments in the field. It was also great to hear that Gill Westhorp – a giant brain and contributor to evaluation – was inducted into the hall of fame as an AES Fellow. It should make all of us in the evaluation society proud!

Jade Maloney, Partner

I enjoyed being challenged by Michael Quinn Patton to transform evaluation. I've been using his principles-focused evaluation and drawing on systems thinking, and am now working through how to integrate this thinking into daily practice. I also took Sharon Gollan & Kathleen Stacey's call to action on ensuring cultural safety and respect in evaluation to heart. And it was great to engage in conversations about an advocacy strategy and pathways to professionalisation. These are important conversations for the future of evaluation and the AES, so I hope the conference continues to provide a forum for engagement.

Ken Fullerton, Consultant

Attending my first AES conference was an inspiring experience and opened my eyes to new approaches and innovative evaluators. My favourite speakers included Michael Quinn Patton, who provided the opening keynote and spoke of the butterfly as a real-world example of transformation and an example for evaluators to aspire towards in their professional work. Anne Markiewicz’s interactive session on ethical dilemmas in evaluation practice challenged me to think about the role of evaluators – when, and to what extent, an evaluator can or should step in and act when they believe something to be unethical.

Kerry Hart, Senior Consultant

I loved the Ignite sessions – succinct summaries of good work and lessons learnt. I enjoyed hearing from colleagues working in domestic violence and mental health, and those working with peer researchers. I’m also going to check out techniques like chatterbox and using photos and drawings to feed back transcripts to people who have literacy issues.

David Wakelin, Senior Consultant

Gerard Atkinson reminded us that leveraging available open data will be an increasingly powerful tool for evaluators. Kristy Hornby’s views on machine learning in evaluation resonated with my own studies: machine learning has a role to play, but before we jump in we must consider the implications of letting predictive modelling algorithms make decisions that have an impact on people who may be vulnerable. It’s also important that we monitor the output of the algorithms to ensure they meet the ethical responsibility we have as evaluators. And we can never afford to forget the people’s experience in and of the programs we are evaluating, even if it’s easy to get lost in quantitative data from time to time.

The conference also highlighted that truth is integral to data visualisation. Making visualisations easy to understand and immediately actionable is extremely beneficial, especially for real-time monitoring feedback. Jenny Riley’s session on Outcomes, dashboards and cupcakes was a nice example: https://aes18.sched.com/event/Er90/outcomes-dashboards-and-cupcakes.

Rachel, Senior Consultant

The conference was a great opportunity to listen – to the ideas colleagues and clients are exploring and what they’re struggling with and get the lay of the land.

Gill Westhorp introduced realist axiology – the philosophy of value and valuation – which she believes the next development in realist evaluation needs to be based on. She laid out her grounding perspectives on defining value in realist ontology as her next area of focus.

Gerard, Manager

My highlight was seeing Nan Wehipeihana, Judy Oakden, Julian King and Kate McKegg’s presentation on Evaluative Rubrics. The seminar room was so packed that people crowded out the door. Those who made it into the session got a clear and usable introduction to using a rubric-based approach to evaluation, followed by a lively and honest discussion of implementing these in practice. I suspect from the interest and positive response we’ll be seeing more of this approach in future.

Jasper, Senior Consultant

The highlight for me was learning about Feminist Participatory Action Research from Tracy McDiarmid and Alejandra Pineda from the International Women's Development Agency. I liked how it explored methods focused on empowering participants in evaluation and balancing power dynamics, including through role play.

AES 2019

We’re all looking forward to next year’s conference in Sydney, exploring the theme Evaluation: un-boxed. We’ll be cracking open evaluation to end users and opening up evaluators to what can be learned from the communities they work with.


AES18 Day 1: How can we transform evaluation?


September 2018

By David Wakelin

What does the future hold for evaluation and how can we scale up the value of our field to deliver global results? Does evaluation need to transform to evaluate transformations?

With the AES 2018 International Conference (AES18) kicking off this morning in Launceston, Michael Quinn Patton’s keynote set the tone for an engaging three days and challenged the audience to think about and discuss what a transformation of evaluation might look like.

In his presentation “Getting Real About Transformational Change: The Blue Marble Evaluation Perspective”, Patton gave us an insightful overview of the distinctions between different theories of change and the potential for a Theory of Transformation to provide a framework for understanding what influences systemic, large-scale transformations.

What are the challenges for transformation?

The results of evaluation today don't seem to jump off the page showing transformations that have occurred. But are the results of programs so fantastic in every case that we have an eye-opening experience and the world around us has transformed? Maybe not. Instead, evaluation can present us with incremental progress on a small scale that can be easily measured.

Further, transformations may be difficult to measure using current evaluation approaches. However, Patton noted that it should be easy to recognise transformation by how seismic and sustainable the transition is from what came before.

Patton also noted that the adaptability and sustainability discussions of years gone by have left us in need of more dramatic change if the planet is to continue to provide for people today and for generations to come.

Big data to the rescue?

Leveraging data on a grand scale can help us deliver a transformation where the whole is greater than the sum of the contributing parts. In talks this morning, we were exposed to some of the opportunities that can be generated from open data and big data sets and how to use these as evidence to inform decisions that affect millions of people.

The amount of data at our fingertips is enormous and ARTD Manager, Gerard Atkinson, highlighted the ease with which it can be collected from online resources. This came with the invaluable reminder that we have moral and ethical responsibilities to use this data legitimately and to consider our actions when using it as part of our work in evaluation.

When using big data, we can’t afford to lose the voices of people who are not often heard but should be involved in decisions about programs and services that affect their lives. Even when leveraging big data and predictive modelling, talking to people is a beneficial step in delivering the right results.

What’s next?

There is an opportunity to plant seeds here at AES18, to begin the expansion of evaluation from its current form to something that tackles systems transformations.

Tomorrow, we are looking forward to attending Penny Hagen’s keynote address; Evaluative Rubrics with Kate McKegg and Nan Wehipeihana; and How algorithms shape our lives by Kristy Hornby. More on this tomorrow.


Want to make your community more inclusive? Learn from Ability Links.


September 2018

By Ruby Leahy Gatfield and Jade Maloney

We know that about one in five Australians have a disability. While most people with disability are no longer ‘shut in’—hidden away in large institutions—many are still ‘shut out’ of buildings, homes, schools, employment, businesses, sports and community groups.[1] People with disability can face barriers to being included in their communities, gaining employment, and accessing information and services. Some people with disability also face multiple and compounding barriers to inclusion, including people from culturally and linguistically diverse (CALD) backgrounds, people from regional and remote areas, Aboriginal and Torres Strait Islander people, young people, and people identifying as LGBTQIA+.

Over the past five years, non-government organisations providing Ability Links NSW have worked with people with disability, their families and carers, as well as with organisations and other community members to increase opportunities for inclusion. These community development projects have varied from one-off events and small social groups to region-wide efforts to improve access and inclusion in businesses, schools and public precincts.  

To showcase what they’ve achieved and share their learnings about what works and what doesn’t when it comes to community development for inclusion, we worked with Linkers to develop the Ability Links NSW Community Development Resource Package. The Package is designed for community organisations and local champions, including people with disability and their families, who have an idea for a project or want to work to support inclusion of people with disability. It is designed to help those both new and experienced in the field.

Community development is a collaborative and local solution to common issues, owned and driven by the community. It recognises community members as experts in their own lives and is grounded in principles of self-determination, inclusion, collaboration, integrity and sustainability. Ability Links NSW takes an asset-based approach to its community development for inclusion projects, valuing and leveraging the assets and strengths – knowledge, skills, cultures, relationships – of communities and people with disability. 

Drawing on interviews and workshops with Linkers across metro, regional and rural NSW, the Package provides information and guidance on:

  • what community development is
  • guiding principles for community development for inclusion
  • the process of engaging with community to develop a project idea, through planning, managing and sustaining a project
  • working with particular population groups and types of organisations, including case stories and top tips
  • monitoring, evaluating and reflecting on project outcomes to track progress and inform continuous improvement.

For more information, download the Package, check out the promo video, and stay tuned for our upcoming blogs to unpack key ideas in the Package.


[1] Department of Social Services. 2012. https://www.dss.gov.au/our-responsibilities/disability-and-carers/publications-articles/policy-research/shut-out-the-experience-of-people-with-disabilities-and-their-families-in-australia

Photo: St Vincent de Paul Society NSW Inclusion Champions, Andrew and Nick. 


How can NGOs do evaluation on a shoestring?


September 2018

By Imogen Williams

We know there are a range of challenges that NGOs can face when it comes to evaluation. With limited time and resources, NGOs are often left conducting evaluation ‘on a shoestring’. 

A recent AES seminar in Sydney explored strategies and tips for conducting lean evaluations. The session was well attended by NGOs, academics, consultants and government.

What challenges do NGOs face when it comes to evaluation?

Attendees discussed the challenges their organisations had faced. Common challenges include:

  • a lack of time or staffing to undertake evaluations
  • limited funding, and limits in budgeting for evaluation in funding applications
  • a lack of evaluation knowledge or expertise within organisations, potentially leading to limited methodological rigour
  • expectations from funders that NGOs demonstrate outcomes, leading to a focus on what was achieved, rather than learnings for improvement
  • difficulties collecting evidence beyond outputs, to demonstrate causality or impact
  • difficulty embedding evaluation processes across the whole organisation – services and programs – not just funded services.

One attendee likened evaluation to a diet – “We all know it’s good for us, we occasionally do it, but we often end up lapsing”.

And there was consensus in the room on this. Organisations recognised the importance of monitoring and evaluating what they were achieving for continuous improvement, but sought practical advice on how to do this in a context of having to manage other pressing, on-the-ground demands.

What are some strategies for conducting evaluation ‘on a shoestring’?

To offer some practical solutions, Stuart Loveday, Hepatitis NSW CEO, led the AES seminar and discussed Hepatitis NSW’s approach to evaluation. Hepatitis NSW is an NGO providing support and information services for people undergoing treatment for hepatitis B and C, as well as workforce development and education services.

Loveday said their evaluations were driven by an internal culture of continuous improvement and the need to report on funding KPIs. However, he identified several monitoring and accountability processes that did not fit under their ‘shoestring’ model, such as their Results Based Accountability reporting and external financial audits.

For evaluation ‘on a shoestring’, Loveday stressed the value of collecting feedback on all of their services and embedding this into service delivery – for example, at the end of a call on their info line, with a postage-paid survey in their Tx! Mag, and using volunteers to collect paper surveys at the end of community forums. In their evaluation strategies, Hepatitis NSW make the most of volunteers and offer incentives to respondents. They have volunteers enter the data into SurveyMonkey. They then report this feedback back in staff and team meetings and use it in business planning and for their internal quality improvement.

It’s important to note that while these efforts help embed a culture of continuous feedback and improvement, they rely on a degree of internal evaluation knowledge to be able to assess what caused outcomes and to make evaluative judgements. 

Other strategies discussed by Loveday and participants included:

  • investing in membership for an online survey platform
  • offering small incentives, such as raffles, for providing feedback
  • having internal staff lead evaluation efforts
  • drawing on volunteers to collect and enter feedback data
  • promoting and encouraging feedback as part of service delivery
  • embedding a culture of quality improvement and stressing the importance of measuring impact among staff
  • being clear from the start about what you’re hoping to achieve to guide what data you need to collect
  • building partnerships and relationships with people who know about evaluation.

Where does this leave NGOs?

The challenges for conducting evaluation on a shoestring are ongoing. However, at ARTD we strongly believe that there is great value in building in monitoring data collection and evaluative thinking wherever you can. Embedding ongoing data collection and a culture of continuous improvement will help you understand what you’ve achieved and what you can do to improve.

There are also plenty of resources out there to guide small organisations to self-evaluate, such as the Community Development Resource Package ARTD developed with Ability Links NSW. The Package offers guiding principles and practical tips for low-budget evaluations, as well as links to other useful resources.

If you have any other tips or suggestions for conducting evaluation on a shoestring or would like to partner with us for some evaluation mentoring, we’d love to hear from you!  


Photo Credit: Kevinv033 at Flickr

 


Co-design gives voice to the right people at the right time


September 2018

By Katherine Rich

Policy-makers face many challenges, not least of which are making sure that interventions are used by intended beneficiaries and can be implemented within current systems.

The seminar, Co-Design for Policy Innovation, run by the University of Melbourne’s School of Social and Political Science on 29 August, explored the promise that co-design holds for overcoming these challenges.

The session featured Rebekah Forman, Principal Policy Analyst at Auckland Council New Zealand, who talked about using design thinking to develop public policy, and Alastair Child, Director of the Auckland Co-Design Lab, who shared learnings from his experience using participatory, human-centred approaches to solve complex social and economic challenges.

Having the right people at the right time

Having the right people in the room from the start of the policy cycle was one of the key success factors both presenters identified in their work. This helps develop interventions that are:

  1. accessible and appropriate for the intended beneficiaries. Having people with lived experience in the room from the start ensures the solutions are going to be appropriate for and aligned with their needs, improves likely acceptance of the policy, and helps mitigate unintended consequences
  2. implementable, i.e. they are workable within the current systems at play. Having decision-makers, operations staff and service delivery staff in the room ensures that design solutions will work in the current environment and are deployable and accepted.

Alastair emphasised the importance of engaging intended beneficiaries or those with lived experience in the co-design process – from defining and exploring the problem through to designing the solutions. He structured this approach in four stages: framing the problem, exploring the problem, identifying potential ideas and solutions, and testing the solutions for implementation.

An important learning for the Auckland Co-Design Lab was how ‘design solutions’ are used. At first, they encountered a ‘system-immune’ response, where the interventions they formulated were not picked up by policy-makers due to a lack of buy-in or concerns about feasibility. By moving to a model of working with policy-makers upfront, solutions were better framed and provided a clear case for change.

This was also a key element of success for the project Rebekah detailed: a design-led project working with staff across several departments at Auckland Council to investigate links between driver licensing and employment, safety and the justice system. Rebekah made the point that:

People often think co-design isn’t for them – "it’s all pipe cleaners and post-it notes and I’m not creative". The challenge is convincing people that co-design is for them. That it's not about being a designer or policy expert, but about how we frame the problem and the solution, with use and implementation at front of mind throughout the process.

Using co-design in evaluation

At ARTD, we agree on the need to get the right mix of stakeholders with diverse perspectives in the room, so challenges can be surfaced upfront and worked through in the design phase. Increasingly, we work with stakeholders in the co-design process to develop a program logic that is underpinned by their lived experiences and further supported by literature about how and why interventions will or won’t work. We can then evaluate the potential design solutions against the program logic to strengthen the ideas and ensure the solutions address the problem identified.

Collaborating with stakeholders, we develop indicators for success and systems for monitoring progress and outcomes of the implemented solution. Developing the monitoring and evaluation framework with key stakeholders ensures monitoring and evaluation activities are embedded in the roll out of the intervention. Using an evaluative lens throughout the co-design process increases the probability of success, provides additional opportunities for learnings to be recognised and applied, and lays the groundwork for accountability.

Co-design can look messy and resource-intensive, and people aren’t always sure what skills they need to participate. To get the right people in the room, we need to be clear about what co-design is (and isn’t) and provide evidence of how experimentation and creativity can contribute to successful outcomes.


Photo Credit: Design Innovation Centre de Competence at Flickr. 

 


Ten tips for presenting like a pro


September 2018

By Gerard Atkinson

At ARTD, we use a weekly professional development and training program to learn about the latest developments in evaluation, share insights from recent work, and build our core skills.

As part of our preparations for the AES 2018 International Evaluation Conference, taking place on 17-21 September in Launceston, Tasmania, I ran a training session with my colleagues on how to prepare, construct and deliver effective and engaging presentations. Here are my top tips.

  1. Prepare your thinking. Preparation is entirely different to rehearsal, and takes place before you even start making your slides. Effective preparation is about identifying what you want to talk about, doing your research, and building a framework for delivering your presentation. Rehearsal, though important, comes much later.
  2. Create an objective statement. To start, develop a single sentence that frames your rationale and scope. A good statement articulates the given time period, what the presentation will achieve, and a call to action.  For example, for our recent lunchtime learning, my objective statement was: “Over the next 50 minutes, I want to cover the key elements of creating and delivering a compelling presentation, to inspire you to go make your own.”
  3. Do your research. This includes understanding the audience you will be presenting to, such as the level of knowledge, the number of people, the level of seniority; the venue you will be presenting in, such as the room size and layout, available technology, and the time of day; and the topic of the presentation.
  4. Develop a presentation framework. Start building the structure of your presentation as a list or storyboard. There are many different frameworks and formats out there. My personal favourite adapts traditional storytelling techniques by following a format of “Open-Body-Close”. It’s a simple framework but can be adapted to presentations of nearly all formats and lengths.
    • The opening section is designed to engage an audience and preview the talk.
    • The body section, which can be repeated for each key point of your presentation, states the point, supports it, and links it with the next point.
    • The closing section reinforces engagement, reviews the topics covered, and provides a call to action.
  5. Kill the deck (if you can). Slides distract. If you can remove a slide, do it. Slides should be used to reinforce and augment the point you are trying to make. Photos and (well-designed) charts do this best, followed by diagrams. If you need to use bullet points or text, keep it short and avoid reading them out verbatim.
  6. Use speaker notes. Scripts can be useful in laying out in exact terms what you want to say in a presentation, but they make it hard to be engaging. Actors train for years to be able to take a script and make it look natural. Instead, use speaker notes, which are a shorthand version of a script. These give you prompts for what you want to say, but enable a more natural style of speaking.
  7. Develop useful handouts. Done well, your slides will not be able to convey the content of your talk on their own. This means they shouldn’t be used as handouts. Instead, a handout should be a practical resource that turns the key points of your talk into tools the audience can use later. Importantly, distribute handouts after the talk to avoid distractions in the room.
  8. Rehearse, rehearse, rehearse. Rehearsal is about replicating your environment as closely as possible. Find a room, set it up as you will on the day, and rehearse the talk as if it were the real thing. It’ll help you get a feel for your timing and flow and boost your confidence. If you can get some sympathetic co-workers to sit in and give feedback, even better. Repeat this process. The more times you can run through the presentation ahead of time, the more comfortable you will be with the material.
  9. Present with credibility. Credibility is a combination of confidence, character, and charisma. Confidence comes from research and rehearsal. Character and charisma come from the way you deliver your presentation. Some easy ways to build credibility are to use open body language to engage with the audience, and to vary the way you use your voice (tone, volume, tempo). Both go a long way in engaging the audience and carrying them along with you throughout your presentation.
  10. Handle Q&As at the end. Question and answer sessions are often the scariest parts of a presentation because they can be hard to predict. Prior research can help you anticipate and prepare for some of the questions you might be asked. It’s best to keep questions until the end of the presentation, as this helps keep things on track. To handle Q&As:
    • Ask: Take a step forward while asking the audience if they have any questions.
    • Select: Select questioners by gesturing to them with an open palm (rather than pointing) or by name, if you know it.
    • Listen: Give questioners total concentration, eye contact, and actively listen to their question.
    • Repeat: Pause, then repeat or rephrase the question to the whole group to show you understand what they’re asking. This also helps when there’s no roving microphone.
    • Answer: Give others eye contact while answering.

This year, ARTD is sending a large contingent that will present eight talks and two pre-conference workshops at the AES18 conference. I’ll be presenting three papers in three different formats – a traditional talk, a facilitated brainstorming workshop, and an Ignite presentation. I hope these tips can help you prepare, construct and deliver your own presentations with confidence and I look forward to seeing many more presentations at AES18 in September.


Inclusive education is better for everyone


August 2018

By Jade Maloney

The right of students with disability to education and to develop to their full potential is established in the United Nations Convention on the Rights of Persons with Disabilities 2006, and reinforced through legislation. There is also plenty of evidence that inclusive education supports positive outcomes not only for students with disability but for all students.

But how is an inclusive education for autistic students realised in practice? Aspect’s Autism in Education Conference explored and unravelled this question over the last two days through more presentations than I could count.

A key theme was the importance of student voice and lived experience. The conference kicked off with a panel of autistic students who articulated what worked for them and what didn’t. This message remained with us throughout the conference, with presenters noting the value of co-design in making sure that interventions actually meet user needs. Co-design begins with deep consideration of the problem, and brings people with lived experience into the process of prototyping, testing and refining solutions.

From the Conference sessions it’s clear that there is a lot of great practice going on—from building consideration of the needs of autistic students into Schoolwide Positive Behaviour Support, to strategies for creating the social hooks that enable learning, to videos and tools for building work systems and visual schedules. But sessions also raised challenges about differing levels of access to effective practice and the need for holistic approaches.

This was the impetus for a recent project with Amaze, the autism peak body in Victoria, in which we used a codesign approach and systems thinking to develop an Autism Education Strategy. It was a privilege to present on our approach and learnings at the Conference with Braedan Hogan from Amaze.

Because we were creating the strategy at a systems level, what we iterated in the codesign process was a holistic way of thinking about the challenges autistic students face and a framework for addressing these systemically.

We began by developing a root cause analysis. The aim in this exercise is to work back along each causal pathway toward the ‘root causes’ of the problem, so that each of these can be addressed.

In this process, we took care to clearly convey that the root causes of the negative educational and wellbeing outcomes autistic students experience are the barriers created by the attitudes, behaviours and environments that autistic students encounter, rather than autism traits themselves. We had to avoid the kinds of issues that speakers raised at the conference: students being expected to fit the system, behaviours not being recognised as a means of communication, and the ‘deficits and disorders’ lens that has been used in the past.

We also took care to capture the range of perspectives about the system. We began with individual interviews with representatives of principals and the education union, autism organisations, autism schools and organisations building the capacity of parents and school staff – to map their views of the causal pathways. We also drew on student experiences identified in the research and the Victorian Parliamentary Inquiry into services for people with Autism Spectrum Disorder and Review of the Program for Students with Disabilities. In the next phase of the project, we will engage directly with students to gain further understanding of their experiences.

While there were overlapping concepts across the individual causal pathway maps, it was only when we combined them and brought stakeholders together through a series of workshops to iteratively refine the root cause analysis that we were able to produce a complete analysis.

This process built a shared understanding, which we were then able to use to identify the necessary elements for a strategy that would holistically address the range of problems students face. From there, we worked together to develop an overarching logic for the strategy as well as a monitoring and evaluation framework to measure success.

I’m looking forward to continuing the conversation about inclusive education and exploring the trove of resources shared at the Conference.

 


Building the bigger picture: using the SDGs in evaluation


August 2018

By Natasha Costa

When evaluating, should we be focusing solely on the program or should we endeavour to consider its entire ecosystem?

At the Australasian Evaluation Society NSW event last month, Michael Reid, Managing Director at The Keyline Group, looked at how we can use the United Nations’ Sustainable Development Goals (the ‘SDGs’) as a framework for taking a ‘systems thinking’ approach to evaluation.

So, what is systems thinking?

Systems thinking recognises that nothing occurs in isolation. We live in a complex and interconnected world, with many interrelated actors and parts. While there is not universal consensus on how to define systems thinking in evaluation, a key underlying concept is that the parts that make up systems are dynamically intertwined and won’t be wholly understood if considered in isolation.[1]

Evaluating an intervention using a siloed approach may mean we miss factors in the surrounding environment that contribute to successes or failures. Take a hospital’s emergency department, for example. Evaluating an emergency department might tell us that 90 per cent of people are waiting less than four hours, which exceeds the KPI set by the hospital. But, without evaluating the hospital’s entire system, we may never be aware that achieving this target drew hospital personnel away from the wards and led to a significant decline in client satisfaction in other areas of the hospital.

Evaluators using systems thinking want to be able to understand the relationships within a system because these can provide the best insight into: (a) what is happening; (b) why it is happening; and (c) how progress can be achieved. To do this, Reid says that we firstly need to define the system itself, and then ask who is at the centre. If we put individuals or community in the middle of a system and then examine the interrelationships between the actors and parts surrounding them, Reid argues that we will get the best understanding of what works and why.

And where do the SDGs come in?

The United Nations’ Sustainable Development Goals (SDGs) were unanimously adopted in 2015 by all countries as a call to action to promote prosperity while protecting the planet. The SDGs comprise 17 goals, 169 targets and 230 indicators; Reid describes them as a ‘global map’ of a system that all countries can use for evaluation.

While the SDGs provide a streamlined model all countries can apply, countries can tailor the indicators and targets to their specific circumstances. The SDG Transforming Australia project collects evidence about Australia’s progress towards achieving the SDGs, focusing on national issues and problems that we face. The goals, targets and indicators can provide a lens for guiding national evaluations, to better understand the impact of an intervention on the broader system, or vice versa.

The International Institute for Environment and Development has provided five considerations to guide national evaluation agendas using the SDGs. These combine the learnings from the Millennium Development Goals evaluations and take a complex systems perspective.

  1. Think beyond individual policies, programs and projects by examining issues that cut across different sectors.
  2. Examine macro forces influencing successes and failures by considering political, economic, ideological, environmental, socio-cultural and technological circumstances.
  3. Take into account multiple definitions and measures of ‘success’.
  4. Recognise the importance of culture and its influence on societal behaviour.
  5. Shift towards evaluative thinking and adaptive management to recognise flexible approaches to governance and management.[2]

Beyond using the SDGs to frame national evaluations, Reid proposes that we can use the SDGs to evaluate individual programs at both a state and agency level. He thinks this approach will give us the best understanding of how to manage the impacts of a program, whether they be positive or negative. Authentic partnerships between business, government and the community will also provide the best results.

One project that has recently been evaluated in this way is the NSW EPA Organics Waste Programs. The evaluation attempted to map the EPA’s waste programs to each SDG to obtain a comprehensive understanding of their impact and understand any unintended consequences. To provide an example, they identified that aiming to improve the gender makeup in leadership at the EPA may not only impact Goal 5 ‘Gender Equality’ of the SDGs, but also waste management. This is because a greater gender balance in management positions could mean better decision-making processes.

But how feasible is evaluating an entire system for one program?

Reid’s presentation sparked lively discussion among AES members and several people raised issues they thought would come up when using systems thinking at a micro level.

While it’s one thing to use the SDGs to think about impact holistically, assessing causation and impact is a more complex question. Using the example above, how can we be sure that having more females in leadership positions leads to better decision-making processes and, in turn, waste outcomes? There could be a range of factors that are contributing to this.

Relatedly, how do we define and put boundaries around a system? If we were trying to evaluate a program aiming to reduce obesity in young people, how wide would we need to stretch the systems lens?

At ARTD, we see real value in thinking about the impact of a program or intervention on the broader system and looking at unintended consequences that may arise. In fact, Jade Maloney and Katherine Rich are talking about how you can build systems thinking and address systemic barriers through program design at the AES conference in Launceston on 17–21 September. We look forward to more discussion and to unravelling some of these questions and challenges further.


[1] Mizikaci. (2006). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.466.103&rep=rep1&type=pdf

[2]  International Institute for Environment and Development. (2016). Five considerations for national evaluation agendas informed by the SDGs.  http://pubs.iied.org/pdfs/17374IIED.pdf


Ten tips for building evaluative thinking into team planning


August 2018

By Gerard Atkinson

When it comes to strategic team planning, a neutral, external voice can help teams to navigate the process of identifying goals for the year to come and allocating resources effectively.

We’ve found that integrating evaluative thinking can strengthen the process – by developing and reviewing program logics to inform planning activities.

This approach can be viewed in the same way as a long hike – we start out by mapping our path (program planning). When we’re on the way to our destination we stop occasionally, pulling out a compass and map to reorient ourselves (team planning), but, for the most part, we are following the path or the natural boundaries to find our way (day-to-day program delivery). The middle step – where we stop and reorient ourselves – is where we link what we do back to the broader program plan and logic.

Our top ten tips

To help deliver an effective planning day that integrates evaluative thinking, here are our top ten tips.

  1. Prepare, prepare, prepare. It is crucial, especially as an external facilitator, to understand the context of the program and the team. Ahead of planning days, we meet with the client to get a sense of their expectations and review background documentation, before developing an agenda and agreeing on appropriate activities for the day.
  2. Take it off-site. Having a different venue can break people out of their regular patterns and enable creative thinking. It also helps people focus on the bigger picture by taking away the temptation to duck back to the desk to respond to emails or get a little extra work done.
  3. Have a neutral facilitator. Having a neutral facilitator, either external to your company or at the very least to your team, has immense benefits for the planning process. First and foremost, it means that all team members have the ability to participate fully in the planning process rather than having someone within the team focused on the logistics of the meeting. Secondly, a neutral facilitator can act as a sounding board that reflects and interprets ideas back to the team to clarify thinking and sometimes introduce fresh interpretations. Finally, a neutral facilitator can moderate tensions in discussions that could potentially derail the planning process. They can help maintain focus on the tasks at hand.
  4. Start with strategy. The first activity of the day (after a quick introduction and icebreaker) is to understand the strategy and rationale for the team. Where available, this involves reviewing the logic model for the team’s programs and the associated outcomes framework. The planning day presents an opportunity to check in and question whether goals and assumptions are still relevant. Placing this activity at the start gets the most cognitively demanding part of the day out of the way and sets a roadmap for the rest of the day’s activities, centred on the program.
  5. Visualise the situation. After clarifying the program logic, move to a more visual exercise to help the team link the abstract concepts of the program and program logic with practical implications for activities and priorities. At our recent planning days, we have asked team members to identify on a whiteboard their key deliverables for the next twelve months, then map out the associated milestones for the year. Even in its draft state (we digitise the results after the session), teams are already able to see bottlenecks in activity and availability. For some team members, this is a moment where they realise the level of work being undertaken by their colleagues, building a greater degree of empathy and support between team members.
  6. Match the activities to the energy levels. We put the more cognitively demanding activities early in the day to match the energy levels of the team. We keep energy levels up throughout the morning by having short breaks for coffee and snacks. Lunch and afternoon sessions are more relaxed and creative, giving people a chance to decompress after the hard work of the morning.
  7. Get outdoors. You can build on the benefits of planning off-site by including an outdoor session that further breaks people out of their usual thinking. At our recent planning days, after a short lunch break we held team building activities and relaxation in a park.
  8. Tailor your team building activities to the team. In our planning process, we put forward a ‘menu’ of team-building activities to our clients, which cover different types, styles and goals. We work with clients to select a set of activities that align with their goals for the team and their teamwork needs. For example, we have used outdoor settings to do breathing and relaxation activities, before some light, physical problem-solving exercises (matching the physicality of the exercise to the capabilities of the team), and a photography challenge where team members pair up randomly and work together to collect a set of images using their smartphones. While all these activities are designed to be fun, each is linked back to the bigger picture by asking the team to reflect on the activity and think how what they did can apply in their day-to-day work. Even when a team manages to solve exercises quickly and easily (and they often will), they can still think about and identify what it was that achieved this outcome.
  9. Bring it together. The final session of the day should be the most laid back, but still productive. It’s a chance to take the outcomes of the first sessions and link them together. The aim is to reduce the abstract logic to daily practice, prioritise the activities that are most important to achieving the goals of the logic, and make sure that the team has resources allocated to support these activities. Having the map of milestones and deliverables helps with this as you can move activities of lower priority and balance resources across the team. While in day-to-day work, it is usually the role of a manager to monitor and execute this, having it as part of the team planning process helps give the whole team ownership of it, and allows team members to feel heard. Seeing the whole team’s activities and allocating resources also gives team members a chance to put their hands up to be involved in work that can contribute to their personal and professional development goals.
  10. Reflect and act. Before everyone leaves for the day, get the team to reflect on the planning process and identify actions that they are going to take as a result. The most successful planning days are those that can be connected into the ‘day-to-day’ as soon as possible. Even though the planning day officially finishes after this, the work from the day doesn’t. Compile the notes from the day, digitise any visuals (such as the map of deliverables) and put it all straight into practice. Also, make something to remember. For a recent engagement, we got all the photos from the team challenge and compiled them into a poster, which we presented to the team as a memento of the day and their successful work.

Applying these tips, you can work with teams to deliver planning days that help focus thinking and achieve goals, as well as build team cohesion and rapport. More than this, we have been thinking how we can apply some of these ideas in our day-to-day work here at ARTD. We’re using our own staff as testers for new team building activities – these break-out sessions may just become a regular part of the organisational culture.


New frontiers for BI: beyond scaling


August 2018

By Jack Cassidy

For those catching the train in Sydney, you may have noticed some new billboards on your morning commute:

“9 out of 10 train customers pay their fares”.

Fare evasion on Sydney’s public transport network is estimated to cost about $84 million a year. This ad, designed by the Behavioural Insights Unit (Department of Premier and Cabinet NSW), is a clear attempt to reduce fare evasion. Will it work? And if so, how?

In Sydney, the penalty for fare evasion is about $200.[1] I pay about $25 per week on public transport – that’s $1,250 a year – and I am fortunate enough to live very close to where I work. So, my weekly spending on transport is on the low end of the spectrum. I’ve seen ticket officers on my commute twice in the last 12 months. This might tell me that, even if I were caught on multiple occasions each year, I’d be better off financially by not paying my fare.
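
To make that comparison concrete, here is a back-of-the-envelope sketch in Python using the figures above. The two-inspections-a-year figure comes from my own anecdotal commute, not from any official data, so treat the numbers as purely illustrative.

```python
# Back-of-the-envelope comparison of paying fares vs. risking fines.
# All figures are illustrative; the inspection count is an assumption
# based on the anecdote above, not an official statistic.

annual_fare = 1250        # approx. cost of paying ~$25/week in fares
fine = 200                # approx. penalty for fare evasion
inspections_per_year = 2  # assumed number of ticket checks encountered

# Worst case for an evader: fined at every inspection encountered
worst_case_evading = fine * inspections_per_year

print(f"Cost of paying fares:       ${annual_fare}")
print(f"Worst-case cost of evading: ${worst_case_evading}")
# Paying costs $1,250 a year; even being fined at both checks costs $400.
```

On a purely financial reading, evading looks cheaper – which is exactly why the billboard appeals to a social norm rather than to the hip pocket.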

So why tell people that 9 out of 10 people are paying their fare? Why not a campaign that tells people about the $84 million of lost revenue caused by fare evasion? Why not increase the fine to ensure that people who never pay their fare cannot get rewarded for doing so?

The theory is that we are likely to substitute a ‘rule of thumb’ for a rigorous decision-making process, maybe one based on a cost-benefit analysis. We have busy lives and the world around us can be a noisy and confusing place. Using a rule of thumb is going to save us time. When we’re told that almost everyone pays their fare, we’ll probably just pay our fare. Not because we’re concerned about public debt. Not because it’s the right thing to do. But simply because we think other people are doing it too.

The use of social norms to change behaviour is just one type of intervention in a much larger academic discipline: Behavioural Insights (BI). The growing use and future challenges for BI in public policy were discussed at Behavioural Exchange 2018 (BX18) in Sydney on the 25th and 26th of June.

Welcome to BI

It’s the buzzword. Its big names – Kahneman, Tversky, Thaler and Sunstein – have become part of lunchroom chatter in public policy. And it’s winning more and more traction in the public service, with BI units now operating within federal and state governments across the OECD, particularly in the UK and Australia.

There’s a reason why. BI interventions have produced impressive results around the world. The UK’s Behavioural Insights Team (BIT), by switching to an opt-out enrolment scheme, achieved a 6.1 million person increase in the number of workers saving into workplace pensions.[2] BIT’s interventions have also led to improvements in the timeliness of people’s tax declarations, leading to significant increases in forwarded payments in the UK and Guatemala.[3] And in perhaps the most iconic example, Schiphol Airport in the Netherlands achieved huge reductions in bathroom cleaning expenses through the installation of tiny black fly-shaped stickers in the middle of its urinals.

It’s time to go big

There was general consensus at BX18, including from David Halpern (BIT, UK), that it’s time for BI to ‘go bigger’. The new frontier of BI, according to many, is the challenge of ‘scaling up’.

Scaling up is the process of expanding an intervention from a successful pilot to a larger population, with the aim of achieving the same results at a much larger scale. This is extremely difficult, but highly necessary, as the scaling up of BI interventions hasn’t always gone to plan – with promising effect sizes petering out. In Taking Nudges to Scale, Robyn Mildon (Executive Director, Centre for Evidence and Implementation, University of Melbourne) said that the way a program is administered by a particular organisation in its pilot stage can be a key variable. When the program is rolled out elsewhere, these administrative variables are overlooked, leading to reductions in effectiveness. Sometimes, interventions can be too complex to either roll out from pilot to larger scale within the prescribed budget or to transfer to a new context.

To address the problems of scaling up, John A. List introduced his theory, ‘the science of using science’, in the Plenary on Behavioural Insights in Regulated Markets on day two. His speech involved some complicated statistical concepts, but essentially emphasised the importance of replication. According to List, the issues encountered when scaling up pilots, especially the unexpected loss of effect size, can be largely nullified by having 4–5 successful replications before scaling up. An intervention that retains strong results across replications can be considered ready to ‘go big’.
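
To give a sense of what a ‘well-powered’ replication involves (this is a generic power calculation, not List’s analysis), the sample size needed to reliably detect a modest pilot effect can be estimated with standard tools. The effect size, significance level and target power below are illustrative assumptions only.

```python
# Minimal sample-size calculation for a two-arm replication trial.
# Effect size, alpha and power are illustrative assumptions only.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.2  # assumed small standardised effect from the pilot
alpha = 0.05       # conventional significance level
power = 0.8        # conventional target power

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 390 per arm
```

Small effects demand large samples, which is part of why running four or five faithful replications before scaling is a substantial undertaking.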

This is where the BI experts’ ‘heads are at’ in terms of new frontiers for the discipline. This focus on scale is great, but are there more ways to think bigger about BI?

From programs to systems

It’s a fair observation that the use of BI in public policy so far has been primarily program focused – that is, focused on interventions that attempt to change one particular behaviour or a small set of behaviours in a particular setting (such as a city or region) and/or among a particular population group.

But at BX18, tackling more complex problems was on the agenda. In the Taking Nudges to Scale break-out session, an audience member challenged the exponents of BI to apply their methods beyond programs, to systems. After all, what is the point of cost-effective interventions within the wider context of systemic issues? If an intervention doesn’t address a primary cause of a behaviour, what type of consequences will the intervention have?

A key difference between program-focused and system-focused approaches is scope. NSW Transport’s current strategy assumes that a primary cause of fare evasion is something happening in the decision-making process about whether or not to pay for a fare. Is this a satisfactory explanation of fare evasion? Maybe people in Sydney are less likely to pay their fares because the cost of public transport, and more generally the cost of living, is high.[4] There might be a larger context to fare evasion, perhaps a deeper systemic cause, that the BI Unit’s new campaign hasn’t taken into consideration. Taking a systems-focused (rather than program-focused) approach means addressing the wider scope of a particular behaviour.

Good intentions don’t always lead to good outcomes

BI interventions, even those that produce positive results, might have unintended consequences when the scope of the intervention is too narrow. An emerging body of evidence from Social Psychology suggests successful nudges can come with secondary unintended behavioural changes that can undermine the value of the intervention.

This phenomenon is known as ‘moral licensing’, and it might have large implications for BI. In a break-out session on day two, Daniel Effron delivered an interesting presentation about this effect, Morality, Decision-Making and Compliance. The central insight was that when people do good deeds, it can ‘free’ them to do bad things in the future. Effron referenced an interesting study from his career on the left-leaning campus of Stanford University (Effron, Cameron & Monin, 2009). Participants had previously been identified as having voted for Kerry in the 2004 election and Obama in 2008. One group was asked who they voted for in 2004, and the other who they voted for in 2008. They were then given a scenario in which they had to decide whether a black or white police officer should be given a job in a department with a history of racial division. He found that people who said they voted for Obama were more likely to say that the white officer should be given the job. It appeared as though allowing people to endorse Obama gave them some kind of ‘license’ to be racist in the follow-up question.

Unintended moral licensing effects have also been found in environmental conservation campaigns. One study found that a campaign implemented to encourage households to use less energy achieved a reduction in energy consumption of about 6 per cent, but during the same period, household water consumption increased by a similar margin (Tiefenbeck, Staake, Roth & Sachs, 2013).

It’s possible that nudging people to do the right thing may free them to do the wrong thing elsewhere. Maybe encouraging people to pay their train fare might free them to evade the fare on the bus or the ferry. While this doesn’t mean that all nudges are pointless or self-defeating, it could suggest that interventions with a narrow scope – those that target changes in a single or small set of behaviours – may have unintended and counterproductive outcomes.

What does this mean for BI?

This is a tricky question. Testing for unintended behavioural changes during the pilot and replication stages of an intervention is certainly a good place to start. Well-conceived nudges that have been tested for unintended consequences are highly valuable in public policy and should continue to be pursued. But what happens if we find sustained unintended outcomes that undermine the value of a particular nudge, and the targeted behaviour change is really important (e.g. reducing energy consumption)? Can BI still be effective where broader behavioural change is required?

The use of BI to target broader behavioural changes could involve the design of complex interventions that use multiple nudges targeting a range of behavioural changes to achieve broader policy outcomes. If we want to make it easier for people to behave in environmentally sustainable ways, we could use a set of particular nudges to decrease water and energy consumption and increase recycling. I don’t know of peer-reviewed research that has examined complex interventions such as these (please tell me if you do), so I ask your permission to make a few hypotheses…

I’m reminded of something Cass Sunstein said at BX18 – that BI can help us navigate the complex world around us, by making it easier to make decisions that are in our interest, despite the noise. Complex interventions with multiple nudges could have little behavioural impact due to sheer saturation, becoming part of the background ‘noise’ in which people make decisions. For example, ARTD evaluated the impact of a BI-informed trial a few years ago that sought to influence a series of household decisions using multiple nudges. In this case the results showed no differences between treatment and control.

BI units may also look to deliver powerful, ‘one-shot’ interventions that target a wider scope of behaviour; there may already be single nudges that lead to broader behavioural improvements. Unintended consequences do not have to be bad and measuring for them might show unexpected broader improvements for single nudges. Moral licensing is still a compelling issue, but this effect may have its own specific set of conditions that can be mitigated through evidence-based design.

It might also be the case that achieving broader behavioural change requires interventions that activate higher-level cognitive functions that can’t use the cognitive biases on which BI is traditionally based. This could lead to a whole new sub-discipline of behavioural economics.

BI’s experimental approach is having a positive impact on public policy. While RCTs aren’t always appropriate, they place a higher level of accountability on policy makers to question whether their interventions are working. RCTs can often be the ultimate test of whether an intervention works. There are some very interesting BI-informed interventions taking place right now, including the expansion (or scaling up) of a program aimed at improving compliance with apprehended domestic violence orders (ADVOs) – administered by the Aboriginal Services Unit (NSW Department of Justice) and the NSW BI Unit.[5] And what about the fare evasion reduction intervention from NSW Transport? Will it work?

At ARTD, we are excited to hear the results of these interventions. We have a keen interest in BI and want to see how it plays out in more complex contexts.

 


[1] https://transportnsw.info/tickets-opal/fines-fare-compliance

[2] https://www.behaviouralinsights.co.uk/uncategorized/setting-smarter-defaults-for-workplace-pensions/

[3] https://www.behaviouralinsights.co.uk/international/results-from-bit-tax-trial-in-guatemala/

[4] http://www.eiu.com/topic/worldwide-cost-of-living

[5] https://www.nsw.gov.au/news-and-events/news/aboriginal-domestic-violence-program-expands/

 

 


Evaluation: a mirror for social change


August 2018

By Rachel Aston

What role can and should evaluators play in reducing the inequalities that perpetuate social disadvantage?

The 2018 Evaluation for Change: Change for Evaluation ANZEA conference held in Auckland from 16–18 July was focused on tackling this question and asked attendees to consider what changes they could make in their practice to contribute to societal improvement.

Keynote speakers focused on the power of stories and the value of recent and emerging areas of evaluation and program practice, including co-design. Marcus Akuhata-Brown, a highly experienced educator, speaker and current University of Melbourne Atlantic Fellow, delivered a particularly resonant message. He offered thought-provoking reflections on the importance and value of connection and place for Indigenous communities in New Zealand when engaging in the evaluation process, and on how this process can reinforce efforts to change social structures that exacerbate inequality and affirm power hierarchies.

Lifting the lid on social inequalities and hierarchies of power

Marcus called on evaluators to facilitate opportunities for communities to connect through place and culture, to use their strengths and to identify what is needed to reduce social inequalities. Providing an outline of his whakapapa, Marcus showed how the process of tracing family lineages can not only be empowering, but also be a practice for connecting to place and culture. In the context of evaluands targeting social inequalities for Indigenous communities, this could be a valuable method to incorporate in the engagement process and could also support program design and development.

He noted that 'evaluators should be opening up conversations, rather than finding ways to narrow and limit them'. 

The keynote and indeed the conference theme itself reflected the close relationship between the practice of evaluation and the subject of the evaluation, where evaluators can implement practices that reinforce the aims of the program, policy or intervention and assist in the design process. Recent social change practice frameworks, including co-design and Collective Impact, also illustrate this.

Evaluating ‘lifting the lid’ efforts

In my presentation with Ruth Aston (University of Melbourne) and Robbie Francis (Director of The Lucy Foundation and a social change practitioner in Pluma Hidalgo, Mexico), we tackled questions associated with impact measurement in evaluations of social change efforts.

There has been significant growth in the application of a variety of approaches to understanding social impact and social change. Concepts of Collective Impact, co-design, human-centred design or design thinking, and principles-based evaluation can all reinforce and provide a foundation for social impact measurement.

As well as these, social science perspectives and fields – from intervention design, Implementation Science, Behavioural Insights and Realist Evaluation to the process evaluations that accompany randomised controlled trials – also reinforce the importance of implementation monitoring and quality intervention design for the impact of social change efforts.

Indicators for measuring progress towards social impact

In Ruth’s review of 7,123 interventions taking action on social inequalities in health outcomes, variables associated with implementation and program design—such as fidelity, dosage, implementation quality, reach and sustainability—were found to be moderators of intervention impact.

When these variables were present, the magnitude of impact on social outcomes increased. Variables included cultural relevance in the content, delivery and intentions underpinning the intervention – specifically, the intervention content (what it contained), how it was delivered (communication, language used, mode of delivery) and what participants did (behaviour change, session attendance) needed to be relevant to the culture (ethnicity, local community) participants identify with.[1]

All identified variables were factor analysed, and two clustered factors emerged:

  • intervention design
  • intervention implementation.[2]
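
For readers unfamiliar with the technique, factor analysis groups correlated variables into a smaller number of underlying factors. The sketch below is a generic illustration on made-up data – it is not Ruth’s analysis or dataset – showing how design- and implementation-related variables might load onto two factors.

```python
# Illustrative factor analysis on synthetic data (not the study's data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Two hypothetical latent factors: intervention design and implementation
design = rng.normal(size=n)
implementation = rng.normal(size=n)

# Observed variables constructed from the latent factors plus noise
X = np.column_stack([
    design + 0.3 * rng.normal(size=n),          # e.g. cultural relevance of content
    design + 0.3 * rng.normal(size=n),          # e.g. quality of intervention design
    implementation + 0.3 * rng.normal(size=n),  # e.g. fidelity
    implementation + 0.3 * rng.normal(size=n),  # e.g. dosage
    implementation + 0.3 * rng.normal(size=n),  # e.g. reach
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(np.round(fa.components_, 2))  # loadings: rows are factors, columns are variables
```

In a pattern like this, the design-related variables load mainly on one factor and the implementation-related variables on the other – the kind of clustering described above.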

The findings illustrated that, despite the complexity of social change efforts, intervention design and implementation are related to long-term impact. Therefore, monitoring and evaluation need to focus on these areas, at least in the early stages of social change efforts.

Practical implications for evaluating ‘lifting the lid’ efforts

Robbie reflected on the findings of the research, noting that it has been challenging for the Foundation to demonstrate its progress to commissioners, funders and supporters. Measures associated with intervention design and implementation would support in-field adaptation and continuous improvement. However, this would require advocating the worth of such measures, as those interested in funding the work of the Foundation can have an unrealistic or Westernised perspective of what social change outcomes are feasible within a timeframe, and of what social change outcomes matter to the communities the Foundation works with. For instance, standardised tools to measure quality of life (such as QALYs) tend to be irrelevant to the communities Robbie works with, but are favoured by funders.

She also shared that the biggest enabler of social impact measurement is ensuring that the evaluation process mirrors the principles of the social change effort itself, again highlighting the relationship between evaluation and the subject of the evaluation, and how evaluation can support change. For the communities The Lucy Foundation works with, and for people with disabilities, Robbie advised that evaluation should be underpinned by the ‘nothing about us without us’ principle.

This principle could be applied to work with other communities. It highlights the critical foundation of social impact measurement: involving everyone engaged in the social change endeavour in the evaluation, including those who fund and commission it.

Continuing the conversation

Many questions remain about social impact measurement and evaluation, and we will be continuing the conversation at the AES conference. Come along to The Promise Design-thinking and Implementation Science holds for Social Impact Evaluation: Views from Practitioners and Evaluators at Chancellor 6, Hotel Grand Chancellor Launceston on Wednesday 19 September from 1:30-2pm.

For more information on The Lucy Foundation, visit: www.thelucyfoundation.com or follow www.facebook.com/thelucyfoundation.


 [1] Aston, R. (2018). Creating indicators for Social Change in Public Health. Unpublished PhD thesis, University of Melbourne

[2] Ibid.


Masterminding an evaluation approach


July 2018

By Ken Fullerton

Conducting a program or policy evaluation is not always a straightforward process. But bringing stakeholders together to discuss and reflect on the best approaches to tackling evaluation challenges can really help.

This is what happened on Thursday 28 June, when evaluators, non-evaluators and other industry stakeholders attended the latest AES Learning Lab in Sydney, focused on Real-Life Evaluation Challenges. Placed into small groups, attendees were asked to dive deeper into a selection of real-life evaluation challenges that they, their organisation or others, are currently experiencing, using the Mastermind approach.

What’s the Mastermind approach?

The Mastermind approach involves one group member briefly explaining their evaluation challenge in 5 to 10 minutes. This might be as simple as seeking clarity on a small aspect of a larger evaluation or as complex as questioning the overall approach. Other members are then encouraged to probe, ask informative questions and make suggestions based on their own experiences, interests and knowledge. Suggestions may be low or high cost – a recommendation to contact ‘person x’ or ‘organisation y’ might be as beneficial as one to make use of a completely new evaluation approach.

While there is no obligation to act on anything put forward, the Mastermind approach aims to open people’s eyes to things they hadn’t considered or connect them to new networks and resources.

What were the results?

For one participant, the process reinforced the value and worth of the approaches and methods that they were already using. Learning that other evaluation professionals would go about things in the same or similar way can be reassuring, given the complexities and challenges involved in evaluation.

Another participant was surprised at the variety of responses and suggestions. Some, she and her organisation had considered, while others were completely new, such as exploring the possibility of engaging further with the AES’s network to increase survey and interview response rates.

Others found the Mastermind approach helped them to think ahead about potential challenges and identify some solutions, such as thinking about program design and data collection early on. This led to further group questions about whether any data collected would be relevant for different parts of the program or if different types of data were required.

The flexibility of the approach appealed to many, as it can be used at any stage of an evaluation, across all sectors and by all levels of staff. It could just as easily take place in workplace kitchen chat as it could in a formal workshop environment.  

What’s the use?

At ARTD, we are big supporters of reflective practice and keen to make use of the Mastermind approach. Sharing your challenges and being open to feedback can lead to innovative, practical or useful suggestions that could improve your work. Understanding why things have not worked as expected can also be more powerful than learning from success (read more on how sharing failures strengthens evaluation).

The next AES Learning Lab session will be on ‘Evaluation in complex settings: taking a systems approach with an eye to the United Nation’s Sustainable Development Goals’ and will take place in Sydney’s CBD on 26 July 2018.


Where to next for behavioural insights?


July 2018

By Jade Maloney

Behavioural insights have risen rapidly on the agenda of governments around Australia and the world. This year, there are just over 200 nudge units across the world.

Now that “nudge” units have been integrated into the machinery of government, behavioural economists are setting their sights on new frontiers. The buzz at Behavioural Exchange 2018, held in Sydney on 25 and 26 June, was all about where behavioural insights was headed next.

Top of the list were tackling more complex social problems, harnessing new technologies and machine learning, increasing interdisciplinary collaboration and scaling.

Can algorithms be accountable?

Artificial intelligence and algorithms have potential to assist governments in addressing complex problems. For example, the UK’s Behavioural Insights Team has used tech-based approaches to analyse social worker case notes and outcomes to understand factors that indicate a need for intervention. Combined with discussions with experienced social workers, this is informing training for new social workers.  

Australia’s own Data61 at CSIRO is exploring the use of integrated datasets, machine learning and the potential for personalisation, while Stats NZ has an Integrated Data Infrastructure that allows data linkage across agencies – one many an Australian researcher would love to access.

If you’re wondering about the ethical dilemmas these new approaches give rise to, Bill Simpson-Young, Director of Engineering and Design at CSIRO, set out five handy principles for accountable algorithms: Responsibility (i.e. a human is responsible), Explainability, Auditability, Accuracy and Fairness.

When an audience member asked about government’s responsibility to share the results of integrated data analysis, such as that undertaken by Stats NZ to identify key factors in childhood that relate to poorer outcomes in adulthood, Michael Sanders of the UK’s Behavioural Insights Team raised the need to ensure that analyses do not reinforce negative expectations and create self-fulfilling prophecies (e.g. by telling people that they fit the criteria for poor life outcomes).

Behavioural insights and design thinking – best friends or barely able to relate?

Not to neglect the other big buzzword in government these days, the conference also explored synergies between co-design and behavioural insights.

Nina Terrey of Thinkplace set out the different foundations of design thinking and behavioural economics:

  • worldviews (social constructionist vs logical positivism)
  • approaches to problem solving (abductive vs inductive and deductive)
  • processes (participatory and dialectic vs expert collaboration)
  • approach to systems (system disruption and collaborative generation of solutions vs identifying ways to make existing systems work more efficiently and effectively).

Sometimes these differences can give rise to tensions. But the disciplines have been able to forge partnerships because both enable deeper learning and designing of solutions to complex policy problems.

Dilip Soman, Professor at the University of Toronto's Rotman School of Management, suggested the two are two sides of the same coin – both begin with empathy.

Who are the stakeholders?

Both behavioural insights and design thinking have a focus on understanding stakeholders’ perspectives (albeit in different ways), so it’s unsurprising that consultation was on the agenda. Martin Parkinson, Secretary of the Department of the Prime Minister and Cabinet, kicked off the conference by calling on the public service to make better use of evidence and to learn from failure. He suggested that almost all problems in policy arise because we haven’t thought about all of the stakeholders – not only the end users, but the decision makers and practitioners.  

Cass Sunstein, Professor at Harvard University and one of the key authors in behavioural insights, similarly identified public consultation as important to informing government decision-making, not an exercise in legitimation.

Where was evaluation?

There were plenty of references to randomised control trials (RCTs) and debate about what constitutes evidence. Professor in Economics at the University of Chicago, John A. List, proposed that 3–4 well-powered, independent RCT replications are required before scaling up an intervention. Deputy Secretary, Economic at the Department of the Prime Minister and Cabinet, David Gruen agreed on the importance of evidence but noted the need for timely action and that “the truth is only one special interest group in policymaking and not particularly well funded.”

But there was little reference to the broader discipline known as evaluation – and its range of approaches to answering different questions in different contexts. For the relatively straightforward kind of interventions and the kinds of questions behavioural insights trials have engaged with to date, the focus on RCTs has been appropriate. But will RCTs have the answers, as behavioural insights moves to tackle more complex problems in complex dynamic systems?

Our experience in evaluation suggests a broader repertoire of approaches and engagement with system dynamics will be needed. Evaluation has long acknowledged economic evaluation and frequently included it in its repertoire. Will behavioural insights start to draw more on evaluation expertise?

Here’s hoping a deeper relationship between evaluation and behavioural insights practitioners will be one of the new frontiers.


Sharing failures strengthens evaluation


June 2018

By Gerard Atkinson and Melanie Darvodelsky

We all like to share success stories, but the fact is that we can learn just as much – and sometimes more – from talking about when things don’t go as planned.

This was the subject of the Australasian Evaluation Society NSW event on Wednesday 30 May, “Learning from evaluation failures”. The event was run by two experienced evaluators who each shared a previous case of evaluation “failure”, where the client had difficulty accepting the findings of the evaluation.

The cases

Case 1: The evaluator found out that the number of participants who transitioned from institutional care into a support program was zero. When the evaluator presented this finding at the final meeting, the client questioned its accuracy.

Case 2: The evaluator worked with the client from the beginning to identify and agree on which data would be used to measure outcomes, but as the project progressed, the client seemed to value their internal data over external sources. At the close of the project, the evaluator pointed to external data to say that the program objectives were not met. However, the client disagreed, pointing to their internal data to maintain their view.

Evaluators at the session formed small groups to discuss “What could the evaluator have done to prevent or minimise this negative result?”

What could have been done differently?

Gaining acceptance of and action on negative findings is tough. This is unsurprising given the evidence that people tend to accept information that confirms their views and reject information that challenges them.

The key issue identified from both cases was a need to bring people along on the evaluation journey. It appeared that in the first example, the evaluator operated alone, which may have exacerbated the negative reaction at the close of the project. In the second case, the evaluator and client did not stay on the same journey despite their initial agreement. Working in partnership with stakeholders, and maintaining that partnership, is an effective way to prepare them for and ease their acceptance of negative findings, as well as to increase their sense of ownership of the project and the next steps needed to create change.

Evaluators identified a range of practical ways to work in partnership with these stakeholders that may have led to more positive project outcomes.

  • Communicate regularly and proactively throughout—this can range from formal check-in meetings to an informal understanding to communicate key information as it comes to light. What is important is that there is a shared awareness of the methods being used and key results as they emerge.
  • Engage and get endorsement of primary users—engaging senior management decision-makers, seeking to understand their expectations about outcomes, and gaining their endorsement at the outset can help to reduce risks.
  • Understand the context—a key element of utilisation-focused evaluation is an appreciation of the context (political and programmatic) in which an evaluation takes place. The priorities, needs, and preconceived expectations of stakeholders can shape how an evaluation is developed and ultimately accepted. Even with regular and proactive communication, if the program team has a vested interest in the evaluation producing positive results, negative findings can create friction.
  • Re-frame negative findings—framing negative or contradictory findings as lessons and opportunities for improvement can help pave a way forward.
  • Identify the potential for negative findings at the outset—it is just as important to ask clients what failure would mean and how they would respond as it is to ask what success would mean. This helps to identify expectations, enabling you to frame how you communicate activities and results so that stakeholders feel part of the journey, and are empowered to make changes as a result.

These strategies fit with the findings of ARTD Partner Jade Maloney’s research on evaluation use. However, Maloney’s research also identified that these strategies can fail when working with organisations that lack a learning culture and when findings are politically unpalatable.

The strategies also align with Michael Quinn Patton’s Utilisation-Focused Evaluation. Quinn Patton’s approach provides a framework for evaluators to maximise the intended use of evaluations by users, even where the results of an evaluation may not match what program staff or management expected.

Let’s keep sharing

The evaluators’ candour in telling their stories, and in allowing other evaluators to consider how we can collectively achieve greater use of evaluations, is a positive contribution to evaluation practice. It builds on the growing conversations in the field, such as those seen at the AES 2017 conference in Canberra, and in Kylie Hutchinson’s recent book “Evaluation Failures”.

We’re keen to continue the conversation – this year’s AES Conference will be a great opportunity.

Resources

Hutchinson, K., “Evaluation Failures”, 2018

Patton, M., “Utilization-Focused Evaluation”, 4th Ed., 2008

Patton, M., “Utilization-Focused Evaluation (U-FE) Checklist”, 2013

Ramirez, R., and Brodhead, D., “Utilisation focused evaluation: a primer for evaluators”, 2013


How can governments harness citizen input in decision-making?


June 2018

By Ruby Leahy Gatfield

Engage2’s panel discussion in Sydney this week left us asking important questions about how governments can use new tools to better engage and listen to citizens, and importantly, how we can measure the impact of public engagement activities on government decision-making. 

The Vivid Ideas event, Democracy is being Disrupted: Governing in the 21st Century, brought together leading experts in democracy building and community engagement to discuss what participation in a representative democracy should look like and the many new tools and methodologies available for building stronger democracies.

Tom Burton, publisher for The Mandarin, opened the event by asking how our institutions are and should be responding to the global phenomenon of democratic disruption.

A burning platform

Alan Dupont, CEO of the Cognoscenti Group and political strategist, then warmed up the panel by creating a “burning platform” to spur change, at the request of Engage2 Managing Director, Amelia Loye.

Dupont explained that a recent rise in populism and democratic backsliding in countries around the world has led everyday people, particularly young people, to question the value of democracy. He went on to outline five causes of democratic disruption that he has observed:

  1. Macro-system instability and the demise of the Pax Americana
  2. Digitalisation—the rise of IT and social media, which have both facilitated democratisation and shone a light on our institutions’ imperfections
  3. Rising inequality—which has created distrust in government and the value of democracy
  4. Increased unregulated migration—which has divided public debate and led to civil unrest and disenfranchisement
  5. Urbanisation and population growth.

Dupont concluded that “while democracy is being disrupted, it is not beyond repair”. This echoes International IDEA’s recent research, The Global State of Democracy, which found that, since 1975, the world has experienced significant democratic progress – particularly in terms of clean and fair elections, respect for human rights, checks and balances on government and citizen engagement. However, this progress has slowed significantly over the past decade. The report concluded that we are now at a crossroads and need to adapt our processes and institutions to safeguard democracy.

How can we better engage citizens?

Dupont’s remarks were followed by a lively panel discussion between Burton; Dupont; Loye; Elizabeth Tydd, NSW Information Commissioner and CEO of the Information and Privacy Commission NSW; Daryl Karp, Australian Museum of Democracy; Iain Walker, Executive Director of the New Democracy Foundation; and Jamie Skella, Co-founder of Horizon State.

The discussion highlighted the need for governments to not only engage the disengaged, but to genuinely listen to and deliberate with the many Australians who are engaged, but don’t feel they can influence government decision-making or policies.

This is particularly important in the context of the Open Government Partnership, an international initiative to ‘secure a commitment from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance’. Australia became a member of the Partnership in 2015 with the launch of its first National Action Plan.

To help honour our commitment to the Partnership and strengthen our democracy more broadly, the panel discussed many emerging technologies and methodologies for effective citizen engagement. These range from sophisticated artificial intelligence and data mining techniques to analyse qualitative feedback, through blockchain voting technology, to face-to-face methods. Face-to-face methods can enable people to deeply engage, exchange view points and build shared understandings.

When engaging citizens, they stressed the importance of reaching representative samples; breaking down information for people to digest and thoughtfully consider; asking open-ended consultation questions without predetermined policy responses; using co-design as a genuine method (not a buzzword); and using a multifaceted approach—both online and face-to-face. Tydd observed that local councils often engage particularly well through on-the-ground consultations.

Measuring the impact of engagement

As monitoring and evaluation specialists, we left the panel asking some fundamental questions about how we can measure the impact of citizen engagement activities.

To what extent are governments effectively engaging citizens? Is the feedback collected in consultations publicly reported? How do we know whether this feedback is influencing decision-making and policy design? And where it isn’t, is the rationale communicated?

We know from experience that feeding back how data is used is key to effective, ongoing engagement. So, it is critical that agencies monitor and evaluate their engagement activities to answer these questions, be transparent and accountable, and ultimately, build greater trust in democratic processes.

Stay tuned as we set out to answer some of these questions in upcoming research.


The many uses of theory in evaluation


June 2018

By Alexandra Ellinson and Jade Maloney

The word ‘theory’ is often bandied about by evaluators. But they’re not all talking about the same thing. And some authors on evaluation don’t even think theory is necessary for evaluation.

Like our Senior Manager and former evaluation lecturer, Brad Astbury, we think theory is useful and can be used in evaluation in multiple ways.

Donaldson and Lipsey in The Handbook of Evaluation identify three broad types of theory in evaluation.

  1. Evaluation theory or theories of evaluation practice. These are the ideas about what evaluation is and how it is practiced (descriptive theories) and/ or ideas about what evaluation should be and how it should be practiced (normative theories).
  2. Social or social science theories: These are the explanatory frameworks that draw on research evidence about how people usually behave or how systems or organisations function, generally or in specific contexts. These are important because in evaluation we are often considering how an intervention affects individual behaviour change or systems change.
  3. Program theory: These are theories about the specific program, intervention or policy being evaluated, which aim to present a plausible account of how and why a program is expected to work.

While it’s helpful to clarify these different uses of the term ‘theory’, at ARTD we tend to prefer the term ‘approaches’ when referring to descriptive accounts of what evaluation is or normative accounts of what it should be. This is because these ideas (e.g. participatory or utilisation-focused evaluation) do not provide causal explanations—something we take to be an essential feature of a theory—but rather posit assumptions about the ontology of evaluation (what it is) and its epistemology (how we can be confident that evaluative claims are accurate), and/or state principles for good evaluation practice. Even theory-based approaches to evaluation are not themselves evaluation theories, but rather a certain way of doing evaluations: one that commences with a set of (hopefully evidence-based) assumptions about the nature of the thing being evaluated and how the intervention is expected to cause outcomes.

Terminology aside, it is very useful to articulate one’s approach to evaluation, not only to ensure that it is consistent and coherent, but also to build a shared understanding about the approach with a client or stakeholders (especially if their close engagement in the evaluation is required). It is also part of fostering a community of professional practice. 

We also think that evaluations can and should draw more on social science theories. While it is common for evaluations to involve a review of literature related to the content area of the program, there is not always a systematic approach to the application of existing knowledge in the field to evaluation.

Increasingly, we are looking at how social science theory can feed into program theory. We examine the evidence about programs that are built on similar bodies of psychological or social research. This helps us assess the program’s evidence base, identify adjustments and focus our evaluative effort on the gaps in the evidence.

We also consider ‘negative program theory’ to identify how an intervention could potentially result in the opposite or a negative outcome, and draw on the research to manage expectations about the timeframe in which outcomes may be observed.

So what does this look like in practice?

In an evaluation of a peer support program for students with disability and mental health issues, we drew on the evidence base about how peer support models work to empower participants. We also considered how the program could result in social exclusion and increased anxiety rather than social inclusion, and identified how this risk would be managed in the program design.

In an evaluation of an intervention designed to reduce antisocial behaviour, we drew on criminological literature about deterrence effects as well as emerging psychological evidence about what makes people more responsive to regulation. By doing this, we could explain the pattern of outcomes and make recommendations about how to better target the policy to people for whom deterrence is most likely to be effective, while minimising potentially negative unintended consequences.

In an evaluation of an intervention to support victims of crime, we combined social science and realist theory. The diagram at the top of the page provides a simplified overview.

We are keen to hear from other evaluators about how they use 'theory' and reassured by the recent turnout to Brad Astbury's AES workshop on theories of evaluation.


Agile evaluation, can there be such a thing?


May 2018

By Alexandra Lorigan

‘Agile’ has emerged as one of the latest buzzwords in Government Departments. But what does it mean to be agile, and can we do more agile evaluation?

On Wednesday 2 May, Florent Gomez from the NSW Department of Finance and ARTD Partner Jade Maloney delivered a free lunchtime seminar in Sydney organised by the Australasian Evaluation Society (AES).

Drawing on ideas from an article by Caroline Heider at the World Bank, Florent introduced the Agile project management methodology, and participants then discussed how this could apply to evaluation, if at all. 

So, what does it mean to be ‘agile’ and is there a place for it in evaluation? 

Agile originated in the IT world as a project management methodology that uses short development cycles. In 2001, 17 software developers formalised the approach into a set of 12 key principles known as the Agile Manifesto. From the first two principles, it is clear that at its core, the Agile approach is customer-centred, deeply collaborative and constantly adapting. This is in contrast to the traditional ‘waterfall’ approach to IT project management, where the project plan is designed at the outset and then followed in sequence, with little flexibility for customer input.

Another key aspect of the Agile approach is the commitment to speed and efficiency, which Moira Alexander highlights in her article, ‘Agile project management: A comprehensive guide’. According to Alexander, the desire for rapid adaptation and optimal design requires both simplicity and a high level of self-organisation and accountability by teams.

Though the Agile methodology originated in the software industry, where it continues to boast an adoption rate of 23%, it has since been taken up by a number of other industries. Among these is government, which uses the methodology on roughly 5% of its projects.

Florent became interested in the concept when he joined a government agency, where many projects were delivered based on the Agile methodology and the ‘A’ word was heard everywhere. In his new role as internal Evaluation Manager, the expectation was also to evaluate these projects in an agile way, meaning, within very short timeframes.

In her article, Caroline Heider suggests how the concept of agile could be translated to evaluation practice. In addition to narrowing the scope of the project, she suggested evaluation could be more agile, or efficient, by drawing on:

  • existing and standardised datasets
  • algorithms for collecting, organising and presenting data (see the short sketch after this list)
  • electronic data collection methods, such as online surveys
  • effective project management skills and tools.
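
To make the second of Heider’s suggestions a little more concrete, here is a minimal, hypothetical sketch (ours, not Heider’s or the presenters’) of a reusable script for organising and presenting a standardised online survey export. The file name and question columns are assumptions for illustration only.

    import pandas as pd

    def summarise_survey(csv_path: str, rating_columns: list[str]) -> pd.DataFrame:
        """Turn a standardised survey export into a simple summary table.

        Assumes a CSV with one column per question and numeric ratings from 1 to 5.
        """
        responses = pd.read_csv(csv_path)
        ratings = responses[rating_columns]
        return pd.DataFrame({
            "responses": ratings.count(),            # answers received per question
            "mean rating": ratings.mean().round(2),  # average rating per question
            "% rating 4 or 5": (ratings >= 4).mean().mul(100).round(1),
        })

    if __name__ == "__main__":
        # Hypothetical file and column names, for illustration only.
        print(summarise_survey("survey_export.csv",
                               ["q1_satisfaction", "q2_usefulness", "q3_timeliness"]))

The gain in agility comes from reuse: once the export format is standardised, the same few lines can be rerun for each new round of data rather than rebuilding the analysis every time.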

Interestingly, most participants agreed that Heider’s suggested approaches to shortening response times are already widely practiced in the evaluation world.

So do AES members see the potential for evaluation to be more agile?

With this grounding, it was turned over to us evaluators to envisage whether we could see the potential for more agility in our work. Specifically, we were asked to consider the benefits, enablers and risks to evaluation being more agile.

Participants agreed that by being more agile, we could make evaluation more focussed, responsive, creative and ultimately, produce more useful products. However, they acknowledged that being able to make evaluation agile would depend on:

  • having the necessary IT systems and the skills to use them
  • whether the project needs ethics approval, which would limit any potential to change processes
  • the level of buy-in and engagement of clients in this particular approach
  • having the right structures and processes in place to facilitate such flexibility.

While Heider’s primary message was that there are ways to make evaluation more agile, both she and AES members acknowledged the risk of quality loss. Participants expressed fears that agile had the potential to become too ‘quick and dirty’ to produce meaningful results. They noted that evaluators may risk avoiding slow, but often necessary, methods of data collection in favour of faster, possibly unsuitable, methods. Additionally, participants identified the risk of both scope creep, which affects project budgets, and scope narrowing, which could limit capacity to make well-informed recommendations.

So, where does that leave us?

The workshop generated useful discussion and allowed evaluators to consider how they can be more agile without compromising the quality of their work.

Participants identified clear synergies between Agile project management and developmental evaluation—developing a program in real-time through close consultation with program staff—and utilisation-focussed evaluation—conducting evaluations with a focus on intended use by end users.

As firm believers in our own ‘lunchtime learnings’, ARTD looks forward to attending more of these short and engaging lunchtime sessions in the future. You can visit the AES website for a full list of upcoming events.  


Using Evaluation Theory to Inform Practice


May 2018

By Brad Astbury, Senior Manager

Evaluation is a young discipline, especially when compared to other fields of inquiry like sociology, economics and psychology. Even so, there exists a rich intellectual history and vibrant body of theoretical knowledge that continues to grow and evolve. When I was invited to deliver a workshop on ‘Theories of Evaluation’ for the Australasian Evaluation Society (AES) autumn intensive, there was one take-home message I wanted to convey—if evaluators are not tapping into the 60-year repository of hard-won lessons, they are missing out on considerable wise counsel. This ‘wise counsel’ can greatly improve the quality and utility of evaluation practice.

Without knowledge of evaluation theory, we are susceptible to repeating mistakes of the past and relying on little more than professional folklore and an assortment of methods with no guiding principles for their application.

The formal aims of the workshop were to provide participants with a better understanding of:

  • the nature and role of evaluation theory
  • major theorists and their contribution
  • approaches to classifying evaluation theories
  • key ways in which evaluation theorists differ and what this means for practice
  • dangers involved in relying too heavily on any one particular theory
  • techniques for selecting and combining theories based on situational analysis.

 As a passionate advocate of the late Will Shadish’s maxim ‘Evaluation theory is who we are’, I remain committed to the view that evaluation theory is central to our professional identity. Here’s some tough love from Shadish (1997):

If you do not know much about evaluation theory, you are not an evaluator…to be an evaluator, you need to know that knowledge base that makes the field unique. That unique knowledge base is evaluation theory (pp. 6-7).

So what exactly is evaluation theory? A broad answer is that evaluation theory is the body of writings about evaluation that have at their core a concern for practice. Another response is to view evaluation theory as a set of prescriptions about what constitutes ‘good’ evaluation and how it ought to be conducted (as detailed in alternative evaluation models or approaches).  Yet another perspective considers evaluation theory as comprising several meta-components including use, knowledge construction, valuing, social programming and practice. In my view, all three framings are important and should be considered within an integrated perspective on evaluation theory.

During the first part of the workshop, we examined these different ways of thinking about evaluation theory, drawing on a conceptualisation that I developed a few years ago to help postgraduate students circumnavigate the ‘theory jungle’. As part of the discussion, we compared the ‘big seven’ theorists: Donald Campbell, Michael Scriven, Robert Stake, Carol Weiss, Joseph Wholey, Peter Rossi and Lee Cronbach, as well as other ‘intellectual heroes’ that have found a place on Alkin’s infamous theory tree.

In the second part of the workshop, I presented an example of how a realist theory of evaluation can be useful for guiding practice, especially when the purpose of the evaluation is to support program improvement and transferability. I also offered some insights from Ernest House about how research on cognitive thinking and bias can prevent evaluators from making fundamental errors when drawing evaluative conclusions. As a group we brainstormed strategies to help determine when and where to use different evaluation theories and approaches, given considerations of time, budget, data, stage of program development, and so on.  There is no single or ideal theory of evaluation that will work always and everywhere. It is critical to select and combine approaches in response to situational contingencies.

In the final session, I emphasised the importance of evaluating evaluation theory and considered different criteria that might be useful for reflecting as a discipline on which theories of evaluation are ‘better’ than others. Should we continue to ‘let a thousand flowers bloom’ or is diversity of evaluation models and approaches leading to fragmentation of the field? My own view on this is that we need to get better at determining which theories of evaluation have merit and which are whimsical fads and fashions (or worse, harmful ‘snake oil’).  Evaluators need to reflect an evaluative gaze back on evaluation itself.

I invite readers of this blog to reflect deeply on two questions presented to participants during the workshop.

  • Who/ what has had the most influence on how you conceptualise and conduct evaluation?
  • What guides your current evaluation practice? 

As part of the reaction this exercise triggers, I hope that evaluation theory becomes an increasingly prominent resource that you draw upon to inform everyday practice and decision-making.

Thanks to the AES for hosting and organising the autumn professional learning program and to the many participants who attended workshops delivered over this inaugural three-day event. 


Knowing for whom and how is as important as knowing what you’re achieving


April 2018

By Partner, Jade Maloney

Settlement Services International (SSI) is one of the largest providers of Ability Links – a NSW Government-funded initiative that empowers people with disability, their families and carers to work towards their goals by building on their strengths and connecting with their local communities, and supporting local community and mainstream organisations to become more inclusive.

SSI has Linkers in over 40 Local Government Areas (LGAs), and works in partnership with Uniting and St Vincent de Paul NSW in all metropolitan Family and Community Services Districts as well as in Illawarra/ Shoalhaven and Southern NSW. SSI’s delivery locations include the LGAs with the largest culturally and linguistically diverse (CALD) populations in NSW.

From the state-wide evaluation of Ability Links, SSI knew the initiative was achieving positive outcomes for individuals and a return on investment for the NSW Government. What they didn’t know was how they were supporting outcomes for the diverse individuals and communities they worked with.

In late July 2017, SSI engaged ARTD to evaluate their delivery of Ability Links – with a focus on benchmarking their performance against the program as a whole and understanding how they were supporting outcomes and what improvements could be made.

To understand the ‘how’ underlying SSI’s outcomes for all of the individuals it was supporting, as well as individuals from CALD communities, we used a realist-informed approach – identifying theories with an evaluation steering group and testing and iteratively developing these through a series of interviews with Linkers employed by SSI and community organisations, and finally a participant reference group.

The state-wide evaluation had already engaged with people supported through Ability Links, and SSI was engaging with the people it was supporting to develop a book of their stories (published in Our Community: Stories of Courage Strength and Determination) and conduct a longitudinal Participant Wellbeing Study. So we had to be careful to make use of available data to avoid creating an additional data collection burden, while still putting people with disability at the centre of the evaluation – following the philosophy of ‘nothing about us without us’. 

We were able to do this with the participant reference group. With the assistance of two language interpreters, a physically accessible venue, and a discussion approach that was inclusive for people with a vision impairment – we were able to talk through, test and refine the emerging ‘theories’ about how SSI’s Ability Links supports outcomes with eight people who had accessed Ability Links, as well as identify improvements. This process helped to ensure the evaluation team interpreted the findings in context.

The evaluation identified a range of factors supporting positive outcomes, some of which were unique to SSI – such as their workforce of Linkers from diverse cultural and language backgrounds embedded within their communities – and some of which were tied to the flexible, person-centred and responsive nature of Ability Links. The evaluation also found that Linkers supported outcomes for individuals in varying ways – depending on their starting points, needs and goals. In some cases, people come with ideas and Linkers help to make these happen, while in others, Linkers help to turn people’s interests into ideas for community connections. Linkers can also build people’s confidence in varying ways: through the encouragement of a Linker or through social connections.

SSI is using the findings to inform its service delivery and recently shared its learnings at the DiverseAbility Conference. You can access a summary of the findings of the evaluation on SSI’s website.


Putting logic back into program logic


April 2018

By Consultant, Ken Fullerton

Are the program logics that you see actually logical?

On Wednesday 11 April 2018, ARTD Partner Andrew Hawkins delivered a free seminar in Sydney organised by the Australasian Evaluation Society (AES), which was attended by approximately 50 people.

Hawkins first briefly introduced his subject. His generalist definition of a program logic is “A one-page diagram or model of the important components of an intervention and how they work together to deliver outcomes.”

He then gave attendees three example program logics to use as references in their discussions of two key questions.

Question 1: What makes program logics logical?

  • They represent a narrative or ‘plausible story’.
  • There is an expectation that activities in a program logic will lead to particular outcomes.
  • They are not business plans and, due to limited space, do not map out every potential input, activity, outcome, etc.
  • They should be meaningful to various stakeholders.
  • They represent a thoughtful process or ‘journey’ describing how an organisation can go from Point A to Point Z, and anywhere in between.
  • Different program logic formats, styles and colours may appeal to some but not to others (however, whichever styles are selected should be used thoughtfully).
  • Being aware of when to use a particular agency or organisation’s accepted template format so that the organisation itself can more easily understand the program logic.
  • Trying to be too logical in a program logic can actually hamper innovation.

Question 2: What do the arrows mean in a program logic?

  • They lead to or influence a particular activity or outcome.
  • Sometimes they seem to mean “here’s where the magic happens” rather than a logical link.
  • They are assumptions about what needs to occur to allow an organisation to go from Point A to Point B.
  • They provide direction around expectations and what has to happen in an organisation.
  • They can indicate where an evidence base justifies a connection.
  • They represent sign-posts for the reader showing where the story starts and where one must look to next.

Later, Hawkins gave an overview of his own approach to program logic. He argued that a ‘configurationalist’ understanding of causality would be more useful than the ‘successionist’ understanding deployed in many program logics. His point was that effective programs are better thought of as a ‘causal package’ with certain assumptions, like a recipe, rather than a linear ‘causal chain’ where one element in the program logic causes the next one.

Hawkins believes a program is better thought of as a proposition or argument that a course of action will be sufficient to bring about a desired result, rather than a theory about change or a theory about action.  He said that while theory was very important for providing reasons, justifications or ‘warrants’ for elements of the program design (and for the program as a whole), he thinks it is too much for a program logic diagram to display this theory.

Instead, he proposed focusing on the conditions that program activities need to achieve and that, together, would be enough for the intended outcome to be achieved. He argued that this approach enables critical thinking (that can support realistic design) and evaluations focused on measuring whether a program is sufficient for achieving its objectives. It can also support the program’s argument or business case.


Governing the politics of evidence – Book review


April 2018

By Senior Manager, Alexandra Ellinson

The Politics of Evidence: From evidence-based policy to the good governance of evidence, Justin Parkhurst, Routledge Studies in Governance and Public Policy, 2017.

As evaluators, we want to generate credible evidence that is relevant and useful for informing public policy decisions that improve social outcomes. So, it’s unsurprising that Justin Parkhurst’s recently released book—or more specifically, its subtitle, ‘From evidence-based policy to the good governance of evidence’—caught my eye.

This book is a helpful reminder of the imperative that evidence-based policy making addresses the reality of politics head-on; and that this can (and should) be done without giving up on rigorous research. An Associate Professor at the London School of Economics and Political Science, Parkhurst shares the concerns of both those who champion evidence in the face of the politicisation of science, and those who critique the de-politicisation of policy making that can occur when social values are obscured through the acceptance or promotion of only limited evidence sources and methods. In what follows, I briefly outline the key points and conceptual moves that Parkhurst makes in this book, and highlight what I take to be most relevant to policy makers and evaluators alike: his argument that there is a need for advisory systems that normatively embed the good use of evidence in policy making.

Part I opens with an exploration of what is meant by the oft-touted phrase, ‘evidence-based policy’. Parkhurst clearly unpacks the usefulness and limitations of framing policy making simply around the aim of ‘doing what works’. Starting from the premise that evidence matters a great deal for good policy, Parkhurst gives examples of what goes wrong when there is a lack of information or poorly used evidence (e.g. the public health advice that babies should sleep on their fronts, which continued for decades despite mounting data about the dangers of this practice). Yet he also acknowledges the limits of technocratic cries for ‘more evidence!’ to inform policy decisions that are essentially principle- or rights-based (e.g. access to reproductive health choices or same-sex marriage provisions).

Parkhurst goes on to discuss the value of evidence from well-designed research methods applied with nous, and displays an appreciation of methodological pluralism when it comes to appropriately understanding social issues. This includes an important, but by no means preeminent, place for randomised controlled trials. He also gives a quick nod to realist evaluation by challenging readers to think about more than just ‘what works’, but also consider what works ‘for whom’ and ‘where’. While devotees of realist approaches might take issue with how he employs the term ‘mechanism’ in this discussion (given its very precise meaning in the field), they are likely to endorse the spirit of his argument.

Interestingly, Parkhurst doesn’t discuss or appear to make a distinction between evidence-based and evidence-informed policy: he primarily uses the first term, although at times the latter. While this seems unusual insofar as evidence-informed policy is often put forward as a more pragmatic and politically tuned-in alternative, I think this is also a smart move on his part. It works to avoid definitional debates and micro-arguments around the degree of political influence before something is described as ‘based in’ or ‘informed by’ evidence, but it also allows him to locate his concerns as part of a bigger picture that applies across cases.

Part II contains an interesting exploration of the role and functioning of various forms of bias that impact not only on what and how evidence is used by policy makers, but also on the evidence that is funded and generated by researchers in the first place. Parkhurst discusses overt biases in pursuit of political interests as well as the subtle politics of cognitive biases. Perhaps most useful is a distinction that Parkhurst makes between ‘technical biases’ in the use of evidence and ‘issue biases’ in how evidence is deployed to inform political debates. In relation to technical biases, Parkhurst serves readers well by including both the invalid use of individual methods (i.e. poor scientific practice) and the failure to appropriately include relevant information from multiple sources. In relation to issue biases, he thoughtfully highlights questions around political legitimacy and representative politics in relation to how evidence is deployed to shift debates towards or away from issues/ areas of inquiry, often in a non-transparent way.

On an initial reading, this discussion about biases in Part II is interesting but not obviously essential in contributing to Parkhurst’s argument. On a closer reading, however, it becomes clear that the inherent risk of biases—and hence the need to develop a systematic response to mitigate these dynamics—provides the rationale for why, in Part III, Parkhurst advocates for a governance approach to improving how evidence is used in policy making.

Part III is the most intriguing part of this book. Although it feels somewhat abstract and leaves the reader in want of more concrete examples, I suspect this is an artefact of his principles-based approach to institutional change—one that favours ‘guided evolutionary incrementalism’ rather than programmatic planning. The strength of the concluding chapters is how they challenge policy makers and evaluators to think critically and in new ways about how institutions, within and outside of governments, operate systemically to shape the evidence that is gathered and in turn how it is used in decision making.

Parkhurst starts by looking for principles that constitute good evidence beyond the well-worn technical hierarchies, and constructs a framework of appropriateness through which policy-relevant evidence might be considered. In doing so, Parkhurst defines ‘appropriate evidence’ as that which speaks to multiple concerns at stake in a policy decision, is constructed in ways that are most useful to achieve policy goals, and is applicable in the local policy context. In turn, ‘good evidence for policy’ comes to be defined as evidence that meets the aforementioned appropriateness criteria, as well as high-quality research standards.

Before presenting conclusions around the good governance principles for the use of evidence, Parkhurst next turns to a discussion on the factors necessary to ensure the democratic legitimacy of the ‘evidence advisory system’. While a vital discussion to be had, I’m unsure whether Parkhurst succeeds in making a new contribution to the long-standing debates and voluminous literature on the topic of whether (or to what extent) we can accept irrationality as the cost of democracy. With this issue parked, the book concludes by outlining key principles for the good governance of evidence.

Taking a broad understanding of good governance as the ‘art’ of systems and processes through which collective decisions are made, he argues that governance needs to be thought of in terms of both the processes and outcomes that are relevant to the use of evidence within a policy process.

To do this, eight good governance principles for using evidence in policy making are elaborated:

  1. appropriateness (the relevance of evidence to decisions or alignment to context)
  2. quality (the methods through which evidence was established)
  3. rigour (the comprehensiveness and synthesis of information)
  4. stewardship (the formal and public mandate of advisory systems)
  5. representation (that policy decisions about evidence lie with democratic representatives)
  6. transparency (that it is clear how decisions are made based on what evidence)
  7. deliberation (that there is public engagement around contested values that may affect what and how evidence is used)
  8. contestability (that the evidence used and decisions made are open to critique, peer review and scrutiny).

These principles appear sensible and provide a comprehensive response to the issues/ concerns raised throughout the book. Yet I can’t help but wonder whether Parkhurst has gone far enough and fully achieved a description of principles in a governance framework around the use of evidence: to my mind, the principles seem closer to what might underpin a quality assurance checklist. Parkhurst makes a brief reference to the use of governance in the corporate sphere but does not pursue this enquiry—arguably, this might be where some of the key lessons and fresh insights lie. This governance perspective would throw into focus a slightly different set of issues, such as the delegation of decision making (e.g. around the evidence that is gathered and how it is used); the strategies for managing risks (e.g. around the lack or inappropriate use of evidence); the duties owed to those with a ‘stake’ in the evidence (e.g. to the public or potential program beneficiaries, that decisions are made in their interests); and expectations around transparency and reporting (e.g. ‘rules’ safeguarding quality information and communication flows in deliberative processes).

Overall, the strength of Parkhurst’s argument is the explicit engagement with the political questions in determining what constitutes the better use of evidence. Doing this allows him to recognise that improvement requires building institutional arrangements (however incrementally this may be) that can address forms of bias while incorporating democratic representation. Readers may still want to know more about the ‘how’, and in the Australian and especially Canadian evaluation contexts one can’t help but be reminded of growing calls for an Evaluator-General to oversee the better use of evaluation evidence. These live debates highlight the importance of Parkhurst’s book in contributing to new ways of thinking about the twin problems of evaluation utilisation and the use of reason in democratic politics.

Accessible online: http://eprints.lse.ac.uk/68604/1/Parkhurst_The%20Politics%20of%20Evidence.pdf  


Growing specialist disability accommodation


March 2018

By Consultant, Ruby Leahy Gatfield

Ever considered establishing or investing in specialist disability accommodation, but not known where to begin? In 2017, ARTD worked with Frankston Peninsula Carers (FPC) to develop a series of resources offering practical advice on how community members can form an association and partner with housing and support providers and investors and funders, including local government, to grow accommodation options. We also showcased some of FPC's success stories to demonstrate what can be achieved!

The National Disability Insurance Agency (NDIA) recognises that some people with disability who have very high support needs or significant functional impairments will require specialist housing solutions. However, the current supply of specialist disability accommodation (SDA) is substantially below what is required to meet demand. Moreover, as parents caring for adult children with disability are ageing and reaching a point where they can no longer support their children at home, the need for specialist housing options is increasing. To address this, the NDIA’s SDA Payments aim to stimulate investment and increase the innovative supply of housing, including housing models that foster inclusion and have improved design and technology to support people with disability to live independently.

However, these Payments are not intended to meet the housing needs of all people with disability and growth in the broader market of accessible and affordable housing options is critical. But how do interested families and communities even begin growing housing options in their local community? What are the key steps to consider? How has this been done successfully in the past? Where can they turn for investment?

FPC is one inspiring example of a community-based organisation established to address the need for SDA in Victoria’s Mornington Peninsula. The organisation is run solely by volunteers, some of whom are carers of adults with disability. Since 2007, FPC has successfully sourced over $8 million in donations and government investment to increase housing options for people with disability in their local region.

Having received funding through the NDIS Sector Development Fund in 2017, FPC engaged ARTD Consultants to develop a series of resources to support families to establish housing projects and attract the interest of potential funders and developers. After consulting FPC’s Committee and key partners we developed:

  • a checklist for forming an organisation to grow SDA
  • a checklist for creating an SDA project from start to finish that answers frequently asked questions (such as where to find land, how to approach developers and investors, key design features and policies to be aware of, and how to run the property)
  • a template for a one-page project summary to present to stakeholders in early discussions
  • a template structure for a project proposal
  • a template and tips for developing a Memorandum of Understanding
  • a description of each of FPC’s housing projects to show interested community members what they can achieve when working together.

The resource package has been distributed to local members of parliament, Mayors, Ministers, housing providers, donors and families of people with disability and received positive feedback as a clear and useful tool. For more details and to download the resource package, visit the FPC website.

You can also support the work of FPC by donating to their current McCrae housing project – a well-located property designed to accommodate six people with varying disabilities. FPC has raised $35,000 so far and is seeking to crowdfund $90,000 more to complete the property. You can read more and donate or share their crowdfunding efforts here: https://donate.grassrootz.com/fpc/help-build-mccrae-house.


Why you shouldn’t hate a role play: stretching your interview skills


March 2018

By Partner, Jade Maloney and Senior Consultant, Kerry Hart

Ever had an interviewee give monosyllabic answers or talk non-stop on an entirely different topic? What about someone who becomes overwhelmed by the conversation and perhaps even cries?

Over years of interviewing, we’ve encountered a range of challenging situations like these. Because people are people – with different values, beliefs and experiences, in different contexts – there are unfortunately not many hard and fast rules about how to respond (besides things like being authentic, respectful, and non-judgemental). This is what makes interviewing and focus group facilitation so challenging. But it’s also what makes it engaging, exciting and energising.

On April 22, the participants in our interviewing skills workshop for the Australasian Evaluation Society (AES) took turns role-playing challenging interview situations – testing out strategies to get an interview back on track and ensure interviewees feel comfortable and safe to share their views.

A reluctant participant (e.g. consistently gives one-word answers or shrugs their shoulders):

  • Get comfortable with silence. If you wait long enough (but not too long), the interviewee may step in to fill the void. It may also be that the person needed time to collect their thoughts before responding more fully.
  • Give them the reins. The person may feel your questions are not getting to what matters for them. Ask them what they’d like to tell you about.
  • Lighten the mood, be humorous, if the context is appropriate. This might break the tension.
  • Leave your specific questions aside; discuss a related topic. If the person is feeling uncomfortable with the interview situation, this can provide some time out to build trust.
  • Call the situation for what it is. Note that the person seems uncomfortable being there or that something seems to be bothering them. Give them an opening to share the thinking behind their behaviour. It may be important to understanding how the program or policy you’re reviewing is working.
  • It’s also important to recognise that sometimes, no strategies will work, and the person has a choice not to talk.
  • Be particularly careful when deciding to call on people who haven’t contributed in a focus group. There may be underlying group dynamics that you’re not aware of that are making this person feel uncomfortable sharing. Sometimes it can be better to catch the person on their own at the end of the group to see if they had anything to add.

A tangential talker (e.g. needs to tell you about their key concerns before they can get into the interview questions; starts telling you about their entire career history when you ask them about their current role; or talks about all of their friendships when you asked them if they enjoyed a particular social activity):

  • If a person needs to get something off their chest at the start of an interview, be respectful and listen. Sometimes taking an extra 10 minutes for this can mean the rest of the interview just flows.
  • Generally, don’t cut someone off in the middle of a sentence. But sometimes you may need to, particularly in a focus group where one person is dominating and others are feeling uncomfortable.
  • Recognise that sometimes a person on a tangent is actually trying to tell you something. After a while you may find them looping back to the topic at hand.
  • Tell them that you want to come back to something they said earlier (that was on topic), which was really interesting for the evaluation. This can be a good way to gently steer things back on track.
  • Tell them that you are conscious of their time and other commitments, but you want to make sure you capture their views on the key questions, so you would like to focus on these to ensure you have their full input for the evaluation.

A person who becomes emotional or distressed:

  • If you think an interview context could potentially raise an emotional response, be prepared and prepare your interviewee. Let them know that some questions may be confronting, that they can choose not to answer and can take things at their own pace. Have contact details in place for support services you can refer people to, if needed.
  • Give the person space. Ask if they’d like to take a breather or a longer break, come back to the interview at another time or end it there. Give them the choice. Don’t decide for them.
  • Lower your voice and slow things down.
  • Have boundaries, but remember you’re human. An emotional response can sometimes be appropriate when interviewing people experiencing life challenges.

Of course, there’s a need to understand the individual and the context in applying any strategies. The value of role playing isn’t that you’ll have perfected your exact response to any given situation. What it can do is help you to develop the agility to respond authentically and appropriately to the individuals and situations you encounter.

But, as one of our participants pointed out, the other value that stepping into an interviewee’s shoes provides is the ability to see things from their perspective. Having had this experience can make you pause next time you encounter a challenging situation and think what might be going on for the person. The person is probably trying to tell you something with their behaviour and non-verbal cues. Be attuned to this and open to their perspective.

Who said they hate role plays?

We really enjoyed our day running workshops on interview skills and questionnaire design for the AES. We also run tailored workshops for organisations. You can find out more by calling 02 9373 9900.


Taking a trauma-informed approach to research and evaluation


March 2018

By Maria Koleth

Last week, all of ARTD’s staff attended a challenging and informative training day on Trauma-informed Care and Practice with the Blue Knot Foundation. As a company that takes a strengths-based approach to research and evaluation with vulnerable populations, ensuring that our methods and instruments are trauma-informed is a key part of our process. The Blue Knot training renewed and updated our understanding of the types of trauma, their long-term embodied effects, and the five principles of trauma-informed practice. Some key ways we are putting these principles into practice in our research include:

  • Prioritising safety: With a clear understanding that many people throughout the community are affected by trauma, prioritising safety is an important part of our standard practice. Our interviewers and focus group facilitators establish clear boundaries around their role and the focus of discussions. We also establish clear response protocols in case a participant becomes distressed, including referrals to other services if they need more support. The training also reminded us to continue to support our own staff who are vicariously exposed to stories of trauma and to ensure regular opportunities for them to debrief.
  • Being trustworthy: We set clear expectations when getting consent from participants and then we ensure that we follow through on what we said we would do. Embodying trustworthiness also involves getting to appointments on time and staying within the boundaries of the research. This is important to establishing trust and a space in which people feel comfortable to share their story.
  • Offering choice: Maximising the control that evaluation and research participants have and being flexible in accommodating their needs is important for trauma-informed practice. We work to offer participants as many choices as possible, from where interviews take place to whether they would like a support person to attend with them to which questions they choose to answer.
  • Taking a collaborative approach: Increasingly, we have been using collaborative approaches. These help to address the unequal relationships between researchers and participants by making research projects a collaborative space.
  • Empowering participants: The theme of empowerment runs through many of our projects and the projects we evaluate. We recognise that people who have experienced trauma have often had their experiences minimised or invalidated in the past, so it’s important that we express trust in their responses in interviews and focus groups, recognise their resilience, and emphasise the importance of their contributions to projects.

We would also like to thank 107 Projects for hosting us –  their wonderful garden and sense of hospitality provided the most recuperative setting for our training day (see 107.org.au). 


Outcome Mapping: unpacking the black box between outputs and impacts


March 2018

By Ruby Leahy Gatfield

My recent internship at the International Institute for Democracy and Electoral Assistance (International IDEA) in Stockholm raised a number of important questions for me about how to monitor and evaluate international development programs. Trying to demonstrate a program’s ‘impact’ can often feel overwhelming, given that development goals are long-term and complex processes dependent on a myriad of political, economic, cultural and other factors. And while the methods for measuring, monitoring and evaluating impact remain hotly contested in the development community, two key approaches stood out in my time at International IDEA.

The first is the logical framework approach. Anyone involved in planning, monitoring or evaluating international development programs will be familiar with ‘logframes’. Pioneered in the 1970s by USAID to demonstrate what donor money was achieving, logframes have proved a very useful tool for mapping out and thinking critically about how a program leads to results.

Logframes provide a line of sight on the causal links between a program’s activities, outputs, outcomes and ultimate impacts. They offer a sense of simplicity and structure in an otherwise complex environment, can be used to communicate intentions to stakeholders, enable standardised reporting on indicators, and allow independent monitoring and evaluation of results (among other benefits).[1]
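
For readers new to logframes, the structure is essentially a results chain with indicators at each level. The sketch below is a minimal, hypothetical illustration in Python; the program, statements and indicators are invented for the example and are not drawn from any actual International IDEA logframe.

    # A minimal, hypothetical logframe: each level has a statement and the
    # indicators that would be used to report against it.
    logframe = [
        {"level": "Activities", "statement": "Run electoral management training workshops",
         "indicators": ["number of workshops delivered"]},
        {"level": "Outputs", "statement": "Electoral officials trained in new procedures",
         "indicators": ["number of officials completing training"]},
        {"level": "Outcomes", "statement": "Electoral commission applies improved procedures",
         "indicators": ["proportion of polling stations following the procedures"]},
        {"level": "Impact", "statement": "More credible and inclusive elections",
         "indicators": ["observer ratings of electoral integrity"]},
    ]

    # The causal logic reads from activities up to impact.
    for row in logframe:
        print(f"{row['level']}: {row['statement']} ({'; '.join(row['indicators'])})")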

But what do outcomes really look like? How do the activities and outputs of a program lead to development impacts? To unpack this ‘black box’ in the logframe, outcome mapping (OM) has emerged as an increasingly popular methodology.

What is outcome mapping?

OM recognises that development programs are all about social change, and that social change is complex, unstable, non-linear, two-way, incremental, cumulative and often beyond our control. Conducting evaluations in these open and changing environments poses a myriad of challenges, from defining success and deciding when to evaluate, to capturing emerging objectives and establishing cause and effect.[2]

To tackle these challenges, OM provides a framework for unpacking a program’s theory of change and collecting data on outcomes as they unfold. Importantly, it redefines ‘outcomes’ as changes in behaviour—the actions, activities and relationships—of the stakeholders directly in contact with the program (known as ‘boundary partners’).[3] This concept of boundary partners is fundamental to OM but not always present in logframes and, as a result, the two approaches often produce very different outcome statements. According to OM, behavioural change of boundary partners is critical to moving up the results chain.

OM also recognises that, in reality, programs have limited control over whether their ultimate goal is achieved, given the range of social, political, environmental, economic and other factors that support or hinder intended outcomes. Rather than claiming attribution of a development impact, OM claims contribution to it.[4] It teaches that programs have control over their inputs, activities and outputs; influence over their outcomes; and simply an interest in the ultimate impact. In short, OM focuses on a program’s sphere of influence.[5]

In practice, OM offers 12 tools for planning, monitoring and evaluating outcomes, which can be adapted to suit individual contexts. These tools are intended to help stakeholders identify and think critically about:

  • why the program has ultimately been established
  • who the program has direct influence over (who the boundary partners are)
  • what changes in behaviour (outcomes) the program would ‘expect’, ‘like’ and ‘love’ to see from its boundary partners
  • what the qualitative and quantitative indicators (‘progress markers’) are for these outcomes
  • what strategies are in place to influence each boundary partner
  • which monitoring tools the program should use
  • what the evaluation priorities are for an evaluation plan.

It helps to build a credible picture of how a program contributes to results, putting people at the centre of development and recognising the complex and non-linear nature of social change.
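
To make boundary partners and graduated progress markers a little more concrete, here is a minimal sketch in Python. The partner, outcome challenge and markers are invented for illustration; in practice the OM tools are used in facilitated planning and monitoring sessions, not software.

    # One hypothetical boundary partner with graduated progress markers,
    # from 'expect to see' through 'like to see' to 'love to see'.
    boundary_partner = {
        "name": "Local electoral commission",
        "outcome_challenge": "Routinely consults civil society when revising procedures",
        "progress_markers": {
            "expect to see": ["attends program briefings", "nominates a liaison officer"],
            "like to see": ["invites civil society groups to comment on draft procedures"],
            "love to see": ["builds public consultation into its own planning cycle"],
        },
    }

    # Monitoring records which markers have been observed, building a picture of
    # contribution (not attribution) to behaviour change over time.
    observed = {"attends program briefings", "nominates a liaison officer"}
    for level, markers in boundary_partner["progress_markers"].items():
        achieved = [m for m in markers if m in observed]
        print(f"{level}: {len(achieved)}/{len(markers)} markers observed")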

So while the logframe approach remains ingrained in most development agencies, practitioners should consider the value of an OM approach. As Michael Quinn Patton puts it, OM affirms that ‘being attentive along the journey is as important as, and critical to, arriving at the destination’.[6]

To learn more about OM, visit the Outcome Mapping Learning Community, a one-stop shop for all things OM.


[1] http://www.focusintl.com/RBM083-2_Logical_Framework_Approach_and_Outcome_Mapping.pdf

[2] https://www.outcomemapping.ca/resource/webinar-introduction-to-outcome-mapping

[3] https://www.outcomemapping.ca/resource/outcome-mapping-a-method-for-tracking-behavioural-changes-in-development-programs

[4] http://www.focusintl.com/RBM083-2_Logical_Framework_Approach_and_Outcome_Mapping.pdf

[5] https://www.outcomemapping.ca/resource/webinar-introduction-to-outcome-mapping

[6] https://www.outcomemapping.ca/download/OM_English_final.pdf - Page 1


Evidence and persuasion


March 2018

By Manager, Katherine Rich

Why do economic arguments hold sway in public debate? I recently attended the Melbourne School of Government’s thought-provoking conference, A Crisis of Expertise? Legitimacy and the Challenge of Policymaking. In a panel discussion, economist Richard Denniss spoke about the disproportionate weighting given to economic evidence and its persuasive power in public debate. It got me thinking about why this is so.

In their simplest form, economic arguments appear easy to understand and are compelling. To illustrate this, Denniss related a story of his son asking him if he would take him to Disneyland. When Dad said ‘no’ and used an economic argument – ‘it’s too expensive, we can’t afford it’ – his son innately understood and, for the most part, accepted the decision. However, as Denniss pointed out, the concept of Disneyland not being affordable is really a value judgment rather than an objective fact. The reason Denniss’ family didn’t go to Disneyland was because they had other priorities to spend their money on.

Economic arguments, such as those made through the use of cost-benefit analyses (CBAs), can seem objective and easy to understand even though they are not – with values concealed behind a veneer of expertise and a language that not everyone understands. I agree with Denniss’ suggestion that rather than pretending a cost-benefit analysis is value neutral, advocates of particular causes should start from their value position and then make an economic case for their argument.

So what does all of this mean for evaluators when evaluative arguments are complex and can be difficult for non-evaluators to follow? We could leverage some of the same power of economic argument – make our evaluative judgements appear value neutral. However, in trying to make a holistic judgement about the merit and worth of a program, it would be problematic to use only one quantifiable metric like a cost-benefit ratio.

What we can do is bring together diverse stakeholders to first understand their perspectives and then develop a comprehensive set of criteria to assess value (see my recent post on this – a balanced approach to valuing in evaluation) and develop a logic model to clearly communicate what success will look like and how it will be measured. We can strengthen the persuasive power of these models by drawing on social science research to develop and refine them.

When it comes to making economic arguments in evaluation, we can also look more broadly than cost-benefit analysis. Julian King’s recent publication, OPM’s approach to assessing Value for Money, sets out an approach to measuring value for money (VfM) that goes beyond blunt, readily quantifiable measures like CBA and acknowledges that some of the most valuable outcomes can be the hardest to quantify. It argues that good VfM assessments are clear about the value judgements being made. The approach uses an equity lens to capture not only the economy, effectiveness, cost-effectiveness and efficiency of an intervention, but also its reach to those who are most disadvantaged, acknowledging that this may be costlier than reaching moderately disadvantaged people but can have greater impact.
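
As a rough sketch of the difference, compare a single benefit-cost ratio with a rubric that makes the criteria and judgements explicit. The figures, criteria labels and rating scale below are invented for illustration and are not taken from Julian King’s publication.

    # Invented figures: a benefit-cost ratio reduces value to one number,
    # with the underlying value judgements hidden.
    benefits, costs = 1_800_000, 1_200_000
    print(f"Benefit-cost ratio: {benefits / costs:.2f}")

    # A rubric-style alternative: rate each criterion on a scale agreed with
    # stakeholders (here, 1 = poor to 4 = excellent), so the judgements are visible.
    vfm_ratings = {
        "economy": 3,
        "efficiency": 2,
        "effectiveness": 3,
        "equity (reach to those most disadvantaged)": 4,
    }
    for criterion, rating in vfm_ratings.items():
        print(f"{criterion}: {rating}/4")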

Economic arguments have power not because they are objective, but because they appear value neutral. As evaluators we can advocate for greater transparency of economic metrics and more nuanced approaches to VfM, and we can be explicit about how stakeholder values influence criteria and, thus, evaluative judgements.


Co-design as the reimagining, repositioning and redistribution of expertise


February 2018

By Jade Maloney

The idea that there might be a crisis of expertise in policymaking – a questioning of the role and legitimacy of expertise – is challenging for a public policy consultant. But, for an evaluator, it’s a given that evaluative evidence is only one piece in the policymaking puzzle. We might want it to have more weight, but we know that it must work in the context of politics and the democratic process.

So it was interesting to hear the various takes on the theme at Melbourne School of Government’s recent conference:  A Crisis of Expertise? Legitimacy and the Challenge of Policymaking.

Keynote Professor Sheila Jasanoff kicked off day one by calling into question the ‘deficit model of the public’ in the context of the rise of alternative facts. Lay people can evaluate complex information and have their own knowledge that should be valued; we need to find ways to engage them in the democratic and policymaking process. To get to the point where we can imagine alternatives, we also need to acknowledge power structures, bridge traditional binaries and speak across disciplines.

Several speakers at the conference recognised co-design as one of a suite of tools to engage citizens in policymaking processes. I presented on our growing use of this approach to help ensure policies and programs better address core problems by engaging end users in deep consideration of the problem and an iterative process of prototyping, testing and refining solutions.

At this point you may be asking where the ‘traditional’ experts are in this process. We’d say co-design does not represent the rejection of expertise but the reimagining, repositioning and redistribution of expertise. If done well, it can help to address the problems Darrin Durant raised: dismissing one type of expertise as bad, and closing off participation by technical fiat.

In a co-design process, end users are recognised as having their own expertise to bring to the table. Experts, in the traditional sense of the term, can be involved in the design process and help to refine the model based on evidence. Practitioners – whose own knowledge has sometimes been negated in the academic literature, as Brian Head pointed out – can also contribute their practical knowledge of what’s needed, what works and what doesn’t.

This may be best illustrated by a case example. In a recent project with Amaze, the autism peak body in Victoria, we used a modified co-design approach to bring key stakeholders together to iteratively develop a strategy to improve educational and social outcomes for students with autism in the school system. This is certainly one of the complex issues about which, as Col Wight noted, there are always multiple perspectives. A co-design approach enabled us to recognise that and start to build a shared understanding among a group that included people with autism who provide peer support in schools, teacher and principal representatives, support staff and peak organisations.

We began by developing a root cause analysis. This is an analytical tool for identifying the causal pathways that lead to a specific problem. The aim is to work back along each causal pathway toward the ‘root causes’ of the problem, so that these can be addressed. It sounds like a technical process, as one of my audience members pointed out, but it actually begins with a whiteboard, a marker and a conversation – asking individuals what they know about what underlies the issues they see.

To make sure we captured the range of perspectives on the system, we began with individual interviews with each stakeholder to draw their own map. We supplemented this with a scan of the literature and a review of the student experiences in the school system – identified in the Victorian Parliamentary Inquiry into services for people with Autism Spectrum Disorder. We then combined individual maps into a comprehensive map of the causal pathways to the problem and refined this iteratively with stakeholders through two workshops. Through discussion in a small group, stakeholders were able to understand each other’s legitimate perspectives.

Once we reached this shared understanding, stakeholders used the map to identify key points at which to intervene and the priority elements of a strategy to holistically address the problem.
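
For the technically minded, a causal map of this kind is simply a directed graph from causes to effects. The short Python sketch below, with invented example causes, shows the basic logic of merging individual maps and surfacing candidate ‘root causes’ as the factors that appear only as causes, never as effects; in the project itself this was done on whiteboards and in workshops, not in code.

    # Two hypothetical individual maps, each a set of (cause, effect) edges.
    map_a = {
        ("limited teacher training in autism", "inconsistent classroom adjustments"),
        ("inconsistent classroom adjustments", "poorer student outcomes"),
    }
    map_b = {
        ("stigma among peers", "social exclusion"),
        ("social exclusion", "poorer student outcomes"),
        ("limited teacher training in autism", "inconsistent classroom adjustments"),
    }

    # Merge the individual maps into one combined causal map.
    combined = map_a | map_b
    causes = {cause for cause, _ in combined}
    effects = {effect for _, effect in combined}

    # Candidate root causes appear only as causes, never as effects.
    root_causes = causes - effects
    print("Candidate root causes to explore with stakeholders:", sorted(root_causes))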

From there, we worked together to develop a logic model and evaluation framework for the strategy. Again, these are technical concepts, but they can be cracked open through capacity building workshops, and doing so can build a shared understanding of what is being done and why.

In other projects, we and our clients are using co-design with people with dementia and with people who have a personal experience of anxiety, depression or suicide, or support someone who does.

Co-design might not suit every situation – and certainly not ones in which there is a predefined model – but we believe it has a lot of potential to enable participants to understand each other’s truths, break down binary thinking and collaboratively build solutions.

Thanks to the Melbourne School of Government for a thought-provoking few days.


Marrying evaluation and design for use


February 2018

By Melanie Darvodelsky

We love partnering with people who share our passion for supporting positive change. So we’re excited to be partnering with Jax Wechsler from Sticky Design Studios and Amelia Loye from engage2 in our evaluation of beyondblue’s blueVoices program, which brings together people who have a personal experience of anxiety, depression or suicide, or support someone who does, to inform policies and practice.

Marrying design, engagement and evaluation expertise will enable us to provide not only evaluation findings, but a clear direction for the future, which is backed by both the organisation and blueVoices members and supports our commitment to utilisation-focused evaluation.

As Jax explained at a workshop with our Sydney staff, co-design is not just running a stakeholder workshop. Design is iterative. It involves prototyping, testing and refining. Co-design is an approach to design that actively identifies and addresses the needs of all stakeholders in the process to help support an end product that is useful across the board.

When designing services, if you skip the vital step of conducting research to understand the world from the end-user’s perspective, what you come up with may be inappropriate and may not deliver the value it could.

Additionally, service design does not stop in the way that product design does. Implementation is ongoing and involves many people working together over time. An idea for a tool that meets staff needs at the beginning of a project may no longer be useful even by the time the tool is fully developed, as both the project and staff involved may have moved on. So designers need to think about how their work can support an ongoing change process if they want to make a sustainable impact.

Through her research and project experiences, Jax has found that designers can support lasting change in contexts of innovation through ‘artefacts’ – visual representations and models. These include personas, journey maps, infographics, flow charts and videos. Artefacts act in a ‘scaffolding’ role for a program or organisation, for example, by persuading staff about why a change is needed, facilitating empathy between stakeholder groups, and providing a tool for sense-making. Artefacts – as ‘boundary objects’ – can also support staff from different disciplines to bridge the different languages they speak and collaborate, empowering them to co-deliver change.

You can read and watch more about Jax on her website or come to Social Design Sydney on Monday, 5 March 2018 from 6:00 pm to 8:30 pm in Ultimo to discuss whether co-design is the silver bullet we hope for. Register here.


Stretching your interview skills


February 2018

By Partner, Jade Maloney, and Consultant, Maria Koleth

Interviews and focus groups allow you to gather in-depth data on people’s experiences and understand what underlies the patterns in quantitative data. However, handling dominant voices and opening up space for divergent views and quiet types in focus groups can pose challenges for even experienced researchers. Recently, Partner, Jade Maloney, facilitated a workshop with researchers from the Australian Human Rights Commission to reflect on their practice and stretch their skills through scenario-based activities.

Here are our top five tips for successful interviews and focus groups:

  • Choose the right method for the information you need: While individual interviews are generally best when the subject matter is sensitive or you are interested in individual experiences, focus groups are great for capturing group dynamics and experiences. However, there’s also a need for pragmatism. If resourcing and time constraints prevent you from undertaking individual interviews, you can make focus groups work by specifically targeting your questions.
  • Start out well: How you start can make all the difference to how well an interview or focus group goes. Explain who you are and what your research is about. Let them ask you questions; you’re about to ask them a lot! In a group, establishing rules can set the foundation for positive interaction and provide you a reference point to return to if issues arise. Some key rules are making clear that there are no right or wrong answers, that we want to hear from everyone, that we should refrain from judging others’ points of view, and that we need to respect the confidence of the group.
  • Use a competency framework: Facilitators can use a competency framework to prepare for, rate and reflect on their skills and experience in focus groups and interviews. The ARTD competency framework, built over years of practice, specifies general competencies (e.g. being respectful and non-judgemental), competencies displayed during the interview (giving space and focusing), and higher-order skills (group management and opening up alternatives).
  • Play out scenarios: Despite the cliché that ‘nobody likes roleplays’, playing out challenging interview and focus group situations can be a great way to try out different responses to tough situations you have come up against, so you can approach them differently next time, or to prepare for potentially challenging focus groups. It can also be fun! Thanks to Viv McWaters and Johnnie Moore from Creative Facilitation, we’ve learned that it helps to whittle a scenario down to a line and use a rapid-fire approach to test responses, and then to reflect on the experience. Scenario testing can help interviewers get into the head of their interviewees. It’s always important to remember that there’s no right or wrong when it comes to testing scenarios and that something that works in one research situation might not work again.
  • Find time to reflect: With the quick turnaround times and demanding reporting requirements of applied research environments, it can be difficult to take the time to systematically reflect as a team. Setting up both informal and formal opportunities for reflection on qualitative practice can help team members learn from each other’s wealth of experience.

Want to learn more? Speak to us about our interviewing skills workshops on 02 9373 9900.


Beyond programs? Is principles-focused evaluation what you’re looking for?


January 2018

By Jade Maloney, Partner

For several years now, I’ve been getting more and more involved in service design, review and reconceptualisation to respond to evolutions in the evidence base and the systems within which services operate. And, when I am designing an evaluation framework and strategy or conducting an evaluation, I tend not to be looking at programs, but at services that are operating within larger ecosystems, aiming to complement and to change other aspects of these systems in order to better support individuals and communities.

This isn’t surprising given that I am working in the Australian disability sector, which is currently undergoing significant transformation in the transition to the National Disability Insurance Scheme (NDIS). Programs are giving way to individualised funding plans that provide people with reasonable and necessary supports to achieve their goals. The future is person- rather than program-centred.

When designing and reconceptualising services in this context, it has been more feasible and appropriate to identify guiding principles, grounded in evidence, rather than prescriptive service models or 'best practice'.

But what happens when evaluating in this context, given that evaluation has traditionally been based around programs?

Fortunately, well-known evaluation theorist Michael Quinn Patton has been thinking this through. Evaluators, he has realised, are now often confronted with interventions into complex adaptive systems and principle-driven approaches, rather than programs with clear and measurable goals. In this context, a principles-focused evaluation approach may be appropriate.

As Patton explained in a recent webinar for the Tamarack Institute, principles-focused evaluation is an outgrowth of developmental evaluation, which he conceived as an approach to evaluating social interventions in complex and adaptive systems.

In a principles-focused evaluation, principles become the evaluand. Evaluators consider whether the identified principle/s are meaningful to the people they are supposed to guide, adhered to in practice, and support desired results.

These are important questions because the way some principles are constructed means they fail to provide clear guidance for behaviour, and because there can be a gap between rhetoric and reality. Patton has established the GUIDE framework so evaluators can determine whether identified principles provide meaningful guidance (G) and are useful (U), inspiring (I), developmentally adaptable (D), and evaluable (E).
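
As a loose illustration only, the GUIDE criteria could be applied as a simple checklist; the example principle and the yes/no assessments below are invented, and the one-line glosses are my own shorthand rather than Patton’s definitions.

    # A hypothetical check of one principle against the GUIDE criteria.
    principle = "Supports are tailored to each person's goals and preferences"
    guide_check = {
        "Guidance (gives meaningful direction for behaviour)": True,
        "Useful (practitioners can act on it)": True,
        "Inspiring (reflects values people care about)": True,
        "Developmentally adaptable (can flex with context over time)": True,
        "Evaluable (evidence could show whether it is followed and works)": False,
    }
    gaps = [criterion for criterion, met in guide_check.items() if not met]
    print(f"Principle: {principle}")
    print("Criteria needing work:", gaps if gaps else "none")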

I’m now looking forward to reading the books, so I can start using this approach more explicitly in my practice.


Building capacity for evaluative thinking


January 2018

By Jade Maloney, Partner

I reckon the right time to make resolutions isn't amidst the buzz of New Year's Eve, but when the fireworks are a dim echo.

So here goes. This year, I'm committing to championing and building capacity for evaluative thinking.

If we're to believe the hype that we're living in a post truth world, this may seem like a lost cause. But while many people source their information through the echo chambers of social media, we can take comfort that the Orwellian concept of alternative facts hasn't caught on.

Also, in our work in evaluation, we come across plenty of organisations and stakeholders with a commitment to collecting, reviewing and making decisions based on evidence. While there is often a gap between rhetoric and practice, evidence-based (or at least evidence-informed) policy is ingrained in the lexicon of Western democracies.

The trouble is that evidence-informed decision making can seem out of reach if evaluation is presented, in difficult-to-decipher jargon, as the remit of independent experts. (Of course, this is not the only trouble. In some cases, the commitment to evidence and evaluation is symbolic—to give an impression of legitimacy—but that’s not the situation I’m thinking of here, or one that I come across very often.)

This is not to say that there is not real expertise involved in evaluation. But if we can't translate this into language and ways of working that all stakeholders can understand, and then bring them along on the journey, evaluators will be speaking into their own echo chamber.

And—as is clear from the literature on evaluation use (including my own study with Australasian Evaluation Society members)—if we don't involve stakeholders throughout an evaluation, then it's unlikely to be used either instrumentally or conceptually.

Focusing on building capacity to think evaluatively (rather than just capacity for evaluation) can help put informed decision making within reach.

This focus fits with the concept of process use (see Schwandt, 2015), which evidence shows can be linked to direct instrumental use of evaluation. It also supports sustainable outcomes from interactions between evaluators and stakeholders.

But what does building capacity for evaluative thinking mean in practice? For me, it means not only focusing on the task of the evaluation at hand or building capacity for evaluation activities, such as developing program logics and outcomes frameworks, but on engaging stakeholders in the practice of critical thinking that underlies evaluation.

As Schwandt (2015) describes it, critical thinking is a cognitive process as well as a set of dispositions, including being 'inquisitive, self-informed, trustful of reason, open- and fair-minded, flexible, honest in facing personal biases, willing to reconsider, diligent in pursuing relevant information, and persistent in seeking results that are as precise as the subject and circumstances of inquiry permit.' And its key application in evaluative practice is in weighing up the evidence and making value judgements.

We can crack open this process by engaging stakeholders in it. We can also translate the process into an equivalent in everyday life (for example, using value criteria, such as price, convenience, quality and ambience, to make a reasoned choice between different restaurants). This might even help people to understand how others come to different conclusions based on different value criteria.
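
To make the everyday analogy concrete, here is a tiny sketch of the same reasoning with invented criteria weights and scores: agree the criteria, weight them, score the options, and the judgement becomes transparent, including why someone who weights the criteria differently would reasonably choose differently.

    # Invented weights and scores for the restaurant example (scores out of 5).
    weights = {"price": 0.3, "convenience": 0.2, "quality": 0.4, "ambience": 0.1}
    scores = {
        "Cafe A": {"price": 4, "convenience": 5, "quality": 3, "ambience": 3},
        "Bistro B": {"price": 2, "convenience": 3, "quality": 5, "ambience": 4},
    }
    for option, ratings in scores.items():
        weighted = sum(weights[c] * ratings[c] for c in weights)
        print(f"{option}: weighted score {weighted:.1f} out of 5")
    # Weight quality less and price more, and the same evidence can support a
    # different, equally reasoned choice - the value criteria have changed.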

The more often this happens, the less we may need to worry about echo chambers.