News & Blog

Masterminding an evaluation approach

July 2018

By Ken Fullerton

Conducting a program or policy evaluation is not always a straightforward process. But bringing stakeholders together to discuss and reflect on the best approaches to tackling evaluation challenges can really help.

This is what happened on Thursday 28 June, when evaluators, non-evaluators and other industry stakeholders attended the latest AES Learning Lab in Sydney, focused on Real-Life Evaluation Challenges. Placed into small groups, attendees were asked to dive deeper into a selection of real-life evaluation challenges that they, their organisation or others were currently experiencing, using the Mastermind approach.

What’s the Mastermind approach?

The Mastermind approach involves one group member briefly explaining their evaluation challenge in 5 to 10 minutes. This might be as simple as seeking clarity on a small aspect of a larger evaluation or as complex as questioning the overall approach. Other members are then encouraged to probe, ask informative questions and make suggestions based on their own experiences, interests and knowledge. Suggestions may be low or high cost––a recommendation to contact ‘person x’ or ‘organisation y’ might be as beneficial as one to make use of a completely new evaluation approach. 

While there is no obligation to act on anything put forward, the Mastermind approach aims to open people’s eyes to things they hadn’t considered or connect them to new networks and resources.

What were the results?

For one participant, the process reinforced the value and worth of the approaches and methods that they were already using. Learning that other evaluation professionals would go about things in the same or similar way can be reassuring, given the complexities and challenges involved in evaluation.

Another participant was surprised at the variety of responses and suggestions. Some she and her organisation had already considered, while others were completely new, such as exploring the possibility of engaging further with the AES’s network to increase survey and interview response rates.

Others found the Mastermind approach helped them to think ahead about potential challenges and identify some solutions, such as thinking about program design and data collection early on. This led to further group questions about whether any data collected would be relevant for different parts of the program or if different types of data were required.

The flexibility of the approach appealed to many, as it can be used at any stage of an evaluation, across all sectors and by all levels of staff. It could just as easily take place over a chat in the workplace kitchen as in a formal workshop.

What’s the use?

At ARTD, we are big supporters of reflective practice and keen to make use of the Mastermind approach. Sharing your challenges and being open to feedback can lead to innovative, practical or useful suggestions that could improve your work. Understanding why things have not worked as expected can also be more powerful than learning from success (read more on how sharing failures strengthens evaluation).

The next AES Learning Lab session will be on ‘Evaluation in complex settings: taking a systems approach with an eye to the United Nations’ Sustainable Development Goals’ and will take place in Sydney’s CBD on 26 July 2018.


Where to next for behavioural insights?

July 2018

By Jade Maloney

Behavioural insights have risen rapidly on the agendas of governments in Australia and around the world; this year, there are just over 200 nudge units globally.

Now that “nudge” units have been integrated into the machinery of government, behavioural economists are setting their sights on new frontiers. The buzz at Behavioural Exchange 2018, held in Sydney on 25 and 26 June, was all about where behavioural insights was headed next.

Top of the list were tackling more complex social problems, harnessing new technologies and machine learning, increasing interdisciplinary collaboration and scaling.

Can algorithms be accountable?

Artificial intelligence and algorithms have potential to assist governments in addressing complex problems. For example, the UK’s Behavioural Insights Team has used tech-based approaches to analyse social worker case notes and outcomes to understand factors that indicate a need for intervention. Combined with discussions with experienced social workers, this is informing training for new social workers.  

Australia’s own Data61 at CSIRO is exploring the use of integrated datasets, machine learning and the potential for personalisation. Stats NZ, meanwhile, has an Integrated Data Infrastructure that allows data linkage across agencies, which many an Australian researcher would love to access.

If you’re wondering about the ethical dilemmas these new approaches give rise to, Bill Simpson-Young, Director of Engineering and Design at CSIRO, set out five handy principles for accountable algorithms: Responsibility (i.e. a human is responsible), Explainability, Auditability, Accuracy and Fairness.

When an audience member asked about government’s responsibility to share the results of integrated data analysis, such as that undertaken by Stats NZ to identify key factors in childhood that relate to poorer outcomes in adulthood, Michael Sanders of the UK’s Behavioural Insights Team raised the need to ensure that analyses do not reinforce negative expectations and create self-fulfilling prophecies (e.g. by telling people that they fit the criteria for poor life outcomes).

Behavioural insights and design thinking – best friends or barely able to relate?

Not to neglect the other big buzzword in government these days, the conference also explored synergies between co-design and behavioural insights.

Nina Terrey of Thinkplace set out the different foundations of design thinking and behavioural economics:

  • worldviews (social constructionism vs logical positivism)
  • approaches to problem solving (abductive vs inductive and deductive)
  • processes (participatory and dialectic vs expert collaboration)
  • approach to systems (system disruption and collaborative generation of solutions vs identifying ways to make existing systems work more efficiently and effectively).

Sometimes these differences can give rise to tensions. But the disciplines have been able to forge partnerships because both enable deeper learning and designing of solutions to complex policy problems.

Dilip Soman, Professor at the University of Toronto's Rotman School of Management, suggested the two are two sides of the same coin – both begin with empathy.

Who are the stakeholders?

Both behavioural insights and design thinking have a focus on understanding stakeholders’ perspectives (albeit in different ways), so it’s unsurprising that consultation was on the agenda. Martin Parkinson, Secretary of the Department of the Prime Minister and Cabinet, kicked off the conference by calling on the public service to make better use of evidence and to learn from failure. He suggested that almost all problems in policy arise because we haven’t thought about all of the stakeholders – not only the end users, but the decision makers and practitioners.  

Cass Sunstein, Professor at Harvard University and one of the key authors in behavioural insights, similarly identified public consultation as important to informing government decision-making, not an exercise in legitimation.

Where was evaluation?

There were plenty of references to randomised controlled trials (RCTs) and debate about what constitutes evidence. Professor in Economics at the University of Chicago, John A. List, proposed that 3-4 well-powered, independent RCT replications are required before scaling up an intervention. Deputy Secretary, Economic at the Department of the Prime Minister and Cabinet, David Gruen agreed on the importance of evidence but noted the need for timely action and that “the truth is only one special interest group in policymaking and not particularly well funded.”
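
For a sense of what ‘well-powered’ means in practice, here is a minimal, purely illustrative sketch (ours, not the conference’s) of a standard sample-size calculation for a two-arm RCT in Python using statsmodels; the effect size, significance level and power are assumed values chosen only for illustration.

    # Illustrative only: participants needed per arm in a two-arm RCT to detect
    # a small standardised effect (Cohen's d = 0.2) with 80% power at a 5%
    # two-sided significance level. All parameter values are assumptions.
    from statsmodels.stats.power import TTestIndPower

    n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
    print(round(n_per_arm))  # roughly 390-400 participants in each arm

Small effects of the kind nudge trials often find, multiplied by several independent replications, quickly add up to large and costly studies, which is partly why the scaling question matters.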

But there was little reference to the broader discipline known as evaluation – and its range of approaches to answering different questions in different contexts. For the relatively straightforward kinds of interventions and the kinds of questions behavioural insights trials have engaged with to date, the focus on RCTs has been appropriate. But will RCTs have the answers as behavioural insights moves to tackle more complex problems in complex, dynamic systems?

Our experience in evaluation suggests a broader repertoire of approaches and engagement with system dynamics will be needed. Evaluation has long acknowledged and frequently included economic evaluation in its repertoire. Will behavioural insights start to draw more on evaluation expertise? 

Here’s hoping a deeper relationship between evaluation and behavioural insights practitioners will be one of the new frontiers.


Sharing failures strengthens evaluation

June 2018

By Gerard Atkinson and Melanie Darvodelsky

We all like to share success stories, but the fact is that we can learn just as much – and sometimes more – from talking about when things don’t go as planned.

This was the subject of the Australasian Evaluation Society NSW event on Wednesday 30 May, “Learning from evaluation failures”. The event was run by two experienced evaluators who each shared a previous case of evaluation “failure”, where the client had difficulty accepting the findings of the evaluation.

The cases

Case 1: The evaluator found out that the number of participants who transitioned from institutional care into a support program was zero. When the evaluator presented this finding at the final meeting, the client questioned its accuracy.

Case 2: The evaluator worked with the client from the beginning to identify and agree on which data would be used to measure outcomes, but as the project progressed, the client seemed to value their internal data over external sources. At the close of the project, the evaluator pointed to external data to say that the program objectives were not met. However, the client disagreed and used their internal data to hold to their view.

Evaluators at the session formed small groups to discuss “What could the evaluator have done to prevent or minimise this negative result?”

What could have been done differently?

Gaining acceptance of and action on negative findings is tough. This is unsurprising given the evidence that people tend to accept information confirming their views and refute information that challenges their views.

The key issue identified from both cases was a need to bring people along on the evaluation journey. It appeared that in the first example, the evaluator operated alone, which may have exacerbated the negative reaction at the close of the project. In the second case, the evaluator and client did not stay on the same journey despite their initial agreement. Working in and maintaining partnership with stakeholders is an effective way to prepare them for and ease their acceptance of negative findings, as well as increase their sense of ownership for the project and the next steps needed to create change.

Evaluators identified a range of practical ways to work in partnership with these stakeholders that may have led to more positive project outcomes.

  • Communicate regularly and proactively throughout—this can range from formal check-in meetings to an informal understanding to communicate key information as it comes to light. What is important is that there is a shared awareness of the methods being used and key results as they emerge.
  • Engage and get endorsement of primary users—engaging senior management decision-makers, seeking to understand their expectations about outcomes, and gaining their endorsement at the outset can help to reduce risks.
  • Understand the context—a key element of utilisation-focused evaluation is an appreciation of the context (political and programmatic) in which an evaluation takes place. The priorities, needs, and preconceived expectations of stakeholders can shape how an evaluation is developed and ultimately accepted. Even with regular and proactive communication, if the program team has a vested interest in the evaluation producing positive results, negative findings can create friction.
  • Re-frame negative findings—framing negative or contradictory findings as lessons and opportunities for improvement can help pave a way forward.
  • Identify the potential for negative findings at the outset—it is just as important to ask clients what failure would mean and how they would respond as it is to ask what success would mean. This helps to identify expectations, enabling you to frame how you communicate activities and results so that stakeholders feel part of the journey, and are empowered to make changes as a result.

These strategies fit with the findings of ARTD Partner Jade Maloney’s research on evaluation use. However, Maloney’s research also identified that these strategies can fail when working with organisations that lack a learning culture and when findings are politically unpalatable.

The strategies also align with Michael Quinn Patton’s Utilisation-Focused Evaluation. Patton’s approach provides a framework for evaluators to maximise the intended use of evaluations by users, even where the results of an evaluation may not match what program staff or management expected.

Let’s keep sharing

Evaluators’ candour in telling their stories and allowing other evaluators to consider how we can collectively achieve greater use of evaluations is a positive contribution to evaluation practice. It builds on the growing conversations in the field, such as those seen at the AES 2017 conference in Canberra, and in Kylie Hutchinson’s recent book “Evaluation Failures”.

We’re keen to continue the conversation – this year’s AES Conference will be a great opportunity.

Resources

Hutchinson, K., “Evaluation Failures”, 2018

Patton, M., “Utilization-Focused Evaluation”, 4th Ed., 2008

Patton, M., “Utilization-Focused Evaluation (U-FE) Checklist”, 2013

Ramirez, R., and Brodhead, D., “Utilisation focused evaluation: a primer for evaluators”, 2013


How can governments harness citizen input in decision-making?

June 2018

By Ruby Leahy Gatfield

Engage2’s panel discussion in Sydney this week left us asking important questions about how governments can use new tools to better engage and listen to citizens, and importantly, how we can measure the impact of public engagement activities on government decision-making. 

The Vivid Ideas event, Democracy is being Disrupted: Governing in the 21st Century, brought together leading experts in democracy building and community engagement to discuss what participation in a representative democracy should look like and the many new tools and methodologies available for building stronger democracies.

Tom Burton, publisher for The Mandarin, opened the event by asking how our institutions are and should be responding to the global phenomena of democratic disruption. 

A burning platform

Alan Dupont, CEO of the Cognoscenti Group and political strategist, then warmed up the panel by creating a “burning platform” to spur change, at the request of Engage2 Managing Director, Amelia Loye.

Dupont explained that a recent rise in populism and democratic backsliding in countries around the world have led everyday people, particularly young people, to question the value of democracy. He went on to outline five causes of democratic disruption that he has observed:

  1. Macro-system instability and the demise of the Pax Americana
  2. Digitalisation—the rise of IT and social media, which have both facilitated democratisation and shone a light on our institutions’ imperfections
  3. Rising inequality—which has created distrust in government and the value of democracy
  4. Increased unregulated migration—which has divided public debate and led to civil unrest and disenfranchisement
  5. Urbanisation and population growth.

Dupont concluded that “while democracy is being disrupted, it is not beyond repair”. This echoes International IDEA’s recent research, The Global State of Democracy, which found that, since 1975, the world has experienced significant democratic progress – particularly in terms of clean and fair elections, respect for human rights, checks and balances on government and citizen engagement. However, this progress has slowed significantly over the past decade. The report concluded that we are now at a crossroads and need to adapt our processes and institutions to safeguard democracy.

How can we better engage citizens?

Dupont’s remarks were followed by a lively panel discussion between Burton; Dupont; Loye; Elizabeth Tydd, NSW Information Commissioner and CEO of the Information and Privacy Commission NSW; Daryl Karp, Museum of Australian Democracy; Iain Walker, Executive Director of the New Democracy Foundation; and Jamie Skella, Co-founder of Horizon State.

The discussion highlighted the need for governments to not only engage the disengaged, but to genuinely listen to and deliberate with the many Australians who are engaged, but don’t feel they can influence government decision-making or policies.

This is particularly important in the context of the Open Government Partnership, an international initiative to ‘secure a commitment from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance’. Australia became a member of the Partnership in 2015 with the launch of its first National Action Plan.

To help honour our commitment to the Partnership and strengthen our democracy more broadly, the panel discussed many emerging technologies and methodologies for effective citizen engagement. These range from sophisticated artificial intelligence and data mining techniques for analysing qualitative feedback, through blockchain voting technology, to face-to-face methods, which can enable people to engage deeply, exchange viewpoints and build shared understandings.

The panellists stressed that, when engaging citizens, it is important to reach representative samples; break down information so people can digest and thoughtfully consider it; ask open-ended consultation questions without predetermined policy responses; use co-design as a genuine method (not a buzzword); and take a multifaceted approach—both online and face-to-face. Tydd observed that local councils often engage particularly well through on-the-ground consultations.

Measuring the impact of engagement

As monitoring and evaluation specialists, we left the panel discussion asking some fundamental questions about how to measure the impact of citizen engagement activities.

To what extent are governments effectively engaging citizens? Is the feedback collected in consultations publicly reported? How do we know whether this feedback is influencing decision-making and policy design? And where it isn’t, is the rationale communicated?

We know from experience that feeding back how data is used is key to effective, ongoing engagement. So, it is critical that agencies monitor and evaluate their engagement activities to answer these questions, be transparent and accountable, and ultimately, build greater trust in democratic processes.

Stay tuned as we set out to answer some of these questions in upcoming research.


The many uses of theory in evaluation

June 2018

By Alexandra Ellinson and Jade Maloney

The word ‘theory’ is often bandied about by evaluators. But they’re not all talking about the same thing. And some authors on evaluation don’t even think theory is necessary for evaluation.

Like our Senior Manager and former evaluation lecturer, Brad Astbury, we think theory is useful and can be used in evaluation in multiple ways.

Donaldson and Lipsey in The Handbook of Evaluation identify three broad types of theory in evaluation.

  1. Evaluation theory or theories of evaluation practice. These are the ideas about what evaluation is and how it is practiced (descriptive theories) and/ or ideas about what evaluation should be and how it should be practiced (normative theories).
  2. Social or social science theories: These are the explanatory frameworks that draw on research evidence about how people usually behave or how systems or organisations function, generally or in specific contexts. These are important because in evaluation we are often considering how an intervention affects individual behaviour change or systems change.
  3. Program theory: These are theories about the specific program, intervention or policy being evaluated, which aim to present a plausible account of how and why a program is expected to work.

While it’s helpful to clarify these different uses of the term ‘theory’, at ARTD we tend to prefer the term ‘approaches’ when referring to descriptive accounts of what evaluation is or normative accounts of what it should be. This is because these ideas (e.g. participatory or utilisation-focused evaluation) are not providing causal explanations—something that we take to be an essential feature of a theory—but rather posit assumptions about the ontology of evaluation (what it is) and its epistemology (how we can be confident that evaluative claims are accurate), and/or state principles for good evaluation practice. Even theory-based approaches to evaluation are not themselves evaluation theories, but rather a certain way of doing evaluations, one that commences with a set of (hopefully evidence-based) assumptions about the nature of the thing being evaluated and how the intervention is expected to cause outcomes.

Terminology aside, it is very useful to articulate one’s approach to evaluation, not only to ensure that it is consistent and coherent, but also to build a shared understanding about the approach with a client or stakeholders (especially if their close engagement in the evaluation is required). It is also part of fostering a community of professional practice. 

We also think that evaluations can and should draw more on social science theories. While it is common for evaluations to involve a review of literature related to the content area of the program, there is not always a systematic approach to the application of existing knowledge in the field to evaluation.

Increasingly, we are looking at how social science theory can feed into program theory. We examine the evidence about programs that are built on similar bodies of psychological or social research. This helps us assess the program’s evidence base, identify adjustments and focus our evaluative effort on the gaps in that evidence base.

We also consider ‘negative program theory’ to identify how an intervention could potentially result in an opposite or negative outcome, and draw on the research to manage expectations about the timeframe in which outcomes may be observed.

So what does this look like in practice?

In an evaluation of a peer support program for students with disability and mental health issues, we drew on the evidence base about how peer support models work to empower participants. We also considered how the program could result in social exclusion and increased anxiety rather than social inclusion, and identified how this risk would be managed in the program design.

In an evaluation of an intervention designed to reduce antisocial behaviour, we drew on criminological literature about deterrence effects as well as emerging psychological evidence about what makes people more responsive to regulation. By doing this, we could explain the pattern of outcomes and make recommendations about how to better target the policy to people for whom deterrence is most likely to be effective, while minimising potentially negative unintended consequences.

In an evaluation of an intervention to support victims of crime, we combined social science and realist theory. The diagram at the top of the page provides a simplified overview.

We are keen to hear from other evaluators about how they use 'theory' and are reassured by the recent turnout to Brad Astbury's AES workshop on theories of evaluation.


Agile evaluation, can there be such a thing?

May 2018

By Alexandra Lorigan

‘Agile’ has emerged as one of the latest buzzwords in government departments. But what does it mean to be agile, and can we do more agile evaluation?

On Wednesday 2 May, Florent Gomez from the NSW Department of Finance and ARTD Partner Jade Maloney delivered a free lunchtime seminar in Sydney organised by the Australasian Evaluation Society (AES).

Drawing on ideas from an article by Caroline Heider at the World Bank, Florent introduced the Agile project management methodology, and participants then discussed how this could apply to evaluation, if at all. 

So, what does it mean to be ‘agile’ and is there a place for it in evaluation? 

Agile originated in the IT world as a project management methodology that uses short development cycles. In 2001, 17 software developers formalised the approach into a set of 12 key principles known as the Agile Manifesto. From the first two principles, it is clear that at its core, the Agile approach is customer-centred, deeply collaborative and constantly adapting. This is in contrast to the traditional ‘waterfall’ approach to IT project management, where the project plan is designed at the outset and then followed in sequence, with little flexibility for customer input.

Another key aspect of the Agile approach is the commitment to speed and efficiency, which Moira Alexander highlights in her article, ‘Agile project management: A comprehensive guide’. According to Alexander, the desire for rapid adaptation and optimal design requires both simplicity and a high level of self-organisation and accountability by teams.

Though the Agile methodology originated in the software industry and continues to boast an adoption rate of 23%, it has since been used by a number of other key industries. Among these is government, which uses the methodology on roughly 5% of its projects. 

Florent became interested in the concept when he joined a government agency, where many projects were delivered based on the Agile methodology and the ‘A’ word was heard everywhere. In his new role as internal Evaluation Manager, the expectation was also to evaluate these projects in an agile way, meaning, within very short timeframes.

In her article, Caroline Heider suggests how the concept of agile could be translated to evaluation practice. In addition to narrowing the scope of the project, she suggested evaluation could be more agile, or efficient, by drawing on:

  • existing and standardised datasets
  • algorithms for collecting, organising and presenting data
  • electronic data collection methods, such as online surveys
  • effective project management skills and tools.

Interestingly, most participants agreed that Heider’s suggested approaches to shortening response times are already widely practiced in the evaluation world.
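
As a purely illustrative example of what drawing on standardised datasets and simple ‘algorithms for organising and presenting data’ can look like in practice (this sketch is ours, not Heider’s, and the file and column names are hypothetical), a few lines of Python can turn a standard online survey export into a shareable summary:

    # Hypothetical example: summarise a standardised online survey export
    # so early results can be shared with the client quickly.
    import pandas as pd

    responses = pd.read_csv("survey_export.csv")  # assumed export from an online survey tool

    # Response counts and average 1-5 satisfaction rating by respondent group
    summary = (
        responses.groupby("respondent_group")["satisfaction_rating"]
        .agg(["count", "mean"])
        .round(2)
    )
    print(summary)

Most of the efficiency comes not from the code itself but from agreeing on standard questions and data formats up front, so the same summary step can be reused from one evaluation cycle to the next.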

So do AES members see the potential for evaluation to be more agile?

With this grounding, it was turned over to us evaluators to envisage whether we could see the potential for more agility in our work. Specifically, we were asked to consider the benefits, enablers and risks to evaluation being more agile.

Participants agreed that by being more agile, we could make evaluation more focussed, responsive, creative and ultimately, produce more useful products. However, they acknowledged that being able to make evaluation agile would depend on:

  • having the necessary IT systems and the skills to use them
  • whether the project needs ethics approval, which would limit any potential to change processes
  • the level of buy-in and engagement of clients in this particular approach
  • having the right structures and processes in place to facilitate such flexibility.

While Heider’s primary message was that there are ways to make evaluation more agile, both she and AES members acknowledged the risk of quality loss. Participants expressed fears that agile had the potential to become too ‘quick and dirty’ to produce meaningful results. They noted that evaluators may risk avoiding slow, but often necessary, methods of data collection in favour of faster, possibly unsuitable, methods. Additionally, participants identified the risk of both scope creep, which affects project budgets, and scope narrowing, which could limit capacity to make well-informed recommendations.

So, where does that leave us?

The workshop generated useful discussion and allowed evaluators to consider how they can be more agile without compromising the quality of their work.

Participants identified clear synergies between Agile project management and developmental evaluation—developing a program in real-time through close consultation with program staff—and utilisation-focussed evaluation—conducting evaluations with a focus on intended use by end users.

As firm believers in our own ‘lunchtime learnings’, ARTD looks forward to attending more of these short and engaging lunchtime sessions in the future. You can visit the AES website for a full list of upcoming events.  


Using Evaluation Theory to Inform Practice

May 2018

By Brad Astbury, Senior Manager

Evaluation is a young discipline, especially when compared to other fields of inquiry like sociology, economics and psychology. Even so, there exists a rich intellectual history and vibrant body of theoretical knowledge that continues to grow and evolve. When I was invited to deliver a workshop on ‘Theories of Evaluation’ for the Australasian Evaluation Society (AES) autumn intensive, there was one take-home message I wanted to convey—if evaluators are not tapping into the 60-year repository of hard-won lessons, they are missing out on considerable wise counsel. This ‘wise counsel’ can greatly improve the quality and utility of evaluation practice.

Without knowledge of evaluation theory, we are susceptible to repeating mistakes of the past and relying on little more than professional folklore and an assortment of methods with no guiding principles for their application.

The formal aims of the workshop were to provide participants with a better understanding of:

  • the nature and role of evaluation theory
  • major theorists and their contribution
  • approaches to classifying evaluation theories
  • key ways in which evaluation theorists differ and what this means for practice
  • dangers involved in relying too heavily on any one particular theory
  • techniques for selecting and combining theories based on situational analysis.

 As a passionate advocate of the late Will Shadish’s maxim ‘Evaluation theory is who we are’, I remain committed to the view that evaluation theory is central to our professional identity. Here’s some tough love from Shadish (1997):

If you do not know much about evaluation theory, you are not an evaluator…to be an evaluator, you need to know that knowledge base that makes the field unique. That unique knowledge base is evaluation theory (pp. 6-7).

So what exactly is evaluation theory? A broad answer is that evaluation theory is the body of writings about evaluation that have at their core a concern for practice. Another response is to view evaluation theory as a set of prescriptions about what constitutes ‘good’ evaluation and how it ought to be conducted (as detailed in alternative evaluation models or approaches).  Yet another perspective considers evaluation theory as comprising several meta-components including use, knowledge construction, valuing, social programming and practice. In my view, all three framings are important and should be considered within an integrated perspective on evaluation theory.

During the first part of the workshop, we examined these different ways of thinking about evaluation theory, drawing on a conceptualisation that I developed a few years ago to help postgraduate students circumnavigate the ‘theory jungle’. As part of the discussion, we compared the ‘big seven’ theorists: Donald Campbell, Michael Scriven, Robert Stake, Carol Weiss, Joseph Wholey, Peter Rossi and Lee Cronbach, as well as other ‘intellectual heroes’ that have found a place on Alkin’s infamous theory tree.

In the second part of the workshop, I presented an example of how a realist theory of evaluation can be useful for guiding practice, especially when the purpose of the evaluation is to support program improvement and transferability. I also offered some insights from Ernest House about how research on cognitive thinking and bias can prevent evaluators from making fundamental errors when drawing evaluative conclusions. As a group we brainstormed strategies to help determine when and where to use different evaluation theories and approaches, given considerations of time, budget, data, stage of program development, and so on.  There is no single or ideal theory of evaluation that will work always and everywhere. It is critical to select and combine approaches in response to situational contingencies.

In the final session, I emphasised the importance of evaluating evaluation theory and considered different criteria that might be useful for reflecting as a discipline on which theories of evaluation are ‘better’ than others. Should we continue to ‘let a thousand flowers bloom’ or is diversity of evaluation models and approaches leading to fragmentation of the field? My own view on this is that we need to get better at determining which theories of evaluation have merit and which are whimsical fads and fashions (or worse, harmful ‘snake oil’).  Evaluators need to reflect an evaluative gaze back on evaluation itself.

I invite readers of this blog to reflect deeply on two questions presented to participants during the workshop.

  • Who/ what has had the most influence on how you conceptualise and conduct evaluation?
  • What guides your current evaluation practice? 

As part of the reaction this exercise triggers, I hope that evaluation theory becomes an increasingly prominent resource that you draw upon to inform everyday practice and decision-making.

Thanks to the AES for hosting and organising the autumn professional learning program and to the many participants who attended workshops delivered over this inaugural three-day event. 


Knowing for whom and how is as important as knowing what you’re achieving

April 2018

By Partner, Jade Maloney

Settlement Services International (SSI) is one of the largest providers of Ability Links – a NSW Government-funded initiative that empowers people with disability, their families and carers to work towards their goals by building on their strengths and connecting with their local communities, and supporting local community and mainstream organisations to become more inclusive.

SSI has Linkers in over 40 Local Government Areas (LGAs), and works in partnership with Uniting and St Vincent de Paul NSW in all metropolitan Family and Community Services Districts as well as in Illawarra/Shoalhaven and Southern NSW. SSI’s delivery locations include the LGAs with the largest culturally and linguistically diverse (CALD) populations in NSW.

From the state-wide evaluation of Ability Links, SSI knew the initiative was achieving positive outcomes for individuals and a return on investment for the NSW Government. What they didn’t know was how they were supporting outcomes for the diverse individuals and communities they served.

In late July 2017, SSI engaged ARTD to evaluate their delivery of Ability Links – with a focus on benchmarking their performance against the program as a whole and understanding how they were supporting outcomes and what improvements could be made.

To understand the ‘how’ underlying SSI’s outcomes for all of the individuals it was supporting, as well as individuals from CALD communities, we used a realist-informed approach – identifying theories with an evaluation steering group and testing and iteratively developing these through a series of interviews with Linkers employed by SSI and community organisations, and finally a participant reference group.

The state-wide evaluation had already engaged with people supported through Ability Links, and SSI was engaging with the people it was supporting to develop a book of their stories (published in Our Community: Stories of Courage Strength and Determination) and conduct a longitudinal Participant Wellbeing Study. So we had to be careful to make use of available data to avoid creating an additional data collection burden, while still putting people with disability at the centre of the evaluation – following the philosophy of ‘nothing about us without us’. 

We were able to do this with the participant reference group. With the assistance of two language interpreters, a physically accessible venue, and a discussion approach that was inclusive for people with a vision impairment – we were able to talk through, test and refine the emerging ‘theories’ about how SSI’s Ability Links supports outcomes with eight people who had accessed Ability Links, as well as identify improvements. This process helped to ensure the evaluation team interpreted the findings in context.

The evaluation identified a range of factors supporting positive outcomes, some of which were unique to SSI – such as their workforce of Linkers from diverse cultural and language backgrounds embedded within their communities – and some of which were tied to the flexible, person-centred and responsive nature of Ability Links. The evaluation also found that Linkers supported outcomes for individuals in varying ways – depending on their starting points, needs and goals. In some cases, people come with ideas and Linkers help to make these happen, while in others, Linkers help to turn people’s interests into ideas for community connections. Linkers can also build people’s confidence in varying ways: through the encouragement of a Linker or through social connections.

SSI is using the findings to inform its service delivery and recently shared its learnings at the DiverseAbility Conference. You can access a summary of the findings of the evaluation on SSI’s website.


Putting logic back into program logic

April 2018

By Consultant, Ken Fullerton

Are the program logics that you see actually logical?

On Wednesday 11 April 2018, ARTD Partner Andrew Hawkins delivered a free seminar in Sydney organised by the Australasian Evaluation Society (AES), which was attended by approximately 50 people.

Hawkins first briefly introduced his subject. His general definition of a program logic is “a one-page diagram or model of the important components of an intervention and how they work together to deliver outcomes.”

He then gave attendees three example program logics to use as references in their discussions of two key questions.

Question 1: What makes program logics logical?

  • They represent a narrative or ‘plausible story’.
  • There is an expectation that activities in a program logic will lead to particular outcomes.
  • They are not business plans and, due to limited space, do not map out every potential input, activity, outcome, etc.
  • They should be meaningful to various stakeholders.
  • They represent a thoughtful process or ‘journey’ describing how an organisation can go from Point A to Point Z, and anywhere in between.
  • Different program logic formats, styles and colours may appeal to some but not to others (however, whichever styles are selected should be used thoughtfully).
  • Being aware of when to use a particular agency or organisation’s accepted template format so that the organisation itself can more easily understand the program logic.
  • Trying to be too logical in a program logic can actually hamper innovation.

Question 2: What do the arrows mean in a program logic?

  • They lead to or influence a particular activity or outcome.
  • Sometimes they seem to mean “here’s where the magic happens” rather than a logical link.
  • They are assumptions about what needs to occur to allow an organisation to go from Point A to Point B.
  • They provide direction around expectations and what has to happen in an organisation.
  • They can indicate where an evidence base justifies a connection.
  • They represent sign-posts for the reader showing where the story starts and where one must look to next.

Later, Hawkins gave an overview of his own approach to program logic. He argued that a ‘configurationalist’ understanding of causality would be more useful than the ‘successionist’ understanding deployed in many program logics. His point was that effective programs are better thought of as a ‘causal package’ with certain assumptions, like a recipe, rather than a linear ‘causal chain’ where one element in the program logic causes the next one.

Hawkins believes a program is better thought of as a proposition or argument that a course of action will be sufficient to bring about a desired result, rather than a theory of change or a theory of action. He said that while theory is very important for providing reasons, justifications or ‘warrants’ for elements of the program design (and for the program as a whole), it is too much for a program logic diagram to display this theory.

Instead, he proposed focusing on the conditions that program activities need to bring about, which together would be enough for the intended outcome to be achieved. He argued that this approach enables critical thinking (that can support realistic design) and evaluations focused on assessing whether a program is sufficient for achieving its objectives. It can also support the program’s argument or business case.


Governing the politics of evidence – Book review

April 2018

By Senior Manager, Alexandra Ellinson

The Politics of Evidence: From evidence-based policy to the good governance of evidence, Justin Parkhurst, Routledge Studies in Governance and Public Policy, 2017.

As evaluators, we want to generate credible evidence that is relevant and useful for informing public policy decisions that improve social outcomes. So, it’s unsurprising that Justin Parkhurst’s recently released book—or more specifically, its subtitle, ‘From evidence-based policy to the good governance of evidence’—caught my eye.

This book is a helpful reminder of the imperative that evidence-based policy making address the reality of politics head-on, and that this can (and should) be done without giving up on rigorous research. An Associate Professor at the London School of Economics and Political Science, Parkhurst shares the concerns of both those who champion evidence in the face of the politicisation of science and those who critique the de-politicisation of policy making that can occur when social values are obscured through the acceptance or promotion of only limited evidence sources and methods. In what follows, I briefly outline the key points and conceptual moves that Parkhurst makes in this book, and highlight what I take to be most relevant to policy makers and evaluators alike: his argument that there is a need for advisory systems that normatively embed the good use of evidence in policy making.

Part I opens with an exploration of what is meant by the oft-touted phrase, ‘evidence-based policy’. Parkhurst clearly unpacks the usefulness and limitations of framing policy making simply around the aim of ‘doing what works’. Starting from the premise that evidence matters a great deal for good policy, Parkhurst gives examples of what goes wrong when there is a lack of information or poorly used evidence e.g. the public health advice that babies should sleep on their fronts, which continued for decades despite mounting data about the dangers of this practice. Yet he also acknowledges the limits of technocratic cries for ‘more evidence!’ to inform policy decisions that are essentially principle or rights-based e.g. access to reproductive health choices or same-sex marriage provisions.

Parkhurst goes on to discuss the value of evidence from well-designed research methods with nous, and displays an appreciation of methodological pluralism when it comes to appropriately understanding social issues. This includes an important, but by no means preeminent, place for randomised controlled trials. He also gives a quick nod to realist evaluation by challenging readers to think about more than just ‘what works’, but also consider what works ‘for whom’ and ‘where’. While devotees of realist approaches might take issue with how he employs the term ‘mechanism’ in this discussion (given its very precise meaning in the field), they are likely to endorse the spirit of his argument.

Interestingly, Parkhurst doesn’t discuss or appear to make a distinction between evidence-based and evidence-informed policy: he primarily uses the first term, although at times the latter. While this seems unusual insofar as evidence-informed policy is often put forward as a more pragmatic and politically tuned-in alternative, I think this is also a smart move on his part. It works to avoid definitional debates and micro-arguments around the degree of political influence before something is described as ‘based in’ or ‘informed by’ evidence, but it also allows him to locate his concerns as part of a bigger picture that applies across cases.

Part II contains an interesting exploration of the role and functioning of various forms of bias that impact not only on what and how evidence is used by policy makers, but also the evidence that is funded and generated by researchers in the first place. Parkhurst discusses overt biases in pursuit of political interests as well as the subtle politics of cognitive biases. Perhaps most useful is a distinction that Parkhurst makes between ‘technical biases’ in the use of evidence and ‘issue biases’ in how evidence is deployed to inform political debates. In relation to technical biases, Parkhurst serves readers well by including both the invalid use of individual methods (i.e. poor scientific practice) and the failure to appropriately include relevant information from multiple sources. In relation to issue biases, he thoughtfully highlights questions around political legitimacy and representative politics in relation to how evidence is deployed to shift debates towards or away from issues/ areas of inquiry, often in a non-transparent way. 

On an initial reading, this discussion about biases in Part II is interesting but not obviously essential in contributing to Parkhurst’s argument. On a closer reading, however, it becomes clear that the inherent risk of biases—and hence the need to develop a systematic response to mitigate these dynamics—provides the rationale for why, in Part III, Parkhurst advocates for a governance approach to improving how evidence is used in policy making.

Part III is the most intriguing part of this book. Although it feels somewhat abstract and leaves the reader in want of more concrete examples, I suspect this is an artefact of his principles-based approach to institutional change—one that favours ‘guided evolutionary incrementalism’ rather than programmatic planning. The strength of the concluding chapters is how they challenge policy makers and evaluators to think critically and in new ways about how institutions, within and outside of governments, operate systemically to shape the evidence that is gathered and in turn how it is used in decision making.

Parkhurst starts by looking for principles that constitute good evidence beyond the well-worn technical hierarchies, and constructs a framework of appropriateness through which policy-relevant evidence might be considered. In doing so, Parkhurst defines ‘appropriate evidence’ as that which speaks to multiple concerns at stake in a policy decision, is constructed in ways that are most useful to achieve policy goals, and is applicable in the local policy context. In turn, ‘good evidence for policy’ comes to be defined as evidence that meets the aforementioned appropriateness criteria, as well as high-quality research standards.

Before presenting conclusions around the good governance principles for the use of evidence, Parkhurst next turns to a discussion on the factors necessary to ensure the democratic legitimacy of the ‘evidence advisory system’. While a vital discussion to be had, I’m unsure whether Parkhurst succeeds in making a new contribution to the long-standing debates and voluminous literature on the topic of whether (or to what extent) we can accept irrationality as the cost of democracy. With this issue parked, the book concludes by outlining key principles for the good governance of evidence.

Taking a broad understanding of good governance as the ‘art’ of systems and processes through which collective decisions are made, he argues that governance needs to be thought of in terms of both the processes and outcomes that are relevant to the use of evidence within a policy process.

To do this, eight good governance principles for using evidence in policy making are elaborated:

  1. appropriateness (the relevance of evidence to decisions or alignment to context)
  2. quality (the methods through which evidence was established)
  3. rigour (the comprehensiveness and synthesis of information)
  4. stewardship (the formal and public mandate of advisory systems)
  5. representation (that policy decisions about evidence lie with democratic representatives)
  6. transparency (that it is clear how decisions are made based on what evidence)
  7. deliberation (that there is public engagement around contested values that may affect what and how evidence is used)
  8. contestability (that the evidence used and decisions made are open to critique, peer review and scrutiny).

These principles appear sensible and provide a comprehensive response to the issues/ concerns raised throughout the book. Yet I can’t help but wonder whether Parkhurst has gone far enough and fully achieved a description of principles in a governance framework around the use of evidence: to my mind, the principles seem closer to what might underpin a quality assurance checklist. Parkhurst makes a brief reference to the use of governance in the corporate sphere but does not pursue this enquiry—arguably, this might be where some of the key lessons and fresh insights lie. This governance perspective would throw into focus a slightly different set of issues, such as the delegation of decision making (e.g. around the evidence that is gathered and how it is used); the strategies for managing risks (e.g. around the lack or inappropriate use of evidence); the duties owed to those with a ‘stake’ in the evidence (e.g. to the public or potential program beneficiaries, that decisions are made in their interests); and expectations around transparency and reporting (e.g. ‘rules’ safeguarding quality information and communication flows in deliberative processes).

Overall, the strength of Parkhurst’s argument is the explicit engagement with the political questions in determining what constitutes the better use of evidence. Doing this allows him to recognise that improvement requires building institutional arrangements (however incrementally this may be) that can address forms of bias while incorporating democratic representation. Readers may still want to know more about the ‘how’, and in the Australian and especially Canadian evaluation contexts one can’t help but be reminded of growing calls for an Evaluator-General to oversight the better use of evaluation evidence. These live debates highlight the importance of Parkhurst’s book in contributing to new ways of thinking about the twin problems of evaluation utilisation and the use of reason in democratic politics.

Accessible online: http://eprints.lse.ac.uk/68604/1/Parkhurst_The%20Politics%20of%20Evidence.pdf  


Growing specialist disability accommodation

March 2018

By Consultant, Ruby Leahy Gatfield

Ever considered establishing or investing in specialist disability accommodation, but not known where to begin? In 2017, ARTD worked with Frankston Peninsula Carers (FPC) to develop a series of resources offering practical advice on how community members can form an association and partner with housing and support providers and investors and funders, including local government, to grow accommodation options. We also showcased some of FPC's success stories to show what can be achieved!

The National Disability Insurance Agency (NDIA) recognises that some people with disability who have very high support needs or significant functional impairments will require specialist housing solutions. However, the current supply of specialist disability accommodation (SDA) is substantially below what is required to meet demand. Moreover, as parents caring for adult children with disability are ageing and reaching a point where they can no longer support their children at home, the need for specialist housing options is increasing. To address this, the NDIA’s SDA Payments aim to stimulate investment and increase innovative supply of housing, including housing models that foster inclusion and have improved design and technology to support people with disability to live independently.

However, these Payments are not intended to meet the housing needs of all people with disability, and growth in the broader market of accessible and affordable housing options is critical. But how do interested families and communities even begin growing housing options in their local community? What are the key steps to consider? How has this been done successfully in the past? Where can they turn for investment?

FPC is one inspiring example of a community-based organisation established to address the need for SDA in Victoria’s Mornington Peninsula. The organisation is run solely by volunteers, some of whom are carers of adults with disability. Since 2007, FPC has successfully sourced over $8 million in donations and government investment to increase housing options for people with disability in their local region.

Having received funding through the NDIS Sector Development Fund in 2017, FPC engaged ARTD Consultants to develop a series of resources to support families to establish housing projects and attract the interest of potential funders and developers. After consulting FPC’s Committee and key partners, we developed:

  • a checklist for forming an organisation to grow SDA
  • a checklist for creating an SDA project from start to finish that answers frequently asked questions (such as where to find land, how to approach developers and investors, key design features and policies to be aware of, and how to run the property)
  • a template for a one-page project summary to present to stakeholders in early discussions
  • a template structure for a project proposal
  • a template and tips for developing a Memorandum of Understanding
  • a description of each of FPC’s housing projects to show interested community members what they can achieve when working together.

The resource package has been distributed to local members of parliament, Mayors, Ministers, housing providers, donors and families of people with disability and received positive feedback as a clear and useful tool. For more details and to download the resource package, visit the FPC website.

You can also support the work of FPC by donating to their current McCrae housing project – a well-located property designed to accommodate six people with varying disabilities. FPC has raised $35,000 so far and is seeking to crowdsource $90,000 more to complete the property. You can read more and donate or share their crowdsourcing efforts here: https://donate.grassrootz.com/fpc/help-build-mccrae-house.


Why you shouldn’t hate a role play: stretching your interview skills

March 2018

By Partner, Jade Maloney and Senior Consultant, Kerry Hart

Ever had an interviewee give monosyllabic answers or talk non-stop on an entirely different topic? What about someone who becomes overwhelmed by the conversation and perhaps even cries?

Over years of interviewing, we’ve encountered a range of challenging situations like these. Because people are people – with different values, beliefs and experiences, in different contexts – there are unfortunately not many hard and fast rules about how to respond (besides things like being authentic, respectful, and non-judgemental). This is what makes interviewing and focus group facilitation so challenging. But it’s also what makes it engaging, exciting and energising.

On April 22, the participants in our interviewing skills workshop for the Australasian Evaluation Society (AES) took turns role-playing challenging interview situations – testing out strategies to get an interview back on track and ensure interviewees feel comfortable and safe to share their views.

A reluctant participant (e.g. consistently gives one-word answers or shrugs their shoulders):

  • Get comfortable with silence. If you wait long enough (but not too long), the interviewee may step in to fill the void. It may also be that the person needed time to collect their thoughts before responding more fully.
  • Give them the reins. The person may feel your questions are not getting to what matters for them. Ask them what they’d like to tell you about.
  • Lighten the mood, be humorous, if the context is appropriate. This might break the tension.
  • Leave your specific questions aside; discuss a related topic. If the person is feeling uncomfortable with the interview situation, this can provide some time out to build trust.
  • Call the situation for what it is. Note that the person seems uncomfortable being there or that something seems to be bothering them. Give them an opening to share the thinking behind their behaviour. It may be important to understanding how the program or policy you’re reviewing is working.
  • It’s also important to recognise that sometimes, no strategies will work, and the person has a choice not to talk.
  • Be particularly careful when deciding to call on people who haven’t contributed in a focus group. There may be underlying group dynamics that you’re not aware of that are making this person feel uncomfortable sharing. Sometimes it can be better to catch the person on their own at the end of the group to see if they had anything to add.

A tangential talker (e.g. needs to tell you about their key concerns before they can get into the interview questions; starts telling you about their entire career history when you ask them about their current role; or talks about all of their friendships when you ask them if they enjoyed a particular social activity):

  • If a person needs to get something off their chest at the start of an interview be respectful and listen. Sometimes taking an extra 10 minutes for this can mean the rest of the interview just flows.
  • Generally, don’t cut someone off in the middle of a sentence. But sometimes you may need to, particularly in a focus group where one person is dominating and others are feeling uncomfortable.
  • Recognise that sometimes a person on a tangent is actually trying to tell you something. After a while you may find them looping back to the topic at hand.
  • Tell them that you want to come back to something they said earlier (that was on topic), which was really interesting for the evaluation. This can be a good way to gently steer things back on track.
  • Tell them that you are conscious of their time and other commitments, but you want to make sure you capture their views on the key questions, so you would like to focus on these for the remainder of the interview.

A person who becomes emotional or distressed:

  • If you think an interview could raise an emotional response, be prepared and prepare your interviewee. Let them know that some questions may be confronting, that they can choose not to answer and can take things at their own pace. Have the contact details of support services on hand, in case a referral is needed.
  • Give the person space. Ask if they’d like to take a breather or a longer break, come back to the interview at another time or end it there. Give them the choice. Don’t decide for them.
  • Lower your voice and slow things down.
  • Have boundaries, but remember you’re human. An emotional response can sometimes be appropriate when interviewing people experiencing life challenges.

Of course, there’s a need to understand the individual and the context in applying any strategies. The value of role playing isn’t that you’ll have perfected your exact response to any given situation. What it can do is help you to develop the agility to respond authentically and appropriately to the individuals and situations you encounter.

But, as one of our participants pointed out, stepping into an interviewee’s shoes has another value: it lets you see things from their perspective. Having had this experience can make you pause the next time you encounter a challenging situation and think about what might be going on for the person. The person is probably trying to tell you something with their behaviour and non-verbal cues. Be attuned to this and open to their perspective.

Who said they hate role plays?

We really enjoyed our day running workshops on interview skills and questionnaire design for the AES. We also run tailored workshops for organisations. You can find out more by calling 02 9373 9900.


Taking a trauma-informed approach to research and evaluation


March 2018

By Maria Koleth

Last week, all of ARTD’s staff attended a challenging and informative training day on Trauma-informed Care and Practice with the Blue Knot Foundation. As a company that takes a strengths-based approach to research and evaluation with vulnerable populations, ensuring that our methods and instruments are trauma-informed is a key part of our process. The Blue Knot training renewed and updated our understanding of the types of trauma, their long-term embodied effects, and the five principles of trauma-informed practice. Some key ways we are putting these principles into practice in our research include:

  • Prioritising safety: With a clear understanding that trauma affects many people throughout the community, prioritising safety is part of our standard practice. Our interviewers and focus group facilitators establish clear boundaries around their role and the focus of discussions. We also establish clear response protocols in case a participant becomes distressed, including referrals to other services if they need more support. The training also reminded us to continue to support our own staff, who are vicariously exposed to stories of trauma, and to ensure regular opportunities for them to debrief.
  • Being trustworthy: We set clear expectations when getting consent from participants and then we ensure that we follow through on what we said we would do. Embodying trustworthiness also involves getting to appointments on time and staying within the boundaries of the research. This is important to establishing trust and a space in which people feel comfortable to share their story.
  • Offering choice: Maximising the control that evaluation and research participants have and being flexible in accommodating their needs is important for trauma-informed practice. We work to offer participants as many choices as possible, from where interviews take place, to whether they would like a support person to attend with them, to which questions they choose to answer.
  • Taking a collaborative approach: Increasingly, we have been using collaborative approaches. These help to address the unequal relationships between researchers and participants by making research projects a collaborative space.
  • Empowering participants: The theme of empowerment runs through many of our projects and the projects we evaluate. We recognise that people who have experienced trauma have often had their experiences minimised or invalidated in the past, so it’s important that we express trust in their responses in interviews and focus groups, recognise their resilience, and emphasise the importance of their contributions to projects.

We would also like to thank 107 Projects for hosting us –  their wonderful garden and sense of hospitality provided the most recuperative setting for our training day (see 107.org.au). 


Outcome Mapping: unpacking the black box between outputs and impacts


March 2018

By Ruby Leahy Gatfield

My recent internship at the International Institute for Democracy and Electoral Assistance (International IDEA) in Stockholm raised a number of important questions for me about how to monitor and evaluate international development programs. Trying to demonstrate a program’s ‘impact’ can often feel overwhelming, given that development goals are long-term and complex processes dependent on a myriad of political, economic, cultural and other factors. And while the methods for measuring, monitoring and evaluating impact remain hotly contested in the development community, two key approaches stood out in my time at International IDEA.

The first is the logical framework approach. Anyone involved in planning, monitoring or evaluating international development programs will be familiar with ‘logframes’. Pioneered in the 1970s by USAID to demonstrate what donor money was achieving, logframes have proved a very useful tool for mapping out and thinking critically about how a program leads to results.

Logframes provide a line of sight on the causal links between a program’s activities, outputs, outcomes and ultimate impacts. They offer a sense of simplicity and structure in an otherwise complex environment, can be used to communicate intentions to stakeholders, enable standardised reporting on indicators, and allow independent monitoring and evaluation of results (among other benefits).[1]

But what do outcomes really look like? How do the activities and outputs of a program lead to development impacts? To unpack this ‘black box’ in the logframe, outcome mapping (OM) has emerged as an increasingly popular methodology.

What is outcome mapping?

OM recognises that development programs are all about social change, and that social change is complex, unstable, non-linear, two-way, incremental, cumulative and often, beyond our control. Conducting evaluations in these open and changing environments poses a myriad of challenges, from defining success and deciding when to evaluate, to capturing emerging objectives and establishing cause and effect.[2]

To tackle these challenges, OM provides a framework for unpacking a program’s theory of change and collecting data on outcomes as they unfold. Importantly, it redefines ‘outcomes’ as changes in behaviour—the actions, activities and relationships—of the stakeholders directly in contact with the program (known as ‘boundary partners’).[3] This concept of boundary partners is fundamental to OM but not always present in logframes and, as a result, the two approaches often produce very different outcome statements. According to OM, behavioural change of boundary partners is critical to moving up the results chain.

OM also recognises that, in reality, programs have limited control over whether their ultimate goal is achieved, given the range of social, political, environmental, economic and other factors that support or hinder intended outcomes. Rather than claiming attribution of a development impact, OM claims contribution to it.[4] It teaches that programs have control over their inputs, activities and outputs; influence over their outcomes; and simply an interest in the ultimate impact. In short, OM focuses on a program’s sphere of influence.[5]

In practice, OM offers 12 tools for planning, monitoring and evaluating outcomes, which can be adapted to suit individual contexts. These tools are intended to help stakeholders identify and think critically about:

  • why the program has ultimately been established
  • who the program has direct influence over (who the boundary partners are)
  • what changes in behaviour (outcomes) the program would ‘expect’, ‘like’ and ‘love’ to see from its boundary partners
  • what the qualitative and quantitative indicators (‘progress markers’) are for these outcomes
  • what strategies are in place to influence each boundary partner
  • which monitoring tools the program should use
  • what the evaluation priorities are for an evaluation plan.

OM helps to build a credible picture of how a program contributes to results, putting people at the centre of development and recognising the complex and non-linear nature of social change.

So while the logframe approach remains engrained in most development agencies, practitioners should consider the value in an OM approach. As put by Michael Quinn Patton, OM affirms that ‘being attentive along the journey is as important as, and critical to, arriving at the destination’.[6]

To learn more about OM, visit the Outcome Mapping Learning Community, a one-stop shop for all things OM.


[1] http://www.focusintl.com/RBM083-2_Logical_Framework_Approach_and_Outcome_Mapping.pdf

[2] https://www.outcomemapping.ca/resource/webinar-introduction-to-outcome-mapping

[3] https://www.outcomemapping.ca/resource/outcome-mapping-a-method-for-tracking-behavioural-changes-in-development-programs

[4] http://www.focusintl.com/RBM083-2_Logical_Framework_Approach_and_Outcome_Mapping.pdf

[5] https://www.outcomemapping.ca/resource/webinar-introduction-to-outcome-mapping

[6] https://www.outcomemapping.ca/download/OM_English_final.pdf - Page 1


Evidence and persuasion


March 2018

By Manager, Katherine Rich

Why do economic arguments hold sway in public debate? I recently attended the thought-provoking Melbourne School of Government conference, A Crisis of Expertise? Legitimacy and the Challenge of Policymaking. In a panel discussion, economist Richard Denniss spoke about the disproportionate weighting given to economic evidence and its persuasive power in public debate. It got me thinking about why this is so.

In their simplest form, economic arguments appear easy to understand and are compelling. To illustrate this, Denniss related a story of his son asking him if he would take him to Disneyland. When Dad said ‘no’ and used an economic argument – ‘it’s too expensive, we can’t afford it’ – his son innately understood and, for the most part, accepted the decision. However, as Denniss pointed out, the concept of Disneyland not being affordable is really a value judgment rather than an objective fact. The reason Denniss’ family didn’t go to Disneyland was because they had other priorities to spend their money on.

Economic arguments, such as those made through cost-benefit analyses (CBAs), can seem objective and easy to understand even though they are not – with values concealed behind a veneer of expertise and a language that not everyone understands. I agree with Denniss’ suggestion that, rather than pretending a cost-benefit analysis is value neutral, advocates of particular causes should start from their value position and then make an economic case for their argument.

So what does all of this mean for evaluators when evaluative arguments are complex and can be difficult for non-evaluators to follow? We could leverage some of the same power of economic argument – make our evaluative judgements appear value neutral. However, in trying to make a holistic judgement about the merit and worth of a program, it would be problematic to use only one quantifiable metric like a cost-benefit ratio.

What we can do is bring together diverse stakeholders to first understand their perspectives and then develop a comprehensive set of criteria to assess value (see my recent post on this – a balanced approach to valuing in evaluation) and develop a logic model to clearly communicate what success will look like and how it will be measured. We can strengthen the persuasive power of these models, by drawing on social science research to develop and refine them.

When it comes to making economic arguments in evaluation, we can also look more broadly than cost-benefit analysis. Julian King’s recent publication, OPM’s approach to assessing Value for Money, sets out an approach to measuring value for money (VfM) that goes beyond blunt, readily quantifiable measures like CBA and acknowledges that some of the most valuable outcomes can be the hardest to quantify. It argues that good VfM assessments are clear about the value judgements being made. The approach uses an equity lens to capture not only the economy, effectiveness, cost effectiveness and efficiency of an intervention but also its reach to those most disadvantaged, acknowledging that this may be costlier than reaching moderately disadvantaged people but can have greater impact.

Economic arguments have power not because they are objective, but because they appear value neutral. As evaluators we can advocate for greater transparency of economic metrics and more nuanced approaches to VfM, and we can be explicit about how stakeholder values influence criteria and, thus, evaluative judgements.


Co-design as the reimagining, repositioning and redistribution of expertise


February 2018

By Jade Maloney

The idea that there might be a crisis of expertise in policymaking – a questioning of the role and legitimacy of expertise – is challenging for a public policy consultant. But, for an evaluator, it’s a given that evaluative evidence is only one piece in the policymaking puzzle. We might want it to have more weight, but we know that it must work in the context of politics and the democratic process.

So it was interesting to hear the various takes on the theme at Melbourne School of Government’s recent conference:  A Crisis of Expertise? Legitimacy and the Challenge of Policymaking.

Keynote Professor Sheila Jasanoff kicked off day one by calling into question the ‘deficit model of the public’ in the context of the rise of alternative facts. Lay people can evaluate complex information and have their own knowledge that should be valued; we need to find ways to engage them in the democratic and policymaking process. To get to the point where we can imagine alternatives, we also need to acknowledge power structures, bridge traditional binaries and speak across disciplines.

Several speakers at the conference recognised co-design as one of a suite of tools to engage citizens in policymaking processes. I presented on our growing use of this approach to help ensure policies and programs better address core problems by engaging end users in deep consideration of the problem and an iterative process of prototyping, testing and refining solutions.

At this point you may be asking where the ‘traditional’ experts are in this process. We’d say co-design does not represent the rejection of expertise but the reimagining, repositioning and redistribution of expertise. If done well, it can help to address the problems Darrin Durant raised: defining one type of expertise as bad, and closing off participation by technical fiat.

In a co-design process, end users are recognised as having their own expertise to bring to the table. Experts, in the traditional sense of the term, can be involved in the design process and help to refine the model based on evidence. Practitioners – whose own knowledge, as Brian Head pointed out, has sometimes been negated in the academic literature – can also contribute their practical knowledge of what’s needed, what works and what doesn’t.

This may be best illustrated by a case example. In a recent project with Amaze, the autism peak body in Victoria, we used a modified co-design approach to bring key stakeholders together to iteratively develop a strategy to improve educational and social outcomes for students with autism in the school system. This is certainly one of the complex issues about which, as Col Wight noted, there are always multiple perspectives. A co-design approach enabled us to recognise that and start to build a shared understanding among a group that included people with autism who provide peer support in schools, teacher and principal representatives, support staff and peak organisations.

We began by developing a root cause analysis. This is an analytical tool to identify the causal pathways that lead to a specific problem. The aim is to work back along each causal pathway toward the ‘root causes’ of the problem, so that these can be addressed. It sounds like a technical process, as one of my audience members pointed out, but actually it begins with a whiteboard, a marker and a conversation – asking individuals what they know about what underlies the issues they see.

To make sure we captured the range of perspectives on the system, we began with individual interviews with each stakeholder to draw their own map. We supplemented this with a scan of the literature and a review of the student experiences in the school system – identified in the Victorian Parliamentary Inquiry into services for people with Autism Spectrum Disorder. We then combined individual maps into a comprehensive map of the causal pathways to the problem and refined this iteratively with stakeholders through two workshops. Through discussion in a small group, stakeholders were able to understand each other’s legitimate perspectives.

Once we reached this shared understanding, stakeholders used the map to identify key points at which to intervene and the priority elements of a strategy to holistically address the problem.

From there, we worked together to develop a logic model and evaluation framework for the strategy. Again, these are technical concepts, but they can be cracked open through capacity building workshops, and doing so can build a shared understanding of what is being done and why.

In other projects, we and our clients are using co-design with people with dementia and with people who have a personal experience of anxiety, depression or suicide, or support someone who does.

Co-design might not suit every situation – and certainly not ones in which there is a predefined model – but we believe it has a lot of potential to enable participants to understand each other’s truths, break down binary thinking and collaboratively build solutions.

Thanks to the Melbourne School of Government for a thought-provoking few days.


Marrying evaluation and design for use


February 2018

By Melanie Darvodelsky

We love partnering with people who share our passion for supporting positive change. So we’re excited to be partnering with Jax Wechsler from Sticky Design Studios and Amelia Loye from engage2 in our evaluation of beyondblue’s blueVoices program, which brings together people who have a personal experience of anxiety, depression or suicide, or support someone who does, to inform policies and practice.

Marrying design, engagement and evaluation expertise will enable us to provide not only evaluation findings, but a clear direction for the future, which is backed by both the organisation and blueVoices members and supports our commitment to utilisation-focused evaluation.

As Jax explained at a workshop with our Sydney staff, co-design is not just running a stakeholder workshop. Design is iterative. It involves prototyping, testing and refining. Co-design is an approach to design that actively identifies and addresses the needs of all stakeholders in the process to help support an end product that is useful across the board.

When designing services, if you skip the vital step of conducting research to understand the world from the end-user’s perspective, what you come up with may be inappropriate and may not deliver the value it could.

Additionally, service design does not stop in the way that product design does. Implementation is ongoing and involves many people working together over periods of time. An idea for a tool that meets staff needs at the beginning of a project may no longer be useful even by the time the tool is fully developed, as both the project and staff involved may have moved on. So designers need to think about how their work can support an ongoing change process, if they want to make sustainable impact.

Through her research and project experiences, Jax has found that designers can support lasting change in contexts of innovation through ‘artefacts’ – visual representations and models. These include personas, journey maps, infographics, flow charts and videos. Artefacts act in a ‘scaffolding’ role for a program or organisation, for example, by persuading staff about why a change is needed, facilitating empathy between stakeholder groups, and providing a tool for sense-making. Artefacts – as ‘boundary objects’ – can also support staff from different disciplines to bridge the different languages they speak and collaborate, empowering them to co-deliver change.

You can read and watch more about Jax on her website or come to Social Design Sydney on Monday, 5 March 2018 from 6:00 pm to 8:30 pm in Ultimo to discuss whether co-design is the silver bullet we hope for. Register here.


Stretching your interview skills


February 2018

By Partner, Jade Maloney, and Consultant, Maria Koleth

Interviews and focus groups allow you to gather in-depth data on people’s experiences and understand what underlies the patterns in quantitative data. However, handling dominant voices and opening up space for divergent views and quiet types in focus groups can pose challenges for even experienced researchers. Recently, Partner, Jade Maloney, facilitated a workshop with researchers from the Australian Human Rights Commission to reflect on their practice and stretch their skills through scenario-based activities.

Here are our top five tips for successful interviews and focus groups:

  • Choose the right method for the information you need: While individual interviews are generally best when the subject matter is sensitive or you are interested in individual experiences, focus groups are great for capturing group dynamics and experiences. However, there’s also a need for pragmatism. If resourcing and time constraints prevent you from undertaking individual interviews, you can make focus groups work by specifically targeting your questions.
  • Start out well: How you start can make all the difference to how well an interview or focus group goes. Explain who you are and what your research is about. Let them ask you questions; you’re about to ask them a lot! In a group, establishing rules can set the foundation for positive interaction and provide you a reference point to return to if issues arise. Some key rules are making clear that there are no right or wrong answers, that we want to hear from everyone, that we should refrain from judging others’ points of view, and that we need to respect the confidence of the group.
  • Use a competency framework: Facilitators can use a competency framework to prepare for, rate and reflect on their skills and experience in focus groups and interviews. The ARTD competency framework, built over years of practice, specifies general competencies (e.g. being respectful and non-judgemental), competencies displayed during the interview (giving space and focusing), and higher-order skills (group management and opening up alternatives).
  • Play out scenarios: Despite the cliché that ‘nobody likes role plays’, playing out challenging interview and focus group situations can be a great way to try out different responses to tough situations you have come up against, so you can approach them differently next time, or to prepare for potentially challenging focus groups. It can also be fun! Thanks to Viv McWaters and Johnnie Moore from Creative Facilitation, we’ve learned that it helps to whittle a scenario down to a line, use a rapid-fire approach to test responses, and then reflect on the experience. Scenario testing can help interviewers get into the heads of their interviewees. It’s always important to remember that there’s no right or wrong when it comes to testing scenarios and that something that works in one research situation might not work again.
  • Find time to reflect: With the quick turn-around times and demanding reporting requirements of applied research environments, it can be difficult to take the time to systematically reflect as a team. Setting up both informal and formal opportunities for reflection on qualitative practice can help team members learn from each other’s wealth of experience.

Want to learn more? Speak to us about our interviewing skills workshops on 9373 9900.


Beyond programs? Is principles-focused evaluation what you’re looking for?


January 2018

By Jade Maloney, Partner

For several years now, I’ve been getting more and more involved in service design, review and reconceptualisation to respond to evolutions in the evidence base and the systems within which services operate. And, when I am designing an evaluation framework and strategy or conducting an evaluation, I tend not to be looking at programs, but at services that are operating within larger ecosystems, aiming to complement and to change other aspects of these systems in order to better support individuals and communities.

This isn’t surprising given that I am working in the Australian disability sector, which is currently undergoing significant transformation in the transition to the National Disability Insurance Scheme (NDIS). Programs are giving way to individualised funding plans that provide people with reasonable and necessary supports to achieve their goals. The future is person- rather than program-centred.

When designing and reconceptualising services in this context, it has been more feasible and appropriate to identify guiding principles, grounded in evidence, rather than prescriptive service models or 'best practice'.

But what happens when evaluating in this context, given that evaluation has traditionally been based around programs?

Fortunately, well-known evaluation theorist Michael Quinn Patton has been thinking this through. Evaluators, he has realised, are now often confronted with interventions into complex adaptive systems and principle-driven approaches, rather than programs with clear and measurable goals. In this context, a principles-focused evaluation approach may be appropriate.

As Patton explained in a recent webinar for the Tamarack Institute, principles-focused evaluation is an outgrowth of developmental evaluation, which he conceived as an approach to evaluating social interventions in complex and adaptive systems.

In a principles-focused evaluation, principles become the evaluand. Evaluators consider whether the identified principle/s are meaningful to the people they are supposed to guide, adhered to in practice, and support desired results.

These are important questions because the way some principles are constructed means they fail to provide clear guidance for behaviour, and because there can be a gap between rhetoric and reality. Patton has established the GUIDE framework so evaluators can determine whether identified principles provide meaningful guidance (G) and are useful (U), inspiring (I), developmentally adaptable (D), and evaluable (E).

I’m now looking forward to reading the books, so I can start using this approach more explicitly in my practice.


Building capacity for evaluative thinking


January 2018

By Jade Maloney, Partner

I reckon the right time to make resolutions isn't amidst the buzz of New Year's Eve, but when the fireworks are a dim echo.

So here goes. This year, I'm committing to championing and building capacity for evaluative thinking.

If we're to believe the hype that we're living in a post truth world, this may seem like a lost cause. But while many people source their information through the echo chambers of social media, we can take comfort that the Orwellian concept of alternative facts hasn't caught on.

Also, in our work in evaluation, we come across plenty of organisations and stakeholders with a commitment to collecting, reviewing and making decisions based on evidence. While there is often a gap between rhetoric and practice, evidence-based (or at least evidence-informed) policy is engrained in the lexicon of Western democracies.

The trouble is that evidence informed decision making can seem out of reach if evaluation is presented, in difficult to decipher jargon, as the remit of independent experts. (Of course, this is not the only trouble. In some cases it is that the commitment to evidence and evaluation is symbolic—to give an impression of legitimacy—but that's not the situation I'm thinking of here or one that I come across very often).

This is not to say that there is not real expertise involved in evaluation. But if we can't translate this into language and ways of working that all stakeholders can understand, and then bring them along on the journey, evaluators will be speaking into their own echo chamber.

And—as is clear from the literature on evaluation use (including my own study with Australasian Evaluation Society members)—if we don't involve stakeholders throughout an evaluation, then it's unlikely to be used either instrumentally or conceptually.

Focusing on building capacity to think evaluatively (rather than just capacity for evaluation) can help put informed decision making within reach.

This focus fits with the concept of process use (see Schwandt, 2015), which evidence shows can be linked to direct instrumental use of evaluation. It also supports sustainable outcomes from interactions between evaluators and stakeholders.

But what does building capacity for evaluative thinking mean in practice? For me, it means not only focusing on the task of the evaluation at hand or building capacity for evaluation activities, such as developing program logics and outcomes frameworks, but on engaging stakeholders in the practice of critical thinking that underlies evaluation.

As Schwandt (2015) describes it, critical thinking is a cognitive process as well as a set of dispositions, including being 'inquisitive, self-informed, trustful of reason, open- and fair-minded, flexible, honest in facing personal biases, willing to reconsider, diligent in pursuing relevant information, and persistent in seeking results that are as precise as the subject and circumstances of inquiry permit.' And its key application in evaluative practice is in weighing up the evidence and making value judgements.

We can crack open this process by engaging stakeholders in it. We can also translate the process into an equivalent in everyday life (for example, using value criteria, such as price, convenience, quality and ambience, to make a reasoned choice between different restaurants). This might even help people to understand how others come to different conclusions based on different value criteria.

The more often this happens, the less we may need to worry about echo chambers.