News & Blog

Taking a trauma-informed approach to research and evaluation

March 2018

By Maria Koleth

Last week, all of ARTD’s staff attended a challenging and informative training day on Trauma-informed Care and Practice with the Blue Knot Foundation. As a company that takes a strengths-based approach to research and evaluation with vulnerable populations, ensuring that our methods and instruments are trauma-informed is a key part of our process. The Blue Knot training renewed and updated our understanding of the types of trauma, their long-term embodied effects, and the five principles of trauma-informed practice. Some key ways we are putting these principles into practice in our research include:

  • Prioritising safety: Knowing how widespread the impacts of trauma are across the community, we make prioritising safety part of our standard practice. Our interviewers and focus group facilitators establish clear boundaries around their role and the focus of discussions. We also establish clear response protocols in case a participant becomes distressed, including referrals to other services if they need more support. The training also reminded us to continue to support our own staff, who are vicariously exposed to stories of trauma, and to ensure regular opportunities for them to debrief.
  • Being trustworthy: We set clear expectations when getting consent from participants and then we ensure that we follow through on what we said we would do. Embodying trustworthiness also involves getting to appointments on time and staying within the boundaries of the research. This is important to establishing trust and a space in which people feel comfortable to share their story.
  • Offering choice: Maximising the control that evaluation and research participants have and being flexible in accommodating their needs is important for trauma-informed practice. We work to offer participants as many choices as possible, from where interviews take place to whether they would like a support person to attend with them to which questions they choose to answer.
  • Taking a collaborative approach: Increasingly, we have been using collaborative approaches. These help to address the unequal relationships between researchers and participants by making research projects a collaborative space.
  • Empowering participants: The theme of empowerment runs through many of our projects and the projects we evaluate. We recognise that people who have experienced trauma have often had their experiences minimised or invalidated in the past, so it’s important that we express trust in their responses in interviews and focus groups, recognise their resilience, and emphasise the importance of their contributions to projects.

We would also like to thank 107 Projects for hosting us – their wonderful garden and sense of hospitality provided the most recuperative setting for our training day.

Outcome Mapping: unpacking the black box between outputs and impacts

March 2018

By Ruby Leahy Gatfield

My recent internship at the International Institute for Democracy and Electoral Assistance (International IDEA) in Stockholm raised a number of important questions for me about how to monitor and evaluate international development programs. Trying to demonstrate a program’s ‘impact’ can often feel overwhelming, given that development goals are long-term and complex processes dependent on a myriad of political, economic, cultural and other factors. And while the methods for measuring, monitoring and evaluating impact remain hotly contested in the development community, two key approaches stood out in my time at International IDEA.

The first is the logical framework approach. Anyone involved in planning, monitoring or evaluating international development programs will be familiar with ‘logframes’. Pioneered in the 1970s by USAID to demonstrate what donor money was achieving, logframes have proved a very useful tool for mapping out and thinking critically about how a program leads to results.

Logframes provide a line of sight on the causal links between a program’s activities, outputs, outcomes and ultimate impacts. They offer a sense of simplicity and structure in an otherwise complex environment, can be used to communicate intentions to stakeholders, enable standardised reporting on indicators, and allow independent monitoring and evaluation of results (among other benefits).[1]
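For readers who haven’t worked with logframes, a minimal, hypothetical sketch of the results chain they encode might look like the following. The program, statements and indicators are invented for illustration only.

```python
# A minimal, hypothetical logframe results chain (invented example, not a real program).
# Each level links to the one above it: activities produce outputs, outputs are expected
# to lead to outcomes, and outcomes to contribute to the ultimate impact.
logframe = {
    "impact": {
        "statement": "Increased civic participation in local decision-making",
        "indicators": ["voter turnout in local elections"],
    },
    "outcomes": [
        {
            "statement": "Community groups routinely engage with council planning processes",
            "indicators": ["number of submissions made by trained groups"],
        }
    ],
    "outputs": [
        {
            "statement": "120 community members trained in making planning submissions",
            "indicators": ["participants completing training"],
        }
    ],
    "activities": [
        "Develop training materials",
        "Run workshops in three local government areas",
    ],
}

# Reading the chain from the bottom up gives the 'line of sight' from activities to impact.
for level in ["activities", "outputs", "outcomes", "impact"]:
    print(level, "->", logframe[level])
```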

But what do outcomes really look like? How do the activities and outputs of a program lead to development impacts? To unpack this ‘black box’ in the logframe, outcome mapping (OM) has emerged as an increasingly popular methodology.

What is outcome mapping?

OM recognises that development programs are all about social change, and that social change is complex, unstable, non-linear, two-way, incremental, cumulative and often, beyond our control. Conducting evaluations in these open and changing environments poses a myriad of challenges, from defining success and deciding when to evaluate, to capturing emerging objectives and establishing cause and effect.[2]

To tackle these challenges, OM provides a framework for unpacking a program’s theory of change and collecting data on outcomes as they unfold. Importantly, it redefines ‘outcomes’ as changes in behaviour—the actions, activities and relationships—of the stakeholders directly in contact with the program (known as ‘boundary partners’).[3] This concept of boundary partners is fundamental to OM but not always present in logframes and, as a result, the two approaches often produce very different outcome statements. According to OM, behavioural change of boundary partners is critical to moving up the results chain.

OM also recognises that, in reality, programs have limited control over whether their ultimate goal is achieved, given the range of social, political, environmental, economic and other factors that support or hinder intended outcomes. Rather than claiming attribution of a development impact, OM claims contribution to it.[4] It teaches that programs have control over their inputs, activities and outputs; influence over their outcomes; and simply an interest in the ultimate impact. In short, OM focuses on a program’s sphere of influence.[5]

In practice, OM offers 12 tools for planning, monitoring and evaluating outcomes, which can be adapted to suit individual contexts. These tools are intended to help stakeholders identify and think critically about:

  • why the program has ultimately been established
  • who the program has direct influence over (who the boundary partners are)
  • what changes in behaviour (outcomes) the program would ‘expect’, ‘like’ and ‘love’ to see from its boundary partners
  • what the qualitative and quantitative indicators (‘progress markers’) are for these outcomes
  • what strategies are in place to influence each boundary partner
  • which monitoring tools the program should use
  • what the evaluation priorities are for an evaluation plan.

It helps to build a credible picture of how a program contributes to results, putting people at the centre of development and recognising the complex and non-linear nature of social change.
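To make the ideas of boundary partners and graded progress markers more concrete, here is a minimal, hypothetical sketch. The partner, markers and strategies are invented; OM practitioners typically use their own planning and monitoring templates rather than code.

```python
# Hypothetical sketch of an outcome mapping record for one boundary partner.
# Outcomes are expressed as changes in the partner's behaviour, graded as
# 'expect to see', 'like to see' and 'love to see' progress markers.
boundary_partner = {
    "name": "District electoral commission staff",
    "outcome_challenge": "Staff proactively share electoral data with civil society groups",
    "progress_markers": {
        "expect_to_see": ["Staff attend joint planning meetings"],
        "like_to_see": ["Staff publish electoral data on a regular schedule"],
        "love_to_see": ["Staff co-design data releases with civil society groups"],
    },
    "strategies": ["Technical training", "Facilitated dialogues with civil society"],
}

def report_progress(partner, observed):
    """Summarise which progress markers have been observed so far (contribution, not attribution)."""
    for grade, markers in partner["progress_markers"].items():
        achieved = [m for m in markers if m in observed]
        print(f"{grade}: {len(achieved)}/{len(markers)} markers observed")

report_progress(boundary_partner, observed={"Staff attend joint planning meetings"})
```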

So while the logframe approach remains ingrained in most development agencies, practitioners should consider the value of an OM approach. As put by Michael Quinn Patton, OM affirms that ‘being attentive along the journey is as important as, and critical to, arriving at the destination’.[6]

To learn more about OM, visit the Outcome Mapping Learning Community, a one-stop shop for all things OM.

Evidence and persuasion

March 2018

By Katherine Rich, Manager

Why do economic arguments hold sway in public debate? I recently attended the Melbourne School of Government’s thought-provoking conference, A Crisis of Expertise? Legitimacy and the Challenge of Policymaking. In a panel discussion, economist Richard Denniss spoke about the disproportionate weighting given to economic evidence and its persuasive power in public debate. It got me thinking about why this is so.

In their simplest form, economic arguments appear easy to understand and are compelling. To illustrate this, Denniss related a story of his son asking him if he would take him to Disneyland. When Dad said ‘no’ and used an economic argument – ‘it’s too expensive, we can’t afford it’ – his son innately understood and, for the most part, accepted the decision. However, as Denniss pointed out, the concept of Disneyland not being affordable is really a value judgment rather than an objective fact. The reason Denniss’ family didn’t go to Disneyland was because they had other priorities to spend their money on.

Economic arguments, such as those made through the use of cost-benefit analyses (CBAs), can seem objective and easy to understand even though they are not – with values concealed behind a veneer of expertise and a language that not everyone understands. I agree with Denniss’ suggestion that rather than pretending a cost-benefit analysis is value neutral, advocates of particular causes should start from their value position and then make an economic case for their argument.
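To illustrate how value judgements can hide inside an apparently objective metric, here is a minimal, hypothetical benefit-cost sketch. All figures, discount rates and benefit categories are invented; the point is that seemingly technical choices (which benefits to monetise, which discount rate to apply) change the ratio, and those choices are value judgements.

```python
# Hypothetical benefit-cost sketch: the same program under two sets of 'technical' assumptions.
def npv(cashflows, rate):
    """Net present value of a list of annual cashflows at a given discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

costs = [100_000, 20_000, 20_000]          # invented program costs over three years
monetised_benefits = [0, 60_000, 90_000]   # benefits someone chose to put a dollar value on
wellbeing_benefits = [0, 30_000, 40_000]   # harder-to-quantify benefits, often left out

# Assumption set A: low discount rate, wellbeing benefits included
bcr_a = npv([b + w for b, w in zip(monetised_benefits, wellbeing_benefits)], 0.03) / npv(costs, 0.03)

# Assumption set B: higher discount rate, wellbeing benefits excluded
bcr_b = npv(monetised_benefits, 0.07) / npv(costs, 0.07)

print(f"Benefit-cost ratio under assumptions A: {bcr_a:.2f}")
print(f"Benefit-cost ratio under assumptions B: {bcr_b:.2f}")
```

Under one set of assumptions the program ‘passes’ a benefit-cost test; under the other it does not, even though nothing about the program itself has changed.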

So what does all of this mean for evaluators when evaluative arguments are complex and can be difficult for non-evaluators to follow? We could leverage some of the same power of economic argument – make our evaluative judgements appear value neutral. However, in trying to make a holistic judgement about the merit and worth of a program, it would be problematic to use only one quantifiable metric like a cost-benefit ratio.

What we can do is bring together diverse stakeholders to first understand their perspectives, then develop a comprehensive set of criteria to assess value (see my recent post on this – a balanced approach to valuing in evaluation) and develop a logic model to clearly communicate what success will look like and how it will be measured. We can strengthen the persuasive power of these models by drawing on social science research to develop and refine them.

When it comes to making economic arguments in evaluation, we can also look more broadly than cost-benefit analysis. Julian King’s recent publication OPM’s approach to assessing Value for Money sets out an approach to measuring value for money (VfM) that goes beyond using blunt, readily quantifiable measures like CBA, and acknowledges that some of the most valuable outcomes can be the hardest to quantify. It claims that good VfM assessments are clear about the value judgements being made. The approach uses an equity lens to capture not only the economy, effectiveness, cost-effectiveness and efficiency of an intervention but also its reach to those most disadvantaged, acknowledging that this may be costlier than reaching moderately disadvantaged people but can have greater impact.

Economic arguments have power not because they are objective, but because they appear value neutral. As evaluators we can advocate for greater transparency of economic metrics and more nuanced approaches to VfM, and we can be explicit about how stakeholder values influence criteria and, thus, evaluative judgements.

Co-design as the reimagining, repositioning and redistribution of expertise

February 2018

By Jade Maloney

The idea that there might be a crisis of expertise in policymaking – a questioning of the role and legitimacy of expertise – is challenging for a public policy consultant. But, for an evaluator, it’s a given that evaluative evidence is only one piece in the policymaking puzzle. We might want it to have more weight, but we know that it must work in the context of politics and the democratic process.

So it was interesting to hear the various takes on the theme at the Melbourne School of Government’s recent conference, A Crisis of Expertise? Legitimacy and the Challenge of Policymaking.

Keynote Professor Sheila Jasanoff kicked off day one by calling into question the ‘deficit model of the public’ in the context of the rise of alternative facts. Lay people can evaluate complex information and have their own knowledge that should be valued; we need to find ways to engage them in the democratic and policymaking process. To get to the point where we can imagine alternatives, we also need to acknowledge power structures, bridge traditional binaries and speak across disciplines.

Several speakers at the conference recognised co-design as one of a suite of tools to engage citizens in policymaking processes. I presented on our growing use of this approach to help ensure policies and programs better address core problems by engaging end users in deep consideration of the problem and an iterative process of prototyping, testing and refining solutions.

At this point you may be asking where the ‘traditional’ experts are in this process. We’d say co-design does not represent the rejection of expertise but the reimagining, repositioning and redistribution of expertise. If done well, it can help to address the problems Darrin Durant raised: defining one type of expertise as bad, and the closing of participation by technical fiat.

In a co-design process, end users are recognised as having their own expertise to bring to the table. Experts, in the traditional sense of the term, can be involved in the design process and help to refine the model based on evidence. Practitioners – whose own knowledge has sometimes been negated in the academic literature, as Brian Head pointed out – can also contribute their practical knowledge of what’s needed, what works and what doesn’t.

This may be best illustrated by a case example. In a recent project with Amaze, the autism peak body in Victoria, we used a modified co-design approach to bring key stakeholders together to iteratively develop a strategy to improve educational and social outcomes for students with autism in the school system. This is certainly one of the complex issues about which, as Col Wight noted, there are always multiple perspectives. A co-design approach enabled us to recognise that and start to build a shared understanding among a group that included people with autism who provide peer support in schools, teacher and principal representatives, support staff and peak organisations.

We began by developing a root cause analysis. This is an analytical tool to identify the causal pathways that lead to a specific problem. The aim is to work back along each causal pathway toward the ‘root causes’ of the problem, so that these can be addressed. It sounds like a technical process, as one of my audience members pointed out, but it actually begins with a whiteboard, a marker and a conversation – asking individuals what they know about what underlies the issues they see.

To make sure we captured the range of perspectives on the system, we began with individual interviews with each stakeholder to draw their own map. We supplemented this with a scan of the literature and a review of the student experiences in the school system – identified in the Victorian Parliamentary Inquiry into services for people with Autism Spectrum Disorder. We then combined individual maps into a comprehensive map of the causal pathways to the problem and refined this iteratively with stakeholders through two workshops. Through discussion in a small group, stakeholders were able to understand each other’s legitimate perspectives.
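For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of how a combined cause map can be represented and walked back to candidate root causes. The causes listed are invented placeholders, not findings from the Amaze project.

```python
# Hypothetical root cause map: each problem or contributing factor maps to the factors
# stakeholders identified as lying beneath it. Walking the map from the top-level problem
# until no further causes are found surfaces the candidate root causes.
cause_map = {
    "poor educational outcomes": ["unmet support needs in class", "frequent exclusions"],
    "unmet support needs in class": ["limited teacher training"],
    "frequent exclusions": ["behaviour misread as misconduct"],
    "behaviour misread as misconduct": ["limited teacher training"],
    "limited teacher training": [],  # no further causes identified: a candidate root cause
}

def root_causes(problem, causes):
    """Return the factors reached from the problem that have no identified underlying causes."""
    roots, stack, seen = set(), [problem], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        children = causes.get(node, [])
        if not children:
            roots.add(node)
        stack.extend(children)
    return roots

print(root_causes("poor educational outcomes", cause_map))
```

In practice the map stays on the whiteboard or in a diagram; the value of setting it out in a structured form is simply that everyone can see, and challenge, the same set of causal pathways.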

Once we reached this shared understanding, stakeholders used the map to identify key points at which to intervene and the priority elements of a strategy to holistically address the problem.

From there, we worked together to develop a logic model and evaluation framework for the strategy. Again, these are technical concepts, but they can be cracked open through capacity building workshops, and doing so can build a shared understanding of what is being done and why.

In other projects, we and our clients are using co-design with people with dementia and with people who have a personal experience of anxiety, depression or suicide, or support someone who does.

Co-design might not suit every situation – and certainly not ones in which there is a predefined model – but we believe it has a lot of potential to enable participants to understand each other’s truths, break down binary thinking and collaboratively build solutions.

Thanks to the Melbourne School of Government for a thought-provoking few days.

Marrying evaluation and design for use

February 2018

By Melanie Darvodelsky

We love partnering with people who share our passion for supporting positive change. So we’re excited to be partnering with Jax Wechsler from Sticky Design Studios and Amelia Loye from engage2 in our evaluation of beyondblue’s blueVoices program, which brings together people who have a personal experience of anxiety, depression or suicide, or support someone who does, to inform policies and practice.

Marrying design, engagement and evaluation expertise will enable us to provide not only evaluation findings, but a clear direction for the future, which is backed by both the organisation and blueVoices members and supports our commitment to utilisation-focused evaluation.

As Jax explained at a workshop with our Sydney staff, co-design is not just running a stakeholder workshop. Design is iterative. It involves prototyping, testing and refining. Co-design is an approach to design that actively identifies and addresses the needs of all stakeholders in the process to help support an end product that is useful across the board.

When designing services, if you skip the vital step of conducting research to understand the world from the end-user’s perspective, what you come up with may be inappropriate and may not deliver the value it could.

Additionally, service design does not stop in the way that product design does. Implementation is ongoing and involves many people working together over periods of time. An idea for a tool that meets staff needs at the beginning of a project may no longer be useful even by the time the tool is fully developed, as both the project and staff involved may have moved on. So designers need to think about how their work can support an ongoing change process, if they want to make sustainable impact.

Through her research and project experiences, Jax has found that designers can support lasting change in contexts of innovation through ‘artefacts’ – visual representations and models. These include personas, journey maps, infographics, flow charts and videos. Artefacts act in a ‘scaffolding’ role for a program or organisation, for example, by persuading staff about why a change is needed, facilitating empathy between stakeholder groups, and providing a tool for sense-making. Artefacts – as ‘boundary objects’ – can also support staff from different disciplines to bridge the different languages they speak and collaborate, empowering them to co-deliver change.

You can read and watch more about Jax on her website or come to Social Design Sydney on Monday, 5 March 2018 from 6:00 pm to 8:30 pm in Ultimo to discuss whether co-design is the silver bullet we hope for. Register here.

Stretching your interview skills

February 2018

By Jade Maloney, Partner, and Maria Koleth, Consultant

Interviews and focus groups allow you to gather in-depth data on people’s experiences and understand what underlies the patterns in quantitative data. However, handling dominant voices and opening up space for divergent views and quiet types in focus groups can pose challenges for even experienced researchers. Recently, Partner Jade Maloney facilitated a workshop with researchers from the Australian Human Rights Commission to reflect on their practice and stretch their skills through scenario-based activities.

Here are our top five tips for successful interviews and focus groups:

  • Choose the right method for the information you need: While individual interviews are generally best when the subject matter is sensitive or you are interested in individual experiences, focus groups are great for capturing group dynamics and experiences. However, there’s also a need for pragmatism. If resourcing and time constraints prevent you from undertaking individual interviews, you can make focus groups work by specifically targeting your questions.
  • Start out well: How you start can make all the difference to how well an interview or focus group goes. Explain who you are and what your research is about. Let them ask you questions; you’re about to ask them a lot! In a group, establishing rules can set the foundation for positive interaction and provide you a reference point to return to if issues arise. Some key rules are making clear that there are no right or wrong answers, that we want to hear from everyone, that we should refrain from judging others’ points of view, and that we need to respect the confidence of the group.
  • Use a competency framework: Facilitators can use a competency framework to prepare for, rate and reflect on their skills and experience in focus groups and interviews. The ARTD competency framework, built over years of practice, specifies general competencies (e.g. being respectful and non-judgemental), competencies displayed during the interview (giving space and focusing), and higher-order skills (group management and opening up alternatives).
  • Play out scenarios: Despite the cliché that ‘nobody likes roleplays’, playing out challenging interview and focus group situations can be a great way to try out different responses to tough situations you have come up against, so you can approach them differently next time, or to prepare for potentially challenging focus groups. It can also be fun! Thanks to Viv McWaters and Johnnie Moore from Creative Facilitation, we’ve learned that it helps to whittle a scenario down to a line, use a rapid-fire approach to test responses, and then reflect on the experience. Scenario testing can help interviewers get into the head of their interviewees. It’s always important to remember that there’s no right or wrong when it comes to testing scenarios and that something that works in one research situation might not work again.
  • Find time to reflect: With the quick turn-around times and demanding reporting requirements of applied research environments, it can be difficult to take the time to systematically reflect as a team. Setting up both informal and formal opportunities for reflection on qualitative practice can help team members learn from each other’s wealth of experience.

Want to learn more? Speak to us about our interviewing skills workshops on 9373 9900.

Beyond programs? Is principles-focused evaluation what you’re looking for?

January 2018

By Jade Maloney, Partner

For several years now, I’ve been getting more and more involved in service design, review and reconceptualisation to respond to evolutions in the evidence base and the systems within which services operate. And, when I am designing an evaluation framework and strategy or conducting an evaluation, I tend not to be looking at programs, but at services that are operating within larger ecosystems, aiming to complement and to change other aspects of these systems in order to better support individuals and communities.

This isn’t surprising given that I am working in the Australian disability sector, which is currently undergoing significant transformation in the transition to the National Disability Insurance Scheme (NDIS). Programs are giving way to individualised funding plans that provide people with reasonable and necessary supports to achieve their goals. The future is person- rather than program-centred.

When designing and reconceptualising services in this context, it has been more feasible and appropriate to identify guiding principles, grounded in evidence, rather than prescriptive service models or 'best practice'.

But what happens when evaluating in this context, given that evaluation has traditionally been based around programs?

Fortunately, well-known evaluation theorist Michael Quinn Patton has been thinking this through. Evaluators, he has realised, are now often confronted with interventions into complex adaptive systems and principle-driven approaches, rather than programs with clear and measurable goals. In this context, a principles-focused evaluation approach may be appropriate.

As Patton explained in a recent webinar for the Tamarack Institute, principles-focused evaluation is an outgrowth of developmental evaluation, which he conceived as an approach to evaluating social interventions in complex and adaptive systems.

In a principles-focused evaluation, principles become the evaluand. Evaluators consider whether the identified principle/s are meaningful to the people they are supposed to guide, adhered to in practice, and support desired results.

These are important questions because the way some principles are constructed means they fail to provide clear guidance for behaviour, and because there can be a gap between rhetoric and reality. Patton has established the GUIDE framework so evaluators can determine whether identified principles provide meaningful guidance (G) and are useful (U), inspiring (I), developmentally adaptable (D), and evaluable (E).
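As a rough illustration of what such an assessment might look like in practice, here is a minimal sketch. The principle and the evaluative notes are invented for illustration and are not drawn from Patton’s own materials.

```python
# Hypothetical GUIDE check for a single guiding principle. The five criteria follow the
# GUIDE framework described above; the principle and notes are invented examples.
principle = "Tailor supports to each person's goals rather than to service categories"

guide_assessment = {
    "Guidance":      "Clear enough to tell staff what to do differently in a planning meeting",
    "Useful":        "Staff report drawing on it when plans and service categories conflict",
    "Inspiring":     "Connects day-to-day decisions to the purpose of person-centred support",
    "Developmental": "Can be adapted as the NDIS and local service systems change",
    "Evaluable":     "Adherence can be checked by reviewing a sample of support plans",
}

print(f"Principle: {principle}")
for criterion, note in guide_assessment.items():
    print(f"{criterion}: {note}")
```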

I’m now looking forward to reading the books, so I can start using this approach more explicitly in my practice.

Building capacity for evaluative thinking

January 2018

By Jade Maloney, Partner

I reckon the right time to make resolutions isn't amidst the buzz of New Year's Eve, but when the fireworks are a dim echo.

So here goes. This year, I'm committing to championing and building capacity for evaluative thinking.

If we're to believe the hype that we're living in a post truth world, this may seem like a lost cause. But while many people source their information through the echo chambers of social media, we can take comfort that the Orwellian concept of alternative facts hasn't caught on.

Also, in our work in evaluation, we come across plenty of organisations and stakeholders with a commitment to collecting, reviewing and making decisions based on evidence. While there is often a gap between rhetoric and practice, evidence-based (or at least evidence-informed) policy is ingrained in the lexicon of Western democracies.

The trouble is that evidence-informed decision making can seem out of reach if evaluation is presented, in difficult-to-decipher jargon, as the remit of independent experts. (Of course, this is not the only trouble. In some cases the commitment to evidence and evaluation is symbolic—to give an impression of legitimacy—but that’s not the situation I’m thinking of here or one that I come across very often.)

This is not to say that there is not real expertise involved in evaluation. But if we can't translate this into language and ways of working that all stakeholders can understand, and then bring them along on the journey, evaluators will be speaking into their own echo chamber.

And—as is clear from the literature on evaluation use (including my own study with Australasian Evaluation Society members)—if we don't involve stakeholders throughout an evaluation, then it's unlikely to be used either instrumentally or conceptually.

Focusing on building capacity to think evaluatively (rather than just capacity for evaluation) can help put informed decision making within reach.

This focus fits with the concept of process use (see Schwandt, 2015), which evidence shows can be linked to direct instrumental use of evaluation. It also supports sustainable outcomes from interactions between evaluators and stakeholders.

But what does building capacity for evaluative thinking mean in practice? For me, it means not only focusing on the task of the evaluation at hand or building capacity for evaluation activities, such as developing program logics and outcomes frameworks, but on engaging stakeholders in the practice of critical thinking that underlies evaluation.

As Schwandt (2015) describes it, critical thinking is a cognitive process as well as a set of dispositions, including being 'inquisitive, self-informed, trustful of reason, open- and fair-minded, flexible, honest in facing personal biases, willing to reconsider, diligent in pursuing relevant information, and persistent in seeking results that are as precise as the subject and circumstances of inquiry permit.' And its key application in evaluative practice is in weighing up the evidence and making value judgements.

We can crack open this process by engaging stakeholders in it. We can also translate the process into an equivalent in everyday life (for example, using value criteria, such as price, convenience, quality and ambience, to make a reasoned choice between different restaurants). This might even help people to understand how others come to different conclusions based on different value criteria.
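A minimal, invented version of that everyday equivalent might look like the sketch below: explicit criteria, explicit weights and a transparent way of seeing how different weightings lead different people to different, equally reasoned conclusions.

```python
# Hypothetical everyday example of evaluative thinking: scoring restaurants against
# explicit value criteria. The restaurants, scores and weights are all invented.
criteria_weights = {"price": 0.3, "convenience": 0.2, "quality": 0.35, "ambience": 0.15}

restaurants = {
    "Corner Bistro": {"price": 3, "convenience": 5, "quality": 3, "ambience": 4},
    "Harbour View": {"price": 2, "convenience": 2, "quality": 5, "ambience": 5},
}

def weighted_score(scores, weights):
    """Combine criterion scores (1-5) into a single weighted judgement."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for name, scores in restaurants.items():
    print(f"{name}: {weighted_score(scores, criteria_weights):.2f}")

# Someone who weights price more heavily than quality may reasonably reach the
# opposite conclusion, which is the point about differing value criteria.
```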

The more often this happens, the less we may need to worry about echo chambers.