News & Blog

Marrying evaluation and design for use

February 2018

By Melanie Darvodelsky

We love partnering with people who share our passion for supporting positive change. So we’re excited to be working with Jax Wechsler from Sticky Design Studios and Amelia Loye from engage2 on our evaluation of beyondblue’s blueVoices program, which brings together people who have a personal experience of anxiety, depression or suicide, or who support someone who does, to inform policies and practice.

Marrying design, engagement and evaluation expertise will enable us to provide not only evaluation findings but also a clear direction for the future that is backed by both the organisation and blueVoices members. It also supports our commitment to utilisation-focused evaluation.

As Jax explained at a workshop with our Sydney staff, co-design is not just running a stakeholder workshop. Design is iterative: it involves prototyping, testing and refining. Co-design is an approach that actively identifies and addresses the needs of all stakeholders throughout the process, so that the end product is useful across the board.

When designing services, if you skip the vital step of conducting research to understand the world from the end-user’s perspective, what you come up with may be inappropriate and may not deliver the value it could.

Additionally, service design does not stop in the way that product design does. Implementation is ongoing and involves many people working together over time. An idea for a tool that meets staff needs at the beginning of a project may no longer be useful by the time the tool is fully developed, as both the project and the staff involved may have moved on. So designers need to think about how their work can support an ongoing change process if they want to make a sustainable impact.

Through her research and project experiences, Jax has found that designers can support lasting change in contexts of innovation through ‘artefacts’ – visual representations and models such as personas, journey maps, infographics, flow charts and videos. Artefacts play a ‘scaffolding’ role for a program or organisation, for example, by persuading staff that a change is needed, facilitating empathy between stakeholder groups, and providing a tool for sense-making. Artefacts – as ‘boundary objects’ – can also help staff from different disciplines bridge the different languages they speak and collaborate, empowering them to co-deliver change.

You can read and watch more about Jax on her website or come to Social Design Sydney on Monday, 5 March 2018 from 6:00 pm to 8:30 pm in Ultimo to discuss whether co-design is the silver bullet we hope for. Register here.


Stretching your interview skills

February 2018

By Jade Maloney, Partner, and Maria Koleth, Consultant

Interviews and focus groups allow you to gather in-depth data on people’s experiences and understand what underlies the patterns in quantitative data. However, handling dominant voices and opening up space for divergent views and quiet types in focus groups can pose challenges for even experienced researchers. Recently, Partner Jade Maloney facilitated a workshop with researchers from the Australian Human Rights Commission to reflect on their practice and stretch their skills through scenario-based activities.

Here are our top five tips for successful interviews and focus groups:

  • Choose the right method for the information you need: While individual interviews are generally best when the subject matter is sensitive or you are interested in individual experiences, focus groups are great for capturing group dynamics and experiences. However, there’s also a need for pragmatism. If resourcing and time constraints prevent you from undertaking individual interviews, you can make focus groups work by specifically targeting your questions.
  • Start out well: How you start can make all the difference to how well an interview or focus group goes. Explain who you are and what your research is about. Let them ask you questions; you’re about to ask them a lot! In a group, establishing ground rules can set the foundation for positive interaction and give you a reference point to return to if issues arise. Some key rules are making clear that there are no right or wrong answers, that we want to hear from everyone, that we should refrain from judging others’ points of view, and that we need to respect the confidentiality of the group.
  • Use a competency framework: Facilitators can use a competency framework to prepare for, rate and reflect on their skills and experience in focus groups and interviews. The ARTD competency framework, built over years of practice, specifies general competencies (e.g. being respectful and non-judgemental), competencies displayed during the interview (giving space and focusing), and higher-order skills (group management and opening up alternatives).
  • Play out scenarios: Despite the cliché that ‘nobody likes roleplays’, playing out challenging interview and focus group situations can be a great way to try out different responses to tough situations you have come up against, so you can approach them differently next time, or to prepare for potentially challenging focus groups. It can also be fun! Thanks to Viv McWaters and Johnnie Moore from Creative Facilitation, we’ve learned that it helps to whittle a scenario down to a line, use a rapid-fire approach to test responses, and then reflect on the experience. Scenario testing can help interviewers get into the head of their interviewees. It’s always important to remember that there’s no right or wrong when it comes to testing scenarios and that something that works in one research situation might not work again.
  • Find time to reflect: With the quick turn-around times and demanding reporting requirements of applied research environments, it can be difficult to take the time to systematically reflect as a team. Setting up both informal and formal opportunities for reflection on qualitative practice can help team members learn from each other’s wealth of experience.

Want to learn more? Speak to us about our interviewing skills workshops on 9373 9900.


Beyond programs? Is principles-focused evaluation what you’re looking for?

January 2018

By Jade Maloney, Partner

For several years now, I’ve been increasingly involved in service design, review and reconceptualisation in response to evolutions in the evidence base and the systems within which services operate. And, when I am designing an evaluation framework and strategy or conducting an evaluation, I tend not to be looking at programs, but at services operating within larger ecosystems, aiming to complement and change other aspects of these systems in order to better support individuals and communities.

This isn’t surprising given that I am working in the Australian disability sector, which is currently undergoing significant transformation in the transition to the National Disability Insurance Scheme (NDIS). Programs are giving way to individualised funding plans that provide people with reasonable and necessary supports to achieve their goals. The future is person- rather than program-centred.

When designing and reconceptualising services in this context, it has been more feasible and appropriate to identify guiding principles, grounded in evidence, rather than prescriptive service models or 'best practice'.

But what happens when evaluating in this context, given that evaluation has traditionally been based around programs?

Fortunately, well-known evaluation theorist Michael Quinn Patton has been thinking this through. Evaluators, he has realised, are now often confronted with interventions in complex adaptive systems and principle-driven approaches, rather than programs with clear and measurable goals. In this context, a principles-focused evaluation approach may be appropriate.

As Patton explained in a recent webinar for the Tamarack Institute, principles-focused evaluation is an outgrowth of developmental evaluation, which he conceived as an approach to evaluating social interventions in complex and adaptive systems.

In a principles-focused evaluation, principles become the evaluand. Evaluators consider whether the identified principles are meaningful to the people they are supposed to guide, whether they are adhered to in practice, and whether they support desired results.

These are important questions because the way some principles are constructed means they fail to provide clear guidance for behaviour, and because there can be a gap between rhetoric and reality. Patton has established the GUIDE framework so evaluators can determine whether identified principles provide meaningful guidance (G) and are useful (U), inspiring (I), developmentally adaptable (D), and evaluable (E).

I’m now looking forward to reading the books, so I can start using this approach more explicitly in my practice.


Building capacity for evaluative thinking

January 2018

By Jade Maloney, Partner

I reckon the right time to make resolutions isn't amidst the buzz of New Year's Eve, but when the fireworks are a dim echo.

So here goes. This year, I'm committing to championing and building capacity for evaluative thinking.

If we’re to believe the hype that we’re living in a post-truth world, this may seem like a lost cause. But while many people source their information through the echo chambers of social media, we can take comfort that the Orwellian concept of ‘alternative facts’ hasn’t caught on.

Also, in our work in evaluation, we come across plenty of organisations and stakeholders with a commitment to collecting, reviewing and making decisions based on evidence. While there is often a gap between rhetoric and practice, evidence-based (or at least evidence-informed) policy is ingrained in the lexicon of Western democracies.

The trouble is that evidence-informed decision making can seem out of reach if evaluation is presented, in difficult-to-decipher jargon, as the remit of independent experts. (Of course, this is not the only trouble. In some cases the commitment to evidence and evaluation is symbolic, intended to give an impression of legitimacy, but that’s not the situation I’m thinking of here or one that I come across very often.)

This is not to say that there is no real expertise involved in evaluation. But if we can’t translate this expertise into language and ways of working that all stakeholders can understand, and then bring them along on the journey, evaluators will be speaking into their own echo chamber.

And—as is clear from the literature on evaluation use (including my own study with Australasian Evaluation Society members)—if we don't involve stakeholders throughout an evaluation, then it's unlikely to be used either instrumentally or conceptually.

Focusing on building capacity to think evaluatively (rather than just capacity for evaluation) can help put informed decision making within reach.

This focus fits with the concept of process use (see Schwandt, 2015), which evidence shows can be linked to direct instrumental use of evaluation. It also supports sustainable outcomes from interactions between evaluators and stakeholders.

But what does building capacity for evaluative thinking mean in practice? For me, it means not only focusing on the task of the evaluation at hand, or on building capacity for evaluation activities such as developing program logics and outcomes frameworks, but also on engaging stakeholders in the practice of critical thinking that underlies evaluation.

As Schwandt (2015) describes it, critical thinking is a cognitive process as well as a set of dispositions, including being 'inquisitive, self-informed, trustful of reason, open- and fair-minded, flexible, honest in facing personal biases, willing to reconsider, diligent in pursuing relevant information, and persistent in seeking results that are as precise as the subject and circumstances of inquiry permit.' And its key application in evaluative practice is in weighing up the evidence and making value judgements.

We can crack open this process by engaging stakeholders in it. We can also translate the process into an equivalent in everyday life (for example, using value criteria such as price, convenience, quality and ambience to make a reasoned choice between different restaurants). This might even help people to understand how others come to different conclusions based on different value criteria.

The more often this happens, the less we may need to worry about echo chambers.