By Alexandra Lorigan
‘Agile’ has emerged as one of the latest buzzwords in government departments. But what does it mean to be agile, and can we do more agile evaluation?
On Wednesday 2 May, Florent Gomez from the NSW Department of Finance and ARTD Partner Jade Maloney delivered a free lunchtime seminar in Sydney organised by the Australasian Evaluation Society (AES).
Drawing on ideas from an article by Caroline Heider at the World Bank, Florent introduced the Agile project management methodology, and participants then discussed how this could apply to evaluation, if at all.
So, what does it mean to be ‘agile’ and is there a place for it in evaluation?
Agile originated in the IT world as a project management methodology that uses short development cycles. In 2001, 17 software developers formalised the approach into a set of 12 key principles known as the Agile Manifesto. From the first two principles, it is clear that at its core, the Agile approach is customer-centred, deeply collaborative and constantly adapting. This is in contrast to the traditional ‘waterfall’ approach to IT project management, where the project plan is designed at the outset and then followed in sequence, with little flexibility for customer input.
Another key aspect of the Agile approach is its commitment to speed and efficiency, which Moira Alexander highlights in her article, ‘Agile project management: A comprehensive guide’. According to Alexander, the desire for rapid adaptation and optimal design requires both simplicity and a high level of self-organisation and accountability within teams.
Though the Agile methodology originated in the software industry and continues to boast an adoption rate of 23%, it has since been used by a number of other key industries. Among these is government, which uses the methodology on roughly 5% of its projects.
Florent became interested in the concept when he joined a government agency, where many projects were delivered based on the Agile methodology and the ‘A’ word was heard everywhere. In his new role as internal Evaluation Manager, the expectation was also to evaluate these projects in an agile way, meaning, within very short timeframes.
In her article, Caroline Heider suggests how the concept of agile could be translated to evaluation practice. In addition to narrowing the scope of a project, she suggests evaluation could be more agile, or efficient, by drawing on:
Interestingly, most participants agreed that Heider’s suggested approaches to shortening response times are already widely practised in the evaluation world.
So do AES members see the potential for evaluation to be more agile?
With this grounding, it was turned over to us evaluators to envisage whether we could see the potential for more agility in our work. Specifically, we were asked to consider the benefits, enablers and risks to evaluation being more agile.
Participants agreed that by being more agile, we could make evaluation more focussed, responsive, creative and ultimately, produce more useful products. However, they acknowledged that being able to make evaluation agile would depend on:
While Heider’s primary message was that there are ways to make evaluation more agile, both she and AES members acknowledged the risk of quality loss. Participants expressed fears that agile had the potential to become too ‘quick and dirty’ to produce meaningful results. They noted that evaluators may risk avoiding slow, but often necessary, methods of data collection in favour of faster, possibly unsuitable, methods. Additionally, participants identified the risk of both scope creep, which affects project budgets, and scope narrowing, which could limit capacity to make well-informed recommendations.
So, where does that leave us?
The workshop generated useful discussion and allowed evaluators to consider how they could be more agile without compromising the quality of their work.
Participants identified clear synergies between Agile project management and developmental evaluation—developing a program in real-time through close consultation with program staff—and utilisation-focussed evaluation—conducting evaluations with a focus on intended use by end users.
As firm believers in our own ‘lunchtime learnings’, ARTD looks forward to attending more of these short and engaging lunchtime sessions in the future. You can visit the AES website for a full list of upcoming events.
By Brad Astbury, Senior Manager
Evaluation is a young discipline, especially when compared to other fields of inquiry like sociology, economics and psychology. Even so, there exists a rich intellectual history and vibrant body of theoretical knowledge that continues to grow and evolve. When I was invited to deliver a workshop on ‘Theories of Evaluation’ for the Australasian Evaluation Society (AES) autumn intensive, there was one take-home message I wanted to convey—if evaluators are not tapping into the 60-year repository of hard-won lessons, they are missing out on considerable wise counsel. This ‘wise counsel’ can greatly improve the quality and utility of evaluation practice.
Without knowledge of evaluation theory, we are susceptible to repeating mistakes of the past and relying on little more than professional folklore and an assortment of methods with no guiding principles for their application.
The formal aims of the workshop were to provide participants with a better understanding of:
As a passionate advocate of the late Will Shadish’s maxim ‘Evaluation theory is who we are’, I remain committed to the view that evaluation theory is central to our professional identity. Here’s some tough love from Shadish (1997):
If you do not know much about evaluation theory, you are not an evaluator…to be an evaluator, you need to know that knowledge base that makes the field unique. That unique knowledge base is evaluation theory (pp. 6-7).
So what exactly is evaluation theory? A broad answer is that evaluation theory is the body of writings about evaluation that have at their core a concern for practice. Another response is to view evaluation theory as a set of prescriptions about what constitutes ‘good’ evaluation and how it ought to be conducted (as detailed in alternative evaluation models or approaches). Yet another perspective considers evaluation theory as comprising several meta-components including use, knowledge construction, valuing, social programming and practice. In my view, all three framings are important and should be considered within an integrated perspective on evaluation theory.
During the first part of the workshop, we examined these different ways of thinking about evaluation theory, drawing on a conceptualisation that I developed a few years ago to help postgraduate students navigate the ‘theory jungle’. As part of the discussion, we compared the ‘big seven’ theorists: Donald Campbell, Michael Scriven, Robert Stake, Carol Weiss, Joseph Wholey, Peter Rossi and Lee Cronbach, as well as other ‘intellectual heroes’ who have found a place on Alkin’s well-known theory tree.
In the second part of the workshop, I presented an example of how a realist theory of evaluation can be useful for guiding practice, especially when the purpose of the evaluation is to support program improvement and transferability. I also offered some insights from Ernest House about how research on cognition and bias can help evaluators avoid fundamental errors when drawing evaluative conclusions. As a group we brainstormed strategies to help determine when and where to use different evaluation theories and approaches, given considerations of time, budget, data, stage of program development, and so on. There is no single or ideal theory of evaluation that will work always and everywhere. It is critical to select and combine approaches in response to situational contingencies.
In the final session, I emphasised the importance of evaluating evaluation theory and considered different criteria that might be useful for reflecting as a discipline on which theories of evaluation are ‘better’ than others. Should we continue to ‘let a thousand flowers bloom’ or is diversity of evaluation models and approaches leading to fragmentation of the field? My own view on this is that we need to get better at determining which theories of evaluation have merit and which are whimsical fads and fashions (or worse, harmful ‘snake oil’). Evaluators need to reflect an evaluative gaze back on evaluation itself.
I invite readers of this blog to reflect deeply on two questions presented to participants during the workshop.
As part of the reaction this exercise triggers, I hope that evaluation theory becomes an increasingly prominent resource that you draw upon to inform everyday practice and decision-making.
Thanks to the AES for hosting and organising the autumn professional learning program and to the many participants who attended workshops delivered over this inaugural three-day event.
By Partner, Jade Maloney
Settlement Services International (SSI) is one of the largest providers of Ability Links – a NSW Government-funded initiative that empowers people with disability, their families and carers to work towards their goals by building on their strengths and connecting with their local communities, and supporting local community and mainstream organisations to become more inclusive.
SSI has Linkers in over 40 LGAs, and works in partnership with Uniting and St Vincent de Paul NSW in all metropolitan Family and Community Services Districts as well as in Illawarra/Shoalhaven and Southern NSW. SSI’s delivery locations include the Local Government Areas (LGAs) with the largest CALD populations in NSW.
From the state-wide evaluation of Ability Links, SSI knew the initiative was achieving positive outcomes for individuals and a return on investment for the NSW Government. What they didn’t know was how they were achieving these outcomes for the diverse individuals and communities they supported.
In late July 2017, SSI engaged ARTD to evaluate their delivery of Ability Links – with a focus on benchmarking their performance against the program as a whole and understanding how they were supporting outcomes and what improvements could be made.
To understand the ‘how’ underlying SSI’s outcomes for all of the individuals it was supporting, as well as individuals from CALD communities, we used a realist-informed approach – identifying theories with an evaluation steering group and testing and iteratively developing these through a series of interviews with Linkers employed by SSI and community organisations, and finally a participant reference group.
The state-wide evaluation had already engaged with people supported through Ability Links, and SSI was engaging with the people it was supporting to develop a book of their stories (published in Our Community: Stories of Courage, Strength and Determination) and conduct a longitudinal Participant Wellbeing Study. So we had to be careful to make use of available data to avoid creating an additional data collection burden, while still putting people with disability at the centre of the evaluation – following the philosophy of ‘nothing about us without us’.
We were able to do this with the participant reference group. With the assistance of two language interpreters, a physically accessible venue, and a discussion approach that was inclusive for people with a vision impairment – we were able to talk through, test and refine the emerging ‘theories’ about how SSI’s Ability Links supports outcomes with eight people who had accessed Ability Links, as well as identify improvements. This process helped to ensure the evaluation team interpreted the findings in context.
The evaluation identified a range of factors supporting positive outcomes, some of which were unique to SSI – such as their workforce of Linkers from diverse cultural and language backgrounds embedded within their communities – and some of which were tied to the flexible, person-centred and responsive nature of Ability Links. The evaluation also found that Linkers supported outcomes for individuals in varying ways – depending on their starting points, needs and goals. In some cases, people come with ideas and Linkers help to make these happen, while in others, Linkers help to turn people’s interests into ideas for community connections. Linkers can also build people’s confidence in varying ways: through the encouragement of a Linker or through social connections.
SSI is using the findings to inform its service delivery and recently shared its learnings at the DiverseAbility Conference. You can access a summary of the findings of the evaluation on SSI’s website.
By Consultant, Ken Fullerton
Are the program logics that you see actually logical?
On Wednesday 11 April 2018, ARTD Partner Andrew Hawkins delivered a free seminar in Sydney organised by the Australasian Evaluation Society (AES), which was attended by approximately 50 people.
Hawkins first briefly introduced his subject. His general definition of a program logic is “A one-page diagram or model of the important components of an intervention and how they work together to deliver outcomes.”
He then gave attendees three example program logics to use as references in their discussions of two key questions.
Question 1: What makes program logics logical?
Question 2: What do the arrows mean in a program logic?
Later, Hawkins gave an overview of his own approach to program logic. He argued that a ‘configurationalist’ understanding of causality would be more useful than the ‘successionist’ understanding deployed in many program logics. His point was that effective programs are better thought of as a ‘causal package’ with certain assumptions, like a recipe, rather than a linear ‘causal chain’ where one element in the program logic causes the next one.
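The contrast between the two understandings of causality can be sketched in simple logical notation (an illustration only, with A, B and C standing in for hypothetical program conditions):

```latex
% Successionist (linear causal chain): each element causes the next
A \rightarrow B \rightarrow C \rightarrow \text{Outcome}

% Configurationalist (causal package): the conditions jointly suffice
(A \land B \land C) \Rightarrow \text{Outcome}
```

On the package view, removing any one condition undermines the whole argument, which is why the recipe analogy fits better than the chain.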
Hawkins believes a program is better thought of as a proposition or argument that a course of action will be sufficient to bring about a desired result, rather than a theory about change or a theory about action. He said that while theory was very important for providing reasons, justifications or ‘warrants’ for elements of the program design (and for the program as a whole), he thinks it is too much to expect a program logic diagram to display this theory.
Instead, he proposed focusing on the conditions that program activities need to bring about, which, together, would be enough to enable an intended outcome. He argued that this approach enables critical thinking (which can support realistic design) and evaluations focused on measuring whether a program is sufficient for achieving its objectives. It can also support the program’s argument or business case.
By Senior Manager, Alexandra Ellinson
The Politics of Evidence: From evidence-based policy to the good governance of evidence, Justin Parkhurst, Routledge Studies in Governance and Public Policy, 2017.
As evaluators, we want to generate credible evidence that is relevant and useful for informing public policy decisions that improve social outcomes. So, it’s unsurprising that Justin Parkhurst’s recently released book—or more specifically, its subtitle, ‘From evidence-based policy to the good governance of evidence’—caught my eye.
This book is a helpful reminder of the imperative that evidence-based policy making addresses the reality of politics head-on; and that this can (and should) be done without giving up on rigorous research. As Associate Professor at the London School of Economics and Political Science, Parkhurst shares the concerns of both those who champion evidence in the face of the politicisation of science, and those who critique the de-politicisation of policy making that can occur when social values are obscured through the acceptance or promotion of only limited evidence sources and methods. In what follows, I briefly outline the key points and conceptual moves that Parkhurst makes in this book, and highlight what I take to be most relevant to policy makers and evaluators alike: his argument that there is a need for advisory systems that normatively embed the good use of evidence in policy making.
Part I opens with an exploration of what is meant by the oft-touted phrase, ‘evidence-based policy’. Parkhurst clearly unpacks the usefulness and limitations of framing policy making simply around the aim of ‘doing what works’. Starting from the premise that evidence matters a great deal for good policy, Parkhurst gives examples of what goes wrong when there is a lack of information or poorly used evidence, e.g. the public health advice that babies should sleep on their fronts, which continued for decades despite mounting data about the dangers of this practice. Yet he also acknowledges the limits of technocratic cries for ‘more evidence!’ to inform policy decisions that are essentially principle- or rights-based, e.g. access to reproductive health choices or same-sex marriage provisions.
Parkhurst goes on to discuss the value of evidence from well-designed research methods with nous, and displays an appreciation of methodological pluralism when it comes to appropriately understanding social issues. This includes an important, but by no means preeminent, place for randomised controlled trials. He also gives a quick nod to realist evaluation by challenging readers to think about more than just ‘what works’, but also consider what works ‘for whom’ and ‘where’. While devotees of realist approaches might take issue with how he employs the term ‘mechanism’ in this discussion (given its very precise meaning in the field), they are likely to endorse the spirit of his argument.
Interestingly, Parkhurst doesn’t discuss or appear to make a distinction between evidence-based and evidence-informed policy: he primarily uses the first term, although at times the latter. While this seems unusual insofar as evidence-informed policy is often put forward as a more pragmatic and politically tuned-in alternative, I think this is also a smart move on his part. It works to avoid definitional debates and micro-arguments around the degree of political influence before something is described as ‘based in’ or ‘informed by’ evidence, but it also allows him to locate his concerns as part of a bigger picture that applies across cases.
Part II contains an interesting exploration of the role and functioning of various forms of bias that impact not only on what and how evidence is used by policy makers, but also the evidence that is funded and generated by researchers in the first place. Parkhurst discusses overt biases in pursuit of political interests as well as the subtle politics of cognitive biases. Perhaps most useful is a distinction that Parkhurst makes between ‘technical biases’ in the use of evidence and ‘issue biases’ in how evidence is deployed to inform political debates. In relation to technical biases, Parkhurst serves readers well by including both the invalid use of individual methods (i.e. poor scientific practice) and the failure to appropriately include relevant information from multiple sources. In relation to issue biases, he thoughtfully highlights questions around political legitimacy and representative politics in relation to how evidence is deployed to shift debates towards or away from issues/areas of inquiry, often in a non-transparent way.
On an initial reading, this discussion about biases in Part II is interesting but not obviously essential in contributing to Parkhurst’s argument. On a closer reading, however, it becomes clear that the inherent risk of biases—and hence the need to develop a systematic response to mitigate these dynamics—provides the rationale for why, in Part III, Parkhurst advocates for a governance approach to improving how evidence is used in policy making.
Part III is the most intriguing part of this book. Although it feels somewhat abstract and leaves the reader in want of more concrete examples, I suspect this is an artefact of his principles-based approach to institutional change—one that favours ‘guided evolutionary incrementalism’ rather than programmatic planning. The strength of the concluding chapters is how they challenge policy makers and evaluators to think critically and in new ways about how institutions, within and outside of governments, operate systemically to shape the evidence that is gathered and in turn how it is used in decision making.
Parkhurst starts by looking for principles that constitute good evidence beyond the well-worn technical hierarchies, and constructs a framework of appropriateness through which policy-relevant evidence might be considered. In doing so, Parkhurst defines ‘appropriate evidence’ as that which speaks to multiple concerns at stake in a policy decision, is constructed in ways that are most useful to achieve policy goals, and is applicable in the local policy context. In turn, ‘good evidence for policy’ comes to be defined as evidence that meets the aforementioned appropriateness criteria, as well as high-quality research standards.
Before presenting conclusions around the good governance principles for the use of evidence, Parkhurst next turns to a discussion on the factors necessary to ensure the democratic legitimacy of the ‘evidence advisory system’. While this is a vital discussion to be had, I’m unsure whether Parkhurst succeeds in making a new contribution to the long-standing debates and voluminous literature on the topic of whether (or to what extent) we can accept irrationality as the cost of democracy. With this issue parked, the book concludes by outlining key principles for the good governance of evidence.
Taking a broad understanding of good governance as the ‘art’ of systems and processes through which collective decisions are made, he argues that governance needs to be thought of in terms of both the processes and outcomes that are relevant to the use of evidence within a policy process.
To do this, eight good governance principles for using evidence in policy making are elaborated.
These principles appear sensible and provide a comprehensive response to the issues/concerns raised throughout the book. Yet I can’t help but wonder whether Parkhurst has gone far enough in fully describing the principles of a governance framework around the use of evidence: to my mind, the principles seem closer to what might underpin a quality assurance checklist. Parkhurst makes a brief reference to the use of governance in the corporate sphere but does not pursue this enquiry—arguably, this might be where some of the key lessons and fresh insights lie. This governance perspective would throw into focus a slightly different set of issues, such as the delegation of decision making (e.g. around the evidence that is gathered and how it is used); the strategies for managing risks (e.g. around the lack or inappropriate use of evidence); the duties owed to those with a ‘stake’ in the evidence (e.g. to the public or potential program beneficiaries, that decisions are made in their interests); and expectations around transparency and reporting (e.g. ‘rules’ safeguarding quality information and communication flows in deliberative processes).
Overall, the strength of Parkhurst’s argument is the explicit engagement with the political questions in determining what constitutes the better use of evidence. Doing this allows him to recognise that improvement requires building institutional arrangements (however incrementally this may be) that can address forms of bias while incorporating democratic representation. Readers may still want to know more about the ‘how’, and in the Australian and especially Canadian evaluation contexts one can’t help but be reminded of growing calls for an Evaluator-General to oversee the better use of evaluation evidence. These live debates highlight the importance of Parkhurst’s book in contributing to new ways of thinking about the twin problems of evaluation utilisation and the use of reason in democratic politics.
By Consultant, Ruby Leahy Gatfield
Ever considered establishing or investing in specialist disability accommodation, but not known where to begin? In 2017, ARTD worked with Frankston Peninsula Carers (FPC) to develop a series of resources offering practical advice on how community members can form an association and partner with housing and support providers and investors and funders, including local government, to grow accommodation options. We also showcased some of FPC's success stories to show what can be achieved!
The National Disability Insurance Agency (NDIA) recognises that some people with disability who have very high support needs or significant functional impairments will require specialist housing solutions. However, the current supply of specialist disability accommodation (SDA) is substantially below what is required to meet demand. Moreover, as parents caring for adult children with disability are ageing and reaching a point where they can no longer support their children at home, the need for specialist housing options is increasing. To address this, the NDIA’s SDA Payments aim to stimulate investment and increase the innovative supply of housing, including housing models that foster inclusion and have improved design and technology to support people with disability to live independently.
However, these Payments are not intended to meet the housing needs of all people with disability and growth in the broader market of accessible and affordable housing options is critical. But how do interested families and communities even begin growing housing options in their local community? What are the key steps to consider? How has this been done successfully in the past? Where can they turn for investment?
FPC is one inspiring example of a community-based organisation established to address the need for SDA in Victoria’s Mornington Peninsula. The organisation is run solely by volunteers, some of whom are carers of adults with disability. Since 2007, FPC has successfully sourced over $8 million in donations and government investment to increase housing options for people with disability in their local region.
Having received funding through the NDIS Sector Development Fund in 2017, FPC engaged ARTD Consultants to develop a series of resources to support families to establish housing projects and attract the interest of potential funders and developers. After consulting FPC’s Committee and key partners we developed:
The resource package has been distributed to local members of parliament, Mayors, Ministers, housing providers, donors and families of people with disability and received positive feedback as a clear and useful tool. For more details and to download the resource package, visit the FPC website.
You can also support the work of FPC by donating to their current McCrae housing project – a well-located property designed to accommodate six people with varying disabilities. FPC has raised $35,000 so far and is seeking to crowdsource $90,000 more to complete the property. You can read more and donate or share their crowdsourcing efforts here: https://donate.grassrootz.com/fpc/help-build-mccrae-house.
By Partner, Jade Maloney and Senior Consultant, Kerry Hart
Ever had an interviewee give monosyllabic answers or talk non-stop on an entirely different topic? What about someone who becomes overwhelmed by the conversation and perhaps even cries?
Over years of interviewing, we’ve encountered a range of challenging situations like these. Because people are people – with different values, beliefs and experiences, in different contexts – there are unfortunately not many hard and fast rules about how to respond (besides things like being authentic, respectful, and non-judgemental). This is what makes interviewing and focus group facilitation so challenging. But it’s also what makes it engaging, exciting and energising.
On April 22, the participants in our interviewing skills workshop for the Australasian Evaluation Society (AES) took turns role-playing challenging interview situations – testing out strategies to get an interview back on track and ensure interviewees feel comfortable and safe to share their views.
A reluctant participant (e.g. consistently gives one-word answers or shrugs their shoulders):
A tangential talker (e.g. needs to tell you about their key concerns before they can get into the interview questions; starts telling you about their entire career history when you ask them about their current role; or talks about all of their friendships when you asked them if they enjoyed a particular social activity):
A person who becomes emotional or distressed:
Of course, there’s a need to understand the individual and the context in applying any strategies. The value of role playing isn’t that you’ll have perfected your exact response to any given situation. What it can do is help you to develop the agility to respond authentically and appropriately to the individuals and situations you encounter.
But, as one of our participants pointed out, the other value that stepping into an interviewee’s shoes provides is the ability to see things from their perspective. Having had this experience can make you pause next time you encounter a challenging situation and think what might be going on for the person. The person is probably trying to tell you something with their behaviour and non-verbal cues. Be attuned to this and open to their perspective.
Who said they hate role plays?
We really enjoyed our day running workshops on interview skills and questionnaire design for the AES. We also run tailored workshops for organisations. You can find out more by calling 02 9373 9900.
By Maria Koleth
Last week, all of ARTD’s staff attended a challenging and informative training day on Trauma-informed Care and Practice with the Blue Knot Foundation. As a company that takes a strengths-based approach to research and evaluation with vulnerable populations, ensuring that our methods and instruments are trauma-informed is a key part of our process. The Blue Knot training renewed and updated our understanding of types of trauma, its long-term embodied effects, and the five principles of trauma-informed practice. Some key ways we are putting these principles into practice in our research include:
We would also like to thank 107 Projects for hosting us – their wonderful garden and sense of hospitality provided the most recuperative setting for our training day (see 107.org.au).
By Ruby Leahy Gatfield
My recent internship at the International Institute for Democracy and Electoral Assistance (International IDEA) in Stockholm raised a number of important questions for me about how to monitor and evaluate international development programs. Trying to demonstrate a program’s ‘impact’ can often feel overwhelming, given that development goals are long-term and complex processes dependent on a myriad of political, economic, cultural and other factors. And while the methods for measuring, monitoring and evaluating impact remain hotly contested in the development community, two key approaches stood out in my time at International IDEA.
The first is the logical framework approach. Anyone involved in planning, monitoring or evaluating international development programs will be familiar with ‘logframes’. Pioneered in the 1970s by USAID to demonstrate what donor money was achieving, logframes have proved a very useful tool for mapping out and thinking critically about how a program leads to results.
Logframes provide a line of sight on the causal links between a program’s activities, outputs, outcomes and ultimate impacts. They offer a sense of simplicity and structure in an otherwise complex environment, can be used to communicate intentions to stakeholders, enable standardised reporting on indicators, and allow independent monitoring and evaluation of results (among other benefits).
But what do outcomes really look like? How do the activities and outputs of a program lead to development impacts? To unpack this ‘black box’ in the logframe, outcome mapping (OM) has emerged as an increasingly popular methodology.
What is outcome mapping?
OM recognises that development programs are all about social change, and that social change is complex, unstable, non-linear, two-way, incremental, cumulative and often beyond our control. Conducting evaluations in these open and changing environments poses a myriad of challenges, from defining success and deciding when to evaluate, to capturing emerging objectives and establishing cause and effect.
To tackle these challenges, OM provides a framework for unpacking a program’s theory of change and collecting data on outcomes as they unfold. Importantly, it redefines ‘outcomes’ as changes in behaviour—the actions, activities and relationships—of the stakeholders directly in contact with the program (known as ‘boundary partners’).  This concept of boundary partners is fundamental to OM but not always present in logframes and, as a result, the two approaches often produce very different outcome statements. According to OM, behavioural change of boundary partners is critical to moving up the results chain.
OM also recognises that, in reality, programs have limited control over whether their ultimate goal is achieved, given the range of social, political, environmental, economic and other factors that support or hinder intended outcomes. Rather than claiming attribution of a development impact, OM claims contribution to it. It teaches that programs have control over their inputs, activities and outputs; influence over their outcomes; and simply an interest in the ultimate impact. In short, OM focuses on a program’s sphere of influence.
In practice, OM offers 12 tools for planning, monitoring and evaluating outcomes, which can be adapted to suit individual contexts. Together, these tools help stakeholders to build a credible picture of how a program contributes to results, putting people at the centre of development and recognising the complex and non-linear nature of social change.
So while the logframe approach remains ingrained in most development agencies, practitioners should consider the value of an OM approach. As Michael Quinn Patton puts it, OM affirms that ‘being attentive along the journey is as important as, and critical to, arriving at the destination’.
To learn more about OM, visit the Outcome Mapping Learning Community, a one-stop shop for all things OM.
By Katherine Rich, Manager
Why do economic arguments hold sway in public debate? I recently attended the Melbourne School of Government’s thought-provoking conference, A Crisis of Expertise? Legitimacy and the Challenge of Policymaking. In a panel discussion, economist Richard Denniss spoke about the disproportionate weighting given to economic evidence and its persuasive power in public debate. It got me thinking about why this is so.
In their simplest form, economic arguments appear easy to understand and are compelling. To illustrate this, Denniss related a story of his son asking him if he would take him to Disneyland. When Dad said ‘no’ and used an economic argument – ‘it’s too expensive, we can’t afford it’ – his son innately understood and, for the most part, accepted the decision. However, as Denniss pointed out, the concept of Disneyland not being affordable is really a value judgment rather than an objective fact. The reason Denniss’ family didn’t go to Disneyland was because they had other priorities to spend their money on.
Economic arguments, such as those made through the use of cost-benefit analyses (CBAs), can seem objective and easy to understand even though they are not – their values concealed behind a veneer of expertise and a language that not everyone understands. I agree with Denniss’ suggestion that, rather than pretending a cost-benefit analysis is value neutral, advocates of particular causes should start from their value position and then make an economic case for their argument.
So what does all of this mean for evaluators when evaluative arguments are complex and can be difficult for non-evaluators to follow? We could leverage some of the same power of economic argument – make our evaluative judgments appear value neutral. However, in trying to make a holistic judgement about the merit and worth of a program, it would be problematic to use only one quantifiable metric like a cost-benefit ratio.
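To see how value judgments hide inside a single ratio, here is a minimal sketch with invented figures. The discount rate – itself a value judgment about how much we care about future benefits – can flip a benefit-cost ratio above or below one:

```python
def npv(cashflows, rate):
    """Net present value of a stream of annual cashflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical program: costs fall early, benefits accrue over a decade.
costs    = [100, 20, 20, 0, 0, 0, 0, 0, 0, 0]
benefits = [0, 0, 10, 20, 25, 30, 30, 30, 30, 30]

for rate in (0.03, 0.10):
    bcr = npv(benefits, rate) / npv(costs, rate)
    print(f"discount rate {rate:.0%}: benefit-cost ratio = {bcr:.2f}")
# The same program looks 'worth it' (ratio > 1) at 3% but not at 10%.
```

The numbers are fabricated, but the pattern is general: whoever chooses the discount rate is making a value judgment about the future, even when the output looks like an objective fact.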
What we can do is bring together diverse stakeholders to first understand their perspectives, then develop a comprehensive set of criteria to assess value (see my recent post on this – a balanced approach to valuing in evaluation) and a logic model to clearly communicate what success will look like and how it will be measured. We can strengthen the persuasive power of these models by drawing on social science research to develop and refine them.
When it comes to making economic arguments in evaluation, we can also look more broadly than cost-benefit analysis. Julian King’s recent publication OPM’s approach to assessing Value for Money sets out an approach to measuring value for money (VfM) that goes beyond blunt, readily quantifiable measures like CBA, and acknowledges that some of the most valuable outcomes can be the hardest to quantify. It argues that good VfM assessments are clear about the value judgments being made. The approach uses an equity lens to capture not only the economy, effectiveness, cost effectiveness and efficiency of an intervention but also its reach to those most disadvantaged, acknowledging that this may be costlier than reaching moderately disadvantaged people but can have greater impact.
Economic arguments have power not because they are objective, but because they appear value neutral. As evaluators we can advocate for greater transparency of economic metrics and more nuanced approaches to VfM, and we can be explicit about how stakeholder values influence criteria and, thus, evaluative judgements.
By Jade Maloney
The idea that there might be a crisis of expertise in policymaking – a questioning of the role and legitimacy of expertise – is challenging for a public policy consultant. But, for an evaluator, it’s a given that evaluative evidence is only one piece in the policymaking puzzle. We might want it to have more weight, but we know that it must work in the context of politics and the democratic process.
So it was interesting to hear the various takes on the theme at Melbourne School of Government’s recent conference: A Crisis of Expertise? Legitimacy and the Challenge of Policymaking.
Keynote Professor Sheila Jasanoff kicked off day one by calling into question the ‘deficit model of the public’ in the context of the rise of alternative facts. Lay people can evaluate complex information and have their own knowledge that should be valued; we need to find ways to engage them in the democratic and policymaking process. To get to the point where we can imagine alternatives, we also need to acknowledge power structures, bridge traditional binaries and speak across disciplines.
Several speakers at the conference recognised co-design as one of a suite of tools to engage citizens in policymaking processes. I presented on our growing use of this approach to help ensure policies and programs better address core problems by engaging end users in deep consideration of the problem and an iterative process of prototyping, testing and refining solutions.
At this point you may be asking where the ‘traditional’ experts are in this process. We’d say co-design does not represent the rejection of expertise but the reimagining, repositioning and redistribution of expertise. If done well, it can help to address the problems Darrin Durant raised – of dismissing one type of expertise as bad, and of closing off participation by technical fiat.
In a co-design process, end users are recognised as having their own expertise to bring to the table. Experts, in the traditional sense of the term, can be involved in the design process and help to refine the model based on evidence. Practitioners – whose own knowledge has sometimes been negated in the academic literature as Brian Head pointed out – can also contribute their practical knowledge of what’s needed and what works and what doesn’t.
This may be best illustrated by a case example. In a recent project with Amaze, the autism peak body in Victoria, we used a modified co-design approach to bring key stakeholders together to iteratively develop a strategy to improve educational and social outcomes for students with autism in the school system. This is certainly one of the complex issues about which, as Col Wight noted, there are always multiple perspectives. A co-design approach enabled us to recognise that and start to build a shared understanding among a group that included people with autism who provide peer support in schools, teacher and principal representatives, support staff and peak organisations.
We began by developing a root cause analysis. This is an analytical tool to identify the causal pathways that lead to a specific problem. The aim is to work back along each causal pathway toward the ‘root causes’ of the problem, so that these can be addressed. It sounds like a technical process – as one of my audience members pointed out – but it actually begins with a whiteboard, a marker and a conversation: asking individuals what they know about what underlies the issues they see.
To make sure we captured the range of perspectives on the system, we began with individual interviews with each stakeholder to draw their own map. We supplemented this with a scan of the literature and a review of the student experiences in the school system – identified in the Victorian Parliamentary Inquiry into services for people with Autism Spectrum Disorder. We then combined individual maps into a comprehensive map of the causal pathways to the problem and refined this iteratively with stakeholders through two workshops. Through discussion in a small group, stakeholders were able to understand each other’s legitimate perspectives.
Once we reached this shared understanding, stakeholders used the map to identify key points at which to intervene and the priority elements of a strategy to holistically address the problem.
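The mapping logic described above can be sketched as a small directed graph. The causal map below is a hypothetical illustration, not the actual map developed with Amaze; edges point from cause to effect, and ‘root causes’ are simply the nodes with no recorded causes of their own:

```python
from collections import defaultdict

# Hypothetical causal map: each edge points from a cause to its effect.
edges = [
    ("limited teacher training in autism", "inconsistent classroom adjustments"),
    ("inconsistent classroom adjustments", "student disengagement"),
    ("stigma among peers", "social exclusion"),
    ("social exclusion", "student disengagement"),
    ("student disengagement", "poorer educational and social outcomes"),
]

causes_of = defaultdict(list)
for cause, effect in edges:
    causes_of[effect].append(cause)

def root_causes(problem, causes_of):
    """Walk back along every causal pathway; return nodes with no recorded causes."""
    roots, stack, seen = set(), [problem], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if not causes_of[node]:
            roots.add(node)
        stack.extend(causes_of[node])
    return roots

print(sorted(root_causes("poorer educational and social outcomes", causes_of)))
# ['limited teacher training in autism', 'stigma among peers']
```

In practice the map lives on a whiteboard rather than in code, but the traversal mirrors what the workshops do: keep asking ‘what underlies this?’ until no further causes emerge.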
From there, we worked together to develop a logic model and evaluation framework for the strategy. Again, these are technical concepts, but they can be cracked open through capacity building workshops, and doing so can build a shared understanding of what is being done and why.
In other projects, we and our clients are using co-design with people with dementia and with people who have a personal experience of anxiety, depression or suicide, or support someone who does.
Co-design might not suit every situation – and certainly not ones in which there is a predefined model – but we believe it has a lot of potential to enable participants to understand each other’s truths, break down binary thinking and collaboratively build solutions.
Thanks to the Melbourne School of Government for a thought-provoking few days.
By Melanie Darvodelsky
We love partnering with people who share our passion for supporting positive change. So we’re excited to be partnering with Jax Wechsler from Sticky Design Studios and Amelia Loye from engage2 in our evaluation of beyondblue’s blueVoices program, which brings together people who have a personal experience of anxiety, depression or suicide, or support someone who does, to inform policies and practice.
Marrying design, engagement and evaluation expertise will enable us to provide not only evaluation findings, but a clear direction for the future, which is backed by both the organisation and blueVoices members and supports our commitment to utilisation-focused evaluation.
As Jax explained at a workshop with our Sydney staff, co-design is not just running a stakeholder workshop. Design is iterative. It involves prototyping, testing and refining. Co-design is an approach to design that actively identifies and addresses the needs of all stakeholders in the process to help support an end product that is useful across the board.
When designing services, if you skip the vital step of conducting research to understand the world from the end-user’s perspective, what you come up with may be inappropriate and fail to deliver the value it could.
Additionally, service design does not stop in the way that product design does. Implementation is ongoing and involves many people working together over periods of time. An idea for a tool that meets staff needs at the beginning of a project may no longer be useful even by the time the tool is fully developed, as both the project and staff involved may have moved on. So designers need to think about how their work can support an ongoing change process, if they want to make a sustainable impact.
Through her research and project experiences, Jax has found that designers can support lasting change in contexts of innovation through ‘artefacts’ – visual representations and models. These include personas, journey maps, infographics, flow charts and videos. Artefacts act in a ‘scaffolding’ role for a program or organisation, for example, by persuading staff about why a change is needed, facilitating empathy between stakeholder groups, and providing a tool for sense-making. Artefacts – as ‘boundary objects’ – can also support staff from different disciplines to bridge the different languages they speak and collaborate, empowering them to co-deliver change.
You can read and watch more about Jax on her website or come to Social Design Sydney on Monday, 5 March 2018 from 6:00 pm to 8:30 pm in Ultimo to discuss whether co-design is the silver bullet we hope for. Register here.
By Partner, Jade Maloney, and Consultant, Maria Koleth
Interviews and focus groups allow you to gather in-depth data on people’s experiences and understand what underlies the patterns in quantitative data. However, handling dominant voices and opening up space for divergent views and quiet types in focus groups can pose challenges for even experienced researchers. Recently, Partner Jade Maloney facilitated a workshop with researchers from the Australian Human Rights Commission to reflect on their practice and stretch their skills through scenario-based activities.
In the workshop, we shared our top five tips for successful interviews and focus groups.
Want to learn more? Speak to us about our interviewing skills workshops on 9373 9900.
By Jade Maloney, Partner
For several years now, I’ve been getting more and more involved in service design, review and reconceptualisation to respond to evolutions in the evidence base and the systems within which services operate. And when I am designing an evaluation framework and strategy, or conducting an evaluation, I tend not to be looking at programs but at services operating within larger ecosystems – services that aim to complement and change other aspects of these systems in order to better support individuals and communities.
This isn’t surprising given that I am working in the Australian disability sector, which is currently undergoing significant transformation in the transition to the National Disability Insurance Scheme (NDIS). Programs are giving way to individualised funding plans that provide people with reasonable and necessary supports to achieve their goals. The future is person- rather than program-centred.
When designing and reconceptualising services in this context, it has been more feasible and appropriate to identify guiding principles, grounded in evidence, rather than prescriptive service models or 'best practice'.
But what happens when evaluating in this context, given that evaluation has traditionally been based around programs?
Fortunately, well-known evaluation theorist Michael Quinn Patton has been thinking this through. Evaluators, he has realised, are now often confronted with interventions into complex adaptive systems and principles-driven approaches, rather than programs with clear and measurable goals. In this context, a principles-focused evaluation approach may be appropriate.
As Patton explained in a recent webinar for the Tamarack Institute, principles-focused evaluation is an outgrowth of developmental evaluation, which he conceived as an approach to evaluating social interventions in complex and adaptive systems.
In a principles-focused evaluation, principles become the evaluand. Evaluators consider whether the identified principle/s are meaningful to the people they are supposed to guide, adhered to in practice, and support desired results.
These are important questions because the way some principles are constructed means they fail to provide clear guidance for behaviour, and because there can be a gap between rhetoric and reality. Patton has established the GUIDE framework so evaluators can determine whether identified principles provide meaningful guidance (G) and are useful (U), inspiring (I), developmentally adaptable (D), and evaluable (E).
I’m now looking forward to reading the books, so I can start using this approach more explicitly in my practice.
By Jade Maloney, Partner
I reckon the right time to make resolutions isn't amidst the buzz of New Year's Eve, but when the fireworks are a dim echo.
So here goes. This year, I'm committing to championing and building capacity for evaluative thinking.
If we're to believe the hype that we're living in a post truth world, this may seem like a lost cause. But while many people source their information through the echo chambers of social media, we can take comfort that the Orwellian concept of alternative facts hasn't caught on.
In our evaluation work, too, we come across plenty of organisations and stakeholders with a commitment to collecting, reviewing and making decisions based on evidence. While there is often a gap between rhetoric and practice, evidence-based (or at least evidence-informed) policy is ingrained in the lexicon of Western democracies.
The trouble is that evidence informed decision making can seem out of reach if evaluation is presented, in difficult to decipher jargon, as the remit of independent experts. (Of course, this is not the only trouble. In some cases it is that the commitment to evidence and evaluation is symbolic—to give an impression of legitimacy—but that's not the situation I'm thinking of here or one that I come across very often).
This is not to say that there is not real expertise involved in evaluation. But if we can't translate this into language and ways of working that all stakeholders can understand, and then bring them along on the journey, evaluators will be speaking into their own echo chamber.
And—as is clear from the literature on evaluation use (including my own study with Australasian Evaluation Society members)—if we don't involve stakeholders throughout an evaluation, then it's unlikely to be used either instrumentally or conceptually.
Focusing on building capacity to think evaluatively (rather than just capacity for evaluation) can help put informed decision making within reach.
This focus fits with the concept of process use (see Schwandt, 2015), which evidence shows can be linked to direct instrumental use of evaluation. It also supports sustainable outcomes from interactions between evaluators and stakeholders.
But what does building capacity for evaluative thinking mean in practice? For me, it means not only focusing on the task of the evaluation at hand or building capacity for evaluation activities, such as developing program logics and outcomes frameworks, but on engaging stakeholders in the practice of critical thinking that underlies evaluation.
As Schwandt (2015) describes it, critical thinking is a cognitive process as well as a set of dispositions, including being 'inquisitive, self-informed, trustful of reason, open- and fair-minded, flexible, honest in facing personal biases, willing to reconsider, diligent in pursuing relevant information, and persistent in seeking results that are as precise as the subject and circumstances of inquiry permit.' And its key application in evaluative practice is in weighing up the evidence and making value judgements.
We can crack open this process by engaging stakeholders in it. We can also translate the process into an equivalent in everyday life (for example, using value criteria, such as price, convenience, quality and ambience, to make a reasoned choice between different restaurants). This might even help people to understand how others come to different conclusions based on different value criteria.
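The restaurant example can be made concrete with a small weighted-criteria sketch. All of the scores and weights below are invented; the point is that two people scoring the same options, but weighting the criteria differently, reach defensibly different conclusions:

```python
# Scores out of 5 on shared criteria for two hypothetical restaurants.
restaurants = {
    "Cafe A": {"price": 5, "convenience": 4, "quality": 2, "ambience": 2},
    "Bistro B": {"price": 2, "convenience": 2, "quality": 5, "ambience": 5},
}

# Two stakeholders value the same criteria differently (weights sum to 1).
weights = {
    "budget diner": {"price": 0.5, "convenience": 0.3, "quality": 0.1, "ambience": 0.1},
    "food lover": {"price": 0.1, "convenience": 0.1, "quality": 0.5, "ambience": 0.3},
}

def weighted_score(scores, w):
    """Combine criterion scores into one rating using a stakeholder's weights."""
    return sum(scores[c] * w[c] for c in w)

for person, w in weights.items():
    best = max(restaurants, key=lambda r: weighted_score(restaurants[r], w))
    print(f"{person} chooses {best}")
# prints:
#   budget diner chooses Cafe A
#   food lover chooses Bistro B
```

The same mechanics apply to evaluative judgements: making the criteria and weights explicit turns a dispute about conclusions into a discussion about values.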
The more often this happens, the less we may need to worry about echo chambers.