News & Blog

Evaluation for the public good or as a public good?


November 2017

By Ken Fullerton and Jade Maloney

We’ve been thinking a lot about how evaluation can support the public good since Sandra Mathison kick-started the Australasian Evaluation Society (AES) International Evaluation Conference in Canberra by telling us that evaluation is not delivering on this promise because it is constrained by dominant ideologies, is a service for those with resources, and works in closed systems that tend to maintain the status quo. The Presidential Strand of the American Evaluation Association Conference helped us to identify some ways that evaluation can support the public good.

If evaluation is built in upfront and asks the right questions (not only how is this program working, but how does it compare to alternatives?), it has the potential to support the public good. It can be used to identify improvements to a program’s structure and implementation that support better outcomes, and to inform decision-making about whether a program should be expanded to benefit new communities or discontinued so that resources can be reallocated to other public programs that are achieving a greater impact.

For example, an evaluation of an air pollution reduction initiative might inform adjustments in program delivery that result in better outcomes from which everyone stands to benefit. According to the World Health Organization (WHO), “Outdoor air pollution is a major environmental health problem affecting everyone in developed and developing countries alike” and reductions in air pollution “can reduce the burden of disease from stroke, heart disease, lung cancer, and both chronic and acute respiratory diseases, including asthma.” This could, in turn, have other positive flow-on effects, such as reallocation of expenditure savings to other beneficial programs.

However, evaluation can only support the public good if it is useful and used. A recent study by Maloney, entitled Evaluation: what’s the use? (Evaluation Journal of Australia, in press), indicates that AES members perceive non-use of evaluations as a significant problem in the region. This finding is consistent with the broader literature from North America and Europe, which suggests that many evaluation reports are sitting on shelves gathering dust instead of being used for (public) good.

Then there’s the question of whether evaluation can be considered a public good in and of itself. (The AES Conference debate on whether we should think of evaluation in terms of capital didn’t settle this for us, as amusing as the comparisons between evaluations and washing machines were).

To get technical, a public good is one that is both non-excludable and non-rivalrous. This means that no individual can be excluded from using that good and use by one individual does not reduce the availability of the good to others. Fresh air and street lighting are common examples of public goods.

If an evaluation report identifies broad learnings about supporting a particular target group or addressing a certain policy problem, it can be used by multiple organisations. And one organisation using an evaluation report does not prevent another organisation from also using the insights to inform their work.  

The hitch comes on the ‘non-excludable’ criterion. Commissioning organisations often don’t publicly release evaluation reports, which limits the capacity of other organisations to benefit from the insights gained into what works and how and, thus, the potential of evaluation to be a ‘public good’. Evaluators interviewed by Maloney identified the lack of sharing of evaluation findings as a barrier to the broader use of evaluation.

In recent years, across Australia, there has been a trend among government agencies to release more evaluation reports to the public. This increased transparency may enable evaluation to be a public good, as it means researchers can access a fuller range of evidence about program models in action and, in the case of realist evaluation, learn more about what works for whom in what circumstances and how.

On the flipside, as some evaluators in Maloney's research identified, there's a need to ensure that the push to publication doesn't dampen willingness to have open discussions about things that have not worked as intended, because this would limit the capacity of evaluation to support improvements for good. And, when reports are not published, government agencies and evaluators could consider what learnings can be shared through conferences and online discussions. In this way, we can all support evaluation to live up to its potential.

 


Learnings from the virtual American Evaluation Association Conference


November 2017

By Jade Maloney


There were plenty of interesting views in the virtual stream of the 2017 American Evaluation Association Conference. The Presidential Strand sessions focused on turning learning into action, and finding solutions to challenges, to create greater good for society. 

Continuing the Dialogue: Evaluation’s Call to Action, 21st Century Perspectives in Addressing Race. Nicole Bowman: Evaluators need to understand the origins of the 'authority structures' they operate in. We don't own knowledge; we're caretakers of it. Leon Caldwell: Evaluators should all be social justice minded. Jara Dean-Coffey: Evaluators should stop hiding behind the notion of objectivity. You can have an end in mind, but still be rigorous.

Learning to Swim Against the Mainstream: Learning from Feminism. Donna Podems: Feminist evaluation isn't about advocacy. It's about upholding values, treating people with respect, and asking difficult questions. It deepens understanding and insights. It has friends in culturally competent evaluation, participatory evaluation and systems thinking. Michael Quinn Patton: Evaluation is a political activity. Evaluators need to acknowledge and account for that. The notion of objectivity creates distance, not trust and context-based understandings. We need to recognise that there are multiple ways of knowing and that some have been privileged over others. We should aim to make knowledge a resource for the people who create, hold and share it. (For me this has echoes of Sandra Mathison's kicker keynote at the 2017 Australasian Evaluation Society International Evaluation Conference, in particular her call to 'speak truth to the powerless'.)

Developing and Sustaining Communities of Practice. Leah Neubauer et al: Enablers include intentionality and shared purpose, clear values, creation of a rhythm, combination of familiarity and excitement, allowance for different levels of participation, creation of a safe space to ask questions and leadership commitment. What defines a successful Community of Practice is context-based, but it needs to provide value to members. Oh, and you can quickly create an online learning community with keen participants who are willing to share.

Overcoming Barriers to Building a Dynamic Evaluation-Informed Learning Culture in Philanthropy. Julia Coffman: There are many potential barriers, including the lack of a learning culture and of integrated processes and tools geared for learning. But cases show how barriers have been overcome. Debra Joy Perez: Take the opportunity to learn from failures. Veronica Olazabal: When using short-term adaptive review cycles to support learning, keep an eye on how you will measure longer-term impact. Huilan Krenn: Ask who learns, for whom, about whom and with whom?

The key take-away for me? You cannot be too keenly aware of the role of value, values and valuing in evaluation. 


Capitalising on the (public) good of evaluation


November 2017

By Alexandra Ellinson                 

It’s been about two months since the Australasian Evaluation Society (AES) International Evaluation Conference in Canberra. I thought I’d share some of the things that have stuck in my mind since – the ideas that have had a lasting resonance, not necessarily what leapt out to me at the time.

But first, let me set the scene. I hopped on the train down to Canberra – something I’d not done before. It proved to be a great way to travel – plenty of room to walk about, and the space made it easy to get quite a bit of work done! I’ve quickly become quite the fan. The reason I share this with you is that it got me thinking – what an underutilised (quasi) public good the train is. At $60 for a first-class ticket, one could almost say it’s ‘the people’s train’! And yet there were only a handful of other folk on board.

In a way that I hadn’t anticipated, this idea of the public good – or more specifically, of making better use of evaluation as a public good – seemed to be motivating many of the conference presentations. But maybe I should have seen this coming – the public good is a fitting concept for a conference located in Canberra and well attended by public servants.

Referring back to the conference theme of ‘capital’, a current of concern underlay many presentations about what is needed to better realise the worth of evaluation (often paid for by the taxpayer) – that is, to make the knowledge evaluation generates better integrated into the decision-making and accountability mechanisms of government, so that its benefits are, ultimately, made social. Let me give you some examples.

  • Multiple papers covered challenges related to strengthening the internal evaluation capabilities of government agencies, both in terms of capacity building of staff and creating evaluation systems.
  • Evaluation: what’s the use? (Jade Maloney): lessons from an AES survey and interviews about supply- and demand-side factors conducive to evaluation use.
  • Evaluation and the enhanced Commonwealth performance framework (Brad Cook & David Morton): the framework provides an opportunity for evaluation to inform an understanding of agency performance, not simply discrete programs, which may lead to more strategic evaluations rather than just more evaluations. 
  • Why Australia needs an Evaluator General (Nicholas Gruen): this would be a mechanism to bring ‘thicker’ program insights and practical wisdom into overly ‘thin’ policy conversations, reporting to the Minister (i.e. a way of integrating evaluation findings into policy making) and to Parliament (i.e. a way of bolstering the independence of evaluation and its accountability to the people).

One could say that the underutilisation of the train to Canberra – owing not so much to its quality, but more to a lack of awareness and sub-optimal integration with the wider public transport network – proved somewhat of an analogy to the problem of evaluation use.

This, however, is not an analogy that is easy to push much further (quit while ahead!) without seeming to imply rather unflattering associations between evaluators and out-of-date infrastructure, which would be neither fair nor accurate…

Indeed, the other thing that stood out at the conference was the strength of the evaluation community. In the spirit of healthy debate, there were different takes on how best to build evaluation capital. According to keynote speaker Sandra Mathison, independence from, not integration with, systems of government is key to the public good of evaluation. In my opinion, there is likely a considered and circumspect middle ground that will both strengthen evaluation as a public good and support evaluation use by government.

This leads me to a final thought – that one could frame the notion of the public good of evaluation another way. One might think about promoting and cultivating the practice of evaluative thinking, not just in evaluations but in everyday life. We would all benefit from being part of a more reflective and perceptive world in which we apply critical reasoning to the values and assumptions attached to our judgements.

This post is an adaptation of a presentation given to the NSW AES ‘Conference Highlights’ Seminar on 2 November 2017.


Uncovering the value of youth peer support


November 2017

By Melanie Darvodelsky and Andrew Hawkins

ARTD has been working closely with Youth Insearch to better articulate the theories of change that underpin its activities and to conduct rigorous outcome measurement. Youth Insearch runs an early intervention program of counselling, support, mentoring and empowerment for at-risk young people aged 14–20, delivered through weekend workshops, support groups, peer support, leadership and individual care.

Youth Insearch was confident, based on their long history and anecdotal evidence, that their program was building young people’s capacity for leadership and empowering them to deal with their own challenges. What they needed was to gather independent evidence to explain how this happens and to measure outcomes.

Our first step was to watch the Youth Insearch program in action at a weekend workshop in Toukley, NSW and speak with staff, volunteers and participants about how they think the program works. We used staff with counselling experience and brought in a clinical psychologist.

Since then, we’ve run two workshops with staff and volunteers from all three states where Youth Insearch operates to generate and refine a logic model, a theory of change and an outcomes framework for the program. In the workshops we translated the concept of a program logic into real-world terms. Staff and volunteers had the chance to talk through how their activities support young people to feel a sense of belonging and to recognise and capitalise on their strengths and how this, in turn, supports longer-term outcomes, such as improved mental health, housing or employment. There were lots of laughs and some friendly competition as groups shared their competing theories of change.

It’s great to work with an NGO that is so engaged in the evaluation process, and so passionate about better outcomes for Australian young people.


Making realist evaluation work


November 2017

By Andrew Hawkins, Sue Leahy, Fiona Christian, Melanie Darvodelsky

Do you want to know what happened or what works? What works on average or what works, for whom, under what circumstances, and how?

Realist evaluation moves beyond simplistic notions of programs as either working or not. It takes a more scientific approach – aiming to understand the mechanisms within programs that have causal power, and the situations in which these are active to generate demi-regular outcomes.

ARTD has a commitment to realist evaluation, recognising that it can help government agencies and non-government organisations to better design and target interventions and identify the mechanisms that are crucial in any further rollout of a program. This is why we sponsored and attended the 2017 International Conference for Realist Research, Evaluation and Synthesis held in Brisbane last week.

There were keynotes from the pioneers of realist evaluation – Nick Tilley and Ray Pawson – as well as presentations and workshops from Australian realist thinkers Gill Westhorp and our own Director Andrew Hawkins. As conference attendees ranged from experts to novices, there was a strong focus on how a realist approach could be applied in the real world.

So, what were the key take-outs?

  • See a program as a theory and use program theory rather than programs as the unit of analysis in evaluation. Understand that theory testing is about theory adjudication – that is, deciding between plausible explanations. Operate at the level of big ideas. Better explore program and policy history. Avoid the formulaic and focus on intellectual craft (Ray Pawson). 
  • The purpose of evaluation is to be useful – ensure that evaluation can play a role in decision-making and improve your expertise for that purpose (Nick Tilley).
  • You don’t need to produce a realist evaluation report that is full of jargon to make use of realist principles to better explain, maximise and replicate program outcomes. You just need to think more deeply about how change occurs in social settings (Andrew Hawkins).
  • Don’t get lost in finding the context for a realist evaluation – you only need to understand what context interacts with the mechanisms to produce outcomes (Leslie Johnson).
  • One diagram often won’t be enough to represent context-mechanism-outcome configurations (Patricia Rogers).
  • NVivo can be used for more than thematic analysis. It can be used for realist analysis by coding whole theories (Sonia Dalkin, Northumbria University).
  • It’s important that procurement processes enable adaptive, emergent evaluation (Penny Hawkins).
  • RBA and realism can be combined, as long as data collection is set up to capture the purpose of both approaches, and evaluators can be flexible in their work (Patrick Maher and Bronny Walsh).
  • Realist evaluation can align with best practice in working with Indigenous people and communities.

As realist theory can be heavy going and the jargon difficult for the uninitiated, we see a key challenge for realists as translating the concepts into the concrete and convincing evaluation funders that realism can help them in practical ways. After seeing the ‘lightbulb’ moments among conference participants, we’re confident that this can happen. Let’s continue the conversation between now and the next conference.


Building capacity for evaluation starts with democratising it


October 2017

By Senior Consultant Jane Ford

Evaluation needs to build capacity for “evaluative thinking” beyond its professional and academic borders and to communicate better. These were the key take-outs from the 2017 Australasian Evaluation Society (AES) International Evaluation Conference for South Australian members.

Tension exists between the desire to ‘professionalise’ evaluation and the need to encourage managers to understand and engage with evaluation as part of their core business. We need to build capacity for evaluative thinking among policy makers and program managers so they can reap the benefits from design through implementation and refinement. This means that while evaluation needs a strong theoretical grounding, evaluators must distil ‘evaluation speak’ into plain, meaningful language for decision-makers.

In a post-truth world, where the currency of facts and rational argument has been devalued, we need communications strategies that connect emotionally with audiences and are tailored to their time constraints. Evaluators need to be storytellers and might benefit from using the pyramid principle of reporting. For example, Board Members and Senior Executives need high-level information to inform strategy. Program managers need information about program targets and budgets to inform tactical decisions.  Frontline staff need more detailed data translated for practice. Reports can be layered to meet these needs.

To gain traction, evaluation also needs to become better at engaging people. Co-design principles and an Indigenous perspective on evaluation recognise that people are experts in their own experience and they have an important voice in designing and evaluating the services they use.

The gathering of South Australian AES members agreed it would be useful to develop a community of practice to help spread the word about the benefits of evaluative thinking in developing policies and programs that work for people.  Social media might also help us broaden the dialogue about evaluation.

Looking forward to the next AES seminar… In the meantime, connect with us on Twitter or LinkedIn.


Making realist evaluations useful for government


October 2017

By Director Andrew Hawkins

What is it about an intervention that generates change? What conditions are required for it to be effective? How can we get more of the intended and less of the unintended outcomes?

If you have ever wanted to know the answer to these questions, you should be thinking about realist evaluation. Realism brings a sophisticated approach to understanding how programs 'work'. Realists recognise it is how people interpret program activities rather than the activities themselves that generate change. That is, programs don’t emit a constant causal force – they work by providing resources or opportunities for people to reason, make decisions and behave differently (see image courtesy of Gill Westhorp). Crucially, different people will respond differently. Some will benefit; some will not; and some may be harmed.

Realist evaluation is about identifying the ‘mechanisms’ that make an intervention work and the ‘contexts’ in which these generate ‘outcomes’. For example, CCTV might deter potential offenders or, conversely, attract them by signalling ‘rich pickings’. It might help catch offenders in the act, and lead to more successful prosecutions with video footage (which might reduce crime if it’s a small number of offenders, or have no impact if many others take their place). It might lead people to be more willing to park their car because they think it will be safer, or less willing because they think CCTV signals a risk, or that it invades their privacy. People may even become complacent and leave more valuables in their car, which could lead to increased crime through increased opportunity. These are all different ‘mechanisms’ by which CCTV might work, or fail to work, with different people in different circumstances. Additionally, in Sydney, people may just take the first space they can, regardless of CCTV presence, but in a country town, there may be more scope for decision making. The case of CCTV demonstrates that knowing what outcomes were achieved on average does not tell you much about what the best approach will be in any given car park.

So how can realist evaluation answer real world questions for policy makers?

  • Work out who to target and how to modify an intervention to maximise overall outcomes: Realist evaluation can provide sophisticated advice about which people to target with an intervention, or how to modify the intervention to enhance future performance. You may need to think about limiting access to those who really benefit (and avoiding people who may be harmed), or moving from a one-size-fits-all approach to designing a few options that work for a greater range of people.
  • Identify what matters for future implementation: A realist analysis identifies the mechanisms that generate outcomes. This makes realist evaluation robust and means the findings from one evaluation can be transferred to another context without blind adherence to program fidelity. For example, instead of finding that a pizza night leads to social bonding, it can identify that what you need is the ‘shared meal in an informal setting between mentors and mentees’ because it is that, and not the pizza, that leads to social bonding.
  • Understand and benefit from failure: When a program appears to be failing, realist evaluation can help you understand why and help you find the ‘silver lining’ (i.e. the conditions under which certain components of the program do work). Realist evaluation is particularly useful for pinpointing why particular programs do not always work because of its attention to the contexts in which, and target groups for whom, the program activities actually fire mechanisms that generate change. In an age of innovation – of ‘fail fast and fail quickly’ – realist evaluation can provide a scientific approach to experimentation that is not simply testing good ideas, but testing ideas based on proper theories with a greater chance of success.
  • Ensure a scientific approach to evidence-based policy: Science is in large part about developing and testing theories about how the world works. Scientific evaluation is about understanding the value of interventions into the world. Realists know a method does not make an approach scientific. Premature experimentation using Randomised Control Trials (RCTs) without a sufficient theory as to how and why a program should work is simply unscientific. It might tell you what happened, but it won’t tell you why, which parts were most and least valuable, and, crucially, what to do next in a future time or place to maximise the intended outcomes. Realists have a scientific approach – they identify middle range theories – not so specific as to be unique, but not so abstract as to be vague and unusable.
  • Design new programs: Drawing on literature, experience and a well-defined problem, realist evaluation can identify the mechanisms that might need to be fired for change. For example, when researching a program on nursing home visits, it can be hard to know how many visits are needed. Meta-analysis will usually be inconclusive or contradictory. But once the mechanism of ‘rapport’ is found to be the cause of change, then this outcome, rather than an output, can be the focus of planning and implementation.

 You don’t need to produce a realist evaluation report that is full of all the realist jargon to make use of realist principles to better explain, replicate and maximise program outcomes. You just need to think more deeply about how change occurs in social settings.

 


#BeTheChange2017 for dementia


October 2017

By Director, Jade Maloney, and Consultant, Melanie Darvodelsky 

There are more than 410,000 Australians living with dementia; about 26,000 of whom have younger onset dementia (a diagnosis under the age of 65). By 2025, this is expected to increase to over 530,000. Dementia is the second leading cause of death in Australia and the greatest cause of disability in older Australians.

This is clearly one of the most significant policy problems for Australia and the world today – both in terms of finding a cure and creating the systems and the culture that enable people with dementia to live well.

The National Dementia Conference set the challenge to be the change now. The opening keynote speaker, Christine Bryden (an author and dementia advocate), called on everyone to be the change in our homes, communities, workplaces and industries.

How? The key learnings from the conference were that listening to people living with dementia, co-designing service models, increasing choice and control, enabling rather than doing for, and harnessing technology are clearly the way forward.

Speakers living with dementia had a strong voice at the conference. Kate Swaffer (CEO and co-founder of Dementia Alliance International) kicked off Day 2 by crushing the misleading myths that people with dementia are ‘empty shells’, can’t communicate and can’t live positively. She showed how people with dementia are driving change around the world. Trevor Crosby spoke about living well through acceptance, early diagnosis, positive thinking and getting on with life. And Brace Bateman brought a peashooter to the stage to demonstrate finding ways to be joyful, and spoke of the positive reception he had when disclosing his diagnosis.

However, as one audience member with dementia shared, not everyone finds acceptance. Dementia Australia surveys reflect this.

The language we use matters. Both Dennis Frost (an advocate living with dementia) and Dr Sam Davis identified the language around dementia that marginalises and minimises the views of people living with dementia.

On Day 2, audience members with dementia set change in motion in real time with the introduction of a person with dementia to the panel on rights, risk and autonomy. The panel identified a need for a different way of thinking about duty of care and dignity of risk – recognising that duty of care includes listening to and respecting the person with dementia, not only managing risk. As Madeline Gall (CEO of Lifeview) said, duty of care doesn't mean wrapping a person with dementia in cotton wool the minute they walk into a residence. Providers should understand the individual's risk appetite and respond.

The need to find ways to increase choice also came through in Dr Craig Sinclair’s (University of Western Australia) session on substitute or supported decision-making. He noted that there has been much legal and ethical debate but that this has been missing the voice of people with dementia and that limited practical tools have been developed to support decision-making. Research with people with dementia identified the desire to maintain as much choice as possible, but that decisions are also made in a relational context, with family members.

Carers of people living with dementia spoke about the barriers they experience in supporting their loved ones to live with dignity, and as independently as possible for as long as possible. They called for attitudinal change to enable increased independence and quality care.

Creating Dementia Friendly Communities is one way to be the change, as Kate Swaffer and Dr Kathleen Doherty identified. Dementia friendly is “people friendly” and can have benefits for the broader community.

We're proud to be working with Dementia Australia on a developmental evaluation of the national Dementia Friendly Communities program. The program builds on the evidence from existing dementia friendly community initiatives and has been co-designed with people with dementia. It encourages people to register as a Dementia Friend through the online Hub at www.dementiafriendly.org.au. The Hub includes:

  1. A Resource Hub for information on what a dementia friendly community is, how different members of the community can make practical changes that improve the lives of people with dementia, and showcases of dementia friendly communities in action.
  2. The Learning Centre for free online training to raise awareness of the difficulties faced by people living with dementia and how you, as a Dementia Friend, can assist.
  3. The Online Community, a place to connect and share challenges and successes for individuals and communities across Australia.

We believe evaluation can help be the change if it truly involves people with dementia. As several conference presentations identified, evaluation can help to identify what works, potential improvements and guide future implementation.


How program logic can work for NGOs


October 2017

By Director Jade Maloney and Adelaide-based Senior Consultant Jane Ford

It might sound academic, but program logic is a very practical tool to help strengthen your program design and show how it works. This is important for NGOs competing for government funding, as they are increasingly being asked to set out the evidence base for their programs and a plan for evaluation.

On a wet and wild Wednesday in Adelaide, Jade Maloney and Jane Ford from ARTD and our South Australian Associates, Sharon Floyd and David Egege, met with a group of NGOs to discuss how they can put program logic to use. These are our top tips.

  • Develop both a program logic and a theory of change. What's the difference? A program has only one logic – this is in the form of a diagram. But it can have many theories – about the program overall and about how you expect to move between activities and outcomes. The theory of change comes in the form of a story.
  • Reality check. Strengthen your program design by asking: What reason do we have to believe what we’re doing will lead to the change we want to see? Don’t be afraid to look at the research literature and social science theories, which can provide a logical framework for your program’s activities. If you want to change individual behaviour, consider the theory of planned behaviour (Ajzen & Fishbein, 1980) or stages of change (Prochaska, DiClemente & Norcross, 1992). If you want to create community-level change, consider theories relevant to the change you’re aiming to create or theories of community development.
  • Keep the big picture in sight. It’s true that your organisation alone can't be held accountable for high-level policy goals (such as increasing inclusion of people with disability or reducing youth unemployment) because other organisations and environmental factors will also help to shape these outcomes. Some people see this as a reason to exclude high-level goals from your logic. But we suggest you need to include them so you can map out a plausible pathway to contribute to long-term goals, counter factors that will negatively impact on achievements, and take advantage of complementary partnerships.
  • Be specific and think in outcomes. What are the outcomes you are aiming to achieve at each step along the way? What would these look like in real life? If you’re not specific, and think in outputs rather than outcomes, you won’t be able to measure your program’s achievements and make adjustments if needed to stay on track.
  • Refine in real time. The diagram is not the destination. Contexts shift. Programs evolve. Updating your diagram doesn't mean you failed. It means you're learning from implementation and will be better placed to succeed.

Want to get under the hood of how a policy or program really generates outcomes?


October 2017

Then it’s time to learn more about realist evaluation.

At ARTD, we’ve long seen the value of applying a realist lens to evaluation. We understand the causal power of programs lies in the mechanisms they fire in certain contexts and that different things work for different people.

We think that a realist lens has huge potential to help policy makers better select appropriate approaches for particular contexts, and to adapt their policies and programs to different contexts. But we know it can be difficult to translate promise into practice.

That’s why we’re sponsoring this year’s International Conference for Realist Research, Evaluation and Synthesis in Brisbane, 24–26 October.

Come along to the pre-conference workshop, ‘Introduction to realist evaluation’, with Director Andrew Hawkins on Monday 23 October to explore how policies and programs really generate change and what it means to take a realist approach. Or catch Andrew on ‘Testing realist program theory – quantitative and qualitative impact evaluation’ on Wednesday 25 October (11–11.45am).

If you’re at the conference, come and speak to one of our senior staff. If you can't make it, catch up on conference highlights on Twitter and the ARTD blog.


Adelaide program logic workshops

October 2017

Program logic can help you design a program that works and provide the basis for effective monitoring and evaluation. With the growing focus on outcomes measurement, program logic has a key role to play.

To mark our new presence in Adelaide, we are offering two free program logic ‘taster’ workshops next week. 

Workshop 1

When: 10 October, 2–4 pm

Where: Intersect, 167 Flinders St, Adelaide

Register using this link.

Workshop 2

When: 11 October, 2–4 pm

Where: Victim Support Services, 33 Franklin St, Adelaide

Register using this link.

Questions: contact Jade.Maloney@artd.com.au 


Is your program logic logical?

October 2017

By Director, Andrew Hawkins

The term program logic implies the model you develop is logical. But is it? Often the links between outputs, short-, medium- and long-term outcomes in a program logic model look more like a wish list than something that would logically follow from program activities.

We need to put the logic back into program logic so it can help us design programs that work and evaluate them effectively. To do this, I think evaluators need to do five things:

1. Recognise the difference between program logic and theory of change. A program has one logic but can have many theories covering the program overall and different levels of the program logic.  For example, in a parenting program, there may be a theory about why improving certain mothers’ parenting skills in a certain way will lead to better parenting, but there may also be a theory as to why distinct approaches to marketing the program will work differently for mothers depending on their circumstances, motivations, media consumption etc.  

2. Understand and use theories of causality that are useful for different purposes. There are three main theories of causality that evaluators might draw on:

  • Often program logics employ a successionist theory of causality where programs are reduced to a series of cause and effect relationships in a chain. But social programs are more complicated than this suggests.
  • A configurationalist theory of causality is often most useful for program logics in terms of social intervention. This theory of causality recognises that a range of things need to come together in a ‘causal package’ to bring about a change. It’s like baking a cake: for success, you need the right combination of ingredients, mixed in the right way, and placed in the right context (i.e. an oven at the right temperature).
  • If you want to understand how and why change happens, you are best off with a generative theory of causality, as a realist would use. A generative theory of causality explains the underlying mechanism of change, for example, how the ‘raising agent’ in the cake works. You wouldn’t generally represent this in the program logic model, but would describe it in a narrative, table or other figure that sits alongside the program logic.

3. See programs as arguments with premises and a conclusion. A program seen through the lens of informal logic is an argument. The premises in the argument are the outputs of program activities (as well as our assumptions), and the conclusion is the intended immediate or short-term outcome. In a good argument, if all the premises were true and assumptions held, the short-term intended outcomes would follow with a high degree of probability. Theory can provide an underlying ‘reason to think this will work’. This is a special case of the broader class of ‘warrants’ that provide reasons to accept an argument about a course of action.

4. Evaluate the logic of program design, implementation and outcomes. This may require three stages:

  • Does it make sense on paper? If all the necessary conditions were achieved and our assumptions held, would these be sufficient to achieve our immediate- or short-term intended outcomes?
  • Was it effective? Seek evidence to empirically test premises to determine if the argument is well-grounded. Did these conditions together form a causal package that was sufficient to achieve our intended immediate- or short-term outcome?
  • Was it efficient? Was each condition actually necessary, or could the program be streamlined?

The empirical evidence generated by evaluation methods can be used to refine the logic and theories. This can inform not only further implementation of the program under study, but the design and implementation of future programs of a similar nature.

5. Be realistic about what evaluation can usefully measure. We know that programs and policies are not always fully developed when they are implemented. It is also common that a program will be sufficient for an immediate- or short-term outcome, but will only contribute to longer-term outcomes alongside external factors. Taking a logical approach, we can conduct a very cost-effective evaluation by focusing on the most important parts of a program, given the maturity and current state of knowledge about the value of the program. This will ensure we really do generate evidence and insight to inform decision-making.

 


Weave wins mental health award


September 2017

Last night, Weave Youth and Community Services won the Aboriginal Social and Emotional Wellbeing Award at the Mental Health Matters Awards for its project Stories of Lived Experience. This included a photography exhibition, a documentary and an evaluation.

In 2015, ARTD conducted the qualitative evaluation. We spoke with four generations of families and over 50 clients who have been connected to Weave since its beginning in 1975 to understand what is most useful about how Weave works, the difference that Weave makes for clients and the community, and how Weave and the sector can improve support in this area. Thank you to all the participants and stakeholders for entrusting us with their views and experiences to form such a great evaluation project.

The Mental Health Matters Awards are held by WayAhead (the Mental Health Association of NSW). The Aboriginal Social and Emotional Wellbeing Award recognises programs, projects, people or initiatives that aspire to and enhance the social and emotional wellbeing of Aboriginal communities in NSW.


Brick walls or byways to evaluation use


September 2017

By Director, Jade Maloney

Last week we looked at how Australasian Evaluation Society (AES) members have successfully overcome some of the obstacles to evaluation use they have encountered. This week we look at the main brick walls to evaluation use identified in my research with AES members, and some byways around them.

Politics: Unsurprisingly, given the intertwining of politics and policy, politics was the most commonly identified brick wall to the use of evaluation findings. However, some evaluators had found byways around political roadblocks. These were:

  • keeping the report in the bottom drawer until the context is right
  • arming communities with evaluation findings so they can encourage governments to act or to negotiate alternative sources of funding when required
  • using the evaluation to speak truth to power if the internal context is right
  • focusing on conceptual instead of instrumental use – the knowledge that can be applied in other projects.

Resourcing: Linked to the political roadblock, lack of resourcing was also commonly identified as a brick wall to the use of evaluation findings. When governments change, priorities can also change and resources become unavailable. Lack of resourcing can also limit the take-up of findings from pilot program evaluations. However, equipping communities with evaluation findings can help them attract other sources of funding.

Timing: Some evaluators reported that delays between a report being written and its release can limit the relevance of findings for use. In contexts with staff on short-term contracts or on rotation, staff may have moved on and the project ended by the time findings are available.

More broadly, limited timeframes for evaluation can limit the type of questions that can be answered by an evaluation and, thereby, the ways in which it can be used. In this case, it was important to settle on reasonable indicators.

Cross-agency collaboration: Two evaluators involved in cross-agency initiatives said that the blurred lines of responsibility in these contexts could create roadblocks to evaluation use. Cross-agency response systems are required to address this.

Dissemination of findings: Lastly, some evaluators identified a lack of processes for broadly disseminating evaluation findings as a barrier to use. This is particularly important given evidence of the need for an accumulation of evaluation evidence before change occurs. However, the increase in policies requiring government agencies to publish findings is shifting this roadblock (although this may have its own implications for the type of learnings that get reported). Additionally, one internal evaluator said their agency had established an internal ‘lessons learned’ register to support broader use of findings, while another had established evaluator networks to share learnings. Byways identified by external evaluators included presenting at conferences, specifically identifying broader learnings for policy design and delivery, and negotiating with clients to share general learnings from evaluations with other clients.


Overcoming obstacles to evaluation use


September 2017

By Director, Jade Maloney

An evaluator might weep at the plethora of literature that shows their work going unused. But my recent research with Australasian Evaluation Society members shows evaluators have had success in overcoming at least some of the obstacles they have encountered to use.

Disinterest in, or more active resistance to, evaluation: Some evaluators I interviewed had been able to overcome this by: selling the value of evaluation; engaging stakeholders where they are at and from a ‘what’s in it for me?’ perspective; and involving stakeholders in a dialogue as data is collected to answer their questions.

Fear of judgement: In some cases, resistance to evaluation was related to a fear of being judged and a misconception about evaluation. Some evaluators had overcome this obstacle by convincing stakeholders they were there to help them with their work, using their interpersonal skills to make connections with stakeholders.

In some cases, fear related to evaluation being construed as an accountability or compliance mechanism rather than a learning exercise. To address this, some evaluators emphasised learning and strengths-based approaches. Others, however, suggested a need to combine accountability and learning purposes or saw accountability in a more positive light.

Lack of purpose: When they encountered evaluations undertaken as a ‘tick-a-box’ compliance activity or without a clear purpose, some evaluators had been able to help stakeholders find a purpose. They helped them work through the questions they wanted to ask, and who would use the evaluation.

Negative findings: Given evidence of the human tendency to accept information that confirms our preconceptions and to refute information that challenges them (Oswald & Grosjean, 2004), it is unsurprising that evaluators had encountered more resistance to negative findings (and positive findings about programs that were slated to be discontinued). To overcome this obstacle, evaluators prepared stakeholders for what they might not expect from the outset, surfacing biases that needed to be overcome. When negative findings eventuated, they shifted resistance by socialising findings as they emerged, using the positive sandwich approach to frame findings, and/or focusing on lessons learned and solutions. However, it was noted that these strategies can fail when working with organisations that lack a learning culture and when findings are politically unpalatable.

Next time, a look at the common brick walls to evaluation use and whether evaluators have alternative routes around them.


Sue Leahy joins AES Board


September 2017

Our Managing Principal Consultant, Sue Leahy, has joined the ranks of the Australasian Evaluation Society (AES) Board.

Sue is looking forward to working with other Board members to strengthen the role of evaluation in Australasia and shape an AES that continues to meet the needs of its members for professional growth and networking.

It’s an exciting time for the AES, as we contemplate appropriate pathways to professionalisation of evaluation and continue to develop the interest and capacity for evaluation among Indigenous people. 


Congratulations Lex Ellinson – AES Emerging New Talent!


September 2017

Senior Consultant, Alexandra (Lex) Ellinson, was awarded the Emerging New Talent Award at the Australasian Evaluation Society International Evaluation Conference.

The Awards Committee praised the scale of what she has achieved in just four years as well as her commitment to the profession.

Lex has designed and managed large-scale long-term evaluations of youth policy and social housing and worked in partnership with Indigenous communities. Her approach is informed and pragmatic – she has managed projects with quasi-experimental, realist-informed and co-design approaches to suit different contexts and different purposes.  

As Secretary of the AES’ Advocacy and Alliances Committee, she works to promote the use of evaluation and evaluative thinking by Australian agencies and organisations. She is also a member of the Realist Special Interest Group.

Lex discovered evaluation as a profession while studying philosophy and social sciences at university in the early 2000s, and sensed that, “evaluation nicely brings together my academic interest in the rational grounds for making normative judgements based on empirical data, which is a longstanding philosophical question.”

Accepting the award, Lex couldn’t help but demonstrate the reflective thinking that we know and love. She reflected on the inquisitive minds and commitment to doing good that she’d encountered at the conference and her excitement at being part of a profession capable of having such wide-ranging conversations.  At Gill Westhorp’s pre-conference workshop, they’d discussed ‘everything from the ontology of a tea cup to patronage systems in Laos’. 

Congratulations Lex. We’re sure there are more great things to come!


Can evaluation live up to its potential?


September 2017

By Jade Maloney, Andrew Hawkins, Sue Leahy, Alexandra Ellinson and Jane Ford

On a chilly Canberra morning, Sandra Mathison lit a fire under a few hundred evaluators at the Australasian Evaluation Society (AES) International Evaluation Conference.

“Evaluation is a helping profession,” she told us. But it has long underperformed on its promise to support the public good because it is constrained by dominant ideologies, is a service for those with resources, and works in closed systems that tend to maintain the status quo.

Mathison challenged us to think about how we could turn this around, suggesting two ways forward. The first was ensuring those who are meant to benefit from a program have input into evaluation and access to the findings so we are ‘speaking truth to the powerless’ (no, that’s not a typo – the more clichéd ‘speaking truth to power’, she said, was ‘a valiant but often futile act’). The second was more ‘independent’ evaluation – that is, evaluation independent of funding agencies and perhaps even independent of individual programs in ‘place-based’ evaluation. (Intrigued? Read about the rest of her speech in Stephen Easton’s piece in The Mandarin.)

On Day 2, Richard Weston, CEO of the Healing Foundation, took up the torch and challenged us to reflect on how evaluation could recognise and value Indigenous knowledges and collective as well as individual outcomes. And on Day 3, Dugan Fraser told us that we could conceptualise evaluation not as assessing efficiency and effectiveness, but as supporting democracy – as a means of empowering citizens to interact with power and ask questions.

The concept of ‘speaking truth to the powerless’ clearly cut across the conference sessions and resonated with the audience. Less certain was the response to Mathison’s call for more ‘independent’ evaluation. One member of the audience asked Mathison whether we should not instead be focused on democratising evaluation.

Certainly, what Mathison was proposing is not inconsistent with participatory approaches and design thinking (another big theme of the conference). But it does seem somewhat at odds with Patton’s call in Utilisation-Focused Evaluation to engage all key stakeholders in the evaluation process.

In their session on policy logic, Carolyn Page and Russell Ayres set out the value of bringing all levels of expertise to the table. Our Director Jade Maloney highlighted the importance of engaging stakeholders, forging relationships and communicating well to support use – findings from her research into evaluation use in the Australian context.

Others identified ways to hardwire evaluation into government infrastructure. Economist Nicholas Gruen championed the idea of an Evaluator General. In another session, the Commonwealth PGPA Act 2013 was identified as providing an opportunity if evaluation can take up the challenge – that is, if evaluation can help agencies understand their performance, alongside monitoring and KPIs. This will require better integration across evaluations.

Thankfully, the arguments about hierarchies of evidence that dominated conference discussions a decade ago seem to have fallen by the wayside in the wake of more nuanced discussions about what constitutes good evidence. At the conference dinner, Nicholas Gruen set the challenge for evaluation to ask the right questions and noted that randomised control trials (RCTs) are not the panacea they’re made out to be. (Yes, an economist said this!) The next morning, ARTD Director Andrew Hawkins extended this idea by outlining a framework for choosing different methods depending on the causal questions being asked and the resources available.

All this, and we haven’t even mentioned how realist evaluator Gill Westhorp gave us five different ways to think about classifying mechanisms and prompted us to think more about how we use theory in evaluation at all levels. But there’ll be time for that at the International Conference for Realist Research, Evaluation and Synthesis in October.

So we’ll end with the themes that will stick with us. We need to work meaningfully with beneficiaries, trial and fail fast, share and build on learnings from success and failure, break down silos and engage in cross-disciplinary conversations. If we do this, we believe evaluation will live up to its promise.

Let’s continue the conversation through AES network meetings until we all get together again next year!

 


aes17 International Evaluation Conference

August 2017

ARTD is looking forward to discussing how to make evaluation a more useful and effective governance tool with other members of the Australasian Evaluation Society (AES) next week at the aes17 International Conference in Canberra.

ARTD is again a sponsor of the event, which runs next week, 4–6 September, at the National Convention Centre. This year we’re sponsoring keynote speaker Sandra Mathison, who will examine whether evaluation contributes to the public good. For more information about the conference head to: http://conference2017.aes.asn.au/.

This year, two of ARTD’s directors are also presenting.

Andrew Hawkins’ presentation ‘Intervention: putting the logic back into logic models’ (Tuesday 5 Sep, 9.30am, Murray Room) will examine the shortcomings of logic models and challenge practitioners to better explain causal mechanisms. Click here for more information on Andrew’s presentation.

Jade Maloney’s presentation ‘Evaluation: what’s the use?’ (Tuesday 5 Sep, 3pm, Torrens Room) will highlight findings from her new research, which examines the problem of non-use and how practitioners can ensure their evaluations don't end up gathering dust on a shelf. Click here for more information on Jade’s presentation.

If you are at the conference, please come and speak to one of our senior staff.


Increasing access and inclusion in Fairfield


August 2017

On Monday 7 August, Fairfield City Council launched their disability inclusion action plan (DIAP). The plan details what Council will do to encourage positive community attitudes and behaviours towards people with disability, make it easier to get around the community, support access to meaningful employment opportunities and improve access to Council information, services and facilities.

The launch represented the completion of all local council DIAPs in NSW, required under the NSW Disability Inclusion Act 2014. It was also a valuable opportunity to connect the local community with disability service providers, with Council hosting information stalls for local NDIS-registered providers.

Fairfield City Council’s Mayor launched the Plan. The launch was also attended by the Minister for Disability Services, Ray Williams, who explained that disability inclusion action planning is both “unique and unprecedented in Australia. It means that State and Local Governments are working together to ensure people with disability can fully enjoy the opportunities available in our communities.”

ARTD worked with Fairfield City Council to develop their DIAP, using a co-design approach grounded in the principle of ‘nothing about us without us’. We consulted people with disability, their family and carers and local service providers to understand the barriers people face when interacting with Council and getting around the community, and what they saw as the priorities.

To ensure the consultation process was accessible and inclusive, we used accessible venues with hearing loops installed, and drew on bilingual educators to conduct consultations in local community languages. We also worked with our partner, the Information Access Group, to produce an Easy Read consultation guide, as well as the final DIAP in Easy Read format.

We then worked with Council staff and managers to develop a realistic plan of action for the next four years and indicators to track progress. 

You can access Fairfield City’s DIAP on the Council website.  

[Photo by Fairfield City Council, featuring Mayor, Frank Carbone; Fairfield City Leisure Centres trainer Fred Zhao; and the Hon Ray Williams MP]


Automation set to transform the government service landscape – 2017 IPAA Conference

July 2017

A new wave of digital transformation and automation is set to reshape the government workforce and the way government delivers services.

The 2017 IPAA conference – of which ARTD was a key sponsor – emphasised the need for the government sector to think creatively about how digital technologies will work for staff and clients.

ARTD Managing Principal Consultant, Sue Leahy, says that the clear message at the conference was that while the focus tends to be on the technology, managing digital change starts with understanding the people issues.

Managing change across the sector will mean understanding how digital technologies can be used to improve the workplace or client experience.

We need to put people at the centre of digital transformation. That’s where innovative models of engagement will come in – to ensure we deeply understand what people need from government services.

Artificial intelligence (AI) technology is also advancing rapidly and will soon transform the way we work in government, reducing transactional, information-sifting work and creating new roles. AI has the power to reduce paperwork and change the way we deliver services, but it will also require active change management as people’s roles are disrupted. New roles such as robot orchestrators, robot teachers, empathy trainers, business ethics reviewers (checking inputs and outputs) and privacy guardians will be created.

To ensure that we get the best out of the technology on offer, it will be important that we start with a deep understanding of the human aspects of the problems technological change aims to solve.


Co-design creatively boosts access to primary health services for Aboriginal families

July 2017

How do you make Victoria’s Maternal and Child primary health services more accessible and culturally appropriate for Aboriginal families?

That’s the question ARTD, the Victorian Government and Aboriginal people have been grappling with in an innovative co-design project aimed at understanding the barriers that prevent Aboriginal families from using the universal primary health services.

As part of the Victorian Government’s $1.6 million program, Roadmap for Reform: Strong Families, Safe Children, ARTD was commissioned to provide a way for Aboriginal communities to have a voice in designing the service models they need.

The co-design process recognises that Aboriginal people bring experience and expertise about accessing services to the table. It reflects a new approach to designing services for people rather than asking them to fit the mould.

ARTD used a range of narrative-based techniques to deeply understand the needs of the target group before collaboratively generating ideas with stakeholders about the best ways of meeting those needs.

The process produced a service model to support self-determination and deliver high quality, coordinated and culturally safe Maternal and Child primary health services for Aboriginal families that will be trialled in local government, Aboriginal Community Controlled Organisations (ACCOs) and in integrated partnerships. 


Digital disruption and the IPAA NSW State Conference

June 2017

Digital technologies are continuing to transform the way that governments operate and how they interact with citizens and clients of services. Increased mobility, connectivity and data storage capacity provide opportunities to improve services and productivity.

We work with government agencies aiming to harness these opportunities. We can support agencies to ensure their system design is user-centred and accessible to people of all abilities. We can also assist agencies to engage with their clients and citizens through secure online platforms with the ability to tailor communications and surveys to particular target groups. We have also designed an innovative app for agencies to monitor and capture their clients’ important life outcomes.

To support digital transformation in NSW, ARTD is sponsoring the IPAA NSW 2017 State Conference on 15 June. This year’s theme is ‘Innovating, Emulating and Enabling: How the digital transformation is changing public sector skill sets.’ More than 20 experts will speak on topics such as data security, big data, and how the NSW Public Sector is currently embracing digital technologies. For more details about the IPAA and the Conference, visit: https://www.nsw.ipaa.org.au/events/2017/conference.

If you’re at the conference come and speak to one of our staff at the coffee stand.


Find out more about the projects being funded to support the transition to the NDIS

April 2017

Wondering how people with disability and their families, as well as service providers, are being supported to make the transition to the National Disability Insurance Scheme? What about initiatives to grow the market and develop innovative support options that respond to the choices of people with disability?

The NDIA, the Commonwealth and State and Territory Governments are undertaking a range of activities to support the transition. A Sector Development Fund was also established to fund activities that support the transition between 2012–13 and 2017–18. Combined, these activities will assist in the establishment of a flourishing disability support market that offers people with disability choice and control.

To date, a range of initiatives supporting people with disability and their families, providers, and workforce and market growth have been funded through the Sector Development Fund. You can find out more about what these activities are and access the resources they have produced through ARTD’s profiles of the funded projects now on the NDIS website https://www.ndis.gov.au/sdf.html.  


ARTD’s Independent Review of Special Religious Education (SRE) and Special Education in Ethics (SEE) in NSW has been released

April 2017

In 2014, the NSW Department of Education commissioned an independent review of the implementation of SRE and SEE classes in NSW Government schools ‘to examine the implementation of SRE and SEE and report on the performance of the Department, schools and providers’. The Review was commissioned in response to Recommendation 14 of the Legislative Council General Purpose Standing Committee No. 2 Report No. 38, Education Amendment (Ethics Classes Repeal) Bill 2011 (May 2012), which also specified areas for the review to cover. These became the basis of the Terms of Reference:

1. The nature and extent of SRE and SEE

2. Department of Education implementation procedures for SRE and SEE including: parent/ carer choice through the enrolment process and opting out; approval of SRE and SEE providers by DoE; authorisation of volunteer teachers and curriculum by providers

3. Development of complaints procedures and protocols

4. SRE and SEE providers’ training structures

5. Registration of SRE and SEE Boards, Associations and Committees

6. New modes and patterns of delivery using technology

7. Pedagogy, relevance, age appropriateness of teaching and learning across all Years K to 10 and teaching and learning in SEE in Years K to 6 in a variety of demographics

8. The need for annual confirmation by parents and caregivers on SRE choice or opting out

9. Review of activities and level of supervision for students who do not attend SRE or SEE.

 The Review examined the implementation of SRE and SEE in NSW Government schools between December 2014 and September 2015. This report outlines findings related to each Term of Reference and makes recommendations. Because SRE and SEE are quite distinct, they are dealt with separately throughout this report. In the current context, there are polarised views in the community about the place of SRE or SEE in NSW Government schools. While the continuation of SRE or SEE in NSW Government schools is out of scope of this Review, this was a concern for many people and influenced responses to the Review.

 Review methodology

 The Review used a comprehensive mix of methods to collect quantitative data across all schools, and the wider community, as well as in-depth and qualitative data from key stakeholders. The methods were chosen to allow all interested stakeholders and the community the opportunity to present their views so that the findings and recommendations are based on a systematic and balanced assessment. Evidence was reviewed and data collected between December 2014 and September 2015.

 The main methods for the Review were:

 ▪ Document scan. Departmental and provider documents/ websites were reviewed, including the 2014 and 2015 SRE and SEE policy and implementation procedures, and the websites of all current providers in December 2014 for their SRE or SEE curriculum scope and sequence documents and outlines.

 ▪ Curriculum review. An experienced education expert conducted a systematic criterion-based assessment of curriculum materials, based on materials from current SRE providers and the current SEE provider.

 ▪ Consultations:

Surveys and interviews. Systematic data were collected via surveys of key stakeholder groups. Opportunity to respond was offered to all principals (46% response rate), all SRE and SEE providers (80% response rate), all providers’ SRE coordinators (60% response rate) and all SEE coordinators (48% response rate). SRE and SEE teachers contributed via an online portal. These data were complemented by semi-structured interviews with members of the program evaluation reference group, and with peak provider, education and other relevant groups.

Case studies. To examine how SRE and SEE are delivered in schools at the local level, the Reviewers undertook 14 case studies involving 12 SRE providers from 11 faith groups, and two case studies of the delivery of SEE. The case studies used face-to-face interviews with coordinators, teachers, principals and other stakeholders. They were effective in telling the story of local delivery in very different contexts.

Online community consultation. To collect perspectives from the broader community under the Terms of Reference, online contribution portals for parents/caregivers and other interested parties were set up and remained accessible for six months. The Review received over 10,000 responses, reflecting the high level of interest in sections of the community. The Reviewers recognise that, while the responses reflect significant issues for those who responded, to some degree they reflect the two polarised positions in the community around SRE and SEE and cannot be considered representative of the whole NSW community. Indeed, the Reviewers are aware that some groups were active in encouraging their constituents to contribute, and in some cases suggested wording.

 Confidence in the findings

Overall, the Reviewers are confident that the findings from these methods reflect the broad patterns of implementation of SRE and SEE and provide a sound basis for addressing the Terms of Reference and making suitable recommendations. The methods were implemented effectively and there was a high degree of consistency between the wider findings from the surveys, the interviews and group discussions with significant stakeholders, and the on-ground findings from the local case studies. The data from the online contribution portals are less balanced and have been used with caution, but they are generally not inconsistent with the other methods and have been useful in raising issues.

 

The Review made fifty-six recommendations, based on the evidence presented in the report.

 You can access the full report on the Department of Education website using this link.

 The release of the Review and the NSW Government’s response was reported in The Guardian, The Sydney Morning Herald, The Australian and the Daily Telegraph.


Youth Frontiers evaluation released


April 2017

The findings of our evaluation of Youth Frontiers—a NSW-wide mentoring program—have been released by the Youth Participation & Strategy Unit in the NSW Department of Family and Community Services (FACS). Overall, the evaluation found the program had an impressive reach across NSW and was achieving positive developmental outcomes for young people. However, it is not making a direct contribution to increasing community participation, so there are some opportunities to strengthen the program’s design.

The report is being used to inform implementation of the program in 2017 and strategic decisions about the program mix that will support FACS' priorities for youth development in NSW. These priorities are set out in the NSW Strategic Plan for Children and Young People.

This report follows on from our evaluation of the initial pilot of the program, the findings of which were used to inform the ongoing roll-out of the program by the four funded providers delivering mentoring to students in Years 8 and 9.

ARTD is continuing to support successful implementation of the program through a monitoring system that includes post-program surveys of mentors and mentees, and reporting on the results.


Planning with people with disability

March 2017

Want to get the voice of people with disability in policy planning or program evaluation, but unsure how to go about it?

In NSW, government agencies and local councils need to develop disability inclusion action plans (DIAPs) with people with disability. More broadly, as policy co-design catches on, government agencies need to make sure consultation processes are accessible and inclusive.

A commitment to inclusion and attention to access upfront and along the way will enable people with disability to shape policies that affect their lives and communities.

When our Principal Consultant, Jade Maloney, worked with Mosman Council on their DIAP, both organisations had a clear commitment to making the process and the final product accessible and inclusive. This meant using accessible venues and working with our partner, the Information Access Group, to produce an Easy Read consultation guide, as well as converting the final Mosman DIAP into Easy Read.

Easy Read presents information in a form that is easy to understand. In its simplest form, it uses images to support text, a large font size and plenty of white space to increase readability. It makes documents more accessible for people with intellectual disability, people with literacy issues and people who speak English as a second language.

“We’re committed to ‘nothing about us without us’”, Principal Consultant, Jade Maloney, says. “But we’re also conscious that it can be very difficult for people with disability to even get into community consultations, let alone share their views. So we need to make sure we provide information in accessible formats, use venues and formats that are accessible, give people time and different options to share their views.

“In recent years, we’ve worked with the Information Access Group on a range of policy development and evaluation projects. We’ve found the Easy Read versions have worked well to break down complex concepts and give people with intellectual disability the space to share their views.

“When we ran the public consultation sessions for the National Disability Insurance Scheme (NDIS) Quality and Safeguard Framework, we also developed a process to make sure we’d covered off all access requirements, from physical access for people with limited mobility, to hearing loops and closed captioning for people with hearing impairments, to Auslan interpreters for those who use sign language.

“We’re currently putting this experience into practice again working with Fairfield City Council to develop their DIAP."

You can see what actions Mosman Council is taking to increase inclusion and job opportunities for people with disability, and to make it easier to get around the community, in its DIAP.


Devil in the detail: Why ethics processes for evaluation must be revisited

March 2017

Formal ethics approval processes designed to protect study participants are often causing their own unintended harms in the context of program evaluation.

This is the somewhat controversial message Principal Consultants Sue Leahy and Jade Maloney gave to a full Australasian Evaluation Society (NSW) forum, hot on the heels of their presentation at last year’s AES conference in Perth.

They don’t deny that ethical practice is critical to evaluation; they question whether formal ethics approval processes are always needed or always result in an ethical evaluation.

“It’s our experience that the rules around ethics approval are often applied differently across government and that the requirement for external approval can mean the voice of the ’vulnerable’ people accessing the service is not heard in the evaluation because of the timeframes or because the consent process is overwhelming,” Jade says.

When it comes to consulting with Aboriginal communities in NSW, it gets even more difficult.

“While the principles around ensuring Aboriginal communities have ownership of the research in their communities are right,” Sue says, “we question the characterisation of all Aboriginal people as vulnerable. We also find that the timeframes for approval through the Aboriginal Health and Medical Research Council can mean consultation with Aboriginal communities is squeezed out of the evaluation timeframe.”

The main problem seems to be that ethics processes were set up to approve clinical trials, but evaluations are not interventions. That’s why ARTD is driving a discussion in the evaluation community about options to improve the process.

AES members at the forum recognised that ethics approval processes bring significant benefits to the evaluation process, but they agreed change is needed. Over the coming months, ARTD hopes to explore options with colleagues, and to work on guidelines to help governments identify when they need formal ethics approval processes.  


Capturing outcomes with Caretakers Cottage

February 2017

Through the Expert Advice Exchange (EAX), ARTD has worked with Caretakers Cottage to capture the outcomes of their transitional accommodation service, Options Youth Housing.

The EAX, an initiative of the NSW Government’s Office of Social Impact Investment, connects non-government organisations and social enterprises with leading legal and professional services firms, financial institutions and other companies to provide pro bono support on growing and sustaining their impact.

When Caretakers Cottage first came to us, they said they needed some support with strategic planning. But after our Managing Principal Consultant Sue Leahy and Senior Consultant Alexandra Ellinson talked with them further and did a detailed SWOT analysis with the team, we realised that what they needed was to capture their outcomes. So we worked with them on a tool to capture client wellbeing in seven domains, from living skills and education to accommodation, family connections, health, safety and wellbeing. By applying the tool at entry, every three months during the service, and after leaving, caseworkers systematically collect information to assess and document changes. Caseworkers can also use the assessment as an opportunity to have supportive conversations with clients about their wellbeing, and to develop a shared understanding of their progress.
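To make the idea of a domain-based outcomes tool concrete, here is a minimal sketch in Python of how records like this could be structured, with ratings captured at entry, at three-monthly reviews and at exit. The domain names, rating scale, class names and functions are illustrative assumptions only; they are not the actual instrument developed with Caretakers Cottage.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

# Hypothetical wellbeing domains and a 1-5 rating scale; the real tool may differ.
DOMAINS = [
    "living skills", "education", "accommodation",
    "family connections", "health", "safety", "wellbeing",
]

@dataclass
class Assessment:
    """One caseworker assessment: a rating for each domain at a point in time."""
    when: date
    stage: str  # "entry", "review" (every three months) or "exit"
    ratings: Dict[str, int] = field(default_factory=dict)

@dataclass
class ClientRecord:
    """All assessments for one client, so change can be tracked over time."""
    client_id: str
    assessments: List[Assessment] = field(default_factory=list)

    def add_assessment(self, assessment: Assessment) -> None:
        self.assessments.append(assessment)

    def change_since_entry(self) -> Dict[str, int]:
        """Difference between the most recent and the entry ratings, per domain."""
        if len(self.assessments) < 2:
            return {}
        ordered = sorted(self.assessments, key=lambda a: a.when)
        entry, latest = ordered[0], ordered[-1]
        return {
            d: latest.ratings[d] - entry.ratings[d]
            for d in DOMAINS
            if d in entry.ratings and d in latest.ratings
        }

# Example: one client assessed at entry and again at a three-month review.
record = ClientRecord("client-001")
record.add_assessment(Assessment(date(2016, 8, 1), "entry", {d: 2 for d in DOMAINS}))
record.add_assessment(Assessment(date(2016, 11, 1), "review", {d: 3 for d in DOMAINS}))
print(record.change_since_entry())  # every domain improved by 1
```

In a sketch like this, the entry assessment acts as the baseline, and each review or exit assessment can be compared against it to document change over time.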

Caretakers Cottage was concerned about not creating more work for the sake of doing more work. But they have found that the tool has helped them to better monitor their work and to focus on what the client needs. As well as increasing the staff’s capacity to consistently assess client wellbeing, it really energised the staff because they are now collecting evidence that shows the impact of their work. 


NDIS Quality and Safeguards Framework released

February 2017

The Framework sets out a nationally consistent approach to quality and safeguards for the NDIS to be introduced in 2018–19. This is needed to ensure that capability is built in the new market-based system, the rights of people with disability are upheld, the benefits of the NDIS are realised, and that people have access to the same safeguards no matter where they live.

The Framework is aligned with the NDIS principle of choice and control and takes a risk based approach. It consists of developmental, preventative and corrective measures targeted at individuals, the workforce and providers. Key components are:

  • an NDIS Registrar to manage the NDIS practice standards and certification scheme, establish policy settings for worker screening, register providers, oversee provider compliance, and monitor the effectiveness of the NDIS market
  • an NDIS Complaints Commissioner to facilitate the resolution of complaints about providers of NDIS-funded supports and investigate serious incident reports and potential breaches of the NDIS Code of Conduct
  • a Senior Practitioner to oversee approved behaviour support practitioners and providers, provide best practice advice, review the use of restrictive practices and follow up on serious incidents related to unmet behaviour support needs.

ARTD worked with DSS, State and Territory Governments and the NDIA to develop the Framework. The voices of people with disability, their family members and carers, service providers, advocacy groups and representatives of professional organisations, captured through public consultations and submissions, informed the Framework’s design.

You can access the framework here https://www.dss.gov.au/disability-and-carers/programs-services/for-people-with-disability/ndis-quality-and-safeguarding-framework-0. You can also see our report on the consultations here https://engage.dss.gov.au/wp-content/uploads/2015/11/consultation_report_ndis_quality_safeguarding_framework.pdf.

Governments have indicated there will also be further opportunities to contribute to the Framework in the design and implementation phases.


Broadening your evidence cost-effectively

February 2017

ARTD Director, Andrew Hawkins, is speaking at the Strengthening Evidence-Based Policy: Improving Program Outcomes and Success conference in Canberra on 21 March 2017. He joins other thought leaders in policy, research and evaluation to discuss how sound evidence can be used to improve policy.

Andrew’s presentation will draw implications from the academic research to meet the needs of policy makers for cost-effective monitoring and evaluation of interventions into complex systems. He will challenge the audience to move away from a set of assumptions about measuring outcomes that are useful when evaluating relatively simple things—like fertilisers, drugs and bridges—towards methods that are more appropriate when dealing with new or developing interventions in very complex systems. Cutting through the complexity, he will provide a step-by-step guide for monitoring and evaluation investment decisions that are appropriate to the nature of the intervention, the questions being asked, and the time and resources available.

Early bird registrations for the two-day conference close on 10 February.