News & Blog

Applied ethics in program evaluation


April 2019

By Lia Oliver

For evaluators, ethics is central to deciding if, when and how we conduct our projects. Ethical decisions shape the validity and reliability of the evaluations we produce. We also know that clients of government social policies and programs are often vulnerable community members, so it's paramount that evaluators of these programs engage sensitively and ethically.

Our recent internal training, led by Brad Astbury, was a chance for us to refresh our understanding of ethics and flex our ethical reflexes through some challenging case studies.

The session reinforced why applied ethics in program evaluation is so important.

Evaluation is the most ethically challenging of the approaches to research inquiry because it is the most likely to involve hidden agendas, vendettas, and serious professional and personal consequences to individuals. Because of this feature, evaluators need to exercise extraordinary circumspection before engaging in an evaluation study. (Mabry, 1997, p.1)

Ethics is about more than HREC processes

In some evaluations, we must seek formal approval from a Human Research Ethics Committee (HREC), and this is often what evaluators think of when they think of ethics. HREC applications address ethical concerns, such as informed consent, voluntary participation, confidentiality, privacy, anonymity and benefits and risks of the research.

While these are all crucial, a successful ethics application does not guarantee an ethical evaluation. Evaluators need to consider ethics beyond the procedural aspects of an ethics application. Ethical awareness should be embedded throughout a project – from planning processes and partnerships that recognise and explicitly address the need for respect, through to sharing evaluation findings with program stakeholders. This ensures the evaluation is honest, transparent and credible.

Be wary of ethical fallacies

House (1993) outlined the ethical fallacies that frequently lead to ethical mistakes in professional evaluation:

  • clientism: doing whatever the client wants
  • managerialism: seeing program managers as the sole beneficiary
  • methodologism: believing that adherence to good methods is synonymous with being ethical
  • relativism: accepting everyone’s opinion equally
  • pluralism/elitism: giving a priority voice to the more powerful.

These ethical fallacies highlight the need to capture diverse stakeholder perspectives in evaluation so that we can understand the nature and role of different value perspectives. Some evaluation sponsors have powerful vested interests, and it is important that evaluators can identify these to ensure they do not bias the findings. Often evaluators will need to balance the needs and voices of the client or program funder against those of program beneficiaries to reach fair, accurate, representative and valid findings (House, 2008).

Normative ethical frameworks

Three main normative ethical viewpoints exist (Newman & Brown, 1996). These all relate to the principles of autonomy, nonmaleficence, beneficence, justice and fidelity, and often form the basis of the codes of conduct used to identify appropriate ethical behaviour.

  • Consequentialist: drawing on utilitarianism to achieve maximum happiness and the greatest good for the greatest number.
  • Deontological: focusing on the morality of an action and whether the action itself is right or wrong, based on explicit rules or duties, rather than looking only at its consequences.
  • Virtue-based: focusing on altruism.

Ethics in practice

For evaluators, ethical dilemmas can occur frequently across projects and sectors. Often, evaluators resolve ethical dilemmas by drawing on their intuition, talking to peers, and reviewing and reflecting on past experiences. Evaluators can also draw on relevant ethical guidelines such as the American Evaluation Association Guiding Principles for Evaluators (American Evaluation Association, 2018), the Program Evaluation Standards (Yarbrough et al., 2011) and the AES Code of Ethics (Australian Evaluation Society, 2013). Other useful resources include ethical decision-making flowcharts that help you map your ethical decision making to a particular action (Newman & Brown, 1996).

In our workshop, we discussed three different case studies in small groups. We identified the ethical concerns, developed a response to each and discussed how the concerns would affect the evaluation. While each group had the same evaluation codes of conduct as a reference point, the groups developed a range of solutions.

We found that ethical dilemmas can be interpreted differently through the lenses of different codes of conduct, ethical viewpoints and evaluators' own experiences. When trying to resolve ethical dilemmas, there is never just one simple answer. Even so, we cannot shy away from deep engagement with and reflection on ethical practice in evaluation, as this ensures our practice benefits rather than harms participants, and that our findings are useful and credible.

Reference List

American Evaluation Association (2018). Guiding Principles for Evaluators. [online] Available at: https://www.eval.org/p/cm/ld/fid=51 [Accessed 15 Apr. 2019].

Australian Evaluation Society (2013). Code of Ethics. [online] Australian Evaluation Society. Available at: https://www.aes.asn.au/images/stories/files/membership/AES_Code_of_Ethics_web.pdf [Accessed 15 Apr. 2019].

House, E. (1993). Professional evaluation: Social impact and political consequences. Sage.

House, E. (2008). Blowback: Consequences of Evaluation for Evaluation. American Journal of Evaluation, 29(4), pp.416-426.

Mabry, L. (1997). Evaluation and the postmodern dilemma. JAI Press.

Newman, D. L., & Brown, R. D. (1996). Applied ethics for program evaluation. Thousand Oaks, CA: Sage.

Yarbrough, D., Shulha, L., Hopson, R. and Caruthers, F. (2011). The program evaluation standards: A guide for evaluators and evaluation users. 3rd ed. Thousand Oaks, CA: Sage.


The communications challenge: how can evaluation cut through?


April 2019

By Jade Maloney, Emily Verstege and Ruby Leahy-Gatfield

Evaluation is about assessing the ‘merit, worth or value’ of something (in our context, usually a policy, an intervention or a program) to inform decision-making. The significant expenditure on evaluation, particularly by government, is justified by its potential to improve outcomes and ensure limited resources are targeted effectively. But evaluation reports often go unused, gathering dust on shelves.

A range of factors contribute to this (see Michael Quinn Patton’s Utilisation-Focused Evaluation and Jade’s research on evaluation use in the Australian context). Poor communication is one of these factors.

The Information Age

This is the age of information overload, and evaluation is unlikely to be at the top of your audience's to-do list.

  • High volume: We're overwhelmed by our email and social media accounts, big data and the wealth of references at our fingertips. There's every chance your signal will be lost if it doesn't cut through all this noise.
  • Low memory: The human brain can only process a small amount of new information at once – the average person can hold only about four chunks in working memory at a time. Is your chunk interesting enough to take the place of what your audience is having for lunch?
  • Jargon over meaning: It’s too easy to say things like “We used a theory-based approach to assess an innovative intervention to identify best practice components to leverage for future scalability.” But no one will know what you mean. (And do you?!)

The key communication issues

Evaluation reports can fail on communications in many ways. 

  • Complexity: Too much information is overwhelming. It makes it hard for your audience to know what’s important and even harder to remember it.
  • Misunderstanding: Too little information leaves your findings open to misinterpretation. Jargon contributes to this.
  • Boredom: Overly complicated, hard-to-understand information makes it too easy for your audience to check out. A report needs to engage to have a chance of competing with everything else going on around them.

When a report does not tell a story that engages, when key questions are left unanswered, and when the jargon overtakes the words, reading an evaluation report becomes a chore rather than a gift.

Our clear communications journey

Jade studied journalism and began her career in publishing before joining ARTD in 2008; she completed her Master of Arts in Creative Writing in 2013. Emily has worked as a science writer and editor and – as her school report cards note – has always been a competent verbal communicator. Ruby worked in the communications team at International IDEA in Stockholm and has been known to stick grammar jokes up on the office walls.

Jade and Emily have been thinking about communication in evaluation for years. They first presented on How Effective Communication Can Help Evaluations Have More Influence at the Australian Evaluation Society International Evaluation Conference in Sydney in 2011. They’re both relieved and disappointed that much of their advice remains relevant.

Our favourite books about communication include Gabrielle Dolan’s Stories for Work, Chip and Dan Heath’s Ideas That Stick, Neil James’s Writing at Work, and Mark Tredinnick’s The Little Red Writing Book.

Our collective frustration is the wilful misuse of the possessive apostrophe (anyone for scrambled egg’s?) and evaluations that don’t reach their potential because they get stuck behind communication barriers.

The tips to come

In this seven-part Communication for Evaluation series, we’ll step you through our top tips for evaluation communications that cut through. We’ll cover ways to:

  • Know your audience
  • Make it a process not just a product
  • Structure it
  • Tell stories
  • Cut the clutter (use plain English)
  • Hack yourself to write better.

Amplifying social impact


April 2019

By Jack Cassidy and Ken Fullerton

How can organisations improve their evidence-based decision making and, ultimately, their social impact? This was the key focus question in the latest AES professional learning seminar held in Sydney on 21 March.

Lena Etuk of the Centre for Social Impact (CSI) – a collaboration between UNSW, The University of Western Australia and Swinburne University of Technology – presented one approach to answering this question, describing CSI’s Amplify Social Impact project and how they aim to help organisations measure their social impact achievements.  

A major focus of the Amplify project is connecting and working with organisations addressing social issues in the areas of Housing Affordability and Homelessness, Education, Financial Wellbeing, Social Inclusion and Work. The project has three aspects: developing an evidence base and research agenda for key social issues, engaging and connecting stakeholders to design and implement innovative solutions, and developing an online platform for understanding social problems, measuring social impact, and reporting and benchmarking social outcomes.

Lena’s presentation stimulated a lot of debate in the audience.

Outcome measurement

Lena, while acknowledging the significant amount of work and investment going into social purpose programs and initiatives in Australia, identified two related concerns with outcome measurement in the sector.

  1. Organisations aiming to have a social impact often use a wide variety of tools to measure their outcomes.
  2. A large proportion of the tools being used to measure social impact, including population and household level surveys, have not been validated.

This means that those of us working in the social purpose sector are often unable to compare outcomes across similar services and programs, and unable to truly know that we are measuring the outcomes we say we are measuring.

CSI have collated a range of instruments and developed a register (called Indicator Engine) of over 700 indicators tailored for use by government, non-government organisations and social entrepreneurs.

Benchmarking

CSI’s intention is to develop a platform (called Yardstick) in which organisations striving to contribute to the same social outcomes can benchmark their performance, learn from one another and potentially replicate elements of successful projects.

The platform will be open to, and is designed to be of benefit to, organisations of all types and sizes, governments, existing networks, academics, social entrepreneurs and other stakeholders working to address social issues.

Anticipated challenges

While the intent is noble, it is likely that organisations and evaluators who choose to use Amplify’s Indicator Engine and Yardstick platform will encounter challenges.

  • What if the indicators don’t meet the needs of the project managers to understand how their intervention is working?
  • How will the identified instruments be further tested with Australian populations, including Indigenous populations?
  • What certainty will users of the platform have about data quality?
  • Is benchmarking the right term? Is the intention to define a certain standard?
  • What other information do you need to understand the drivers of differential outcomes so that benchmarking can serve its purpose of enabling organisations to make improvements?

We believe benchmarking could bring new opportunities and challenges to the sector.

Imagine a hypothetical scenario involving a youth job creation program in a large urban area and a similar program rolled out in a regional or remote community. An important outcome of both programs is enhanced job satisfaction, so they use the same indicator but achieve different results. From an evaluator's perspective, the Amplify platform may be useful because it could enable helpful comparisons of data, but it might also fail to collect the (accurate) data needed to sufficiently understand contextual drivers of difference.

It's possible that other actors could use Amplify for different purposes, such as performance monitoring. Having accurate, validated data could also be important to funders, who have limited resources but need to decide which program(s) to fund going forward. The possibility of funders using Amplify to monitor an organisation's performance may deter some organisations from participating, as they may fear that their performance (according to the data) is not strong enough to attract additional or new funding. We feel that benchmarking needs to be done carefully to maintain a collaborative spirit between organisations. It shouldn't be about who's outperforming whom.

What next?

Despite these issues, there are reasons for evaluators to be excited about the Amplify Social Impact project and CSI’s work.

So far, CSI has secured only around 50% of its funding target – securing the outstanding funding will enhance its ability to work closely with organisations designing and implementing social purpose programs. In turn, this could create new opportunities for strengthening outcomes measurement.

The Amplify project is likely to generate further discussion around indicators, benchmarking and addressing social issues. This is positive as it could lead to individuals, governments and organisations becoming more aware of these issues, sharing learnings and increasing opportunities for collaboration and replication.   

A cynic’s response

ARTD Partner, Andrew Hawkins, believes this endeavour should be entered into with great caution and with humility about what can be achieved.

Reliable and valid measures are great, but they don't ensure reliable and valid measurement of outcomes. Most psychometric scales are designed to measure a current state – self-esteem, coping skills and so on. These scales are not usually designed to be sensitive to changes brought about by a program, especially one operating in a complex adaptive system (i.e. society), where changes may be non-linear and standardised measures may not pick them up.

More fundamentally, the method by which measures are taken and the completeness of the data are huge potential sources of error that may well dwarf the benefit of using similar measurement tools. What if one organisation hands out the measurement tool at the end of a session while program staff are present, while another mails it out and gets low response rates? What if one has a control group with random allocation to deal with attribution error and another does not? There are innumerable ways that error can seep in and pollute so-called ‘equivalent’ measures of outcomes. And then, if we follow Rossi's law – that the more rigorous the measurement, the more likely the result is ‘no effect’ – we could end up funding organisations with less rigorous measurement, and programs with easy to ‘measure’ outcomes or relatively ‘easy to change’ cohorts.

While there is merit in providing and using similar reliable and valid measures across interventions, it would be dangerous to create ‘league tables’ of organisations and programs. It would be wholly unacceptable to let this happen by publishing data when the actual measures are not taken in a standardised and systematic manner and have not been shown to be sensitive to the changes an intervention is making.

…………….

We look forward to attending the AES Professional Learning session on ‘Harnessing the promise and avoiding the pitfalls of machine assisted qualitative analysis’ on 2 May, presented by ARTD's Jasper Odgers and David Wakelin and our partner at Altometer BI.


Safer Pathway evaluation released


April 2019

By Ruby Leahy Gatfield and Fiona Christian

ARTD’s evaluation of Safer Pathway found the program is delivering a consistent, effective and timely response to victims across NSW.

Safer Pathway is a key initiative under the NSW Domestic and Family Violence Blueprint for Reform 2016–2021, led by Women NSW and delivered by the Department of Justice. Rather than an individual service or program, Safer Pathway provides state-wide service system infrastructure designed to ensure all victims of domestic and family violence in NSW receive a timely, effective and consistent response, regardless of where they live. It offers victims a tailored, coordinated service based on their needs and the level of threat to their safety. 

Women NSW engaged ARTD to conduct an independent evaluation of Safer Pathway over 2017-18. 

What did we find?

Using a realist-informed, mixed-method approach, we found that the initiative has been implemented largely as intended and is generally meeting its intended objectives. As a result of Safer Pathway:

  • victims' safety is being routinely assessed by NSW Police, and victims at serious threat are being prioritised throughout the Safer Pathway service response
  • a single, streamlined referral pathway has replaced the previous service fragmentation and duplication, helping victims to access the support they need and facilitating information sharing between service providers to prevent threats to a person’s life, health or safety
  • there is now a standard level of service for victims across NSW, with victims at high risk receiving a more consistent, coordinated response across locations and service providers.

We also identified 23 recommendations to improve the service model and delivery. These related to:

  • expanding referral pathways from other agencies and services
  • continuing to provide and strengthen state-wide training
  • revising the current Domestic Violence Safety Assessment Tool to enhance its predictive ability
  • investigating strategies to engage hard-to-reach groups and address service gaps
  • developing and implementing more systematic data collection for monitoring and evaluation.

Download the full report here: https://www.women.nsw.gov.au/download?file=650328

What’s next?

Women NSW and partner agencies are using the evidence and recommendations in the report to strengthen the service response to victims of domestic and family violence in NSW, as documented in their response to the report.

Hayley Foster, NSW Director of the Women's Domestic Violence Court Advocacy Services (a key Safer Pathway service provider) is calling on all political parties to “pay attention to these important evaluation findings and what they tell us about what is working and what is needed.”


Reducing our waste with Bin Trim


March 2019

By Ken Fullerton

Waste reduction is an important part of our commitment to be an environmentally sustainable business. We believe that it is not only good for people and the planet, but also saves costs and highlights our intention to act rather than just talk about sustainability issues.

According to the NSW Environment Protection Authority (EPA), about 70% of items placed in a general waste bin can be reused or recycled instead of being sent straight to landfill. Each year, NSW businesses send more than 1.8 million tonnes of waste to landfill, so diverting 70% of it would prevent around 1.3 million tonnes from ending up there.

We have had a long-term commitment to become more environmentally sustainable. To help us identify opportunities to reduce our waste, enhance our existing recycling systems and further educate our staff, we recently underwent a free Bin Trim waste and recycling assessment.

What is Bin Trim?

Developed by the EPA’s Business Recycling Unit under the Waste Less, Recycle More initiative, Bin Trim aims to ‘help businesses take action on waste.’ We first learnt of the program when we conducted an outcomes evaluation of the first phase of the Waste Less, Recycle More initiative for the EPA from early to mid-2018.

Waste Less, Recycle More is the largest waste and recycling funding program in Australia. By undergoing an assessment, we joined over 22,000 businesses across NSW that have committed to helping protect the environment by increasing recycling.

In 2017, the NSW Government announced an extension to the Waste Less, Recycle More initiative and committed an additional $337 million over four years from 2017–2021.

What did the assessment involve?

Our assessment involved an EPA-approved assessor from Cool Planet visiting our Sydney office.

In our initial meeting, the assessor helped us better understand our existing waste and recycling processes, the different types and quantities of waste we generate as a business and different recycling options available in our area. He suggested we set up two new dedicated recycling bins to separate our recycling materials – one for paper and cardboard waste and one for products such as tins and plastic containers – and provided these bins (free of charge).

The new bins complemented our existing dedicated bin for soft plastics. Our soft plastics waste – such as plastic bags and packet wrapping – is later transferred into a REDcycle drop-off bin and used to produce a range of products including decking, furniture, signage and even new roads.

Following the visit, our assessor prepared a tailored action plan and registered us on the EPA’s online Bin Trim App. This enables our recycling methods and achievements to be reported and for us to become part of an online community of businesses committed to increasing recycling across NSW. It’s a way to learn about other innovative recycling approaches businesses may be using.

In his second office visit, our assessor explained the action plan. This included putting up dedicated signs beside our existing and new bins to make it clear exactly what can and cannot be recycled, and which bin specific waste items should go in.

What was the impact?

We are actively using our new bins to separate and recycle waste items and have observed less waste being disposed of in our general waste bin. We have also updated our contract with our cleaning company to ensure that our office waste is put into the correct recycling channels once it leaves our office.

Our staff have commented that the Bin Trim signage has been helpful and there have been more informal office conversations about what we should do with our waste and where it ultimately ends up. We are always on the lookout for new ways to become more sustainable – from recycling office materials and using energy efficient lightbulbs, to purchasing eco-friendly products and tending to our worm farm, already much loved by our staff!

The experience was an educative one and we would certainly encourage other businesses across NSW to take advantage of the EPA’s Bin Trim program by signing up to receive their own free assessment. Reducing waste and increasing recycling requires collective effort at the individual, organisational and structural levels. Through businesses making small improvements we can make substantial positive impacts on the environment.


How can evaluation support (not thwart) community development?


March 2019

By Jade Maloney and Ruby Leahy Gatfield

As evaluators, it's tough to hear the criticisms levelled at evaluation by community development theorists and practitioners. But, listening to their perspective, we can see how certain approaches to evaluation – coupled with certain expectations from funders – could thwart rather than support community development initiatives.

What’s the problem?

Traditional formative and summative evaluation approaches don't align with the iterative and emergent nature of community development. At the start of a community development initiative, it is not clear what it will look like, how it will be delivered, or even what outcomes it will aim to achieve. Initiatives continue to evolve to reflect community needs – they don't become settled models with pre-determined outcomes.

Community development practitioners question how evaluations will be used in funding decisions, the usefulness of evaluation for their purposes, the resources involved, and whether evaluation is up to the task of capturing the value of their work.

What evaluation approaches might better serve community development?

There are more ways to approach evaluation than we can count. If evaluators are to truly support, not thwart, community development, we need to understand the context – the complex adaptive systems – in which interventions are developing, and align our practice with the philosophy of community development by being organic, responding to community processes and sharing ownership.

In our experience evaluating community development, the following approaches are more appropriate and particularly useful.

  • Developmental evaluation (Patton, 2011): this approach recognises the iterative and emergent nature of community development and provides a framework for systematically supporting evidence-informed decision-making about the ongoing development of an initiative. Accountability is centred with those delivering the initiative and tied to their values.
  • Empowerment evaluation (Fetterman, Kaftarian & Wandersman, 2015): this approach was designed for community organisations, and gives them ownership of the evaluation, with the support of an evaluator as a ‘coach’. The principles align well with those of community development.
  • Principles-focused evaluation (Patton, 2018): this approach enables an assessment of principles – whether they can guide the work, and whether they are useful, inspirational, developmental and evaluable. This is useful for community development, which is generally guided by principles, not set program models.

Where can I read more?

Check out our article in a special edition of the Social Work and Policy Studies: Social Justice, Practice and Theory journal for more detail and a case study of how we’ve applied these approaches to a national community development initiative.

There is also a lot of other great content in the special edition, which comes at an important time for community development. As Howard and Rawsthorne identify in their editorial, we need to take care that as we benefit from the shift toward individual choice and control, we do not lose the value of collective ideas and actions. In their article, Hirsch et al. outline the implications of the changing face of disability and refugee services and Mapedzahama describes the significance of race in community development in Australia. These articles identify important considerations for those working with community in Australia, including evaluators.

You can also read our tips for monitoring and evaluating community development in the Ability Links NSW Community Development Resource Package, which was developed to support thinking about the inclusion of people with disability in community development, and our blog about our use of developmental evaluation with Dementia Australia.


How can evaluators #BalanceforBetter?


March 2019

By Jade Maloney and Ruby Leahy Gatfield

This year's International Women's Day challenged us to think about what we can do to #BalanceforBetter. As our staff celebrated today, we started to think about what we can do as evaluators to support a more balanced world.

Have you heard of feminist evaluation? It draws on feminist theory and is not so much an approach as a way of thinking about evaluation. Feminist evaluators may draw on participatory, empowerment and democratic approaches to evaluation – valuing diverse voices. Feminist evaluation recognises that evaluation is inherently political – it is not only stakeholders who bring particular perspectives to an evaluation, but evaluators too – and encourages evaluators to use evidence to advocate for changes that address gender inequities. Want to find out more? See this Better Evaluation blog on Feminist Evaluation.

Better Evaluation also has a page on Gender Analysis, which explains the difference between definitions of gender as category versus gender as process of judgement and value related to stereotypes of femininity and masculinity, and suggests steps for gender analysis in evaluation.

UN Women also have resources: Inclusive Systemic Evaluation for Gender equality, Environments and Marginalized voices (ISE4GEMs): A new approach for the SDG era, which provides theory and practical guidance, and an Evaluation Handbook: How to manage gender-responsive evaluation, which provides guidance on gender-responsive evaluation in the context of UN Women, with links to a range of tools.

As evaluators, we're often encouraged to think about how we can evaluate programs at both a state and agency level with reference to the United Nations' Sustainable Development Goals (SDGs). Goal 5 recognises that Gender Equality ‘is not only a fundamental human right, but a necessary foundation for a peaceful, prosperous and sustainable world.’ We can bring the targets for this Goal – such as ending all forms of discrimination against all women and girls everywhere – into our evaluations to measure progress.

Regardless of whether we practise feminist evaluation or consider SDG Gender Equality targets, we should always be aware of the power dynamics in evaluation. At the Australian Evaluation Society (AES) Conference in Launceston last year, Tracy McDiarmid and Amanda Scothern (International Women's Development Agency) and Paulina Belo (Alola Foundation) had us play out gendered power dynamics and explore how these could be disrupted using performative methods.

With the sub-theme ‘Who should hold the box? – Questioning power, exploring diversity’ at this year’s Australian Evaluation Society (AES) Conference in Sydney, we hope to see others bringing creative techniques to enable evaluators to address inequities and #BalanceforBetter.

 


3 things I learnt as an Aboriginal intern


March 2019

By Research Assistant, Holly Kovac

I am a proud Booroobergonal woman of the Darug nation. My mother, too, is a Booroobergonal woman and my father is Croatian.

A little over a year ago, at 19 years old, I started a 6-month internship at ARTD Consultants. During this time, I was exposed to the world of evaluation and public policy and what it really means to be a part of a professional working environment. ARTD has set me up not only with professional skills, but with skills I can carry with me for the rest of my life.

Here’s what I learnt in my time as an intern.

1. How to be confident in my culture and background

As a young Aboriginal woman, I always had some connections to my family and culture. However, it wasn’t until I started university that my passion and drive to build my connections and give back to community really kicked in.

Working at ARTD has helped me with this. It has allowed me to meet people from all over Australia and has taken me outside of the bubble of where I grew up. Being able to work with Aboriginal people and communities, listening to their stories and where they truly come from, has shown me the richness and value of what it means to be Aboriginal and given me an even deeper sense of pride.

I was also fortunate to have an amazing Aboriginal mentor, Simon Jordan, who led me through a journey of discovering my identity. He helped me realise that no matter how connected or disconnected you are from our culture, there is always room for you to make your own path and reconnect. Embracing culture and being confident in who I am now gives me the momentum to break the cycle and start giving back to community and country.

2. How to be organic and think on the spot

Early on in my internship, I had the opportunity to go out on field work to collect qualitative data for ARTD’s evaluation of World Vision Australia’s Young Mob Leadership Program.

I learnt very quickly that when out on field work, you never know what challenges you are going to face – from facilitating on the spot to connecting with boisterous students. At one of the first workshops I went to, I was thrown in the deep end when given an on-the-spot chance to help facilitate. I took ten minutes to expel my nervous energy and then it was show time. By the end of the workshop, I was happily leading the reflective discussions.

This experience really boosted my confidence and shone a light on my natural engagement skills. I've since helped facilitate focus groups and conducted interviews with students in the program. I've found that throwing myself in the deep end of challenging situations is probably the best way to learn and to gain the confidence to field any challenging questions that come my way.

3. How to use work as a place to channel your energy

Life is full of distractions, from university stressors to managing family and relationships. The internship taught me that work can be a great place to focus your energy into something you’re passionate about.

ARTD has been a place for me to grow up – helping to bring balance into my life and fostering a great sense of accomplishment and pride. Excuse the cliché but working here really has taught me that change and adversity can make you stronger and will always teach you valuable life lessons.

Coming into ARTD changed my life for the better – they have supported me through thick and thin over this past year and I am so grateful to be in such a unique and understanding working environment.

What’s next in store?

Following my internship, I came on board at ARTD as a Research Assistant. While I’m still finishing my nursing degree, I’m keen to continue working in the public policy space – learning more about the Indigenous and health policy sectors and evaluation more broadly.  

I’m also supporting the development of ARTD’s Reconciliation Action Plan and excited about having a voice in future projects.

Photo credit: CameliaTWU on Flickr.


The many ways we can un-box evaluation


February 2019

By Partner, Jade Maloney

To outsiders, evaluation can seem like a mystery and something to fear. While evaluators see its potential to inform better public policy, the evidence says evaluation does not always live up to its potential.

This is part of why our theme for this year’s Australian Evaluation Society Conference in Sydney is ‘Evaluation Un-boxed’.

I shared my initial thoughts about why and how we should un-box evaluation in the first of a series of blogs for Better Evaluation. I mentioned the need to open up evaluation to end users and to be open to what we can learn from community, thinking about the basis on which we value and how we engage with lived experience. I also questioned how the idea of un-boxing evaluation fits with conversations we are having about pathways to advance professionalisation within the context of the Australasian Evaluation Society.

Comments from other evaluators reminded me that un-boxing evaluation is also about un-boxing evaluation reporting. How could I have neglected one of my favourite topics?

Then I got to continue the conversation with Carolyn Camman and Brian Hoessler on their podcast, Eval Café. We found many more ways we can un-box evaluation – by translating the jargon, making it meaningful, getting it integrated into practice, and bridging professions. All of this raised big questions about who we are and how we go about our work.

How do we un-box reporting?

Much has been said about evaluation reports gathering dust on shelves. If we want evaluations to be used in a world of competing demands and information overload, do we need to rethink evaluation reporting? The short answer: yes. How? Let me count the ways. At the 2018 Australasian Evaluation Society conference in Launceston, evaluators came up with a range of ideas for evolving the evaluation deliverable in a session facilitated by my colleague, Gerard Atkinson.

But un-boxing reporting is about more than finding new and engaging formats. It’s about finding ways to enable shared learnings across evaluations of similar initiatives, so that evaluation contributes to the broader knowledge base.

And, for me, it’s also about focusing on the process as much as the product. If you’re engaged and learning on the journey, the report at the end becomes less important. As I learned in my creative writing degree – you can’t control what people do with your writing once you let it out into the world. But if you’re having the right conversations along the way, the product will more likely be based on shared understandings, and useful and used.

Do we all need to call ourselves evaluators to un-box evaluation?

Like most people, I fell into evaluation from somewhere else – communications and creative writing. Not all of us are solely evaluators. And some of us don’t call ourselves evaluators in general conversation.

Carolyn asked if this is a problem. My first answer was yes – even though I'm one of those people who doesn't call myself an evaluator – because if we're not out there promoting what we do, how will people know the value of evaluation in a world where co-design, behavioural insights, implementation science, social impact measurement and customer experience are on the rise?

But then Carolyn questioned whether this is placing too much emphasis on the “evaluator” instead of “evaluation”. Perhaps.

If the end game is integrating evaluation into practice, not everyone involved in the ‘doing’ of an evaluation would be an evaluator. They would just need to think evaluatively. But what does this mean for professionalisation?

Bridging or un-boxing?

Professionalisation could help with coherence and communication – enabling outsiders to better understand what this thing we call evaluation is. However, as Patricia Rogers and Greet Peersman found, diverse competencies are required for different types of evaluation. Pinning us down is not so easy.

As evaluators, we need to draw on myriad skills – from facilitation to statistical analysis. Paul Kishchuk has suggested we look at evaluation as a bridging profession. Brian tied this back to bonding and bridging ties. What professions do we draw from as well as inform? 

Continue the conversation

There’s a lot more to un-boxing evaluation and we’re keen to continue the conversation.

Tune into the Eval Café podcast.

Submit a presentation proposal by March 7 and join us in Sydney in September.

Join the conversation on the Better Evaluation blog.


TFM: working with, not for


February 2019

By Ruby Leahy Gatfield and Sue Leahy

The 2019 Their Futures Matter Conference, held in Sydney on 11 February, was an important reminder of the need to work with, not for, communities, when building evidence of outcomes.

Their Futures Matter (TFM), a landmark, whole-of-government reform designed to improve outcomes for vulnerable children and families in NSW, is committed to using evidence and evaluation to inform service design, outcomes measurement and investment. As put by TFM Executive Director, Gary Groves, ‘gone are the days that government funds something that sounds nice, without real rigour around outcomes. It’s nice to have 100 referrals walk in the door, but now I really want to know the outcomes for those referrals.’ The reform promises that evidence, monitoring and evaluation will drive continuous improvement across all areas of the system response and service delivery.

While we’re excited about this, it’s important to remember the many ways that different communities can understand, define and measure outcomes. The way that programs are designed and measured should be decided in close collaboration with community. While standardised measurement tools and RCTs have their place, a program or intervention that works for one cohort may not be appropriate for another.

This point was made best by Conference Chair and international affairs analyst, Stan Grant, who reflected honestly that as a child he lived with many of the risk factors that predict poor life outcomes – living itinerantly, attending multiple schools and having an incarcerated parent, not to mention the intergenerational trauma of the Stolen Generations. Today, a ‘Safety and Risk Assessment’ may have classified him as a ‘high-risk’ child. Despite this, he stressed that ‘the very worst thing that could have happened’ would have been for the state to remove him from his family. He believed that the genuine love and care of his family outweighed the risk factors. Stan's case, like many others, underscores the need to recognise non-western views on safety (and other outcomes more broadly).

At the conference, ARTD also showcased our recent, successful experience of working with, not for, community under TFM's Aboriginal Evidence Building Partnership pilot. The pilot aimed to build an evidence base of promising programs and services that are improving outcomes for Aboriginal children and families. It did this by linking Aboriginal service providers with evidence-building partners to work together to build providers' data collection and evaluation capabilities.

Key to the success of our two pilot partnerships was our commitment to a partnership approach – establishing mutual trust and agreed ways of working early on, communicating regularly and openly, rescoping workplans to best meet current and future needs, and demonstrating an unwavering commitment to capacity-building and self-determination.

While the pilot required providers to embed standardised, validated tools to measure wellbeing outcomes, we worked closely with them to identify their additional data collection needs. We did this to ensure the data collected reflected what was most important to the service and its community and could be used on the ground to inform and improve program delivery. We are excited about the rollout of the pilot this year, and TFM’s commitment to a partnership-driven and capacity building approach to ensuring the service system meets the needs of Aboriginal children and families.

Conference keynotes also shone a light on some of the other encouraging achievements of TFM to date. These included having more than 1,000 families engaged in new family preservation and restoration programs, and establishing the first cross-agency human services longitudinal (25+ year) dataset in NSW, providing large-scale, de-identified matched data.

TFM Program Director & Investment Approach Lead, Campbell McArthur, said that beyond having what he deemed ‘the best dataset in Australia’, it's the insights the data can bring us that are truly exciting. The commitment to evidence means we are better able to compare and contrast the experiences of different cohorts, to better understand what works, for whom, in what circumstances. It also allows the system to identify population-level trends earlier and mount evidence-based responses.

The conference was attended by over 700 government and non-government representatives, and we left ready for the work ahead under TFM. Director for Children and Families in Scotland, Michael Chalmers, reminded us that ‘joining up services and creating change takes time and is difficult. But it's important to keep your eyes on the prize: improving outcomes for our most vulnerable children and young people’.


How can we measure empowerment?


February 2019

By Ruby Leahy Gatfield

Empowerment has emerged as another buzzword. While it is often paid lip service, what does empowerment really mean? And how do we know if we're achieving it?

Empowerment is a complex concept. It is both a process and an outcome that can be seen at the individual, organisational and structural levels, enabling positive growth and sustainable change.[1] In the context of Aboriginal and Torres Strait Islander communities, who have experienced a history of systematic oppression, empowerment is critical for healing and improving overall wellbeing.

While there are many programs and interventions designed to empower Aboriginal people and communities, there is little quantitative evidence about their impact. Here’s where the GEM – the ‘Growth and Empowerment Measure’ – steps in.

On 1 February, Melissa Haswell from the Queensland University of Technology facilitated GEM training at the National Centre of Indigenous Excellence. She explained that the GEM is the first validated, quantitative tool designed to measure empowerment in Aboriginal communities.

Based on extensive consultation, the tool measures different dimensions of empowerment as defined by Aboriginal people. It also aims to be a strengths-based and empowering process in and of itself, using scenarios as a way for people to trace their personal journey.

So, what does the GEM mean for evaluators? Firstly, it is a culturally-safe tool, developed by Aboriginal people, for Aboriginal people. When working with Aboriginal people in social research and evaluation, it is crucial that we use culturally-appropriate methods that:

  • respect Aboriginal knowledge and practices
  • are strengths-based
  • are framed by an understanding of the historical context of Aboriginal communities and its ongoing impact
  • benefit the community.

The GEM offers just that. It also comes in the context of the growing development of other validated and culturally-appropriate measurement tools, such as Dr Tracey Westerman’s upcoming tool for measuring the cultural competencies of child protection staff. 

More broadly, the GEM also enables us to quantifiably measure more holistic wellbeing outcomes, beyond discrete system indicators, such as increased school attendance or instances of out-of-home care. It recognises empowerment as fundamental to the overall health and wellbeing of individuals and communities, giving programs and services a better understanding of their impact. This is particularly important, given the lack of evidence about ‘what works’ for improving the wellbeing of Aboriginal people, families and communities.[2][3]

To support Aboriginal organisations to embed the use of the GEM and other subjective measures of wellbeing, we are working with Their Futures Matter to develop tools and resources to support evidence building across the Aboriginal service sector. Stay tuned for more…


[1] Haswell, M.R., Kavanagh, D., Tsey, K., Reilly, L., Cadet-James, Y., Laliberte, A., Wilson, A., & Doran, C. (2010). Psychometric validation of the Growth and Empowerment Measure (GEM) applied with Indigenous Australians. Aust N Z J Psychiatry, 44(9), 791–799. Available at: https://www.ncbi.nlm.nih.gov/pubmed/20815665

[2] Productivity Commission. (2016). Overcoming Indigenous Disadvantage: Key Indicators 2016. Canberra: Productivity Commission.

[3] Stewart, M. & Dean, A. (2017). Evaluating the outcomes of programs for Indigenous families and communities. Family Matters. No. 99, pp.56–65.


The future of evaluation is within


January 2019

By Jack Rutherford

When my friends and family ask me what my job is, I say something along the lines of, “I work at a public policy consulting firm. We mostly evaluate government policies and programs, and we gather our data mostly through surveys and interviews.” Knowing that I majored in biology, they tend to follow this by questioning whether I want to work in a field they see as so far removed from biology and science.

While the work I do at ARTD feels meaningful and fulfilling, I've been left wondering… will I be able to access the parts of biology that I love while working in public policy?

Can we marry biology and evaluation?

When I saw that the 2018 ACSPRI Social Science Methodology Conference was discussing how to integrate social and biological research, I jumped at the opportunity to attend.

I attended the first day of talks at the University of Sydney on 12 December. The conference featured diverse expertise from a range of local and international backgrounds, including Naomi Priest from the Australian National University, Melissa Wake from the Murdoch Children's Research Institute, Tarani Chandola from the University of Manchester and Michelle Kelly-Irving from the Université Paul Sabatier in France. Their talks highlighted the ways biological concepts and methodologies have contributed and can contribute to social research, with a focus on the use of biomarkers in social studies.

What are biomarkers?

To many, biomarkers are a new concept. Put simply, a biomarker is an objective measure of biological processes. For example, increased blood pressure can be used as a biomarker for increased levels of stress.

Chandola explained that the benefits of biomarkers include reducing measurement errors that can arise in surveys and being able to tell more holistic stories than those derived purely from self-reported data. In the example of stress, participants may underreport how stressed they feel or report not feeling stressed at all, while their blood pressure and other biomarkers suggest otherwise.

What do biomarkers mean for evaluation?

Because biomarkers can be used to determine the effects of the social environment on humans, they can be used in evaluation to provide enlightening data on complex social issues. For example, Kelly-Irving spoke about how adverse childhood experiences impact the physiology of the adult. According to her research, adverse childhood experiences tend to become more frequent with increasing social disadvantage. These experiences have neurodevelopmental impacts, which can affect health mediators (such as one’s likelihood of smoking, their BMI etc.) and social mediators (such as educational attainment), which all later influence mortality.

Priest also presented findings showing how different types of racial discrimination affect children’s health by increasing their BMI, waist circumference and blood pressure.

If biomarkers can be used as indicators of the effects of the social environment, and a public policy or program aims to effect social change, then biomarkers can be used to measure the effectiveness of that policy or program. For example, if we were evaluating a program designed to combat adverse childhood experiences, we could compare the biomarkers of those who took part in the program with those who did not.
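To make this concrete, here is a minimal sketch – our own illustration, not something presented at the conference – of how such a comparison might be run on a single biomarker such as systolic blood pressure, using simulated data for hypothetical program and comparison groups.

```python
# Minimal sketch of a biomarker comparison between program and comparison groups.
# All data are simulated for illustration only – not real evaluation data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical systolic blood pressure readings (mmHg)
program_group = rng.normal(loc=118, scale=12, size=80)     # took part in the program
comparison_group = rng.normal(loc=124, scale=12, size=80)  # did not take part

# Welch's t-test: does mean blood pressure differ between the two groups?
t_stat, p_value = stats.ttest_ind(program_group, comparison_group, equal_var=False)

# Cohen's d as a rough standardised effect size
pooled_sd = np.sqrt((program_group.var(ddof=1) + comparison_group.var(ddof=1)) / 2)
cohens_d = (program_group.mean() - comparison_group.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

Of course, in a real evaluation the comparison would need a credible design behind it – random allocation or careful matching – before any difference in the biomarker could be attributed to the program.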

While it may seem complicated, biomarkers have the potential to enable social researchers and evaluators to draw clearer pathways between cause and effect. This is particularly useful in an increasingly complex social environment.

Are there any risks?

Despite their benefits, biomarkers don't come without their own set of challenges. For one, sampling participants' biology requires careful consideration of ethics regarding consent, risk, and data and sample security. Asking participants for biomarker samples may also result in increased opt-outs and reduced sample sizes, lowering the statistical power of significance testing.
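As a rough illustration of that last point – again our own sketch, with assumed effect and sample sizes rather than figures from the conference – a standard power calculation shows how quickly opt-outs erode the ability to detect a modest effect.

```python
# Sketch of how shrinking sample sizes reduce statistical power.
# The effect size and sample sizes are assumptions for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3  # assumed small-to-moderate standardised difference between groups
alpha = 0.05

for n_per_group in (200, 120, 60):  # e.g. planned sample vs. samples after opt-outs
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0, alternative='two-sided')
    print(f"{n_per_group} per group -> power = {power:.2f}")
```

With these assumed numbers, the chance of detecting the effect falls from around 85% with the planned sample to well under 50% once opt-outs bite.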

Sampling methods for certain biomarkers may also be time and resource intensive, particularly when training is involved.

It’s also important that this sort of research isn’t used to exacerbate inequalities any further, i.e. that results are not presented in a way that provides certain groups with perceived biological evidence for their prejudices.

Using biomarkers also requires careful consideration of the conceptual framework. Before incorporating biomarkers into a project, evaluators must be certain they are valid indicators of the particular aspect of the social environment. Sometimes one biomarker is not descriptive enough. Indeed, Kelly-Irving found that a model comprising many biomarkers was better than a model using just one for determining the impacts of adverse childhood experiences.

What does the future hold for biology in evaluation?

The conference suggested that as our collective understanding of biological concepts, methodologies and data increases, it will become easier to integrate the biological and the social. Social research and evaluation will benefit from a rich data source, with the potential to deepen our understanding of how social environments shape outcomes.

Generation Victoria, an ambitious project directed by speaker Melissa Wake, aims to gather biological and social data from all children born in Victoria between 2021 and 2022, with the goal of addressing social epidemics like school failure, obesity and poor mental health. Far-reaching projects like this become possible by merging the biological and the social.

On a more personal level, I find it exhilarating that I might be able to marry my passion for biology with the work I do at ARTD. Moving forward, I aim to look for and shape opportunities to integrate this thinking into our work, and I encourage you to do the same.

In using our own biology as measures of the effects of the social environment, the future of evaluation is, quite literally, within us all!