News & Blog

Part 4: Structure it


June 2019

By Jade Maloney

Making evaluation a process not just a product can enhance the likelihood of your findings being used. But at some point, you’re also going to have to deliver the product – the report.

By this point, particularly in a large-scale, long-term and/or mixed-methods evaluation, you’ll have a lot of data. It can be hard to know what your story is and where to begin with structuring it into a coherent report.

Some writers work by constructing the frame first; others find the story through the writing. Both approaches are fine – you’ve got to go with what works for you. But if you don’t start with the framework, you’ll have to retrospectively construct it. This means re-ordering what you’ve written so your message is clear.

Know your message

A clear structure comes from a clear message. As Max Rixe recently wrote in an article on Writing capability in the public sector for The Mandarin, ‘Good writing is clear thinking made visible.’

You can’t work out how to order information without knowing what it is you are trying to say. It would be like building a house in the dark. Are you telling a story of triumph, tragedy or transformation?

If it’s a tragedy – aka it didn’t work out like we thought it would – you’ll also need to think about how you can make this palatable. Can you create a positive sandwich?

Don’t cut it by data source

There is no one best way to structure an evaluation report – because the focus and findings always differ. But this doesn’t mean anything goes.

Don’t structure your report by data source. This doesn’t tell a story and leaves the hard work of piecing it together to your audience. It’s an evaluator’s job to synthesise, not just describe, the data.

Better options are structuring by:

  • levels of the program logic
  • key evaluation questions
  • program components
  • streams of beneficiaries.

Which one of these will work best depends on the kind of content you are working with and the composition and interests of your audience. (We spoke about knowing your audience in the second blog of this series).

Learn from journalism

The concept of telescoping speaks to my journalistic instincts to make it easy for the reader to jump in and get what they want, then drop out. Telescoping means thinking of each level of your report as telescoping outwards, empowering each reader to choose the level of detail they need – like they can in a traditional news story.

  • The key findings one-pager gives all audiences the low-down at a glance.
  • The executive summary provides key information for everyone, including busy executives.
  • The report body provides primary analysis and evidence – the most important content to support the key findings and recommendations in the executive summary.
  • The appendices provide detailed information, detailed processes and analysis and background research – the content that the technical specialists may want to check or that only particular audiences need to know.

As a journalism student, I had ‘Don’t bury the lead’ hammered into me. But evaluators have taken on the mores of academic research, placing long introduction and methods sections ahead of the good stuff. While we’re not journalists – and we shouldn’t take up all of their tricks – in some cases, we might better meet the needs of our audiences if we tried flipping our reports and starting with the outcomes.

Test it to make sure it’s sound

When drafting and reviewing your structure, ask yourself:

  • Will it help your audiences quickly grasp what they need to know and avoid the detail they don’t?
  • Does it give emphasis to what’s most important?
  • Does the structure support the argument – the story?
  • Does each chapter flow logically from the one before? Or are there chapters that should be switched around because you need the information in one to understand another, or its implications?
  • Is it clear what each chapter is about?
  • Are chapters balanced in length?
  • Are there repetitions and overlaps between chapters and sections?
  • Are there gaps?

The next in our communication for evaluation series will cover telling stories for evaluation.


The Costs and Benefits of Economic Analysis in Evaluation


June 2019

By Georgia Marett

Economic evaluation is becoming an increasingly integral part of evaluation. Much of our work includes some assessment of cost effectiveness or of benefits relative to costs. As a behavioural economist, I have long been interested in economically evaluating programs that aim to produce complex benefits (rather than just monetary savings). It is important to look at the whole picture of a program so that an economic evaluation can appropriately capture the value of all its benefits.

With this in mind, I recently undertook a course run by the University of Melbourne and attended a webinar run by the Dementia Centre for Research Collaboration. These both focused on economic evaluation techniques that are used to evaluate health programs. However, I believe that these techniques can be applied to other sectors.

What types of economic evaluation are out there?

There are a variety of economic evaluation techniques.

  • Costing: quantifies the overall costs associated with a social ill or problem (e.g. the cost to society of diabetes), without providing a solution.
  • Cost-minimisation: compares two programs with equal outcomes based on their costs to assess the cheapest way to achieve the same level of benefit. Note that this is usually only used in pharmacology studies where different drugs produce the same benefits.
  • Cost-benefit analysis: compares the costs and benefits of a program when both can be defined in monetary terms (this can be hard when dealing with intangibles, e.g. a life). The monetary value is used as a proxy for utility and enables comparisons across sectors and different outcomes.
  • Cost-effectiveness analysis: determines the relative value of different programs or interventions using a common, unambiguous outcome measure for all programs (e.g. deaths prevented).
  • Cost-utility analysis: a form of cost-effectiveness analysis in which outcomes are quality-adjusted on a scale (for example, using quality-adjusted life years).

In essence, the choice between the above options depends on the available resources (time and budget), data quality (cost and outcomes data) and the desired outcome—do you want to compare two programs, quantify the cost of your problem, or analyse the costs and benefits of one or more programs?

Cost-benefit analysis

Cost-benefit analysis is one of the more widely known forms of economic analysis and is often used as part of an evaluation to translate the benefits of a program into monetary terms.

The costs and benefits of a program are compared to determine a net benefit (or cost) along with a benefit-cost ratio (BCR). The BCR is the sum of a program’s benefits divided by its costs. The break-even point is a BCR of 1: a BCR between 0 and 1 represents a net cost, while a BCR above 1 represents a net benefit.
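
To illustrate with simple hypothetical numbers: a program that costs $2 million and generates $3 million in monetised benefits has a BCR of 1.5 ($3m ÷ $2m) and a net benefit of $1 million. If the same program generated only $1.5 million in benefits, its BCR would be 0.75 and it would represent a net cost.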

This kind of analysis is appropriate when all costs and benefits can be translated into monetary units (i.e. costs and benefits are denominated in dollars only, with no externalities or hidden costs or benefits). However, this is not always the case. What if the program generates benefits that are hard to monetise, such as increased school engagement or increased confidence when dealing with service systems? In this case, a cost-benefit analysis can be more difficult or even impossible.

Cost-effectiveness analysis

A cost-effectiveness analysis sidesteps the issue of monetising benefits by examining two programs (or one program and the business-as-usual condition) on one outcome measure. For example, you could compare two programs that aim to reduce heart disease on the outcome of ‘deaths prevented’. The formula for calculating an incremental cost-effectiveness ratio (ICER) is:
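
ICER = (cost of program A − cost of program B) ÷ (outcome of program A − outcome of program B)

In other words, the ICER is the extra cost of one program over its comparator, divided by the extra outcome it achieves. To take a simple hypothetical: if a program costs $500,000 more than business as usual and prevents 10 additional deaths, its ICER is $50,000 per death prevented.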

The ICER value can be used as a ‘decision rule’, whereby its value determines whether a program will be funded—if the ICER is too high then the program will be considered too expensive and not funded.

But what happens if your program has more than one aim? What if, for instance, it could affect both physical and mental health? Unfortunately, cost-effectiveness analysis is a narrow, uni-dimensional measure of success: disparate outcomes cannot be compared, and the multiple measures that would make up a general understanding of ‘quality of life’ cannot be included. In addition, this type of analysis only tells you the relative efficiency of the programs being compared, not their absolute efficiency. It will not tell you whether any of the programs are worthwhile, and it considers only the relative costs of the two programs, not their total costs.

Cost-utility analysis

To include a measure of benefits that encompasses a wider range of possible positive outcomes, a cost-utility analysis uses what is known as a ‘preference-adjusted unit of consequence’. This value is derived from examining people’s preferences towards various benefits and non-benefits. It can be used to compare two programs that have vastly different outcomes.

In health economics, the most commonly used unit is the quality-adjusted life year (QALY) which considers both the quality and length of someone’s life when they are living with various conditions. QALYs are very interesting because of both the way they are developed and the way they are used. For more discussion on the advantages and disadvantages of QALYs see this paper.

In cost-utility analyses, QALYs (or some other measure) are used as the outcomes measure when calculating the ICER (as above in the cost-effectiveness analysis). In this case, the ICER is usually expressed as the incremental cost to gain an extra QALY. This gives a clear indication of the relative benefits of one program compared to another. It also allows for the comparison of a broader range of programs than cost-effectiveness analysis because the benefits do not have to be the same across the programs being compared.
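
To take a hypothetical example: a program that costs $200,000 more than its comparator and delivers four additional QALYs has an ICER of $50,000 per QALY gained, which a decision maker can then weigh against whatever funding threshold they apply.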

Where does economic evaluation fit within a broader evaluation?

Economic analysis can add another dimension to evaluation as long as its limitations (and advantages) are adequately understood. Each program is different, and economic analysis must fit in with the other aspects of an evaluation. The results must be treated sensitively and assessed alongside qualitative outcomes to get a full picture of the impact of a program. Acknowledging the strengths of the type of economic analysis being used, and compensating for its limitations by building out other parts of the evaluation, will help programs understand their full impact.

For more on how evaluators can assess value for money, check out Julian King’s Approach to Assessing Value for Money which sets out how economic analysis can be integrated into evaluation.


What National Reconciliation Week means to me


May 2019

By Holly Kovac

Before proceeding, I would like to acknowledge and pay my respect to the past, present and emerging Traditional Custodians and Elders of Australia’s Aboriginal and Torres Strait Islander population. 

This week, across Australia, we celebrated National Reconciliation Week: a week when all Australians are encouraged to take a look at our shared history and explore how we can contribute to reconciliation, as individuals and as a population.

What does reconciliation mean?

Reconciliation Australia defines reconciliation as being about ‘strengthening relationships between Aboriginal and Torres Strait Islander peoples and non-Indigenous peoples, for the benefit of all Australians’.

I have an Aboriginal mother and a Croatian father, so I grew up with two very different cultures behind me that supported me in different ways. My father has always been supportive of me embracing my Aboriginal heritage, even when my mother wasn’t around. Every major holiday, he would make sure my mother’s family was invited to our place or he would take us out onto country in Western Sydney to join in their celebrations. These are some of my greatest memories growing up, and I still look forward to seeing my family whenever I can. To me, this is what reconciliation means.

What are we reconciling?

There are two very significant dates that occur within Reconciliation Week: the anniversary of the 1967 Referendum and Mabo Day.

On 27 May 1967, Australians voted to amend sections 51 and 127 of the Australian Constitution. Section 51 gave the Commonwealth the power to make laws for the people of any race, except the Aboriginal people in any state. This meant that the Commonwealth couldn’t create laws for Indigenous Australians. Section 127 specified that ‘aboriginal natives shall not be counted’ in the census. When this changed, it gave Aboriginal and Torres Strait Islander people the right and recognition to be counted as a part of the nation. Without this referendum, Mabo Day wouldn’t exist.

Mabo Day celebrates the day Eddie Mabo made history by taking his case to the High Court to overturn terra nullius. Terra nullius is a Latin term meaning ‘land belonging to no-one’. When Captain Cook landed in Australia in 1770, he claimed the land for the British Crown, even though our people were here and had been for at least 60,000 years.

While the Australian Government believed it owned the land, Eddie spoke in 1981 about his belief in his people’s connection to the land. He was encouraged by a lawyer in the audience to take this to court, and it developed into the Mabo Case. After 10 years, on 3 June 1992, the High Court of Australia overturned terra nullius and recognised land rights for Aboriginal and Torres Strait Islander people. This contributed to the development of the Native Title Act in 1993, which recognises the traditional rights and interests in land of Aboriginal and Torres Strait Islander people. Under this Act, Indigenous people can apply to the Federal Court to have their connection to the land recognised. This can mean returning to the land to live, teaching tradition, or having the land protected (depending on the circumstances).

These two significant dates have given the Aboriginal and Torres Strait Islander population a voice and made the Government face the truth about our history. The theme for Reconciliation Week 2019 is ‘Grounded in truth – Walk together with courage’ and, for me, this week has been about facing truth and reconnecting with my community. At the University of Sydney, I participated in a cultural healing workshop through DhabiyaanBaa Maarumali. This encouraged me to face some truths to do with family and accept what has happened, since the past cannot be changed.

We can only heal together and work towards a reconciled future.


Don’t train and hope: Lessons from delivering and evaluating training programs


May 2019

By Jade Maloney (with thanks to Brad Astbury)

I’m a big believer in the potential of training programs. I’m part of a company that provides everyone with a personal training budget and company-wide training days, has lunchtime learning every Wednesday, and supports Australian Evaluation Society programs. Training is part of the lifelong journey that is evaluation.

But I get it when my colleague, Brad Astbury, says the biggest problem with training programs is the philosophy of ‘train and hope’. Indeed, in the early stages of my career I attended training that I found interesting enough, but I failed to find ways to apply it.

The Kirkpatrick model

Kirkpatrick is a foundational author on training evaluation – he first published in 1959 before releasing the first edition of Evaluating Training Programs in 1994. While details have been revised, the four levels of his model remain the same.

  • Level 1: Reaction: To what extent do participants find the training engaging and relevant to their jobs?
  • Level 2: Learning: To what extent do participants acquire the intended knowledge, skills, attitude, confidence and commitment through participation in the training?
  • Level 3: Behaviour: To what extent do participants apply what they learned during training when they are back on the job?
  • Level 4: Results: To what extent do targeted outcomes occur as a result of the training and of the support and accountability package?

At a glance the levels make intuitive sense. And there must be something to them because the book is now in its Third Edition. However, there has also been a lot of criticism of Kirkpatrick’s model.

The gaps in Kirkpatrick

Bates (2004) sets out three key criticisms of Kirkpatrick’s model.

  1. The model is incomplete because it does not consider important delivery, individual and contextual influences, such as the learning culture of the organisation and the nature of support for skill acquisition and behaviour change.
  2. There is an (implied) assumption of causality between the levels.
  3. There is an assumption that information from each successive level is more important for understanding effectiveness than the level before.

These are relevant concerns. One of the key things that stands out to me is that the link between reaction and learning is tenuous. Do you need to enjoy training to learn? Maybe not. Some lessons are best learned by getting uncomfortable. For example, last year at the Australian Evaluation Society Conference, I went to a two-day training on cultural safety and respect with Sharon Gollan & Kathleen Stacey – we had to feel uncomfortable to begin to confront institutional racism and unearned privilege. I also think I could learn from training that wasn’t particularly engaging if it was content that I was required to master for my role. That’s not to say that being engaging is not at all important, particularly when you’re in the business of fee-for-service training.     

The leaps between learning, behaviour and results are also large. For this reason, Thomas Guskey has added another layer – organisational support – to Kirkpatrick’s model. This is important to making the leaps and avoiding the perils of the philosophy of train and hope.

To add to these concerns, I feel there is a level missing at the bottom of the model. Before you have reaction, you must have reach. This is why I find Funnell and Rogers’ (2011) information/education program logic archetype useful. It is important to understand reach because without it you wouldn’t have a training program, and because your means of achieving reach may also influence your learning. Is the training mandated or voluntary? Is it crucial to your daily work and life or more peripheral? This might influence how you react and learn, or affect the level of organisational support to help embed learnings in practice.

So how do we avoid the philosophy of train and hope?

At ARTD, we think carefully about the training programs we access – not only whether they come recommended, but whether it is the right time to access this particular training, and whether we will be able to come back and integrate it into our practice. We also have a commitment to share what we’ve learned – through this blog and internal learning sessions. This is not only useful because it supports a culture of learning and an environment of organisational support, but because teaching others helps us master new knowledge.

When we deliver training, it is applied. We don’t just teach the concepts but give people the opportunity to practice them. Our preference is to provide follow-up mentoring to support the leap between learning and behaviour in daily practice. In a recent training session, I also channelled my behavioural insights colleagues, using a commitment device: I asked everyone to share aloud one thing they were going to change in their practice after this session. I’m keen to see how this one works!

References

Bates, R. (2004). A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning, 27, 341–347.

Kirkpatrick, D. L. & Kirkpatrick, J. D. (2006). Evaluating Training Programs: The Four Levels (3rd ed.). Berrett-Koehler.

Funnell, S. C. & Rogers, P. J. (2011). Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. Jossey-Bass.

The Evaluation Exchange (2005–06). A Conversation with Thomas R. Guskey. Harvard Graduate School of Education.


Elevating lived experience in evaluation


May 2019

By Jade Maloney

With the rise of co-design, co-production and co-delivery, lived experience is being increasingly valued in policy and program design and delivery.[1] What does this mean for evaluators?

In research, there is a growing body of evidence about the role of peer researchers. Having someone with lived experience on the team can help to address power imbalances and enhance understanding, if they are well supported. The active involvement of a consumer researcher in all stages of the process can create powerful mutual learning.[2]

In evaluation, we have collaborative, participatory and empowerment evaluation approaches, but these involve varying levels of stakeholder involvement and ownership and don’t necessarily involve people with lived experience as evaluators.

Recognising the value that diverse lived experience can bring to evaluation – alongside the need to evaluate its programs and the barriers its members face in translating their experience and qualifications into professional career opportunities in Australia – the Asylum Seeker Resource Centre established the Lived Experience Evaluators Project (LEEP) pilot. This was the subject of an Australian Evaluation Society (AES) Seminar on 15 May in Melbourne.

What is the Lived Experience Evaluators Project?

The LEEP pilot was co-designed with Asylum Seeker Resource Centre members. It provides a six-month paid internship to selected members, who bring diverse experience and qualifications – from health service delivery to business.

The project has three components.

  • Training: Interns engage in training and development in line with the AES Professional Learning Competency Framework from a range of evaluators who donate their time. This approach means not only that the contribution from each evaluator is manageable, but that interns are able to broaden their professional networks.
  • Mentoring: A range of evaluation professionals volunteer their time to provide individual mentoring.
  • Practical project: Interns work on an evaluation of an Asylum Seeker Resource Centre program that draws on their skills, qualifications and experience.

What is beyond the project?

While the concept of evaluators with lived experience is not new, this project has goals that extend beyond a project mentality. 

As the internships come to a close, the focus is on supporting career pathways into the evaluation sector through site visits to evaluation teams and opportunities to find roles in the sector. The focus here is not just on people with lived experience of seeking asylum having access to paid, stable, fulfilling employment opportunities – which is important – but on the value that people with lived experience can bring to the evaluation sector. They can bring insights that others cannot and ask different kinds of questions.

The Asylum Seeker Resource Centre is currently evaluating the project, with the support of Clear Horizon, and considering the feasibility of scaling, with the support of Social Ventures Australia.

This project shows what is possible when thinking beyond the box as well as about who holds the box in evaluation.

What next for lived experience in evaluation?

If the enthusiasm of the Victorian evaluation sector is anything to go by, the future of elevating lived experience in evaluation looks bright.

We look forward to hearing more from the LEEP pilot and to continuing discussions about lived experience in evaluation at the AES International Conference in Sydney this September. Our teams will give two presentations on engaging people with lived experience.

  • Beyond co-design to co-evaluation: Reflections on collaborating with consumer researchers: Supporting the emergent literature – and challenging the historical view of consumers as passive potential beneficiaries of research and evaluation process – the active involvement of a consumer researcher in all stages of the evaluation process creates powerful mutual learning. We will discuss how to practically support consumer researchers in evaluation to contribute their lived experience, to further develop their professional skills, and to foster greater ownership of evaluation for the community. We suggest minimising potential power disparities between the evaluation team and the consumer researcher through a mentoring and allyship model. Finally, we will raise important implications for the practice and wider discipline of evaluation.
  • Harnessing the power of co: This presentation provides practical ideas for harnessing the power of co-working with people with lived experience – in different contexts from projects with organisations working with people with autism, dementia, psychosocial disability and intellectual disability, across locations and cultures. We cover the design, data collection, analysis and interpretation, and reporting phases, with options for meaningful engagement when you have years versus weeks or days.

We hope to continue the dialogue with other evaluators about how we can all grow engagement with lived experience evaluators.


[1] See for example: John Lammers & Brenda Happell (2004) Mental health reforms and their impact on consumer and carer participation: A perspective from Victoria, Australia, Issues in Mental Health Nursing, 25:3, 261-276, DOI: 10.1080/01612840490274769; Happell, B., & Scholz, B. (2018). Doing what we can, but knowing our place: Being an ally to promote consumer leadership in mental health. International Journal Of Mental Health Nursing, 27(1), 440–447. https://doi-org.ezp.lib.unimelb.edu.au/10.1111/inm.12404.

[2] Brosnan, L. (2012). Power and Participation: An Examination of the Dynamics of Mental Health Service-User Involvement in Ireland. Studies in Social Justice, 6(1), 45–66. Retrieved from https://ezp.lib.unimelb.edu.au/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=phl&AN=PHL2204526&site=eds-live&scope=site


Part 3: Make it a process not just a product


May 2019

By Emily Verstege

Have you ever spent all day in the kitchen cooking a meal, only to find yourself not particularly hungry when you sit down to eat? Often, we’re so focused on creating a Masterchef moment at the end, that we miss the enjoyment of shopping for and preparing a meal to be shared with loved ones.

The same can be true for evaluations. It’s easy to fixate on The Final Report as if we were preparing a degustation dinner party. We miss the memo that it’s not just the products of evaluation that create value. Evaluation processes are also enormously valuable: for our clients, their stakeholders and our own practice.

I firmly believe the opportunity for evaluation processes to create value exists in every evaluation, regardless of its nature, size, scope or methodology. Consistent with Michael Quinn Patton’s work on utilisation-focussed evaluation, our intention is always to work towards our evaluation products being used.[1] This is far more likely if we’ve also focussed on using evaluation processes as strategically as we can.

For example, project start-up meetings are a chance for all evaluation stakeholders to be in the same place, at the same time. We recently began a project in regional New South Wales, where our start-up meeting was the first time the commissioning agency had met the leaders of the contracted service provider. Through conversation at the meeting, practitioners and stakeholders deepened their understanding of each other’s strategic objectives and restrictions. Program logic design workshops and best practice showcases are also a chance for practitioners to network, share information and identify potential collaborative opportunities.

In our experience, the distinction between process and product is especially important when working with commissioned service providers. It’s common for them to be anxious about how their performance will be judged by the evaluation, and they’re conscious of the evaluation’s role in determining future funding decisions. When we shift the focus to using our evaluation processes for learning and improvement, some of those concerns subside.

Some years ago, Jade and I led an evaluation of a suite of early intervention programs for children with autism spectrum disorders. Our evaluation drew on service providers’ data which, until that point, was held in paper copy only. As part of the evaluation, we developed a data entry portal for service providers to use beyond the evaluation. This was a big win for service providers, whose own learning loop was shortened by having access to up-to-date, easy-to-access data. And the advantage for us was having access to a complete, consistent dataset.

Viewing evaluation processes as equally, if not more, important than their products is also an opportunity for our own professional practice and reputation. We are far more likely to deliver a meaningful, nuanced and relevant evaluation product when we have used every possible evaluation process to create connection with our clients and their stakeholders. One of our regular practices is to present our indicative findings at a workshop with our clients. We’ve found that when clients can help us interpret the findings, there’s a stronger sense of ownership and connection with the results. For us, of course, this also reduces the chance our evaluation product won’t meet our clients’ needs.

Here are our thoughts on opportunities to leverage evaluation processes.

  • Know your audience(s). In our last article in this series, Jade wrote about how to identify your evaluation audience (remembering there’s usually more than one). Work with your clients to develop a communication map, which outlines each group’s information needs. We find this helps us keep our end users in mind, and also helps clients realise the potential of the evaluation.
  • Ask questions. Evaluation work is typically won or lost through public tender. By identifying ways your evaluation processes can create additional value beyond the RFQ, you can position yourself as a preferred supplier.
  • Collaborate and co-design. Encourage your clients to move from a ‘done for you’ to a ‘done with you’ evaluation model. If your clients are used to a hands-off approach to evaluation, help them understand why a focus on collaborative processes can improve the evaluation outcomes.

In our next article, Jade will zoom in on evaluation products and how to make them more usable through careful structuring.


  [1] Patton, M.Q. (2013). Utilisation-focused evaluation checklist. Evaluation Checklist Project. The Evaluation Center, Western Michigan University. Retrieved October 7, 2016, from https://www.wmich.edu/sites/default/files/attachments/u350/2014/UFE_checklist_2013.pdf


Evaluation and the Common Elements Approach


May 2019

By Georgia Marett

With evaluation becoming more integrated into the way projects and programs are undertaken, it can seem that there is a lot of evidence being generated about what specific programs and techniques work. However, this specific evidence is often not combined to understand what works more broadly. The ‘common elements approach’ attempts to rectify this. At a recent lecture by Assistant Professor Bryce McLeod and others run by the Monash University Department of Social Work, I learnt about common elements, how the common elements approach has been used in therapeutic practice, and how it can be applied to issues facing governments and businesses.

What is the common elements approach?

Common elements (also called practice elements) are components or techniques which, on their own, have evidence to suggest that they are effective. The common elements approach attempts to gather these components and apply them to new problems. For instance, if a particular therapeutic technique (e.g. group therapy) has been found to be effective when counselling adolescents with anxiety, it could also be tried with adolescents with depression as part of a common elements approach. The therapeutic technique is the common element, which is combined with other evidence-based components, to create a new way of approaching treatment.

In a non-therapeutic environment, the common elements or ‘building-blocks’ are used to craft a service that may not resemble any program that has been used elsewhere but is entirely underpinned by evidence. There is much literature around how common elements are distinguished, selected and combined. One way of identifying and incorporating common elements is the Distillation and Matching Model (DMM), which allows researchers to empirically accumulate a map of the treatment practices underpinned by evidence, understand the underlying relationships between treatment practices and client or context variables, and form hypotheses as to how the common elements could function together.

Why common elements?

Dr Jessica Hateley-Browne of the Centre for Evidence and Implementation outlined some key reasons why the common elements approach can be beneficial for solving complex problems.

  • Common elements allow us to optimise interventions with what works.
  • Common elements provide a common language, where we talk about techniques rather than programs.
  • Common elements enable user-centred design—interventions can be designed in a modular way.
  • This is a non-siloed approach—the same hammer (common element/s) can be used on many nails (issues).
  • Common elements focus on the usability of the intervention and how to implement it.

Where did it come from?

The common elements approach has its roots in clinical and therapeutic practice (especially psychology) where the focus is shifting from programs to practice elements—individual components of a program that have been validated and proven to be effective. Practitioners are turning away from bundles of treatments billed as programs and towards pinpointing the parts that work.

This is where recent work by Professor McLeod comes in; he has focused on finding the common elements which work when treating adolescent substance abuse. Professor McLeod and his team used empirical distillation—existing family therapy programs were evaluated by listening to sessions in which these techniques were implemented. The outcomes were then coded and specific elements of the therapies were disentangled and evaluated. Several common elements were distinguished and ready to be tested. The next step would be to implement the common elements in a clinical setting and evaluate their effectiveness.

How can it be used in public policy?

Using this approach in public policy can be a little more complicated than in a therapeutic setting, but it is still possible. There is not a lot of research in this area as this method is still relatively young. However, there is increasing interest in the common elements approach in the human services—where the similarities of issues are not necessarily acknowledged, and different solutions are constantly being invented. There is a trial currently underway at multiple human services sites around Victoria, looking at both common elements for practice and common elements for implementation.

How does evaluation fit in?

Evaluation is implicit in the common elements approach—there must be evaluation of some kind in order to figure out whether practices are effective. As the common elements approach filters through the public service, evaluations will form a large part of the evidence base from which elements are distinguished. Theory-based evaluation can support the identification of common elements. In particular, realist approaches have long advocated the need to focus on what it is about programs that makes them work, for whom, in what contexts and why. This requires dissecting an intervention to reveal the underlying mechanisms that generate outcomes, and in doing so distilling down the critical ingredients for success. Mechanisms are not program strategies or activities but the cognitive-affective responses generated by participants’ interactions with one or more program strategies.  Framing ‘common elements’ as mechanisms offers the possibility to extend and sharpen the search for causal levers that might be germane across many types of interventions. Engaging in regular evaluation will mean that new evidence is constantly filtering through and elements can be combined and re-combined to tackle new or existing problems.


A personal journey toward evaluation


May 2019

By Scott Williams

It was never going to be easy coming back to Sydney after so many years. I was 42, newly separated from a marriage and leaving behind those I loved and a home I thought I would retire in someday. The Riverina region was my home and was a place that held cultural ties to my family for generations. We came from the Ngiyampaa nation, a desert dwelling people who lived where the township of Ivanhoe now stands, not far from Lake Mungo. In the early days, my ancestors were moved off their traditional land, eventually scattering after the Catholic mission days and finding work as shearers and on cattle stations as cleaners and maids. They lived hard lives for very little reward, but change would come for their children, and their children’s children.

It would be me who would experience the prosperity that education would bring and the recognition that past generations had sacrificed and fought so hard for on the battleground of civil liberty. I had a PhD in microbiology/ geology waiting for me in the east and a new life was beginning. When I set off for this brave new world, I could never have imagined the twists and turns my professional journey would take. But a self-funded PhD candidate needed work, and the university was only too happy to provide. I began conducting research into sexual health outcomes for Aboriginal youth in metropolitan Sydney. It was not too long before this expanded into researching peer interview strategies for a whole range of issues, from drug addiction and socioeconomic disadvantage to the power differentiation in social research.

The work was fascinating, and I put the word out to academics that I was searching for opportunities to broaden my experience. Then the day came when an email dropped into my inbox describing an opportunity to join a social policy evaluation consultancy as an Indigenous Intern. I jumped at it and after a successful interview found myself drawn into a space I had never experienced. Evaluation was completely new to me, but ARTD welcomed me with open arms.

The Aboriginal Internship

The internship was a full-time, 12-week experience over the summer of 2018–19 that exposed me to a range of evaluation projects, methods and outcomes. I was assigned an Aboriginal mentor, Simon Jordan, Director Aboriginal Projects and Partnerships, and I hit the ground running, contributing to qualitative data analysis immediately. In concert with this, I gained early exposure to the conceptualisation of program logics, evaluation theory, interview transcription, summary writing and internal research methods. There were also opportunities to explore data visualisation, a subject I have grown quite fond of.

A highlight of those first weeks was a visit to a prior ARTD client, the Aboriginal Health Unit within St Vincent’s Hospital. It was a rewarding experience to see how ARTD’s evaluation work had real-world effects on the health outcomes of Aboriginal people and the broader hospital environment. Conversations arose around the services Aboriginal people from country areas would now be able to benefit from. My thoughts drifted back to my younger years in the mid-1980s, when I visited various rural Aboriginal communities with my mother, who was an Aboriginal Drug and Alcohol worker at the time, and saw the destitute living conditions in corrugated iron shanty towns along river banks and in impoverished communities on the fringes of townships. It was a scene of dysfunction, disempowerment and loss of hope.

It was impressive to observe ARTD’s positive and supportive relationships with various Aboriginal units and organisations. ARTD staff, with the guidance and advice of Aboriginal members, understood the importance of cultural and social protocols in forming mutual trust when working with Aboriginal communities. This reinforced my belief in the great potential value that evaluation holds for clients and the people they provide services to. During my lifetime, I have been witness to significant improvements in the living standards within some Aboriginal communities and their inclusion in the decision making processes that affect them.

Monthly meetings with Simon and senior management kept me informed on my progress and stimulated my growth in knowledge and experience. I found myself filling with confidence, which enabled me to take on tasks with more speed and accuracy and better time management for deadlines. The internship offered me the flexibility to work in areas I found interesting and the space for personal development in order to hone skills that I could build upon. There were also opportunities to participate in learning workshops, seminars, staff meetings and face-to-face client discussions.

By the end of the internship, I had amassed a variety of new skills and gained a more rounded understanding of public policy evaluation and social issues and outcomes. All in all, I contributed to more than 10 evaluation projects. But more than this, I had the great pleasure to interact with supportive, professional and dedicated teams within ARTD. What impressed me so much was their knowledge in a wide variety of policy areas and their nuanced understanding of social issues. The focus on getting the work right and doing the best for the client, and the communities and individuals they serve, is always at the forefront of what ARTD does. They are also driven to advance learning, enhance flexibility and embrace creativity in the workplace. 

Where to now?

I’ve come a long way since starting the internship. I no longer aspire to make breakthroughs in the disciplines of microbiology and geology, though they will always be of great interest to me on a personal level. I have changed focus and re-aligned. Now I have the chance to be part of an effort to improve the outcomes of social programs and policies and, ultimately, improve the lives of some of the most vulnerable people in communities across the country. There is the opportunity to be part of the ongoing effort to improve the lives of Aboriginal and Torres Strait Islander people, and to carry on the work of those who pursued change in their time. You could even say I now have a rejuvenated sense of purpose, a reinvigorated passion for creativity and a deep appreciation for how ARTD approach the evaluations they do.

It was with great delight and enthusiasm that I accepted an offer to stay on with ARTD after the internship, feeling prepared for the work ahead at ARTD and my career more broadly. As an Aboriginal person I feel I have a valued and important perspective to provide to the work we do.

I would thoroughly recommend the Aboriginal Internship at ARTD to any aspiring Aboriginal/ Torres Strait Islander person who would like to learn more about public policy, understand evaluation processes and outcomes and grow within a supportive workplace.

I did it, and the future is very bright!  


Part 2: Know your evaluation audience/s


April 2019

By Jade Maloney

For evaluation to live up to its potential to improve outcomes and ensure effective targeting of limited resources, evaluation communications must cut through.

The first step is to know your audience or audiences (as you will often have more than one). Knowing your audience will inform the engagement channels you use, how you frame your message and the language you use, among other things.

Get to the heart of it

There’s a large base of literature on the demand and supply side factors that affect evaluation use – that is, the decision setting and user characteristics on the one hand and evaluators’ approach on the other. (For more, see my research on evaluation use in the Australian context). The demand-side factors affecting evaluation use include the characteristics of evaluation users, their commitment or receptiveness to evaluation, and the broad characteristics of the context in which the evaluation is being conducted, particularly the political climate, social climate, language and culture, values and interests, as well as the interactions between these.[1]

While you can’t do much about the political climate, you can get to know your evaluation audiences. This will help you to deliver content that is of value to them, in ways that they can take on board. It can also help you to overcome lack of commitment to or fear of evaluation.

Get into their shoes

Think like a market researcher. For each of your audiences, ask yourself these questions:

  • What is it that keeps them awake at night? What is most important to them and how is the evaluation helping with this?
  • How do they best learn new information? Are they visual, auditory or kinaesthetic learners?
  • What evidence will they find credible? Will they only value the statistics, or will consumer stories be the language that speaks to them?
  • How much detail do they need? Do they need an audit trail, the headlines or something in between?
  • How often do they need information? Do they need regular updates to inform their practice and decision making?
  • What else is going on for them? Could this evaluation affect their job or the way they receive services? Is it high on their agenda or one priority among many?

Work out what this means for how you communicate

The questions above will give you a working profile of each of your audiences. You might like to keep track of this information in a table or matrix.
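
As a purely illustrative sketch, a row of such a matrix for a hypothetical program-manager audience might record: what keeps them awake at night (say, implementation bottlenecks); how they best take in information (short visual updates); the evidence they find credible (service data backed by consumer stories); the level of detail they need (headlines, with the option to drill down); and how often they need it (monthly, while data collection is underway).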

By now you may be asking yourself, ‘How could I possibly meet the needs of all of these audiences in one report?’ The answer might be that you can’t. You might need different communications tools for different audiences, such as video summaries, findings brochures, as well as slide decks and full reports.

But, in some cases, you will be able to meet the needs of various audiences through layering information in a single communication. At its most basic, this means starting with a one-pager of key findings, followed by an executive summary and a full report, with technical detail relegated to appendices. At the next level, it means using different modes of communication, recognising that the stats tables will reach some, while the stories will reach others. You can also layer in face-to-face findings discussions by providing visuals and written documents to complement your auditory communication, and using activities to enable people to engage with the implications. Stay tuned for more on structuring reports and telling evaluation stories.

The next article in our Communication for Evaluation series will cover evaluation as process rather than product.


[1] Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364; Johnson, K., Greenseid, L.O., Toal, S.A., King, J.A., Lawrenz, F. & Volkov, B. (2009). Research on evaluation use: a review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377–410; Vo, A.T. & Christie, C.A. (2015). Advancing research on evaluation through the study of context. In P.R. Brandon (Ed.), Research on Evaluation. New Directions for Evaluation, 148, 43–55.


Applied ethics in program evaluation


April 2019

By Lia Oliver

As evaluators, ethics is key when considering if, when and how we will conduct our projects. Ethical decisions shape the validity and reliability of the evaluations produced. We also know that clients of government social policies and programs are often vulnerable community members, so it’s paramount that evaluators of these programs engage sensitively and ethically.

Our recent internal training, led by Brad Astbury, was a chance for us to refresh our understanding of ethics and flex our ethical reflexes through some challenging case studies.

The session reinforced why applied ethics in program evaluation is so important.

Evaluation is the most ethically challenging of the approaches to research inquiry because it is the most likely to involve hidden agendae, vendettas, and serious professional and personal consequences to individuals. Because of this feature, evaluators need to exercise extraordinary circumspection before engaging in an evaluation study. (Mabry, 1997, p.1)

Ethics is about more than HREC processes

In some evaluations, we must seek formal approval from a Human Research Ethics Committee (HREC), and this is often what evaluators think of when they think of ethics. HREC applications address ethical concerns, such as informed consent, voluntary participation, confidentiality, privacy, anonymity and benefits and risks of the research.

While these are all crucial, a successful ethics application does not guarantee an ethical evaluation. Evaluators need to consider ethics beyond the procedural aspects of an ethics application. Ethical awareness should be embedded throughout the duration of a project – from planning processes and partnerships that recognise and explicitly address the need to be respectful, through to sharing evaluation findings with program stakeholders. This ensures the evaluation is honest, transparent and credible.

Be wary of ethical fallacies

House (1993) outlined the ethical fallacies that frequently lead to ethical mistakes in professional evaluation:

  • clientism: doing whatever the client wants
  • managerialism: seeing program managers as the sole beneficiary
  • methodologism: believing that adherence to good methods is synonymous with being ethical
  • relativism: accepting everyone’s opinion equally
  • pluralism/elitism: giving a priority voice to the more powerful.

These ethical fallacies relate to the need to capture diverse stakeholders in evaluation so that we can understand the nature and role of different value perspectives. Some evaluation sponsors have powerful vested interests and it is important that evaluators can identify these to ensure they do not bias the findings. Often evaluators will need to balance the needs and voices of the client or program funder against the program beneficiaries to reach fair, accurate, representative and valid findings (House, 2008).

Normative ethical frameworks

Three main normative ethical viewpoints exist (Newman & Brown, 1996). These all relate to the principles of autonomy, nonmaleficence, beneficence, justice and fidelity, and often form the basis of the codes of conduct used to identify appropriate ethical behaviour.

  • Consequentialist: drawing on utilitarianism to achieve the maximum happiness and the greatest good for the greatest number.
  • Deontological: focusing on the morality of an action and whether that action itself is right or wrong, based on explicit rules or duties, rather than looking only at the consequences of the action.
  • Virtue-based: focusing on altruism.

Ethics in practice

For evaluators, ethical dilemmas can occur frequently across projects and sectors. Often, evaluators resolve ethical dilemmas by referring to their intuition, talking to peers, and reviewing and drawing on past experiences. Evaluators can also draw on relevant ethical guidelines such as the American Evaluation Association Guiding Principles for Evaluators (American Evaluation Association, 2018), the Program Evaluation Standards (Yarbrough et al., 2011) and the AES Code of Ethics (Australian Evaluation Society, 2013). Other useful resources include ethical decision-making flowcharts that map your ethical decision making to a particular action (Newman & Brown, 1996).

In our workshop, we discussed three different case studies in small groups. We identified the ethical concerns, developed responses and discussed how the ethical concerns would impact the evaluation. While each group had the same evaluation codes of conduct as a reference point, a range of solutions was developed.

We found that ethical dilemmas can be interpreted differently through the lenses of different codes of conduct, ethical viewpoints and evaluators’ own experiences. There is never just one simple answer when trying to resolve an ethical dilemma. That said, we cannot shy away from deep engagement and reflection on ethical practice in evaluation, as this ensures our practice benefits and doesn’t harm participants, and that findings are useful and credible.

Reference List

American Evaluation Association (2018). AEA - American Evaluation Association : Guiding Principles for Evaluators. [online] Eval.org. Available at: https://www.eval.org/p/cm/ld/fid=51 [Accessed 15 Apr. 2019].

Australian Evaluation Society (2013). Code of Ethics. [online] Australian Evaluation Society. Available at: https://www.aes.asn.au/images/stories/files/membership/AES_Code_of_Ethics_web.pdf [Accessed 15 Apr. 2019].

House, E. R. (1993). Professional Evaluation: Social Impact and Political Consequences. Sage.

House, E. (2008). Blowback: Consequences of Evaluation for Evaluation. American Journal of Evaluation, 29(4), pp.416-426.

Mabry, L. (1997). Evaluation and the Postmodern Dilemma. JAI Press.

Newman, D. L., & Brown, R. D. (1996). Applied ethics for program evaluation. Thousand Oaks, CA: Sage.

Yarbrough, D., Shulha, L., Hopson, R. and Caruthers, F. (2011). The program evaluation standards: A guide for evaluators and evaluation users. 3rd ed. Thousand Oaks, Calif.: SAGE.


Part 1: The communications challenge: how can evaluation cut through?


April 2019

By Jade Maloney, Emily Verstege and Ruby Leahy-Gatfield

Evaluation is about assessing the ‘merit, worth or value’ of something (in our context, this is usually a policy, an intervention or program) to inform decision-making. The significant amount of expenditure on evaluation, particularly by government, has been justified by the potential for use of evaluations to improve outcomes and ensure effective targeting of limited resources. But evaluation reports often go un-used, gathering dust on shelves.

A range of factors contribute to this (see Michael Quinn Patton’s Utilisation-Focused Evaluation and Jade’s research on evaluation use in the Australian context). Poor communication is one of these factors.

The Information Age

This is the age of information overload, and evaluation is unlikely to be at the top of your audience’s to-do list.

  • High volume: We’re overwhelmed by our email and social media accounts, big data and the wealth of references at our fingertips. There’s every chance your signal will be lost, if it doesn’t cut through all this noise.
  • Low memory: The human brain can only process a small amount of new information at once. The average person can only hold four chunks of information in their brain at a time. Is your chunk interesting enough to take the place of what your audience is having for lunch?
  • Jargon over meaning: It’s too easy to say things like “We used a theory-based approach to assess an innovative intervention to identify best practice components to leverage for future scalability.” But no one will know what you mean. (And do you?!)

The key communication issues

Evaluation reports can fail on communications in many ways. 

  • Complexity: Too much information is overwhelming. It makes it hard for your audience to know what’s important and even harder to remember it.
  • Misunderstanding: Too little information leaves your findings open to misinterpretation. Jargon contributes to this.
  • Boredom: Overly complicated, hard-to-understand information makes it too easy for your audience to check out. A report needs to engage if it is to compete with everything else going on around them.

When a report does not tell an engaging story, when key questions are left unanswered, and when jargon overtakes meaning, reading an evaluation report becomes a chore rather than a gift.

Our clear communications journey

Jade studied journalism and began her career in publishing before joining ARTD in 2008; she completed her Master of Arts in Creative Writing in 2013. Emily has worked as a science writer and editor and—as her school report cards note—has always been a competent verbal communicator. Ruby worked in the communications team at International IDEA in Stockholm and has been known to stick grammar jokes up on the office walls.

Jade and Emily have been thinking about communication in evaluation for years. They first presented on How Effective Communication Can Help Evaluations Have More Influence at the Australian Evaluation Society International Evaluation Conference in Sydney in 2011. They’re both relieved and disappointed that much of their advice remains relevant.

Our favourite books about communication include Gabrielle Dolan’s Stories for Work, Chip and Dan Heath’s Ideas That Stick, Neil James’s Writing at Work, and Mark Tredinnick’s The Little Red Writing Book.

Our collective frustration is the wilful misuse of the possessive apostrophe (anyone for scrambled egg’s?) and evaluations that don’t reach their potential because they get stuck behind communication barriers.

The tips to come

In this seven-part Communication for Evaluation series, we’ll step you through our top tips for evaluation communications that cut through. We’ll cover ways to:

  • Know your audience
  • Make it a process not just a product
  • Structure it
  • Tell stories
  • Cut the clutter (use plain English)
  • Hack yourself to write better.

Amplifying social impact


April 2019

Jack Cassidy and Ken Fullerton

How can organisations improve their evidence-based decision making and, ultimately, their social impact? This was the key focus question in the latest AES professional learning seminar held in Sydney on 21 March.

Lena Etuk of the Centre for Social Impact (CSI) – a collaboration between UNSW, The University of Western Australia and Swinburne University of Technology – presented one approach to answering this question, describing CSI’s Amplify Social Impact project and how they aim to help organisations measure their social impact achievements.  

A major focus of the Amplify project is connecting and working with organisations addressing social issues in the areas of Housing Affordability and Homelessness, Education, Financial Wellbeing, Social Inclusion and Work. The project has three aspects: developing an evidence base and research agenda for key social issues, engaging and connecting stakeholders to design and implement innovative solutions, and developing an online platform for understanding social problems, measuring social impact, and reporting and benchmarking social outcomes.

Lena’s presentation stimulated a lot of debate in the audience.

Outcome measurement

Lena, while acknowledging the significant amount of work and investment going into social purpose programs and initiatives in Australia, identified two related concerns with outcome measurement in the sector.

  1. Organisations aiming to have a social impact often use a wide variety of tools to measure their outcomes.
  2. A large proportion of the tools being used to measure social impact, including population- and household-level surveys, have not been validated.

This means that those of us working in the social purpose sector are often unable to compare outcomes across similar services and programs, or to know with confidence that we are measuring the outcomes we say we are measuring.

CSI have collated a range of instruments and developed a register (called Indicator Engine) of over 700 indicators tailored for use by government, non-government organisations and social entrepreneurs.

Benchmarking

CSI’s intention is to develop a platform (called Yardstick) in which organisations striving to contribute to the same social outcomes can benchmark their performance, learn from one another and potentially replicate elements of successful projects.

The platform will be open to, and is designed to be of benefit to, organisations of all types and sizes, governments, existing networks, academics, social entrepreneurs and other stakeholders working to address social issues.

Anticipated challenges

While the intent is noble, it is likely that organisations and evaluators who choose to use Amplify’s Indicator Engine and Yardstick platform will encounter challenges.

  • What if the indicators don’t meet the needs of the project managers to understand how their intervention is working?
  • How will the identified instruments be further tested with Australian populations, including Indigenous populations?
  • What certainty will users of the platform have about data quality?
  • Is benchmarking the right term? Is the intention to define a certain standard?
  • What other information do you need to understand the drivers of differential outcomes so that benchmarking can serve its purpose of enabling organisations to make improvements?

We believe benchmarking could bring new opportunities and challenges to the sector.

Imagine a hypothetical scenario involving a youth job creation program in a large urban area and a similar program rolled out in a regional or remote community. An important outcome of both programs is enhanced job satisfaction, so they use the same indicator but achieve different results. From an evaluator’s perspective, the Amplify platform may be useful because it could enable helpful comparisons of the data, but it might also fail to collect the (accurate) data needed to understand the contextual drivers of difference, as the sketch below illustrates.
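To make that comparison point concrete, here is a minimal sketch in Python of how an evaluator might line up the same satisfaction indicator from two programs while keeping contextual differences visible. The program names, scores and context fields are invented for illustration; they are not drawn from the Amplify platform or any real dataset.

```python
# Hypothetical sketch: comparing one shared indicator across two programs
# while keeping contextual differences visible. All figures are invented.

from statistics import mean

programs = {
    "Urban youth program": {
        "job_satisfaction": [7, 8, 6, 9, 7, 8],   # same 0-10 indicator
        "context": {"setting": "large urban area", "local_unemployment": 0.05},
    },
    "Regional youth program": {
        "job_satisfaction": [5, 6, 7, 5, 6],
        "context": {"setting": "regional/remote community", "local_unemployment": 0.11},
    },
}

for name, data in programs.items():
    avg = mean(data["job_satisfaction"])
    ctx = data["context"]
    print(f"{name}: mean satisfaction {avg:.1f} "
          f"({ctx['setting']}, unemployment {ctx['local_unemployment']:.0%})")

# A raw benchmark would simply rank the two means; a fairer comparison also
# reports (and, where possible, adjusts for) the contextual drivers above.
```

Even a toy example like this shows why a league table of raw means could mislead: much of the difference may sit in the contextual fields rather than in program quality.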

It's possible that other actors could use Amplify for different purposes, such as performance monitoring. Accurate, validated data could also be important to funders, who have limited resources but need to decide which programs to fund. The possibility of funders using Amplify to monitor an organisation’s performance may deter some organisations from participating, as they may fear that their performance (according to the data) will not be strong enough to attract additional or new funding. We feel that benchmarking needs to be done carefully to maintain a collaborative spirit between organisations. It shouldn’t be about who is outperforming whom.

What next?

Despite these issues, there are reasons for evaluators to be excited about the Amplify Social Impact project and CSI’s work.

So far, CSI has secured only around 50% of its funding target – securing the outstanding funding will enhance its ability to work closely with organisations designing and implementing social purpose programs. In turn, this could create new opportunities for strengthening outcomes measurement.

The Amplify project is likely to generate further discussion around indicators, benchmarking and addressing social issues. This is positive as it could lead to individuals, governments and organisations becoming more aware of these issues, sharing learnings and increasing opportunities for collaboration and replication.   

A cynic’s response

ARTD Partner, Andrew Hawkins, believes this endeavour should be entered into with great caution and humility about what can be achieved.

Reliable and valid measures are great, but they don’t ensure reliable and valid measurement of outcomes. Most psychometric scales are designed to measure a current state – self-esteem, coping skills and the like. These scales are not usually designed to be sensitive to the changes a program produces, especially programs operating in a complex adaptive system (i.e. society), where change may be non-linear.

More fundamentally, the method by which measures are taken and the completeness of the data are huge potential sources of error that may well dwarf the benefit of using similar measurement tools. What if one organisation hands out the measurement tool at the end of a session while program staff are present, and another mails it out and gets a low response rate? What if one has a control group with random allocation to deal with attribution error and another does not? There are innumerable ways that error can seep in and pollute so-called ‘equivalent’ measures of outcomes. And then, if we follow Rossi’s law – that the more rigorous the measurement, the more likely the result is ‘no effect’ – we could end up funding organisations with less rigorous measurement, programs with easy-to-measure outcomes, or relatively easy-to-change cohorts.

While there is merit in providing and using similar reliable and valid measures across interventions, it would be dangerous to create ‘league tables’ of organisations and programs. It would be wholly unacceptable to allow this to happen by publishing data when the measures have not been taken in a standardised, systematic way and shown to be sensitive to the changes an intervention is making.

…………….

We look forward to attending the AES Professional Learning session on ‘Harnessing the promise and avoiding the pitfalls of machine-assisted qualitative analysis’ on 2 May, presented by ARTD’s Jasper Odgers and David Wakelin and our partner at Altometer BI.


Safer Pathway evaluation released


April 2019

By Ruby Leahy Gatfield and Fiona Christian

ARTD’s evaluation of Safer Pathway found the program is delivering a consistent, effective and timely response to victims across NSW.

Safer Pathway is a key initiative under the NSW Domestic and Family Violence Blueprint for Reform 2016–2021, led by Women NSW and delivered by the Department of Justice. Rather than an individual service or program, Safer Pathway provides state-wide service system infrastructure designed to ensure all victims of domestic and family violence in NSW receive a timely, effective and consistent response, regardless of where they live. It offers victims a tailored, coordinated service based on their needs and the level of threat to their safety. 

Women NSW engaged ARTD to conduct an independent evaluation of Safer Pathway over 2017-18. 

What did we find?

Using a realist-informed, mixed-method approach, we found that the initiative has been implemented largely as intended and is generally meeting its intended objectives. As a result of Safer Pathway:

  • victims’ safety is being routinely assessed by NSW Police, and victims at serious threat are being prioritised throughout the Safer Pathway service response
  • a single, streamlined referral pathway has replaced the previous service fragmentation and duplication, helping victims to access the support they need and facilitating information sharing between service providers to prevent threats to a person’s life, health or safety
  • there is now a standard level of service for victims across NSW, with those at high risk receiving a more consistent, coordinated response across regions and service providers.

We also identified 23 recommendations to improve the service model and delivery. These related to:

  • expanding referral pathways from other agencies and services
  • continuing to provide and strengthen state-wide training
  • revising the current Domestic Violence Safety Assessment Tool to enhance its predictive ability
  • investigating strategies to engage hard-to-reach groups and address service gaps
  • developing and implementing more systematic data collection for monitoring and evaluation.

Download the full report here: https://www.women.nsw.gov.au/download?file=650328

What’s next?

Women NSW and partner agencies are using the evidence and recommendations in the report to strengthen the service response to victims of domestic and family violence in NSW, as documented in their response to the report.

Hayley Foster, NSW Director of the Women's Domestic Violence Court Advocacy Services (a key Safer Pathway service provider) is calling on all political parties to “pay attention to these important evaluation findings and what they tell us about what is working and what is needed.”


Reducing our waste with Bin Trim


March 2019

By Ken Fullerton

Waste reduction is an important part of our commitment to be an environmentally sustainable business. We believe that it is not only good for people and the planet, but also saves costs and highlights our intention to act rather than just talk about sustainability issues.

According to the NSW Environment Protection Authority (EPA), about 70% of items placed in a general waste bin can be reused or recycled instead of being sent straight to landfill. Each year, NSW businesses send more than 1.8 million tonnes of waste to landfill, so diverting that 70% would keep roughly 1.3 million tonnes out of landfill annually.

We have had a long-term commitment to become more environmentally sustainable. To help us identify opportunities to reduce our waste, enhance our existing recycling systems and further educate our staff, we recently underwent a free Bin Trim waste and recycling assessment.

What is Bin Trim?

Developed by the EPA’s Business Recycling Unit under the Waste Less, Recycle More initiative, Bin Trim aims to ‘help businesses take action on waste.’ We first learnt of the program when we conducted an outcomes evaluation of the first phase of the Waste Less, Recycle More initiative for the EPA from early to mid-2018.

It is the largest waste and recycling funding program in Australia. By undergoing an assessment, we joined over 22,000 businesses across NSW that have committed to protecting the environment by increasing recycling.

In 2017, the NSW Government announced an extension to the Waste Less, Recycle More initiative and committed an additional $337 million over four years from 2017–2021.

What did the assessment involve?

Our assessment involved an EPA-approved assessor from Cool Planet visiting our Sydney office.

In our initial meeting, the assessor helped us better understand our existing waste and recycling processes, the different types and quantities of waste we generate as a business and different recycling options available in our area. He suggested we set up two new dedicated recycling bins to separate our recycling materials – one for paper and cardboard waste and one for products such as tins and plastic containers – and provided these bins (free of charge).

The new bins complemented our existing dedicated bin for soft plastics. Our soft plastics waste – such as plastic bags and packet wrapping – is later transferred into a REDcycle drop-off bin. It is used to produce a range of products including decking, furniture, signage and even new roads.

Following the visit, our assessor prepared a tailored action plan and registered us on the EPA’s online Bin Trim App. This enables our recycling methods and achievements to be reported and for us to become part of an online community of businesses committed to increasing recycling across NSW. It’s a way to learn about other innovative recycling approaches businesses may be using.

In his second office visit, our assessor explained the action plan. This included putting up dedicated signs beside our existing and new bins to make it clear exactly what can and cannot be recycled, and which bin each type of waste belongs in.

What was the impact?

We are actively using our new bins to separate and recycle waste items and have observed less waste being disposed of in our general waste bin. We have also updated our contract with our cleaning company to ensure that our office waste is put into the correct recycling channels once it leaves our office.

Our staff have commented that the Bin Trim signage has been helpful and there have been more informal office conversations about what we should do with our waste and where it ultimately ends up. We are always on the lookout for new ways to become more sustainable – from recycling office materials and using energy efficient lightbulbs, to purchasing eco-friendly products and tending to our worm farm, already much loved by our staff!

The experience was an educative one and we would certainly encourage other businesses across NSW to take advantage of the EPA’s Bin Trim program by signing up to receive their own free assessment. Reducing waste and increasing recycling requires collective effort at the individual, organisational and structural levels. Through businesses making small improvements we can make substantial positive impacts on the environment.


How can evaluation support (not thwart) community development?


March 2019

By Jade Maloney and Ruby Leahy Gatfield

As evaluators, it’s tough to hear the criticisms levelled at evaluation by community development theorists and practitioners. But, listening to their perspective, we can see how certain approaches to evaluation – coupled with certain expectations from funders – could thwart rather than support community development initiatives.

What’s the problem?

Traditional formative and summative evaluation approaches don’t align with the iterative and emergent nature of community development. At the start of a community development initiative, it is not clear what it will look like, how it will be delivered, or even what outcomes it will aim to achieve. Initiatives continue to evolve to reflect community needs – they don’t become settled models with pre-determined outcomes.

Community development practitioners question how evaluations will be used in funding decisions, the usefulness of evaluation for their purposes, the resources involved, and whether evaluation is up to the task of capturing the value of their work.

What evaluation approaches might better serve community development?

There are more ways to approach evaluation than we can count. If evaluators are to truly support, not thwart, community development, we need to understand the context – the complex adaptive systems – in which initiatives develop, and align our practice with the philosophy of community development by being organic, responding to community processes and sharing ownership.

In our experience evaluating community development, the following approaches are more appropriate and particularly useful.

  • Developmental evaluation (Patton, 2011): this approach recognises the iterative and emergent nature of community development and provides a framework for systematically supporting evidence-informed decision-making about the ongoing development of an initiative. Accountability is centred with those delivering the initiative and tied to their values.
  • Empowerment evaluation (Fetterman, Kaftarian & Wandersman, 2015): this approach was designed for community organisations, and gives them ownership of the evaluation, with the support of an evaluator as a ‘coach’. The principles align well with those of community development.
  • Principles-focused evaluation (Patton, 2018): this approach enables an assessment of principles – whether they can guide the work, and whether they are useful, inspirational, developmental and evaluable. This is useful for community development, which is generally guided by principles rather than set program models.

Where can I read more?

Check out our article in a special edition of the Social Work and Policy Studies: Social Justice, Practice and Theory journal for more detail and a case study of how we’ve applied these approaches to a national community development initiative.

There is also a lot of other great content in the special edition, which comes at an important time for community development. As Howard and Rawsthorne identify in their editorial, we need to take care that as we benefit from the shift toward individual choice and control, we do not lose the value of collective ideas and actions. In their article, Hirsch et al. outline the implications of the changing face of disability and refugee services and Mapedzahama describes the significance of race in community development in Australia. These articles identify important considerations for those working with community in Australia, including evaluators.

You can also read our tips for monitoring and evaluating community development in the Ability Links NSW Community Development Resource Package, which was developed to support thinking about the inclusion of people with disability in community development, and our blog about our use of developmental evaluation with Dementia Australia.


How can evaluators #BalanceforBetter?


March 2019

By Jade Maloney and Ruby Leahy Gatfield

This year's International Women's Day challenged us to think about what we can do to #BalanceforBetter. As our staff celebrated today, we started to think about what we can do as evaluators to support a more balanced world.

Have you heard of feminist evaluation? It draws on feminist theory and isn’t so much an approach as a way of thinking about evaluation. Feminist evaluators may draw on participatory, empowerment and democratic approaches to evaluation – valuing diverse voices. Feminist evaluation recognises that evaluation is inherently political – it is not only stakeholders who bring particular perspectives to an evaluation, but evaluators too – and encourages evaluators to use evidence to advocate for changes that address gender inequities. Want to find out more? See this Better Evaluation blog on Feminist Evaluation.

Better Evaluation also has a page on Gender Analysis, which explains the difference between definitions of gender as category versus gender as process of judgement and value related to stereotypes of femininity and masculinity, and suggests steps for gender analysis in evaluation.

UN Women also have resources: Inclusive Systemic Evaluation for Gender equality, Environments and Marginalized voices (ISE4GEMs): A new approach for the SDG era, which provides theory and practical guidance, and an Evaluation Handbook: How to manage gender-responsive evaluation, which provides guidance on gender-responsive evaluation in the context of UN Women, with links to a range of tools.

As evaluators, we’re often encouraged to think about how we can evaluate programs at both a state and agency level with reference to the United Nations’ Sustainable Development Goals (SDGs). Goal 5 recognises that gender equality ‘is not only a fundamental human right, but a necessary foundation for a peaceful, prosperous and sustainable world.’ We can bring the targets for this Goal – such as ending all forms of discrimination against all women and girls everywhere – into our evaluations to measure progress.

Regardless of whether we practise feminist evaluation or consider the SDG gender equality targets, we should always be aware of the power dynamics in evaluation. At the Australian Evaluation Society (AES) Conference in Launceston last year, Tracy McDiarmid and Amanda Scothern (International Women’s Development Agency) and Paulina Belo (Alola Foundation) had us play out gendered power dynamics and explore how these could be disrupted using performative methods.

With the sub-theme ‘Who should hold the box? – Questioning power, exploring diversity’ at this year’s Australian Evaluation Society (AES) Conference in Sydney, we hope to see others bringing creative techniques to enable evaluators to address inequities and #BalanceforBetter.

 


3 things I learnt as an Aboriginal intern


March 2019

By Research Assistant, Holly Kovac

I am a proud Booroobergonal woman of the Darug nation. My mother, too, is a Booroobergonal woman and my father is Croatian.

A little over a year ago, at 19 years old, I started a 6-month internship at ARTD Consultants. During this time, I was exposed to the world of evaluation and public policy and what it really means to be a part of a professional working environment. ARTD has not only set me up with professional skills, but skills I can carry with me for the rest of my life.  

Here’s what I learnt in my time as an intern.

1. How to be confident in my culture and background

As a young Aboriginal woman, I always had some connections to my family and culture. However, it wasn’t until I started university that my passion and drive to build my connections and give back to community really kicked in.

Working at ARTD has helped me with this. It has allowed me to meet people from all over Australia and has taken me outside of the bubble of where I grew up. Being able to work with Aboriginal people and communities, listening to their stories and where they truly come from, has shown me the richness and value of what it means to be Aboriginal and given me an even deeper sense of pride.

I was also fortunate to have an amazing Aboriginal mentor, Simon Jordan, who led me through a journey of discovering my identity. He helped me realise that no matter how connected or disconnected you are from our culture, there is always room for you to make your own path and reconnect. Embracing culture and being confident in who I am now gives me the momentum to break the cycle and start giving back to community and country.

2. How to be organic and think on the spot

Early on in my internship, I had the opportunity to go out on field work to collect qualitative data for ARTD’s evaluation of World Vision Australia’s Young Mob Leadership Program.

I learnt very quickly that when out on field work, you never know what challenges you are going to face ­– from facilitating on the spot to connecting with boisterous students. At one of the first workshops I went to, I was thrown in the deep end when given an on-the-spot chance to help facilitate. I took ten minutes to expel my nervous energy and then it was show time. By the end of the workshop, I was happily leading the reflective discussions.   

This experience really boosted my confidence and shone a light on my natural engagement skills. I’ve since helped facilitate focus groups and conducted interviews with students in the program. I’ve found that throwing myself into the deep end of challenging situations is probably the best way for me to learn and to gain the confidence to field any challenging questions that come my way.

3. How to use work as a place to channel your energy

Life is full of distractions, from university stressors to managing family and relationships. The internship taught me that work can be a great place to focus your energy into something you’re passionate about.

ARTD has been a place for me to grow up – helping to bring balance into my life and fostering a great sense of accomplishment and pride. Excuse the cliché but working here really has taught me that change and adversity can make you stronger and will always teach you valuable life lessons.

Coming into ARTD changed my life for the better – they have supported me through thick and thin over this past year and I am so grateful to be in such a unique and understanding working environment.

What’s next in store?

Following my internship, I came on board at ARTD as a Research Assistant. While I’m still finishing my nursing degree, I’m keen to continue working in the public policy space – learning more about the Indigenous and health policy sectors and evaluation more broadly.  

I’m also supporting the development of ARTD’s Reconciliation Action Plan and excited about having a voice in future projects.

Photo credit: CameliaTWU on Flickr.


The many ways we can un-box evaluation


February 2019

By Partner, Jade Maloney

To outsiders, evaluation can seem like a mystery and something to fear. While evaluators see its potential to inform better public policy, the evidence says evaluation does not always live up to its potential.

This is part of why our theme for this year’s Australian Evaluation Society Conference in Sydney is ‘Evaluation Un-boxed’.

I shared my initial thoughts about why and how we should un-box evaluation in the first of a series of blogs for Better Evaluation. I mentioned the need to open up evaluation to end users and to be open to what we can learn from community, thinking about the basis on which we value and how we engage with lived experience. I also questioned how the idea of un-boxing evaluation fits with conversations we are having about pathways to advance professionalisation within the context of the Australasian Evaluation Society.

Comments from other evaluators reminded me that un-boxing evaluation is also about un-boxing evaluation reporting. How could I have neglected one of my favourite topics?

Then I got to continue the conversation with Carolyn Camman and Brian Hoessler on their podcast, Eval Café. We found many more ways we can un-box evaluation – by translating the jargon, making it meaningful, getting it integrated into practice, and bridging professions. All of this raised big questions about who we are and how we go about our work.

How do we un-box reporting?

Much has been said about evaluation reports gathering dust on shelves. If we want evaluations to be used in a world of competing demands and information overload, do we need to rethink evaluation reporting? The short answer: yes. How? Let me count the ways. At the 2018 Australasian Evaluation Society conference in Launceston, evaluators came up with a range of ideas for evolving the evaluation deliverable in a session facilitated by my colleague, Gerard Atkinson.

But un-boxing reporting is about more than finding new and engaging formats. It’s about finding ways to enable shared learnings across evaluations of similar initiatives, so that evaluation contributes to the broader knowledge base.

And, for me, it’s also about focusing on the process as much as the product. If you’re engaged and learning on the journey, the report at the end becomes less important. As I learned in my creative writing degree – you can’t control what people do with your writing once you let it out into the world. But if you’re having the right conversations along the way, the product will more likely be based on shared understandings, and useful and used.

Do we all need to call ourselves evaluators to un-box evaluation?

Like most people, I fell into evaluation from somewhere else – communications and creative writing. Not all of us are solely evaluators. And some of us don’t call ourselves evaluators in general conversation.

Carolyn asked if this is a problem. My first answer was yes – even though I’m one of those people who doesn’t call myself an evaluator – because if we’re not out there promoting what we do, how will people know the value of evaluation in a world where co-design, behavioural insights, implementation science, social impact measurement and customer experience are on the rise?

But then Carolyn questioned whether this is placing too much emphasis on the “evaluator” instead of “evaluation”. Perhaps.

If the end game is integrating evaluation into practice, not everyone involved in the ‘doing’ of an evaluation would be an evaluator. They would just need to think evaluatively. But what does this mean for professionalisation?

Bridging or un-boxing?

Professionalisation could help with coherence and communication – enabling outsiders to better understand what this thing we call evaluation is. However, as Patricia Rogers and Greet Peersman found, diverse competencies are required for different types of evaluation. Pinning us down is not so easy.

As evaluators, we need to draw on myriad skills – from facilitation to statistical analysis. Paul Kishchuk has suggested we look at evaluation as a bridging profession. Brian tied this back to bonding and bridging ties. What professions do we draw from as well as inform? 

Continue the conversation

There’s a lot more to un-boxing evaluation and we’re keen to continue the conversation.

Tune into the Eval Café podcast.

Submit a presentation proposal by March 7 and join us in Sydney in September.

Join the conversation on the Better Evaluation blog.


TFM: working with, not for


February 2019

Ruby Leahy Gatfield and Sue Leahy

The 2019 Their Futures Matter Conference, held in Sydney on 11 February, was an important reminder of the need to work with, not for, communities, when building evidence of outcomes.

Their Futures Matter (TFM), a landmark, whole-of-government reform designed to improve outcomes for vulnerable children and families in NSW, is committed to using evidence and evaluation to inform service design, outcomes measurement and investment. As put by TFM Executive Director, Gary Groves, ‘gone are the days that government funds something that sounds nice, without real rigour around outcomes. It’s nice to have 100 referrals walk in the door, but now I really want to know the outcomes for those referrals.’ The reform promises that evidence, monitoring and evaluation will drive continuous improvement across all areas of the system response and service delivery.

While we’re excited about this, it’s important to remember the many ways that different communities can understand, define and measure outcomes. The way that programs are designed and measured should be decided in close collaboration with community. While standardised measurement tools and RCTs have their place, a program or intervention that works for one cohort may not be appropriate for another.

This point was made best by Conference Chair and international affairs analyst, Stan Grant, who reflected honestly that, as a child, he lived with many of the risk factors that predict poor life outcomes – living itinerantly, attending multiple schools, and having an incarcerated parent, not to mention the intergenerational trauma of the Stolen Generations. Today, a ‘Safety and Risk Assessment’ might have classified him as a ‘high-risk’ child. Despite this, he stressed that ‘the very worst thing that could have happened’ would have been for the state to remove him from his family. He believed that the genuine love and care of his family outweighed the risk factors. Stan’s case, like many others, underscores the need to recognise non-western views of safety (and other outcomes more broadly).

At the conference, ARTD also showcased our recent successful experience of working with, not for, community under TFM’s Aboriginal Evidence Building Partnership pilot. The pilot’s aim was to build an evidence base of promising programs and services that are improving outcomes for Aboriginal children and families. It did this by linking Aboriginal service providers with evidence-building partners to work together to build providers’ data collection and evaluation capabilities.

Key to the success of our two pilot partnerships was our commitment to a partnership approach – establishing mutual trust and agreed ways of working early on, communicating regularly and openly, rescoping workplans to best meet current and future needs, and demonstrating an unwavering commitment to capacity-building and self-determination.

While the pilot required providers to embed standardised, validated tools to measure wellbeing outcomes, we worked closely with them to identify their additional data collection needs. We did this to ensure the data collected reflected what was most important to the service and its community and could be used on the ground to inform and improve program delivery. We are excited about the rollout of the pilot this year, and TFM’s commitment to a partnership-driven and capacity building approach to ensuring the service system meets the needs of Aboriginal children and families.

Conference keynotes also shone a light on some of the other encouraging achievements of TFM to date. These included having more than 1,000 families engaged in new family preservation and restoration programs, and establishing the first cross-agency human services longitudinal (25+ year) dataset in NSW, providing large-scale, de-identified matched data.

TFM Program Director & Investment Approach Lead, Campbell McArthur, said that beyond having what he deemed ‘the best dataset in Australia’, it’s the insights the data can bring us that are truly exciting. The commitment to evidence means we are better able to compare and contrast the experiences of different cohorts, to better understand what works, for whom, in what circumstances. It also allows the system to identify population-level trends earlier and respond with evidence-based action.

The conference was attended by over 700 government and non-government representatives, and we left ready for the work ahead under TFM. Director for Children and Families in Scotland, Michael Chalmers, reminded us that ‘joining up services and creating change takes time and is difficult. But it’s important to keep your eyes on the prize: improving outcomes for our most vulnerable children and young people’.


How can we measure empowerment?


February 2019

By Ruby Leahy Gatfield

Empowerment has emerged as another buzzword. While it is often paid lip service, what does empowerment really mean? And how do we know if we’re achieving it?

Empowerment is a complex concept. It is both a process and an outcome that can be seen at the individual, organisational and structural levels, enabling positive growth and sustainable change.[1] In the context of Aboriginal and Torres Strait Islander communities, who have experienced a history of systematic oppression, empowerment is critical for healing and improving overall wellbeing.

While there are many programs and interventions designed to empower Aboriginal people and communities, there is little quantitative evidence about their impact. Here’s where the GEM – the ‘Growth and Empowerment Measure’ – steps in.

On 1 February, Melissa Haswell from the Queensland University of Technology facilitated GEM training at the National Centre of Indigenous Excellence. She explained that the GEM is the first validated, quantitative tool designed to measure empowerment in Aboriginal communities.

Based on extensive consultation, the tool measures different dimensions of empowerment as defined by Aboriginal people. It also aims to be a strengths-based and empowering process in and of itself, using scenarios as a way for people to trace their personal journey.

So, what does the GEM mean for evaluators? Firstly, it is a culturally-safe tool, developed by Aboriginal people, for Aboriginal people. When working with Aboriginal people in social research and evaluation, it is crucial that we use culturally-appropriate methods that:

  • respect Aboriginal knowledge and practices
  • are strengths-based
  • are framed by an understanding of the historical context of Aboriginal communities and its ongoing impact
  • benefit the community.

The GEM offers just that. It also comes in the context of the growing development of other validated and culturally-appropriate measurement tools, such as Dr Tracey Westerman’s upcoming tool for measuring the cultural competencies of child protection staff. 

More broadly, the GEM also enables us to quantifiably measure more holistic wellbeing outcomes, beyond discrete system indicators, such as increased school attendance or instances of out-of-home care. It recognises empowerment as fundamental to the overall health and wellbeing of individuals and communities, giving programs and services a better understanding of their impact. This is particularly important, given the lack of evidence about ‘what works’ for improving the wellbeing of Aboriginal people, families and communities.[2][3]

To support Aboriginal organisations to embed the use of the GEM and other subjective measures of wellbeing, we are working with Their Futures Matter to develop tools and resources to support evidence building across the Aboriginal service sector. Stay tuned for more…


[1] Haswell, M. R., Kavanagh, D., Tsey, K., Reilly, L., Cadet-James, Y., Laliberte, A., Wilson, A. and Doran, C. (2010). Psychometric validation of the Growth and Empowerment Measure (GEM) applied with Indigenous Australians. Australian and New Zealand Journal of Psychiatry, 44(9), pp. 791–799. Available at: https://www.ncbi.nlm.nih.gov/pubmed/20815665

[2] Productivity Commission. (2016). Overcoming Indigenous Disadvantage: Key Indicators 2016. Canberra: Productivity Commission.

[3] Stewart, M. & Dean, A. (2017). Evaluating the outcomes of programs for Indigenous families and communities. Family Matters. No. 99, pp.56–65.


The future of evaluation is within


January 2019

By Jack Rutherford

When my friends and family ask me what my job is, I say something along the lines of, “I work at a public policy consulting firm. We mostly evaluate government policies and programs, and we gather our data mostly through surveys and interviews.” Knowing that I majored in biology, they tend to follow up by asking why I want to work in a field they see as so far removed from biology and science.

While the work I do at ARTD feels meaningful and fulfilling, I’ve been left wondering… will I be able to access the parts of biology that I love while working in public policy?

Can we marry biology and evaluation?

When I saw that the 2018 ACSPRI Social Science Methodology Conference was discussing how to integrate social and biological research, I jumped at the opportunity to attend.

I attended the first day of talks at the University of Sydney on 12 December. The conference featured diverse expertise from a range of local and international backgrounds, including Naomi Priest from the Australian National University, Melissa Wake from the Murdoch Children’s Research Institute, Tarani Chandola from the University of Manchester and Michelle Kelly-Irving from the Université Paul Sabatier in France. Their talks highlighted ways biological concepts and methodologies have contributed, and can contribute, to social research, with a focus on the use of biomarkers in social studies.

What are biomarkers?

To many, biomarkers are a new concept. Put simply, a biomarker is an objective measure of biological processes. For example, increased blood pressure can be used as a biomarker for increased levels of stress.

Chandola explained that the benefits of biomarkers include reducing measurement errors that can arise in surveys and being able to tell more holistic stories than those derived purely from self-reported data. In the example of stress, participants may underreport how stressed they feel or report not feeling stressed at all, while their blood pressure and other biomarkers suggest otherwise.

What do biomarkers mean for evaluation?

Because biomarkers can be used to determine the effects of the social environment on humans, they can provide enlightening data on complex social issues in evaluation. For example, Kelly-Irving spoke about how adverse childhood experiences affect adult physiology. According to her research, adverse childhood experiences tend to become more frequent with increasing social disadvantage. These experiences have neurodevelopmental impacts, which can affect health mediators (such as smoking and BMI) and social mediators (such as educational attainment), which in turn influence mortality.

Priest also presented findings showing how different types of racial discrimination affect children’s health by increasing their BMI, waist circumference and blood pressure.

If biomarkers can be used as indicators of the effects of the social environment, and a public policy or program aims to effect social change, then biomarkers can be used to measure the effectiveness of that policy or program. For example, if we were evaluating a program designed to combat adverse childhood experiences, we could compare the biomarkers of those who took part in the program with those who did not.
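As a rough illustration of that comparison – not a description of any particular study, and with entirely simulated data – the sketch below uses Python with NumPy and SciPy to compare a single hypothetical biomarker, systolic blood pressure, between a program group and a comparison group.

```python
# Hypothetical sketch: comparing one biomarker between program participants
# and a comparison group. All values are simulated for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated systolic blood pressure (mmHg) measured after the program
program_group = rng.normal(loc=122, scale=10, size=80)
comparison_group = rng.normal(loc=128, scale=10, size=80)

t_stat, p_value = stats.ttest_ind(program_group, comparison_group)

print(f"Program mean:    {program_group.mean():.1f} mmHg")
print(f"Comparison mean: {comparison_group.mean():.1f} mmHg")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A difference in means here is only suggestive; attribution still depends on
# how the groups were formed and whether other drivers of blood pressure differ.
```

A real evaluation would, of course, need a defensible design for forming the groups and ethical approval for any biological sampling before a difference like this could be attributed to the program.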

While it may seem complicated, biomarkers have the potential to enable social researchers and evaluators to draw clearer pathways between cause and effect. This is particularly useful in an increasingly complex social environment.

Are there any risks?

Despite their benefits, biomarkers don’t come without their own set of challenges. For one, sampling participants’ biology requires careful consideration of ethics regarding consent, risk, and data and sample security. Asking participants for biological samples may also increase opt-outs and reduce sample sizes, reducing the statistical power of significance testing.
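To illustrate how opt-outs bite, the short sketch below uses the statsmodels power calculator, with invented effect and sample sizes, to show how the power to detect a modest effect falls away as the achieved sample shrinks.

```python
# Hypothetical sketch: how statistical power drops as opt-outs shrink a sample.
# The effect size and sample sizes are invented for illustration.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3   # a modest standardised difference (Cohen's d)

for n_per_group in (120, 90, 60, 30):   # e.g. planned sample vs after opt-outs
    power = analysis.solve_power(effect_size=effect_size,
                                 nobs1=n_per_group,
                                 alpha=0.05,
                                 ratio=1.0)
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")

# Power falls steadily as the sample shrinks; with 30 per group, a real effect
# of this size will usually be missed.
```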

Sampling methods for certain biomarkers may also be time and resource intensive, particularly when training is involved.

It’s also important that this sort of research isn’t used to exacerbate inequalities any further, i.e. that results are not presented in a way that provides certain groups with perceived biological evidence for their prejudices.

Using biomarkers also requires careful consideration of the conceptual framework. Before incorporating biomarkers into a project, evaluators must be confident they are valid indicators of the particular aspect of the social environment. Sometimes one biomarker is not descriptive enough. Indeed, Kelly-Irving found that a model comprising many biomarkers was better than a model using just one for determining the impacts of adverse childhood experiences.

What does the future hold for biology in evaluation?

The conference suggested that as our collective understanding of biological concepts, methodologies and data increases, it will become easier to integrate the biological and the social. Social research and evaluation will benefit from a rich new data source, with the potential to deepen our understanding of cause and effect.

Generation Victoria, an ambitious project directed by speaker Melissa Wake, aims to gather biological and social data from all children born in Victoria between 2021 and 2022, with the goal of addressing social epidemics like school failure, obesity and poor mental health. Far-reaching projects like this become possible by merging the biological and the social.

On a more personal level, I find it exhilarating that I might be able to marry my passion for biology with the work I do at ARTD. Moving forward, I aim to look for and shape opportunities to integrate this thinking into our work, and I encourage you to do the same.

In using our own biology as measures of the effects of the social environment, the future of evaluation is, quite literally, within us all!