Applied ethics in program evaluation


April 2019

By Lia Oliver

As evaluators, ethics is central to deciding if, when and how we will conduct our projects. Ethical decisions shape the validity and reliability of the evaluations we produce. We also know that clients of government social policies and programs are often vulnerable community members, so it's paramount that evaluators of these programs engage sensitively and ethically.

Our recent internal training, led by Brad Astbury, was a chance for us to refresh our understanding of ethics and flex our ethical reflexes through some challenging case studies.

The session reinforced why applied ethics in program evaluation is so important.

Evaluation is the most ethically challenging of the approaches to research inquiry because it is the most likely to involve hidden agendas, vendettas, and serious professional and personal consequences to individuals. Because of this feature, evaluators need to exercise extraordinary circumspection before engaging in an evaluation study. (Mabry, 1997, p.1)

Ethics is about more than HREC processes

In some evaluations, we must seek formal approval from a Human Research Ethics Committee (HREC), and this is often what evaluators think of when they think of ethics. HREC applications address ethical concerns, such as informed consent, voluntary participation, confidentiality, privacy, anonymity and benefits and risks of the research.

While these are all crucial, a successful ethics application does not guarantee an ethical evaluation. Evaluators need to consider ethics beyond the procedural aspects of an ethics application. Ethical awareness should be embedded across the duration of a project: from planning processes and partnerships that recognise and explicitly address the need for respect, through to sharing evaluation findings with program stakeholders. This ensures the evaluation is honest, transparent and credible.

Be wary of ethical fallacies

House (1993) outlined the ethical fallacies that frequently lead to ethical mistakes in professional evaluation:

  • clientism: doing whatever the client wants
  • managerialism: seeing program managers as the sole beneficiary
  • methodologism: believing that adherence to good methods is synonymous with being ethical
  • relativism: accepting everyone’s opinion equally
  • pluralism/elitism: giving a priority voice to the more powerful.

These ethical fallacies point to the need to capture diverse stakeholder perspectives in evaluation so that we can understand the nature and role of different value positions. Some evaluation sponsors have powerful vested interests, and it is important that evaluators can identify these to ensure they do not bias the findings. Often evaluators will need to balance the needs and voices of the client or program funder against those of program beneficiaries to reach fair, accurate, representative and valid findings (House, 2008).

Normative ethical frameworks

Three main normative ethical viewpoints exist (Newman & Brown, 1996). These all relate to the principles of autonomy, nonmaleficence, beneficence, justice and fidelity, and often form the basis of the codes of conduct used to identify appropriate ethical behaviour.

  • Consequentialist: drawing on utilitarianism to achieve the maximum happiness and the greatest good for the greatest number.
  • Deontological: focusing on the morality of an action and whether the action itself is right or wrong, based on explicit rules or duties, rather than looking only at its consequences.
  • Virtue based: focusing on altruism.

Ethics in practice

For evaluators, ethical dilemmas can occur frequently across projects and sectors. Often, evaluators resolve them by consulting their intuition, talking to peers, and reviewing and drawing on past experience. Evaluators can also draw on relevant ethical guidelines such as the American Evaluation Association Guiding Principles for Evaluators (American Evaluation Association, 2018), the Program Evaluation Standards (Yarbrough et al., 2011) and the AES Code of Ethics (Australian Evaluation Society, 2013). Other useful resources include ethical decision-making flowcharts that map your ethical reasoning to a particular course of action (Newman & Brown, 1996).

In our workshop, we discussed three case studies in small groups. We identified the ethical concerns, developed a response to each and discussed how each concern would affect the evaluation. Although every group used the same evaluation codes of conduct as a reference point, a range of solutions emerged.

We found that ethical dilemmas can be interpreted differently through the lenses of different codes of conduct, ethical viewpoints and evaluators' own experiences. When trying to resolve an ethical dilemma, there is rarely one simple answer. Even so, we cannot shy away from deep engagement and reflection on ethical practice in evaluation: it ensures our work benefits rather than harms participants, and that our findings are useful and credible.

Reference List

American Evaluation Association (2018). Guiding Principles for Evaluators. [online] Available at: https://www.eval.org/p/cm/ld/fid=51 [Accessed 15 Apr. 2019].

Australian Evaluation Society (2013). Code of Ethics. [online] Australian Evaluation Society. Available at: https://www.aes.asn.au/images/stories/files/membership/AES_Code_of_Ethics_web.pdf [Accessed 15 Apr. 2019].

House, E. (1993). Professional evaluation: Social impact and political consequences. Newbury Park, CA: Sage.

House, E. (2008). Blowback: Consequences of Evaluation for Evaluation. American Journal of Evaluation, 29(4), pp.416-426.

Mabry, L. (Ed.) (1997). Evaluation and the postmodern dilemma. Greenwich, CT: JAI Press.

Newman, D. L., & Brown, R. D. (1996). Applied ethics for program evaluation. Thousand Oaks, CA: Sage.

Yarbrough, D., Shulha, L., Hopson, R. and Caruthers, F. (2011). The program evaluation standards: A guide for evaluators and evaluation users. 3rd ed. Thousand Oaks, Calif.: SAGE.