Lessons learnt evaluating grant programs

Many different types of programs are delivered through grants – from regional development through local infrastructure to bushfire responses. The intention is to select appropriate local organisations to deliver tailored responses to support outcomes in their local communities.

When it comes to evaluating grants, funders generally want to understand:

  1. the appropriateness of the design
  2. efficiency and effectiveness of planning and implementation
  3. the outcomes achieved.

Over 6 years of evaluating grant programs, we've identified five key lessons for effectively answering these questions and supporting the improvement of future grants.

Lesson 1. Recognise the context in which the grant is operating

Often grant programs are implemented quickly to respond to a disaster and provide immediate support. This means we shouldn’t assess their delivery processes in the same way as a program delivered in a business-as-usual context with more generous timeframes and resourcing.

When evaluations begin by understanding the reasoning behind a grant program and the context in which it was delivered, it’s easier to design the evaluation to help improve future processes. This could include focusing the evaluation on components of the grant that carry the greatest risk or uncertainty, and where there is more scope for making improvements. You might, for example, speak to stakeholders who deliver similar grant programs to learn from their processes.

Lesson 2. Take stock

Start with the program logic – if there is one. Consider whether program delivery has evolved and whether the logic should be updated to reflect reality.

Then consider what the evaluation should focus on, what existing data is available, and the key gaps to be filled. Given that grants can fund a wide range of projects, it’s important to find a way to capture the diverse outcomes. These diverse outcomes should be communicated simply, for example by aligning them with the program logic or using rubrics.

Lesson 3. Leverage existing data, but know its limits

Often as part of grant programs, there is existing data available about the applications received, the assessment process, and the progress and implementation of funded projects. But grant programs often lack strong outcomes data, because funded organisations have varying capabilities to collect this data or because evaluation is not well funded within the grant.

Learn what you can from existing data to understand processes and what was intended. For example, useful information can often be found in application forms, in which grantees describe their intended process for implementing the grant funding and their intended outcomes, while progress reports describe what actually happened. Then you can ask organisations why they did what they did and what outcomes were achieved.

Lesson 4. Keep it in proportion

We can’t expect the same level of data from small grants as from large ones. Alongside this, we need to understand organisations’ different starting points for evaluation and how we can make it simple for them to engage with the evaluation.

Lesson 5. Talk to unsuccessful applicants

Whilst unsuccessful applicants didn’t receive grant funding, their insights can contribute to improvements to processes. In our grant evaluations, improvements suggested by unsuccessful applicants have ranged from small tweaks to the guidelines or the application form, through changes to the selection criteria, to a clearer explanation of the purpose of the grant funding.


Whilst evaluating grant programs can be a challenge, evaluation provides an opportunity to learn about the appropriateness, effectiveness and efficiency of delivery. Gaining insights from program stakeholders provides useful learnings for future iterations of grant or rebate programs.
