The purpose of evidencing outcomes and evaluation: why clarity matters
This article explores the key purposes of evaluation, and why defining purpose is essential for meaningful evidence and impact.
23 April 2026
Why is evidencing outcomes important?
Imagine you’re delivering a pilot program and being asked for data and evidence by a funder, your board, and participants. Each group wants something slightly different, and all of it feels important. This is a common situation, and without clarity it can quickly become overwhelming. Making the wrong assumptions can leave stakeholders’ expectations unmet, and it raises a key question: what is the evidence actually for?
Presenting evidence of outcomes matters because it helps us all make better decisions and achieve greater impact. It supports organisations to learn and improve, enables funders to maximise value and accountability, strengthens collaboration across the sector, and ultimately leads to better and more lasting outcomes for people and communities. Read more about the benefits of measuring social impact here.


What is evaluation?
Evaluation is a specific, in-depth approach to evidencing outcomes. It is a systematic process of posing questions to determine the merit, worth, value or significance of an initiative. Evaluation questions represent lines of enquiry that direct the selection of tools, methods and analytical approaches designed to answer them. In this fashion, evaluation explores the conditions under which outcomes occur and often links this to broader contexts (e.g., systems and comparisons).
Evaluation can take different forms, such as:
- Formative evaluations, which test initial designs and implementations
- Summative evaluations, which test whether a mature program has achieved its intended outcomes
- Monitoring, evaluation and learning frameworks, which aim to embed evaluative practice into initiative delivery for ongoing use
Read more here.
Why purpose matters
Because evidencing outcomes matters so much, it is equally important to understand the different reasons you might need to collect evidence. So, let’s turn to purpose as a way of answering the key question raised above: what is the evidence actually for?
While this article focuses on the purpose of evaluation, the ideas can apply to all data collection you do, particularly outcomes data.
Evaluation works best when it is embedded in the organisation and the initiatives it supports, rather than added on at the end. When evaluation is part of everyday practice, the effort put into it is far more likely to pay off and be used in real decisions. But this only works if you are clear about why you are evaluating in the first place. Clarifying the purpose of your evaluation, and designing with that purpose in mind, helps keep data collection focused, manageable, and relevant.
Most projects will have one or two primary evaluation purposes. Being clear about this early will shape what data you collect, how often, and who is involved. With a clear purpose, your evaluation will be:
- proportionate (not bigger or more burdensome than needed)
- strategically targeted (asking the right questions)
- ethical (respecting participants’ time and data)
- useful (informing decisions, learning, advocacy, or investment)
- aligned with the needs and expectations of your organisation and stakeholders
8 common evaluation purposes (the 8 A's)
Below are eight common evaluation purposes that we have identified (also known as the 8 A’s), adapted from CSIRO’s 4 A’s of Impact Evaluation. It is important to clarify that the 8 A’s describe evaluation purposes, not evaluation types.
- Alignment
- Advocacy
- Allocation
- Analysis
- Accountability
- Adaptation
- Anticipation
- Actualisation
Alignment
Evaluating for alignment focuses on ensuring that activities, resources, and outcomes match your organisation’s strategic priorities, funder expectations, or broader sector or system goals. It helps verify that the work being done contributes meaningfully to the direction your organisation or partnership is heading. You can also use it to test whether your organisation or project is experiencing mission drift. An evaluation with this purpose can also reveal duplication, gaps, or opportunities to better position the initiative within a wider strategy.
For example: A youth mental health organisation evaluates its outreach projects to assess how well they align with its new strategic goal of improving early intervention pathways. The evaluation reveals that while staff are delivering many workshops, few directly support early help‑seeking behaviours. The findings prompt a redesign of outreach activities.
Advocacy
An evaluation undertaken for the purpose of advocacy generates evidence to raise awareness, build credibility, and strengthen the case for action, be it policy, system, or practice change.
Such evaluations often aim to increase the visibility of changing conditions, highlight the prevalence of issues, or surface unmet needs. They may include systems-focused analysis that examines how interrelated services, sectors, policies and stakeholders work, or fail to work, to address identified needs. These evaluations typically combine data with compelling narrative to persuade decision-makers, including funders, partners, and the broader public, to respond to identified issues.
For example: A community food program highlights rising demand for their services, demographic changes, and participant stories through an evaluation used to advocate for a local food security strategy. The evaluation informs media coverage and helps secure a council commitment to coordinated action.
Allocation
Evaluations undertaken to inform allocation decisions support choices about where to invest resources such as time, funding, staffing, or infrastructure. They identify which activities are most effective, efficient, or impactful, and which may need redesign or discontinuation. These evaluations help organisations use limited resources wisely. They can also support a case for investment to external funders.
This is where economic approaches, such as cost-benefit and cost-efficiency analysis, come into play. Social Return on Investment (SROI) approaches can be thought of as blending allocation and advocacy purposes; a simple worked illustration of the SROI arithmetic follows the example below.
For example: An arts organisation runs three different workshop streams - visual arts, music, and digital storytelling. An allocation‑focused evaluation assesses participation, engagement levels, and outcome data across the three streams. The evaluation finds that digital storytelling generates the strongest improvements in social connection and confidence, especially among young people. The organisation reallocates staff time and materials budget to expand this stream.
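As a simple illustration of the arithmetic behind an SROI ratio (a simplified sketch using hypothetical figures; a full SROI analysis also involves valuing outcomes in dollar terms and adjusting for factors such as deadweight and attribution):

SROI ratio = value of social outcomes created ÷ value of investment

If an initiative costs $100,000 to deliver and the outcomes it creates are valued at $250,000, the ratio is $250,000 ÷ $100,000 = 2.5, often expressed as $2.50 of social value for every $1 invested.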
Analysis
Where the purpose is analysis, the evaluation seeks to understand what is working, for whom, and how. It explores mechanisms, processes, contextual factors, and the interplay between program components. Such evaluations often involve deeper qualitative data and help test assumptions or refine a theory of change.
For example: A school‑based wellbeing program runs an evaluation to discover why engagement in the program varies between students. The findings reveal that a consistent relationship with a single trusted adult, rather than the activity content, is the strongest driver of impact, informing significant program redesign.
Accountability
Accountability is about demonstrating that commitments have been met and resources used responsibly. Accountability responds not only to funder conditions and contractual obligations, but also to expectations of governance bodies, delivery partners, stakeholders, and community members about how decisions are made and impacts achieved.
It shows how the resources provided by funders contribute to outcomes observed. When designed well, accountability is not a “tick‑box exercise”; it can also generate meaningful learning, improve practice, and strengthen trust.
For example: A local conservation group receives funding to restore a wetland ecosystem. As part of the funding agreement, they must complete an evaluation reporting on indicators such as native species regeneration, weed reduction, and community volunteer hours. The findings are provided to the funder and published publicly to demonstrate responsible use of environmental restoration funds.
Adaptation
Adaptation focuses on learning and continuous improvement, particularly when there is a need for ongoing and regular data collection over time to track changing conditions and emerging trends. Crucially, the emphasis is on using these insights to improve practice as the work unfolds. Evaluations serving this purpose typically adopt approaches that provide regular cycles of data collection, reflection and action, allowing teams to make informed adjustments as the work progresses. Adaptation is especially relevant for innovative, pilot, or complex initiatives that need to evolve in response to experience, uncertainty and emerging needs.
Approaches such as Monitoring, Evaluation and Learning (MEL) or developmental evaluation are well suited to adaptation purposes: monitoring tracks activities and indicators over time; evaluation assesses effectiveness and value; and learning uses the evidence to improve practice and decision-making.
For example: A library introduces a digital inclusion initiative with weekly technology coaching sessions. Feedback collected each session reveals that many older participants feel overwhelmed by the pace. In response, facilitators slow down the content, add more practice time, and introduce tailored one‑on‑one coaching. The program becomes significantly more accessible and confidence‑building.
Anticipation
An anticipatory purpose supports forward-looking decision-making, using evidence to forecast future needs, opportunities, or risks. Evaluations with this purpose can support strategic planning, scenario development, and preparedness, helping organisations and systems act proactively rather than reactively.
For example: an arts organisation may undertake an evaluation to test emerging assumptions about changing audience preferences, such as a perceived shift away from large-scale events and toward smaller, more intimate and participatory experiences. The evaluation helps explore whether these trends are evident, for which specific audience segments, and what they may mean for future programming and funding approaches.
Actualisation
Actualisation operates on two interconnected levels. At the first level, evaluation can be regarded as a tool for empowerment, capability-building, and self‑determination. This purpose focuses on supporting the development of the people and communities that the evaluation interacts with. It includes practices such as fostering Indigenous data sovereignty, enabling co‑design, and ensuring communities are in control of their own narratives, data and interpretations of evidence. From this perspective, actualisation recognises how intentional design of evaluation can promote inclusion, agency and capability. This is particularly important when designing culturally safe evaluation approaches. Read the Australian Evaluation Society's First Nations Cultural Safety Framework here.
At the second level, actualisation is an explicit line of enquiry into the initiative itself, examining whether a program or strategy is designed to empower participants and support trust, reciprocity and shared decision-making. This includes assessing the extent to which participants have meaningful influence, whether relationships are built on mutual respect, and whether the initiative strengthens people’s ability to act, lead and sustain change beyond the life of the intervention.
For example: An Aboriginal‑led organisation evaluates a youth leadership program using participatory methods, with young people co‑designing tools and owning the data. The evaluation not only measures impact but also strengthens leadership skills and ensures community control over findings.
Designing your evaluation approach and questions
Once you have determined the purpose of your evaluation, you can move on to designing your evaluation approach and questions. Strong and clear questions make data collection easier, not harder. Most evaluations serve more than one purpose, but usually one or two should take priority. So, be selective; you don’t need to answer every question across all purposes. Focus on those that match why you are evaluating in the first place.
Examples of evaluation questions for each purpose type are listed below:
| The 8 A’s of purpose | Key evaluation question examples |
| --- | --- |
| Alignment: Strategy | To what extent is the initiative aligned to strategy? How well are intended and unintended outcomes contributing to strategic goals? |
| Advocacy: Making the case | To what extent has the initiative increased awareness and visibility of the issue among key decision‑makers? How effectively does the evidence generated through the initiative strengthen the case for action? |
| Allocation: Directing resources | Are resources being allocated to areas of greatest need or potential benefit? What is the balance of cost and benefit? |
| Analysis: The outcomes | What outcomes are being achieved, for whom, and under what conditions? Is the initiative designed appropriately to achieve intended outcomes? |
| Accountability: Meeting requirements | How did the Lotterywest grant (or the funder) contribute to observed outcomes? Were the objectives of the approved purpose for the grant achieved? |
| Adaptation: Ongoing learning | Is the initiative positioned to remain relevant and responsive in a changing environment? Is current practice well aligned with the expectations of users? |
| Anticipation: Planning | Is the initiative building long-term resilience and preparedness for future change? Are the outcomes currently observed likely to continue into the future? |
| Actualisation: Empowering | To what extent are approaches enabling increased agency and capability? To what extent does the initiative build and reciprocate trust and self‑determination? |
Conclusion
Purpose is the anchor that shapes your evaluation design, questions, methods, data collection, and how the findings will be used. A well‑defined purpose sets the foundation for a meaningful, credible and genuinely useful evaluation.
Our Outcomes Evidencing Toolkit provides tailored suggestions for the types of data and evidence you might need to collect based on your intended grant size and priority area. This will support you in developing an effective Outcomes Measurement Plan, help you meet your grant acquittal obligations, and help you communicate your impact to your stakeholders and community.