Indicators - From project-level to strategic reporting in humanitarian and development settings
Host: Zeíla Lauletta
Panelist: Firas El Kurdi
About the webinar
How can we ensure that the data collected at the project level effectively informs strategic decisions at headquarters? A common challenge for many organizations is the disconnect between field-level activities and high-level strategic objectives.
In this Webinar, we will explore the complete journey of an indicator, from its inception in a project's logical framework to its aggregation and use in strategic reporting.
We will examine practical approaches for aligning indicators across all levels of an organization to create a cohesive and powerful M&E system, and we'll discuss common challenges and proven solutions.
In summary, we’ll cover:
- The crucial role of indicators: the strategic importance of a coherent indicator framework
Understanding the Indicator landscape:
- Defining different types of indicators (e.g., input, output, outcome, impact)
- Mapping indicators across levels: project, program, country, organizational, and strategic (HQ)
Common challenges in Indicator management:
- Inconsistent definitions
- Aggregation difficulties
- The 'indicator overload' phenomenon
Approaches to a Cohesive Framework:
- Strategies for standardizing processes, technology, and information flows from the country level to global headquarters.
- Practical techniques for ensuring vertical and horizontal alignment, connecting field activities directly to strategic goals.
View the presentation slides of the Webinar.
Is this Webinar for me?
- Do you want the data collected at the project level to effectively inform strategic decisions at headquarters?
- Would you like to have the tools to tackle the disconnect between field-level activities and high-level strategic objectives?
- Are you looking for guidance on indicator tracking to support your organization's impact monitoring?
Then, watch our Webinar!
About the Presenters
Zeíla Lauletta is a Monitoring and Evaluation specialist with extensive experience in international development and humanitarian response. She has worked with the UN system and international NGOs, leading data-driven evaluations, evidence generation, and participatory monitoring initiatives. Zeíla holds a Master’s in International Affairs from the Graduate Institute in Geneva and an M&E certification from the ILO International Training Centre.
Firas El Kurdi is an Implementation Specialist at ActivityInfo. He holds a B.S. in Mechanical Engineering from the University of Balamand and certifications in MEAL for NGOs (AUB’s Global Health Institute) and Google Data Analytics. Previously a Data Analyst and M&E Officer with NGOs including the Restart Center, he supported education, health, and protection programs in Lebanon for refugees, torture survivors, people with disabilities, and others affected by war and gender-based violence, funded by UN agencies and PRM. He brings a passion for data-driven decision-making to help organizations deploy ActivityInfo effectively.
Transcript
00:00:04
Introduction and Agenda
Thank you, and welcome. Let's quickly map out what we are going to speak about. First, we will start with an introduction to indicators. Then we will dive into the main topic: understanding indicators across types and levels. After that, we will get a little more practical by exploring some common challenges and possible solutions, and see how this works in the real world with a case study from ActivityInfo. We will wrap everything up with a Q&A and discussion.
00:00:43
The Difference Between Being Busy and Being Effective
Before we dive into the main topic, I want to start by asking you two simple but important questions. The first one is: are you busy? For most of us in this line of work, that is an easy one to answer. Of course, you are busy. You can probably tell me right now that you are juggling three different projects, you have trained 500 people, and you are in the middle of distributing 10,000 kits. We are fantastic at quantifying our busyness.
However, if you move on to the next question, it is a little bit harder: are you making a difference? This question is not really about what you did, but rather what you changed. It is a question that really matters. It is the one our donors are asking, the one our boards want to know, and the one the communities we serve have every right to ask. If we are being honest, it is the question our own conscience asks us at the end of a long week.
This brings us to a fundamental challenge in our work. When someone asks, "Are you busy?" we respond with a list of activities. When they ask, "Are you making a difference?" we need something more. We cannot just list activities; we need proof. We need to tell a compelling story that is backed by real, credible evidence. There is often a huge gap between what we do—the activities—and what we ultimately aim to change—our impact. It is a gap between motion and progress, between being occupied and being effective. We can run dozens of workshops, but did anyone actually learn anything? We can distribute thousands of nets, but did malaria rates actually go down? This gap is where our real story lies, and we need a way to bridge it.
00:02:38
Defining Indicators and the SMART Criteria
That evidence, that proof, comes down to one place: the indicators. I want you to think of indicators as the measurable signals that build a strong bridge between our daily activities and our long-term impact. They are the tools that allow us to translate our hard work into a language that everyone can understand. With good indicators, we transform our story from "We were very busy last year" to a more powerful statement: "We demonstrably improved lives in this specific, measurable way."
Let us get a formal definition on the table. An indicator is a unit of measurement that specifies what will be measured to judge whether our desired outputs, outcomes, or impacts have been achieved. Indicators can be either quantitative (the numbers) or qualitative (the stories and changes in quality). They serve three critical purposes: they translate our big, ambitious goals into concrete, measurable criteria for success; they allow us to track change over time, which is crucial for learning and adapting; and they provide the hard evidence we need for accountability to our donors, partners, and the people we serve.
You have likely heard of the SMART criteria before. It is a classic for a reason. A good indicator must be Specific, Measurable, Achievable, Relevant, and Time-bound. This is a simple checklist to ensure our indicators are powerful and practical.
00:05:16
Understanding Indicator Types and Levels
Now that we have a solid foundation of what an indicator is, let's start peeling back the layers. It is not just about having an indicator; we need the right indicator. To find it, we need to understand how they work across different types and different levels. We are going to tackle how these two concepts intersect and how you can use them to build a powerful framework for measuring what matters.
On one hand, we have indicator types. When we talk about types, the focus is on what kind of change is being measured. Is it the immediate delivery of our activities (output)? Is it the medium-term change in behavior (outcome)? Or is it the long-term change (impact)? This is often called the result chain. On the other hand, we have indicator levels. Here, we focus on the scale at which we are measuring. Is this indicator for a specific local project in one community? Does it roll up to a broader organizational goal? Does it contribute to national statistics, or does it link all the way up to global targets like the Sustainable Development Goals?
00:07:08
The Results Ladder: From Activities to Impact
Before we can speak about the different types of indicators, we need to be crystal clear on the different types of results they measure. We use a framework called the results ladder. This illustrates the pathway from our direct action all the way up to the grand change we hope to see.
At the top, we have the overall goal or impact. This is the big picture, long-term change we hope to contribute to. I use the word "contribute" deliberately because impact is rarely something a single project or organization can achieve alone. For example, this could be reducing suffering in humanitarian crises or improving food security across an entire region.
Coming one step down, we have the outcome. If impact is the big ultimate goal, outcomes are the specific, tangible changes in behavior, knowledge, or condition that happen for our beneficiaries because of our project. Examples include people with mental health problems actively using appropriate care services, or smallholder farmers adopting improved agricultural practices.
Finally, at the bottom rungs, we have activities and outputs. These are directly within our control. Activities are the actions we take—we train, we distribute, we conduct. Outputs are the direct, tangible results of those actions. For example, if our activity is to train health staff, the output is that 50 health staff were trained. Activities are what we do, while outputs are what we deliver.
00:10:42
Applying Indicators to the Results Ladder
Each step of this results ladder needs its own kind of indicators. For the impact level, if the result is improved mental health, the indicator might be the rate of moderate to severe mental health disorders in the target population. For the outcome level, if we want individuals to access subsidized services, the indicator could be the percentage of treated individuals who demonstrate a significant reduction in symptom severity scores. At the output level, a simple indicator would be the number of individuals who received a subsidy.
In reality, it is rarely a simple linear change. It often looks more like a cascade. Multiple outputs, like training sessions and material distribution, are designed to achieve several immediate outcomes, such as increased knowledge or improved access. These immediate outcomes build on each other to contribute to long-term outcomes, representing sustainable change. Finally, these long-term outcomes contribute to the high-level impact.
00:13:24
Indicator Levels: From Project to Global
Just like with results, indicators exist in a hierarchy of levels. At the bottom, we have local project-level indicators. These feed into program-level indicators, which then roll up to organizational-level indicators. These often align with national indicators and ultimately contribute to global indicators. Not every organization will have indicators at every single level, but this cascade shows how pieces connect from the ground to the global stage.
At the very top, we have global indicators. These are metrics agreed upon at the international level, usually tied to frameworks like the SDGs. Their purpose is to track collective progress and compare across countries. Coming down a level, we have national indicators. This is where governments localize global goals into their own development plans. Instead of just tracking undernourishment broadly, a national strategy might focus on the percentage of children under five who are stunted.
Organizational indicators are metrics used by NGOs or UN agencies to track progress against their own high-level goals. Their purpose is to aggregate results from many different projects to show sector-wide impact. Beneath that, we have program-level indicators, used to track progress across multiple related projects under one program area, like health or education. Finally, we have project-level indicators, which are the most granular metrics for a particular intervention in a particular location, giving real-time data on outputs and immediate outcomes.
00:18:10
Strategic Indicators
You might have heard the term "strategic indicators." It is tempting to see it as just another rung on the ladder, but a strategic indicator is the core of why we measure. It is a high-level, long-term outcome metric that tracks progress toward an organization's mission. It aggregates results across programs and countries to give a big-picture view.
Its purpose is threefold: it helps align priorities and resources to the most important goals; it is a powerful tool for accountability; and it gives crucial insight into trends over time. For example, a strategic indicator for a WASH organization might be to increase the percentage of its target population using safely managed drinking water from 45% to 60% by 2028. It is a forward-looking goal that drives the organization's work.
00:22:40
Humanitarian vs. Development Indicators
It is crucial to distinguish between humanitarian and development indicators. While they both aim to measure progress, they operate on different timelines with different goals. Humanitarian indicators measure the scale of an acute crisis and the results of immediate, short-term relief efforts. The focus is on inputs, outputs, and immediate outcomes—like lives saved and suffering reduced right now.
Development indicators measure long-term human well-being, economic progress, and social change. Their purpose is to guide long-term policy and address root causes of poverty. The focus is on broader outcomes and impacts measured over years or decades.
For example, in education, a humanitarian indicator might be the number of crisis-affected children enrolled in a temporary safe learning space. A development indicator would be the percentage of children successfully finishing primary school. In food security, a humanitarian indicator tracks households receiving food assistance for immediate survival, while a development indicator tracks households able to afford nutritious food year-round without aid.
00:30:50
Challenges in Scaling Up Results
Moving from theory to practice isn't always straightforward. When results start to travel upward, things get tricky. Frameworks don't always align, donors ask for different things, and data systems may not be strong enough to bring everything together. Sometimes, local realities get lost in translation.
One common challenge is misalignment between levels. Imagine a local education project that trained 200 teachers. When reporting time comes, the organization is asked to show how these results contribute to a higher-level indicator regarding the percentage of youth with digital literacy. The project data (outputs) and the organizational indicator (impact) don't line up. They are measuring different things at different scales.
This happens because lower-level indicators are focused on outputs and are flexible and localized, while higher-level indicators focus on broader outcomes with strict definitions. We end up with two parallel systems: project data that is rich but not usable at the strategic level, and strategic indicators that don't reflect ground realities.
00:34:10
Solutions for Alignment
The solution is to plan for alignment in advance. This starts with clarity—using common indicator definitions so terms like "literacy rate" mean the same thing across the organization. Adopting indicator libraries, such as IASC or SDG frameworks, promotes consistency. Focusing on a manageable set of key indicators allows for comparison across programs.
Crucially, we must define a pathway of change. We need to focus on the bridge between outputs and outcomes—the intermediate results. In our education example, teacher training (output) might improve the integration of digital tools into lessons (intermediate result), which then boosts students' digital skills, contributing to overall digital literacy (impact). Designing indicators to capture these intermediate results makes the link clear.
00:37:00
Challenge: Capacity Limitations and Standardization
Another challenge is capacity limitations. In post-emergency contexts, teams often have limited trained Information Management (IM) staff and no standardized tools. Data comes in inconsistently. When HQ requests disaggregated results, teams have to reconstruct missing information, leading to frustration.
Solutions involve people, processes, and tools. For people, projects need to plan for data capacity early, maintaining a roster of trained staff. For processes, standardization is key. Clear methodological notes ensure consistency across teams. For tools, using shared data collection platforms that allow disaggregated reporting ensures alignment with requirements.
00:40:33
Challenge: Fragmentation and Donor Requirements
A familiar situation in large humanitarian responses is fragmentation. Several donors contribute funding, but each comes with their own indicators, formats, and timelines. Field teams spend huge amounts of time reformatting the same data. This leads to a patchwork of reports that don't add up to a coherent picture.
Solutions rely on coordination. Encouraging donor coordination can reduce reporting streams. Adopting shared reporting templates and aligned schedules cuts down duplication. Aligning donor frameworks with national monitoring systems ensures project data contributes to national reporting.
00:43:00
Challenge: Representation and Context Sensitivity
Finally, there is the challenge of representation. Even when data is collected correctly, aggregation can hide the reality on the ground. For example, national statistics might show high school attendance, but local data reveals irregular attendance among marginalized groups.
To solve this, we must design indicators that reflect local reality. Engaging local staff and communities ensures measures are meaningful. Disaggregating data by region, gender, and other factors reveals patterns hidden in averages. Combining qualitative insights with quantitative data helps explain why gaps exist.
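As a small sketch of how disaggregation surfaces patterns that averages hide, the example below compares an overall attendance figure with group-level breakdowns. The regions, group labels, and rates are invented for illustration; they are not data from the webinar:

```python
from statistics import mean

# Illustrative attendance records: (region, group, attendance_rate)
records = [
    ("north", "majority", 0.95),
    ("north", "marginalized", 0.60),
    ("south", "majority", 0.92),
    ("south", "marginalized", 0.55),
]

# The overall average looks reasonably healthy...
overall = mean(rate for _, _, rate in records)

# ...but disaggregating by group reveals the gap.
by_group = {}
for _, group, rate in records:
    by_group.setdefault(group, []).append(rate)
group_means = {g: mean(rates) for g, rates in by_group.items()}

print(f"Overall average: {overall:.3f}")       # 0.755
for g, m in sorted(group_means.items()):
    print(f"{g}: {m:.3f}")                     # majority 0.935, marginalized 0.575
```

The same idea extends to any disaggregation axis (region, gender, age band): the aggregate can sit comfortably above a target while one subgroup sits far below it.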
00:49:14
Practical Demonstration: Tracking Indicators in ActivityInfo
We can track these indicators using an information management system like ActivityInfo. In our standardized setup, we have lists of countries and donors defined from the start so we aren't re-entering names every time. We can register specific projects and add information to a logical framework, selecting whether a metric is an impact, output, or outcome, and linking it to strategic indicators.
When doing monthly reporting, you select the specific indicator from the library you defined. You enter the month, the country, and qualitative data regarding progress. The system prevents duplicate reporting for the same indicator in the same month. You can view the baseline, the current value, and the target.
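The duplicate check described above boils down to a uniqueness constraint on the indicator/country/month combination. Here is a minimal illustration of that idea in Python; this is a conceptual sketch, not ActivityInfo's internal implementation, and the indicator name and values are invented:

```python
# Hypothetical report store keyed by (indicator, country, month).
# A second submission for the same key is rejected.
reports = {}

def submit_report(indicator, country, month, value):
    key = (indicator, country, month)
    if key in reports:
        raise ValueError(f"A report for {key} already exists")
    reports[key] = value

submit_report("teachers_trained", "Lebanon", "2024-01", 200)
try:
    submit_report("teachers_trained", "Lebanon", "2024-01", 210)
except ValueError:
    print("duplicate rejected")   # prints "duplicate rejected"
```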
Visualizing this data can be done at different levels. At the project level, you can create a tracking table for a specific project. At the program level, you can aggregate across multiple projects. You can also view data at the national level, aggregating by country, or at the organizational level to see the total number of people supported across all countries and programs.
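The roll-up from project to program, country, and organizational level can be pictured as grouping the same project-level rows by different keys. The sketch below uses invented program names and figures purely to show the mechanics:

```python
from collections import defaultdict

# Illustrative project-level results:
# (program, country, project, people_supported)
rows = [
    ("education", "Lebanon", "P1", 500),
    ("education", "Lebanon", "P2", 300),
    ("education", "Jordan",  "P3", 400),
    ("health",    "Lebanon", "P4", 250),
]

def rollup(key_index):
    """Sum people_supported, grouped by the column at key_index."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key_index]] += row[3]
    return dict(totals)

program_totals = rollup(0)            # program level
country_totals = rollup(1)            # national level
org_total = sum(r[3] for r in rows)   # organizational level

print(program_totals)  # {'education': 1200, 'health': 250}
print(country_totals)  # {'Lebanon': 1050, 'Jordan': 400}
print(org_total)       # 1450
```

Aggregation only works this cleanly when every project reports against the same indicator definition, which is exactly why the shared indicator library matters.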
For primary data collection, you can create specific forms, such as a teacher registration form or a training attendance form. By referencing these lists, you ensure consistency. For example, you can track pre- and post-test scores for students linked to specific teachers. This allows you to calculate percentage changes and verify if the training provided to teachers is actually resulting in improved student skills.
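The pre/post comparison described above amounts to a percentage-change calculation per student, averaged per teacher. A minimal sketch (teacher IDs and scores are invented):

```python
from collections import defaultdict

# Illustrative pre/post test scores for students, linked to a teacher ID.
students = [
    {"teacher": "T1", "pre": 40, "post": 60},
    {"teacher": "T1", "pre": 50, "post": 55},
    {"teacher": "T2", "pre": 45, "post": 45},
]

def pct_change(pre, post):
    """Percentage change from pre-test to post-test score."""
    return (post - pre) / pre * 100

# Average improvement per teacher suggests whether that teacher's
# training is translating into better student results.
by_teacher = defaultdict(list)
for s in students:
    by_teacher[s["teacher"]].append(pct_change(s["pre"], s["post"]))
avg_change = {t: sum(v) / len(v) for t, v in by_teacher.items()}

print(avg_change)  # {'T1': 30.0, 'T2': 0.0}
```

In this sketch, T1's students improved by 30% on average while T2's did not change, which is the kind of signal that lets you verify whether the teacher training is producing the intended outcome.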
01:06:40
Q&A Session
Q: Can we link projects and programs using external links or collection links? A: Yes, definitely. You can use collection links, similar to how Google Forms works, to allow people to report data without needing full access to the system.
Q: Can we perform data analysis using tools like Power BI, Stata, or SPSS? A: Yes. While ActivityInfo covers the whole data lifecycle including analysis, you can link the system directly to Excel, Power BI, or other software using the API. This allows for specialized analysis without manual exporting and importing.
Q: Can this work on mobile devices? A: Yes, it works on mobile devices and supports offline data collection. Data syncs automatically once the device reconnects to the internet.
Q: How do we handle project budgets and activities? A: You can define budget fields within the project setup. You can also use sub-forms to track installments or budget expenditure against project progress.
Q: Are indicators selected from a list or manually input? A: It is best to define an indicator library (an "indicator bank") first. Then, when setting up a project logframe or reporting, you select from this pre-defined list to ensure consistency. However, you can also enter project-specific indicators manually if needed.
Sign up for our newsletter
Sign up for our newsletter and get notified about new resources on M&E and other interesting articles and ActivityInfo news.