Top ten common mistakes in indicator tracking and how to avoid them
Indicator tracking is essential for any NGO or nonprofit that wants to demonstrate progress and results and to improve decision making. A poorly designed system, however, can generate more noise than insight, waste your M&E team’s time, or, worse, provide misleading information. In this article, we look at some of the most common mistakes organizations make when crafting their indicator tracking strategy and how you can avoid them.
1. Collecting too many indicators
If your teams try to measure everything, they will soon be overloaded with more data than they can handle. Tracking too many indicators burdens your teams and makes them lose focus. Instead of maintaining long lists of indicators, prioritize those that directly reflect the core objectives of your project or program. Categorize indicators into ‘must know’ and ‘nice to know’, and focus on the former.
2. Using vague definitions
Broad definitions like “increase awareness among beneficiaries” lack clarity and cannot be measured; each person on your team might interpret them differently and head in a different direction. Instead, craft SMART indicators: Specific, Measurable, Achievable, Relevant, and Time-bound. Define the data source, the calculation method, and any other information that helps your team interpret the indicator consistently.
Use an indicator reference sheet like the example below to guide you.
Indicator | Definition | Calculation method | Indicator type and data type | Disaggregation | Data flow | Baseline and target | Data limitations |
---|---|---|---|---|---|---|---|
1. Percentage of pregnant women receiving at least four antenatal care visits | Proportion of women with confirmed pregnancies who attend four or more antenatal care (ANC) visits during pregnancy. Excludes virtual or phone-only consultations. | (Number of pregnant women with ≥ 4 ANC visits ÷ Total number of pregnant women registered) × 100 | Snapshot (%), Decimal | Age group (<20, 20–34, 35+), urban/rural residence | Data source: clinic registers → collected by health staff → entered into digital information system monthly → checked by data officer → aggregated and reported quarterly | Baseline: 45% (2024 Q1); Target: 70% by end of 2026 | Underreporting possible if women visit private clinics not in registry; definition excludes phone consults |
2. Number of children under 5 fully immunized by 1 year of age | Total count of children under five who received all recommended vaccinations (per national schedule) by their first birthday. Excludes vaccinations after age 1. | Summation of children meeting full immunization criteria by age 12 months during the reporting period | Incremental (count), Integer | Sex (male/female), geographical zone (north, south, east, west) | Data captured at health posts → monthly compiled by district HMIS → verified by immunization supervisor → entered into central database → analyzed and reported monthly | Baseline: 8,200 children (2024 period); Target: 10,000 children by Dec 2025 | Some remote clinics report late; immunization records may be incomplete or doses unrecorded |
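To see how a reference sheet translates into practice, here is a minimal sketch in Python of the first indicator’s calculation method. The record fields and sample data are hypothetical; the formula follows the sheet above.

```python
from dataclasses import dataclass

@dataclass
class PregnancyRecord:
    """One row from a clinic register (field names are hypothetical)."""
    woman_id: str
    anc_visits: int   # in-person antenatal care (ANC) visits only
    age_group: str    # "<20", "20-34" or "35+"
    residence: str    # "urban" or "rural"

def anc_coverage(records: list[PregnancyRecord]) -> float:
    """Indicator 1: (women with >= 4 ANC visits / all registered women) x 100."""
    if not records:
        return 0.0
    covered = sum(1 for r in records if r.anc_visits >= 4)
    return covered / len(records) * 100

register = [
    PregnancyRecord("W001", 5, "20-34", "urban"),
    PregnancyRecord("W002", 2, "<20", "rural"),
    PregnancyRecord("W003", 4, "35+", "rural"),
]
print(f"ANC 4+ coverage: {anc_coverage(register):.1f}%")  # -> 66.7%
```

Because each record keeps the age group and residence, the same function also supports the disaggregation column of the sheet: filter the register first, then recompute.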
3. Not using baseline data
Baseline data offer a clear starting point; without them you can’t measure the progress of your work, it becomes hard to set realistic targets, and you have no reference point against which to measure change over time. So always establish baseline data at the start of the project or program. Take a look at the article “The role of baseline data in indicator tracking and how to set it properly” for more information.
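As one illustration of why the baseline matters, the sketch below (a hypothetical helper, reusing the 45% baseline and 70% target from the ANC reference sheet above) expresses a current value as the share of the baseline-to-target gap closed, something you simply cannot compute without a baseline.

```python
def gap_closed(baseline: float, current: float, target: float) -> float:
    """Percentage of the baseline-to-target gap closed so far."""
    gap = target - baseline
    if gap == 0:
        return 100.0
    return (current - baseline) / gap * 100

# Baseline 45% and target 70% borrowed from the ANC example above.
print(f"{gap_closed(baseline=45.0, current=55.0, target=70.0):.0f}% of the gap closed")  # -> 40%
```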
4. Not collecting disaggregated data
Aggregated data lack the categories that help you uncover valuable insights such as inequalities, vulnerable groups, and points of action. Compare reporting for the indicator “80% of students pass the final exam” with and without breakdowns such as gender and region. To avoid this, make sure you collect data by key categories such as geography, gender, or age group, or by other dimensions relevant to your project’s goals.
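Here is a minimal sketch of that comparison, using hypothetical exam results: the aggregate pass rate looks healthy, while the breakdown by gender and region exposes a group that is being left behind.

```python
from collections import defaultdict

# Hypothetical exam results: (gender, region, passed)
results = [
    ("F", "North", True), ("F", "North", True), ("F", "South", False),
    ("M", "North", True), ("M", "South", True), ("M", "South", False),
]

overall = sum(passed for *_, passed in results) / len(results) * 100
print(f"Overall pass rate: {overall:.0f}%")  # -> 67%

# The same data, broken down by (gender, region).
groups: dict[tuple[str, str], list[bool]] = defaultdict(list)
for gender, region, passed in results:
    groups[(gender, region)].append(passed)

for (gender, region), outcomes in sorted(groups.items()):
    rate = sum(outcomes) / len(outcomes) * 100
    print(f"{gender} / {region}: {rate:.0f}%")  # F / South drops to 0%
```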
5. Not collecting qualitative data
Qualitative data uncover the “why” behind the “what” that quantitative data show. They can take the form of stories, notes, comments, or general context. To ensure your teams don’t overlook their importance, combine quantitative and qualitative data collection; for example, pair surveys with interviews, focus groups, or case studies.
6. Not linking indicators to objectives
This is also related to the first point. If your teams collect data that aren’t tied to the outcomes of your project or program, their effort is wasted. You might end up with vanity metrics that look nice but provide no evidence of the impact your project or program is achieving. To avoid this, make sure each indicator maps to a specific result in your Logframe or Results Framework, which should in turn be linked to your Theory of Change.
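One lightweight way to keep this mapping honest is to record it explicitly and audit it. The sketch below uses hypothetical indicator and result names to flag any indicator that isn’t linked to a Logframe result.

```python
# Hypothetical Logframe result codes and indicator-to-result mapping.
logframe_results = {"R1: Improved maternal health", "R2: Increased immunization coverage"}

indicator_map = {
    "ANC 4+ visit coverage": "R1: Improved maternal health",
    "Children fully immunized by age 1": "R2: Increased immunization coverage",
    "Social media followers": None,  # unmapped: a likely vanity metric
}

for indicator, result in indicator_map.items():
    if result not in logframe_results:
        print(f"Warning: '{indicator}' is not linked to any Logframe result")
```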
7. Having infrequent data reviews
If you wait until the annual report is due to look at the data you and your teams have collected, it may be too late to correct course. Instead, schedule regular monthly or quarterly reviews of your data. Use dashboards to give everyone an overview, so they can ask questions about the data they see.
8. Not assigning clear responsibilities
If it isn’t clear from the start who does what, everyone will assume that managing indicators is someone else’s responsibility. This leads to gaps, inconsistencies, and duplicated effort and data. To avoid this, assign specific roles for indicator tracking and create documentation that clarifies expectations and accountability.
9. Not adapting indicators to context changes
This became evident during the COVID-19 pandemic, when priorities changed overnight. Inflexible systems that don’t let you adjust the indicators you track can cause delays and render your project or program outdated and irrelevant. To avoid this, design M&E plans with flexibility in mind, consider in advance how indicators would change if circumstances demand it, and schedule mid-term reviews to assess whether the selected indicators need refinement.
10. Collecting data that remain unused
If the data you collect serve only to write reports and are then archived or forgotten, your teams may feel demotivated seeing their work go unused. Instead, use the collected data for learning and decision making, integrate them into future planning, and celebrate progress with those involved.
These mistakes in indicator tracking are common, but they are also preventable. You can turn indicators into powerful decision-making tools for your projects and programs, as long as you track the indicators that matter and use the indicators you track.
ActivityInfo is used for indicator tracking by organizations worldwide, offering the tools to manage the complete data and indicator lifecycle. If you are interested in learning more, you can always contact us for a tailored demonstration.