Part 4 of 5
Thursday February 16, 2023

From Theory of Change to database design for evidence-based decision making - Measurement methods

  • Host
    Eliza Avgeropoulou
About this session

This webinar is the fourth session of the webinar series “From Theory of Change to database design for evidence-based decision making”. It is a one-hour session ideal for Monitoring and Evaluation professionals or Program Managers who are interested in learning more about various measurement methods. We base the session on a specific scenario to make the presentation easier to follow.

In summary, we explore:

  • Different types of measurement
  • Best practices for choosing appropriate measurement methods
  • Practical examples in ActivityInfo (tracking outputs, beneficiary progress, and surveys)

View the presentation slides of the Webinar.

Other parts of this series

The Monitoring and Evaluation webinar series “From Theory of Change to database design for evidence-based decision making” is a series of five live sessions addressed to M&E professionals working in humanitarian or development operations.

These webinars comprise a course which will help you get a comprehensive understanding of all the steps involved in moving from a Theory of Change to a functional MEAL system. Each session will focus on a particular aspect of this path including: Theory of Change, Results Framework and LogFrame, Indicators, Measurement Methods and developing a MEAL plan as well as database design.

It is highly recommended that you join or watch the recordings of all webinars in their consecutive order so as to benefit from the complete course.

Is this Webinar for me?

  • Are you an M&E practitioner or Program Manager who wishes to better understand measurement methods and their role in the path of building a MEAL system?
  • Are you responsible for leading M&E in your organization, or is that a role you would like to take on and you would like to get a deeper understanding of the tools that can facilitate your work?

Then, watch our webinar!

About the Trainer

Ms. Eliza Avgeropoulou earned her BSc from Athens University of Economics and Business, and her MSc degree in Economic Development and Growth from Lund University and Carlos III University, Madrid. She brings eight years of experience in M&E in international NGOs, including CARE, Innovations for Poverty Action and Catholic Relief Services (CRS). For the past five years, she has led the MEAL system design for various multi-stakeholder projects focusing on education, livelihoods, protection and cash. She believes that evidence-based decision making is the core of high quality program implementation. She now joins us as our M&E Implementation Specialist, bringing together her experience on the ground and passion for data-driven decision making to help our customers achieve success with ActivityInfo.

Transcript

00:00:00 Introduction and recap

Thank you for the nice introduction. As mentioned, this is the fourth webinar of our series. Just a quick recap for those of you who didn't have the chance or the time to watch the previous webinars. We started with the Theory of Change. In the first webinar, we analyzed how the Theory of Change is the first step of the system design, how we can develop it, and best practices.

Then, we moved to the second webinar, which covered the Results Framework and Logical Framework. We covered best practices around their development along with key steps on how to approach those two tools. In the previous webinar, we focused on indicators, a crucial component of any MEAL system design, as they are the heart of evidence-based decision making.

00:01:10 Agenda

Today, we are going to go through the measurement methods. First, we're going to do a quick recap of why we chose to focus on the indicators and the measurements specifically, and some key messages around the indicators, because they are strongly associated with the measurements that we are choosing.

Then we're going to dive into the measurement methods, understand how they are defined, and how they are associated with indicators, whether quantitative or qualitative. Finally, we will see how ActivityInfo technologies can enable the implementation of measurement methods. We're going to demonstrate an example by using ActivityInfo based on the scenario mentioned in the introduction.

00:02:11 The importance of indicators

We chose to focus specifically on indicators because indicators ensure evidence-based decision making. By having timely information, we actively support adaptive management. In practice, this means that if I have access to real-time information, I can understand relatively easily what is working well and what does not work well within my project and change it.

We support learning as, through this process of adapting and changing, we enable the learning environment. Last but not least, we support accountability. We hold information that we share with relevant stakeholders, such as donors, implementing partners, or beneficiaries. Measurement methods enable the whole process of evidence-based decision making, as they enable the data collection process for indicators.

00:04:42 Measurement methods in the Logical Framework

Since performance indicators and measurement methods are part of the Logical Framework, it is important to see how they fit. The Logical Framework is a matrix which is the basis of the MEAL plan. It includes statements of outcomes from the Results Framework, the indicators, the measurement methods, and the critical assumptions.

The consistent development of this framework—thinking through objectives, indicators, and appropriate methods—is a crucial step to MEAL system design. Evaluation is associated with the upper level of a Results Framework, whereas monitoring is associated with the lower level. Learning and accountability are evident throughout the process. It is strongly recommended that this process starts early, ideally when we develop our proposal.

00:07:11 Key messages regarding indicators

There are some key messages we need to remember regarding indicators, as they dictate the measurement methods we choose. Indicators can be quantitative or qualitative. Quantitative indicators help us understand how much of something is happening (numbers, ratios, percentages). Qualitative indicators help us investigate the "why" and the "how." While qualitative indicators are critical to inform adaptive management, we frequently see a mixed methods approach where qualitative indicators accompany quantitative ones.

The next component is the quality of an indicator, which we check using the SMART criteria: an indicator should be Specific, Measurable, Achievable, Relevant, and Time-bound.

00:09:44 Understanding measurement methods

The measurement method identifies how the project will gather the necessary data to track the indicator. Quantitative methods enable us to collect information regarding what can be counted. They measure quantities and enable comparison across time or different groups. Examples include questionnaires, structured observations, and achievement tests.

Qualitative methods are most appropriate to identify why and how something is happening. They help capture participants' experiences using words, stories, or pictures, and frequently trigger discussions. Examples include focus group discussions, observation, and semi-structured interviews.

The choice between quantitative and qualitative depends on the purpose of the data collection, the indicator defined, and the context (budget, personnel, time, expertise). Quantitative methods allow for processing results from a large number of subjects and generalizing results, but they cannot answer the "how" and "why." Qualitative methods provide depth and detailed descriptions, allowing us to explore perspectives and unexpected factors, but they cannot be generalized to a broader population and are harder to analyze.

00:15:42 Choosing the appropriate method

The decision on which method to use depends on contextual factors and a realistic approach regarding resources. Data collection activities can be expensive and consume a significant portion of a project's budget. We must weigh the trade-off in terms of effort and cost against the value of the information collected.

For example, if we need a survey, we must decide between a representative sample (high cost, time, and effort) or a less time-consuming alternative. If field teams already collect specific data for their daily work, it is worth using that information rather than conducting a new, expensive survey. Observations or existing records (secondary data) are usually associated with less cost and effort.

Triangulation is also valuable, where we combine information from different sources about the same indicator to cross-validate. A frequent example is satisfaction surveys using a Likert scale (quantitative) combined with open-ended questions asking "why" (qualitative).
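The mixed-methods satisfaction survey described above can be sketched in a few lines. This is a minimal illustration with hypothetical responses, not a tool referenced in the webinar: the Likert scores give the quantitative indicator, while the open-ended "why" comments are set aside for qualitative review.

```python
from statistics import mean

# Hypothetical satisfaction survey responses: a 1-5 Likert score
# plus an optional open-ended "why" comment (mixed-methods design).
responses = [
    {"score": 4, "why": "Staff were helpful"},
    {"score": 2, "why": "Long waiting times"},
    {"score": 5, "why": ""},
    {"score": 3, "why": "Unclear instructions"},
]

# Quantitative side: average score and share of satisfied respondents (4 or 5).
avg_score = mean(r["score"] for r in responses)
pct_satisfied = 100 * sum(r["score"] >= 4 for r in responses) / len(responses)

# Qualitative side: collect the "why" comments for thematic review.
comments = [r["why"] for r in responses if r["why"]]

print(f"Average score: {avg_score:.1f}")       # 3.5
print(f"Satisfied: {pct_satisfied:.0f}%")       # 50%
print(f"Comments to review: {len(comments)}")   # 3
```

Keeping both sides in one tool is what makes the triangulation cheap: the same form that yields the percentage also yields the "why."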

00:20:51 Practical scenario: Refugee integration project

Let's look at an example using the "Homeland" scenario from previous webinars. Homeland has received a large influx of refugees, and we are designing a project for refugee integration. The programming team needs dedicated methods to get real-time information.

We have a Results Framework with a Strategic Objective related to legal livelihood opportunities. For the indicator "Percentage of people represented at least once," we might face a dilemma between a sample survey or using administrative data. If the programming team collects this data for implementation, we should use that instead of adding costs.

For Intermediate Result 1 regarding basic needs and medical access, we might need beneficiary feedback. We must decide if we need a survey, who collects it, and how often. For Output 1.1 regarding medical referrals, the team might track the "percentage of referrals conducted within a day." If social workers already track this in a list, we have access to the information without additional data collection.

For Intermediate Result 2 regarding skills and knowledge, we might use a standard indicator about whether vocational training enabled skill acquisition. This might require an anonymous survey right after training.

00:32:29 ActivityInfo demonstration

We can build a system using ActivityInfo to manage this information efficiently. Instead of having information sitting in Excel files or Google Drives across different teams, we consolidate it.

The system design includes:

In ActivityInfo, we can create a form for beneficiary registration capturing demographics. Then, we can add sub-forms or related forms for employment records (tracking job start dates, types), referrals (tracking dates to calculate if it was within a day), and training attendance. For surveys, we can generate a collection link to share with beneficiaries for anonymous feedback, or link it directly to the beneficiary record if it is non-anonymous. This setup allows for real-time data access and reduces duplication of effort.

00:42:21 Case study and key takeaways

A case study example is KnK Pakistan, which moved from paper-based to mobile data collection. This transition allowed them to analyze data faster, spend less time on collection, and access real-time data.

Key Messages:

00:45:11 Q&A session

How many measurement methods are needed to measure an indicator? Usually, one is sufficient, or a combination of quantitative and qualitative. For example, using a survey (quantitative) combined with qualitative aspects like observations or open-ended questions. It depends on the resources available.

Is it ideal to use percentage as a unit of measurement for outcome indicators? It depends on the logic of the framework. You need to perform a reality check to see if the percentage provides the information you need and if it is easy to capture. If a simple number suffices, you might not need a percentage.

Can qualitative methods be used for percentage indicators? Percentage indicators are derived from quantitative methods. However, you can use a mixed-methods approach where a tool collects quantitative data (to calculate the percentage) and also asks open-ended qualitative questions.

How to measure understanding in a large webinar series (800+ participants)? It depends on what information is important. If satisfaction is enough, a post-webinar survey works. If you need deep understanding, you might need a sample for focus groups. Pre- and post-tests are difficult if the participants change between sessions. Always choose the "good enough" method that provides valid results within your resources.

Why do donors require quantitative indicators? Quantitative indicators (numbers, percentages) are often viewed as providing a better picture for accountability—justifying funding and tracking "how many" or "how much." However, mixed methods are becoming more common to explain the "why."

What is the difference between anonymous and non-anonymous surveys? In non-anonymous surveys, you know the respondent, which introduces bias if the enumerator is part of the implementation team. Anonymous surveys can reduce this bias and encourage honest feedback, but you run the risk of duplicate responses.

How do we measure the impact of an organization? This starts with the organizational objectives. You can aggregate data collected by different programs or conduct specific organizational-level data collection, depending on the value of the information versus the cost.

How to frame indicators for preventing violent ideologies (PVE)? This is sensitive. You might need to define the outcome through open discussions or focus groups first to understand perceptions. Then, formulate indicators that might use a mix of quantitative data and qualitative methods (like key informant interviews) to capture behavioral changes and perceptions confidentially.

What is the difference between measurement methods, data sources, and monitoring tools? They often coincide. The measurement method is how you collect (e.g., survey). The data source/means of verification is where the info is (e.g., the completed questionnaire). The monitoring tool is the specific instrument used (e.g., the survey form or focus group guide).

Can we use triangulation for a single indicator? Yes, triangulation is a great way to validate data. For example, combining a survey with a review of public records or administrative data is a valid approach.
