Part 1 of 2
Thursday September 29, 2022

Quality and use of evidence for M&E professionals and Program Managers

  • Host
    Eliza Avgeropoulou
About the webinar

This Webinar is the first of two sessions on the topic of "Evidence-based decision making for M&E professionals and Program Managers working in humanitarian or development operations". It is a one-hour session ideal for Monitoring and Evaluation professionals and Program Managers who want to learn more about the importance of reliable, quality evidence in the humanitarian or development sector. It also serves as a useful introduction to the second part of the series, "Best practices for use of evidence for M&E professionals and Program Managers", which will take place on October 20th, 2022.

During this session, we discuss:

  • Why does evidence matter?
  • What is evidence for the humanitarian and development sector?
  • Why does the quality of evidence matter and how can we determine the quality?
  • Why can producing high-quality evidence be challenging?
  • Is evidence being used to guide decision-making?
  • How can quality and use of evidence be improved?

View the presentation slides of the Webinar.

Is this Webinar for me?

  • Are you an M&E practitioner or Program Manager who wishes to better understand the role of evidence in decision-making to improve the quality and impact of your work?
  • Are you responsible for leading M&E in your organization, or is that a role you would like to take on, and do you want your practices to focus on reliable, quality evidence?

Then, watch our Webinar!

About the Trainer

Ms Eliza Avgeropoulou earned her BSc from the Athens University of Economics and Business, and her MSc degree in Economic Development and Growth from Lund University and Carlos III University, Madrid. She brings eight years of experience in M&E in international NGOs, including CARE, Innovations for Poverty Action and Catholic Relief Services (CRS). For the past five years, she has led MEAL system design for various multi-stakeholder projects focusing on education, livelihoods, protection and cash. She believes that evidence-based decision making is at the core of high-quality program implementation. She now joins us as our M&E Implementation Specialist, bringing together her experience on the ground and her passion for data-driven decision making to help our customers achieve success with ActivityInfo.

Transcript

00:00:00 Introduction

Welcome everyone and thanks for the great introduction. Given my passion for evidence-based decision-making, it goes without saying why I am doing this webinar series. This webinar is the first of two on the topic of evidence-based decision-making in the humanitarian and development sector. We looked into the subject in depth and decided to split it into two sessions. The purpose of this first session is to walk you through the basics of the quality and use of evidence from scratch. The second webinar will focus on hands-on experience and how we can incorporate best practices into the project cycle to actually use the evidence that everyone talks about.

The fundamental aim here is to raise awareness about evidence and its quality. Given the circumstances under which the humanitarian and development sectors operate, we need to be conscious about our choices, about when we are ready to accept lower-quality evidence, and about how this may affect our decisions. There has been significant improvement in recent years in how we use evidence, but there is always room for more.

Today, we will start from scratch by defining what evidence is and why it matters. We will look at quality criteria to determine if we can use our evidence, based on criteria developed during an ALNAP annual meeting. We will then discuss why producing high-quality evidence is challenging, being realistic about the context we work in. We will examine whether evidence is actually being used to guide decision-making, looking at both positive and negative examples. Finally, we will cover some principles on how the quality and use of evidence can be improved, which will serve as a bridge to the next webinar.

00:03:21 What is evidence?

Evidence has been defined multiple times. During the 20th and 21st centuries, we have acknowledged that there are complex pathways between cause and effect, or intervention and impact. This helps us rethink the definition of evidence, rejecting the linear model of cause and effect because, in realistic situations, a linear model rarely exists. We also challenge the idea of strict objectivity and the power of the absolute observer, putting emphasis on people's experiences and perceptions, which is the core of what evidence is and how we can use it.

It is important to differentiate some key terminology. First, there is knowledge, which is held in the form of facts. Then we gather data, which is raw, unorganized material (qualitative or quantitative). We process this data to produce information, meaning we identify specific patterns that carry meaning. Finally, we use this information in relation to a specific proposition, to challenge or support it; this is the core definition of evidence.

In many cases, these propositions relate to the existence of a condition, such as malnutrition, which can be verified through repeated, systematic observation. In other cases, propositions relate to the behavior or beliefs of groups of people. We need to recognize that a single objective reality does not exist; there are propositions that relate to multiple and conflicting ideas. This requires us to investigate the value of a variety of different types of information as evidence and use various methodologies to consider the value of this information in supporting the proposition.
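
To make these distinctions concrete, here is a minimal sketch in Python; the records, the z-score cut-off, and the 15% threshold for the proposition are hypothetical values chosen only to illustrate the progression from data, to information, to evidence.

```python
# Raw, unorganized data: individual anthropometric records (hypothetical values).
raw_data = [
    {"village": "A", "weight_for_height_z": -2.4},
    {"village": "A", "weight_for_height_z": -1.1},
    {"village": "B", "weight_for_height_z": -2.8},
    {"village": "B", "weight_for_height_z": -0.6},
]

# Information: processing the data surfaces a pattern with meaning,
# here the share of children below a weight-for-height z-score cut-off.
def wasting_rate(records, cutoff=-2.0):
    flagged = [r for r in records if r["weight_for_height_z"] < cutoff]
    return len(flagged) / len(records)

rate = wasting_rate(raw_data)

# Evidence: the information is related to a specific proposition,
# e.g. "acute malnutrition among assessed children exceeds 15%" (hypothetical).
PROPOSITION_THRESHOLD = 0.15
print(f"Observed rate: {rate:.0%}; supports proposition: {rate > PROPOSITION_THRESHOLD}")
```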

For example, the International Initiative for Impact Evaluation (3ie) published a case study on addressing intimate partner violence through food security and recovery programming in Ecuador. The WFP intervention involved cash, vouchers, and food distribution. The evaluation found that transfers reduced controlling behavior and violence, suggesting that reductions in intimate partner violence were due to improvements in bargaining power, decreased poverty, and food vouchers acting as protective factors. This evidence informed the design of food security and nutrition programming in Ecuador and WFP's country strategic plan.

00:10:24 Why evidence matters

High-quality and reliable evidence is central to humanitarian and development action. Evidence should inform program choice, program design, policy decisions, and the strategic direction of organizations. While there have been improvements in how organizations strengthen systems for organizational learning and knowledge management, there is still room for growth. The key elements of why evidence matters narrow down to effectiveness, accountability, and ethics.

First, we want to be effective for the people we serve. The organizational ability to collect, analyze, and disseminate information is fundamental to an effective response. MEAL systems are often developed so we can quickly use generated information to improve implementation. Second, we are accountable to donors, governments, other organizations, and beneficiaries. Organizations must prove that a need exists, demonstrate informed choices about the most effective response, and provide evidence on the impact of those choices.

Third, regarding ethics, Peter Walker noted that if you believe in impartiality, you have to be evidence-based; you cannot be impartial if you don't know what the range of choices are. Evidence is essential in promoting the critical reflection required to challenge established narratives, biases, and preconceptions, enabling learning. Failure to generate and use evidence makes humanitarian and development action less effective. In the long run, evidence feeds into efforts to strengthen our credibility with donors, partners, and the target population.

00:14:34 Quality of evidence and ALNAP criteria

The quality of evidence refers to the extent to which information related to a specific proposition can be trusted and used to challenge or support that proposition. Different qualities of evidence are not necessarily related to the nature of the evidence (quantitative or qualitative), and different research designs are more or less appropriate for different contexts. To determine quality, we can look at the ALNAP criteria.

The first criterion is accuracy: is it a good reflection of the real situation? For example, have anthropometric measurements been correctly conducted? The second is representativeness: do we have an illustration of the conditions of the larger group? Information from one village does not necessarily represent all targeted villages. The third is relevance: does the information relate to the propositions? Anthropometric measures may provide evidence for nutritional status but may be less effective for proving food scarcity.

The fourth criterion is generalizability: the extent to which we can generalize information from a specific situation. While we may need to generalize results for policy, a specific assessment in one country may not apply to another. The fifth is attribution: is there a clear association between cause and effect? This is core to evaluations. The sixth, which is the basis of everything else, is clarity around context and methods. Without context and clear methodology, it is hard to determine whether evidence is relevant or reliable.
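
As an illustration only, the six criteria can be treated as a simple appraisal checklist for a piece of evidence. The sketch below is a hypothetical aid, not an ALNAP instrument; the yes/no scoring is an assumption made for the example.

```python
from dataclasses import dataclass, fields

# One piece of evidence appraised against the six ALNAP criteria
# as yes/no judgements. A hypothetical sketch, not an ALNAP tool.
@dataclass
class EvidenceAppraisal:
    accuracy: bool            # reflects the real situation?
    representativeness: bool  # illustrates conditions of the wider group?
    relevance: bool           # relates to the proposition at hand?
    generalizability: bool    # can findings transfer to other settings?
    attribution: bool         # clear association between cause and effect?
    clarity: bool             # context and methods documented?

    def unmet(self):
        """List the criteria this evidence fails to satisfy."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

appraisal = EvidenceAppraisal(
    accuracy=True, representativeness=False, relevance=True,
    generalizability=False, attribution=True, clarity=True,
)
print("Criteria not met:", appraisal.unmet())
# Criteria not met: ['representativeness', 'generalizability']
```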

00:21:06 Challenges to generating high-quality evidence

We need to be realistic about what constitutes high-quality evidence given the threats to quality. One challenge is the lack of standard approaches and definitions, though there is a significant effort to create global indicators. Another is the choice of measurement methods, which concerns representativeness and accuracy. We must be conscious of why we choose qualitative or quantitative methods and use academically recognized ways for data collection.

Availability of information is a common constraint. Access may be limited by governments, the environment may evolve quickly (especially in emergencies), or there may be movement restrictions. Time is also a limiting factor; in emergencies, we need to act fast. We must also be conscious of existing power dynamics, which can pose threats to quality through bias. For example, beneficiaries may claim satisfaction because of the power dynamic with the organization providing services.

Evaluations face specific constraints, such as the complexity of determining the pathway between cause and effect, lack of capacity or expertise, the fear of publishing failures, and technical difficulties in establishing baselines and control groups. High costs are also a factor, creating a trade-off between spending on data collection versus program implementation. Finally, technology raises questions about the credibility of sources and whether we are excluding groups that lack access to technology.

00:27:53 Is evidence being used?

Quality of evidence does not guarantee its use. We have different sets of decision-makers: agency staff use evidence for program design, while donors use it to determine funding legitimacy. Only a minority of evaluations are effective at triggering changes or improvements in performance. Constraints to using evidence include the fact that M&E activities are often conducted by the same organization using the results, which can create skepticism.

The context of decision-making plays a role; often, interventions are predetermined by organizational capacity rather than a blank slate. Power dynamics, donor preferences, and political considerations also influence decisions, sometimes overriding evidence of need. Time is crucial; information that cannot be accessed in time becomes useless. For example, fast adaptive responses relying on real-time information were crucial in the Ukraine crisis, whereas a failure to anticipate the scale of displacement in Syria led to a refugee crisis.

Using ICT tools like ActivityInfo can help mitigate time constraints. For instance, the Rapid Response Mechanism (RRM) initiative used ActivityInfo for weekly updates, which would have been impossible otherwise. In Afghanistan, the shift from paper-based to information management systems for the COVID-19 response saved time for reporting and decision-making.

Another constraint is the communication between M&E practitioners and program managers. We need to ask who is driving the data collection. Ideally, M&E teams provide technical expertise while program management provides context. Unconscious bias also affects use; working under pressure leads to shortcuts and assumptions. Our past experiences affect how we interpret information.

There are examples where evidence has shifted policy, such as the increase in cash-based programs after evaluations in the 2000s demonstrated their effectiveness. Conversely, evaluations of the responses to the Rwanda genocide (1994) and the Indian Ocean tsunami (2004) repeatedly highlighted a lack of understanding of local context and beneficiary views, yet similar findings appeared in reports years later, suggesting that evidence is not always used to change practice.

00:40:27 Principles for improvement

To improve the quality and use of evidence, we should use robust methods for analysis and collection, learning from researchers and bibliography. There should be proportionate investment, ensuring we collect "need to know" rather than "nice to know" information. If information is not being used, we should not have collected it.

We need increased collaboration with stakeholders, both internally and externally, to mitigate duplication of efforts. Finally, we must think long-term. The systems we build, the definitions we use, and the tools we have in place should include the knowledge of the people affected, as they often know best what they need.

00:42:17 Q&A Session

Francis asked for an explanation of criterion five (Attribution). Attribution refers to the extent to which we have a clear association between cause and effect, or intervention and impact. In organizational terms, it is determining the extent to which the program reaches its objectives and impacts the beneficiary. This is usually analyzed at the high level of the results framework (goal and strategic objective).

Peter asked how to avoid biases when conducting M&E for one's own organization with limited funds. Being realistic about funds is important. When building the MEAL system or collecting data, the person leading the process should identify perspectives that can triangulate perceptions, for example from other organizations or stakeholders. If working for an INGO, including someone from outside the local context, such as a regional advisor, can provide a reality check and a more objective perspective.

Clement asked about factors distorting data quality by M&E practitioners and mitigation. It starts with the "why" of data collection. The M&E practitioner plays a huge role, but people who know the context (PMs, stakeholders) should be involved. If the "why" is solid, it defines the methods and analysis. If the interpretation of findings is participatory—involving other perspectives—you can generate robust evidence.

Jennifer asked how ALNAP criteria relate to USAID standards (validity, reliability, integrity, precision, timeliness). There isn't a one-to-one association, but they cover similar ground. Validity relates to accuracy and relevance. Timeliness relates to relevance and the ability to use the information. Different sets of criteria describe the same goal: reaching a situation where we have high-quality evidence we can trust.

Muhammad asked about improving M&E during the pre-project formulation phase. For the pre-project or assessment phase, you can use evidence from previous projects in the same context. If that isn't available, you rely on secondary data and assessments. A mixed-method approach combining primary and secondary data is often used. Planning the MEAL plan before the proposal or during the assessment helps you think long-term about gathering information before, during, and after the project.

A participant asked how M&E units can build a positive perception among program managers. The key words are interpretation and reflection. M&E units see patterns in the data and need to communicate them. To bridge the gap, M&E and program teams should gather in the same room so that PMs and field staff can help interpret the data. This builds an in-house culture of collaboration.

Goddy asked how to convince managers to use data and avoid collecting unused data. It takes time, sometimes a year or more. The method is to have them actively use it by posing questions in program meetings. Present the data patterns and ask if it confirms their experience or what other information they need. Gradually, they become more active in data use.

A participant asked about implementing accountability and ethics in M&E. Accountability is one of the goals of M&E. For example, collecting beneficiary feedback satisfies the loop of accountability to the people we serve. Ethics is enabled by a transparent M&E system. If the system is well-designed and transparent to all stakeholders, it inherently supports ethical practices.

Vanessa asked about confidentiality and collaboration. Confidentiality tracks back to the MEAL system design—collecting only "need to have" information protects data by minimizing what is stored. We must ensure legal backing and beneficiary consent (explaining why we collect data and how long we keep it). Practically, we should avoid sending Excel files via email and instead use Information Management Systems to restrict access based on roles, ensuring we don't share everything with everyone.
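
As a rough illustration of restricting access based on roles, here is a minimal sketch in Python; the roles, field names, and record are hypothetical, and a real information management system such as ActivityInfo would enforce such rules server-side rather than in a script like this.

```python
# Hypothetical role-to-fields policy: each role sees only the columns
# it needs, instead of everyone receiving the full spreadsheet by email.
POLICY = {
    "mne_officer":     {"beneficiary_id", "village", "assistance_type", "date"},
    "program_manager": {"village", "assistance_type", "date"},
    "donor_reporting": {"village", "assistance_type"},
}

def redact(record, role):
    """Return only the fields the given role is allowed to see."""
    allowed = POLICY[role]
    return {key: value for key, value in record.items() if key in allowed}

record = {
    "beneficiary_id": "BN-0412",          # hypothetical identifier
    "name": "(personally identifying)",   # never shared beyond need-to-have
    "village": "A",
    "assistance_type": "cash",
    "date": "2022-09-29",
}
print(redact(record, "program_manager"))
# {'village': 'A', 'assistance_type': 'cash', 'date': '2022-09-29'}
```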
