Part 1 of 3
Thursday May 11, 2023

Getting started with evaluation in humanitarian assistance

  • Host: Eliza Avgeropoulou
About this session

This is the first session of the series “Evaluation in humanitarian assistance”. It is a one-hour session ideal for Monitoring and Evaluation (M&E) and other professionals who want to understand the basics of evaluation in humanitarian assistance.

In summary, we explore:

What is the evaluation of humanitarian action?

  • What is different about EHA?
  • A brief history of EHA
  • How does monitoring relate to evaluation?
  • What are the key challenges in EHA?

Deciding to perform an evaluation

  • What is the purpose of evaluation?
  • How to decide when and what to evaluate?
  • What are the ethical considerations of evaluation?
  • Case study: Real-time evaluation

View the presentation slides of the webinar.

Is this webinar for me?

  • Are you an entry- or intermediate-level M&E or IM practitioner who wants to better understand the steps involved in an evaluation in humanitarian assistance, and are you looking for an introduction to this topic?
  • Are you assisting with evaluations in your organization, or would you like to take on that role and gain a deeper understanding that can facilitate your work?

Then, watch our webinar!

Other parts of this series

The Monitoring and Evaluation webinar series “Evaluation in humanitarian assistance” consists of three live sessions addressed to M&E professionals working in humanitarian operations. Together, the webinars form a course that gives you a comprehensive understanding of all the steps involved in evaluation in humanitarian assistance: an introduction to the topic, planning and design, and implementation.

The series is addressed to entry- to intermediate-level professionals. We highly recommend that you join the sessions, or watch the recordings, in consecutive order so as to benefit from the complete course.

About the Trainer

Ms Eliza Avgeropoulou earned her BSc from the Athens University of Economics and Business and her MSc in Economic Development and Growth from Lund University and Carlos III University, Madrid. She brings eight years of M&E experience in international NGOs, including CARE, Innovations for Poverty Action, and Catholic Relief Services (CRS). For the past five years, she has led the MEAL system design for various multi-stakeholder projects focusing on education, livelihoods, protection, and cash. She believes that evidence-based decision making is at the core of high-quality program implementation. She now joins us as our M&E Implementation Specialist, bringing together her experience on the ground and her passion for data-driven decision making to help our customers achieve success with ActivityInfo.

Transcript

00:00:00 Introduction

Hello and welcome everyone. This is the first webinar of our series. In this session, we aim to provide an overview of what evaluation is in humanitarian action. We will discuss how we make the decision on whether to perform an evaluation, as well as when and what we are going to evaluate.

In the second webinar, we will dig deeper into how we plan and design an evaluation. In the third webinar, we will go over the steps of how we can implement an evaluation in humanitarian assistance.

00:00:59 Session outline

Today, we will explain what evaluation in humanitarian action is and the characteristics that make it special. We will look at how monitoring relates to evaluation and go through a brief history of the field to understand the context and how it is evolving. We will also discuss key challenges and predictions for the future of evaluation in humanitarian action.

In the second section, we will see the steps on how to decide to perform an evaluation. We will look at the key questions we ask ourselves when facing the dilemma of whether to perform an evaluation or not. We will discuss how to decide when and what to evaluate, and the ethical considerations that impact our decision. Finally, we will look at a real-time evaluation case study conducted by the IFRC in the Philippines in 2014, followed by a Q&A session.

00:02:30 What is evaluation in humanitarian action?

The incentive for this webinar arises from the increasing demand for evaluation in humanitarian action. This demand has increased because there is an escalating number of humanitarian crises around the globe. A third of all countries in the world are currently managing one or multiple emergencies requiring humanitarian action. There is also a growing demand for accountability. Stakeholders, including donors and the population, need to be assured that their investments are used efficiently, uphold ethical principles such as "Do No Harm," and make a difference for the population we support.

Broadly speaking, the OECD definition refers to evaluation as the systematic and objective assessment of an ongoing or completed program. We do this to determine the relevance, fulfillment of objectives, effectiveness, impact, and sustainability of the programs or policies we implement. The primary objective is to provide credible and useful information to draw lessons learned that can be incorporated into the decision-making of both recipients and donors.

Systematic refers to a planned and consistent approach based on credible methods. Objective means taking a step back from the immediacy of the action to get perspective based on credible evidence. Assessment is the exploration or analysis to determine the worth or significance of the action.

00:05:52 Characteristics of humanitarian action

The difference in evaluating humanitarian action lies in the specific characteristics of the action itself. Humanitarian action includes both assistance and protection while maintaining human dignity. The parameters have expanded from simply saving lives to saving livelihoods. It includes responding to crises, supporting preparedness and disaster risk reduction before the crisis, as well as recovery and rehabilitation afterwards.

In conflicts and protracted crises, it is often unclear when the emergency ends and recovery begins; both types of support are often needed simultaneously. Humanitarian action must be guided by the principles of humanity, neutrality, impartiality, and independence. These distinguish humanitarian action from political or military activities and are crucial for acceptance by actors on the ground. The "Do No Harm" principle is paramount.

These definitions impact the decision to perform an evaluation and its scope. Regarding the decision, there are ethical considerations, particularly in conflict and insecure settings. We must consider how engaging in the evaluation might affect those taking part. Regarding the scope, we must decide if we are focusing on the immediate response, preparedness, or recovery.

00:09:18 Purpose and the link with monitoring

Why do we perform an evaluation? Learning and accountability are two crucial purposes. Learning is the process through which we reflect on experiences to change behavior and improve programs. Accountability is the process of taking into account the views of all relevant stakeholders, primarily the affected people and donors.

Monitoring and evaluation are often treated as separate boxes, but they are complementary tools. Monitoring refers to systematic data collection to provide an indication of progress towards objectives and the use of funds. Evaluation is usually a one-off activity at key points in the project cycle, focusing on future responses. If an intervention has not been properly monitored, it is challenging to perform an evaluation.

For example, in a cash transfer project, monitoring tracks how many people received money and how much (outputs). Evaluation might capture the consequences of providing transfers to women or the effects on the local market (outcomes and impact). If we do not know the basic monitoring data, it is hard to determine the scope of an evaluation or answer higher-level questions.

00:13:37 A brief history of evaluation

Evaluation is a fairly new practice relative to the history of humanitarian aid. While aid started long ago, humanitarian evaluation is about 25 years old. It has become an established practice, with organizations like OCHA institutionalizing the function in 2002. The practice has become more professional with the creation of standards and principles, such as the OECD DAC criteria refined in 2010.

The community has agreed on frameworks like the Core Humanitarian Standard (CHS) in 2014 and the 2030 Agenda for Sustainable Development. Capacity has increased through publications and guidelines, such as those from ALNAP. We also see a proliferation of policy evaluations, joint humanitarian evaluations (like the Tsunami response in 2006), and real-time evaluations. Meta-analyses, such as ALNAP's State of the Humanitarian System reports, have also risen in prominence.

00:17:20 Key challenges and future predictions

There are several challenges linked to the nature of humanitarian action: insecure and rapidly changing contexts, limited access to affected people, and high staff turnover all complicate data collection and the evaluation process.

Looking to the future, we predict an increase in remote evaluation approaches and technology-based approaches (ICT), a trend accelerated by COVID-19. There is a push for developing the evaluation capacity of local partners to localize evaluation, which improves community engagement. We also hope to see a move towards a stronger culture of learning, where organizations share lessons learned across the sector.

00:26:02 Deciding to perform an evaluation

Evaluation comes at a cost. Money invested in evaluation is only well spent if it leads to improvements in humanitarian action. This requires acting upon findings. When deciding to evaluate for accountability, we must ask: accountability to whom and for what?

For accountability to affected populations, alternatives to a full evaluation might include ongoing consultation via feedback and complaint mechanisms. If these are monitored properly, they may provide more timely answers than an evaluation. For technical accountability, reviews might be sufficient.

For learning-oriented evaluations, the goal is to facilitate organizational learning. These can happen at any time, often focusing on the initial implementation phase. Examples include After Action Reviews or the Most Significant Change technique.

Balancing accountability and learning is challenging. Accountability evaluations emphasize objectivity and independence, often with an investigative style. Learning evaluations require a safe psychological environment where staff can admit mistakes. We must prioritize the primary purpose.

00:36:01 When to evaluate and evaluability assessments

We must ask how the evaluation adds value and if we have the capacity to absorb it. It is appropriate to evaluate programs with unknown or disputed outcomes, pilot programs testing new ideas, large and expensive interventions, or when required by mandate or donors. It is inappropriate when it is unlikely to add new knowledge or when security issues prevent safe access or reliable data collection.

An evaluability assessment is a useful tool to decide whether to proceed. It involves defining the scope, engaging stakeholders to review logic and data availability, and assessing the conduciveness of the context (ethics, logistics, security). It ensures the evaluation can answer the proposed questions.

00:41:00 Ethical considerations

Ethical considerations affect the decision to evaluate. The "Do No Harm" principle has two perspectives. The humanitarian perspective is about avoiding exposing people to further harm (violence or physical hazards). The conflict sensitivity perspective ensures the intervention does not contribute to the conflict and promotes peace where possible.

Evaluators must be aware of how the process can exacerbate tension, raise unrealistic expectations of aid, trigger heated discussions, or create perceptions of bias. To conduct evaluation in a conflict-sensitive manner, we must assess if the process contributes to tension, conduct conflict analysis, and revise plans accordingly.

00:44:30 Case study: Real-time evaluation

We will look at a real-time evaluation of the IFRC response to Typhoon Haiyan (Yolanda) in the Philippines. The impact was massive, affecting 3 million families. IFRC commissioned a real-time evaluation to improve service delivery, accountability, and to build lessons learned.

The evaluation focused on relevance and effectiveness, coordination with partners, and humanitarian diplomacy tools. It was performed in early 2014 using a mix of external evaluators and internal staff. The methodology included secondary data review, observations, interviews, focus groups, and a workshop.

One key learning outcome was a recommendation to establish a standard organizational structure for major global responses, as findings indicated funds were not used as expeditiously as possible. IFRC created an action plan with specific timeframes and responsibilities.

Constraints included the timing (transitioning from relief to recovery), staff turnover (key staff had left), reliance on phone interviews, and potential bias from internal staff.

00:50:32 Key messages

00:52:16 Q&A session

How can an M&E system measure achievements in unpredictable contexts like floods? In emergencies, there is unpredictability. We can use data from similar contexts and standard indicators like the Sphere Handbook. If direct access is not possible, we can use proxy indicators. We must balance the fast-paced environment with the need for information to inform decision-making.

How do we conduct impact evaluation for people affected by crisis during different phases? There are specific resources and toolboxes available for this, which we can look into in future sessions. Generally, it involves consulting the available guidance and the tools designed for each phase.

Is there a fundamental difference between evaluation in humanitarian vs. development contexts? The main difference is the context and the challenges. Development projects are more stable, long-term, and allow more time for data collection and access to participants. Humanitarian contexts are fast-paced with security concerns and high staff turnover. However, the evaluation designs often follow similar principles, though humanitarian evaluations may need to be more creative or use remote methods.

How do we address the Core Humanitarian Standard (CHS) through evaluation? The CHS is a widely recognized standard. Evaluating against it often involves combining surveys of internal staff and the affected population, followed by workshops to interpret results and create action plans.

Can an M&E officer perform an evaluation, or must it be external? Evaluations seeking to determine cause and effect usually require independence to reduce bias. External evaluators are not involved in implementation and have no psychological connection to the program, allowing for a more objective view. However, mixed teams can be valuable to provide context.

When do you draw the line on evaluation costs? Evaluation is generally reserved for large-scale interventions or pilot programs with significant unknowns. For small-scale projects, unless it is a pilot testing a new idea, the cost of a full evaluation might not be justified compared to the project budget.

What is the difference between a review and an evaluation? Reviews (like midterm reviews) often focus on what worked well or not at a micro/output level to adjust implementation. Evaluations, especially at the end, look broader at outcomes and impacts, such as market effects of a cash transfer. Both should result in an action plan.

How do we use findings for learning? The best way is to hold a workshop with stakeholders to interpret findings and immediately create an action plan with timeframes and responsibilities.

How do we address bias? The first step is using an external team. If using a mixed team, balance the need for local perspective with the risk of bias.

What organizational structure is best for monitoring? Both integrated and parallel structures exist. A separate M&E team often has more time and specific expertise, but they must communicate frequently with the program team.

Does an independent evaluation prevent data quality control? No. The organization should have a steering committee to monitor the external evaluator and ensure data quality. The evaluator needs internal support and information to function.
