Thursday March 18, 2021

Best practices for inclusive Monitoring and Evaluation in data collection systems

  • Host
    Fay Candiliari
  • Panelist
    Naomi Falkenburg
  • Panelist
    Alexander Bertram
About the webinar

This webinar is a one-hour session ideal for Monitoring and Evaluation (M&E) professionals who are interested in inclusive M&E activities and data collection.

During the session, we discuss with the two speakers, Ms. Naomi Falkenburg and Mr. Alex Bertram, the following questions:

  • What is inclusion and how can we capture inclusion concerns in the baseline to support inclusion in practice?
  • What are inclusive indicators? Why would we use them, and how?
  • How does an information system actually help reach our goals of being more inclusive in our programming? Does it have a real impact for the people on the ground?
  • What are the risks of not disaggregating data?
  • What kind of categories of disaggregated data should we keep in mind, and how should we decide which to use?
  • What can we do when not all partners have breakdowns available?
  • How can an information system work with qualitative data as well as quantitative data?
  • What is the value of affected communities participating in M&E? How can we encourage participation?

Please take a look at this Guide on inclusive M&E data collection systems for further reading. You can also access the French version of this Guide.

View the presentation slides of the webinar

View the presentation slides of the webinar in French

Is this Webinar for me?

  • Are you an M&E practitioner interested in inclusion?
  • Are you responsible for leading M&E in your organization, or is that a role you would like to take on and you would like your practices to focus on inclusion?
  • Do you want to understand better the importance of disaggregated data in M&E data collection and how you can support the beneficiaries’ participation?
About the Speakers

Ms. Naomi Falkenburg is an independent consultant who works with humanitarian and development actors to design, manage and learn from their interventions and to conduct insightful research and analyses. Prior to becoming a consultant, Naomi worked at several UN agencies and international NGOs in West Africa, Europe, and East Asia, on themes such as gender equality, disability inclusion, migration and forced displacement, decent work and skills development for youth, and digital inclusion.

Mr. Alexander Bertram, Technical Director of BeDataDriven and founder of ActivityInfo, is a graduate of the American University's School of International Service and started his career in international assistance fifteen years ago working with IOM in Kunduz, Afghanistan and later worked as an Information Management officer with UNICEF in DR Congo. With UNICEF, frustrated with the time required to build data collection systems for each new programme, he worked on the team that developed ActivityInfo, a simplified platform for M&E data collection. In 2010, he left UNICEF to start BeDataDriven and develop ActivityInfo full time. Since then, he has worked with organizations in more than 50 countries to deploy ActivityInfo for monitoring & evaluation.

Transcript

00:00:00 Introduction

Hello everyone and welcome to today's webinar, "Best practices for inclusive monitoring and evaluation in data collection systems." My name is Fieke and I'm here together with Ms. Naomi Falkenburg and Mr. Alex Bertram. We will be hosting this webinar broadcasting from the Hague in the Netherlands and from France. We're excited to see such a big interest in this topic; more than 1,100 people have registered for this webinar from all over the world. So thank you very much for joining us.

Before we start, I would like to share some housekeeping rules for everyone. Your microphone is muted and you should all be able to see the shared screen during the webinar. You can add your questions and comments in the chat box and select the organizer and panelists so we can see them. At the end, we will answer as many questions as possible and will keep in mind other questions to which we may reply with new articles, guides, or webinars. So, make sure to keep an eye on the ActivityInfo social media and the blog on our website. The webinar is being recorded and you will receive the recording after the webinar.

I would like now to introduce to you Ms. Naomi Falkenburg and Mr. Alex Bertram, who will be joining our discussion today. Naomi is an independent consultant who works with humanitarian and development actors to help them design, manage and learn from their interventions and to conduct insightful research and analysis. Prior to becoming a consultant, Naomi worked for several UN agencies and international NGOs in West Africa, Europe and East Asia on themes such as gender equality, disability inclusion, migration and forced displacement, decent work, and skills development for youth, and digital inclusion.

I would like now to introduce our second speaker, Mr. Alex Bertram. Alex is a graduate of the American University's School of International Service and started his career in international assistance 15 years ago, working with IOM in Kunduz, Afghanistan, and then later as an information management officer with UNICEF in DR Congo. With UNICEF, frustrated with the time required to build data collection systems for each new programme, he worked on the team that developed ActivityInfo, a simplified platform for M&E data collection. In 2010, he left UNICEF to start BeDataDriven and develop ActivityInfo full-time. Since then, he has worked with organizations in more than 50 countries to deploy ActivityInfo for monitoring and evaluation.

00:03:35 What is inclusion?

Naomi Falkenburg: We see the term 'inclusion' being used a lot in development and humanitarian spheres. While it certainly always sounds very good, it's not always clear what it means and how a commitment to inclusion can be practically reflected in our M&E work. I think that in most people's minds, inclusion is most often associated with disability-inclusive and gender-inclusive approaches to programming, or with projects that specifically target persons with disabilities or women.

Considering these groups is definitely a very important part of being inclusive. Targeted interventions, whether they are focused on persons with disabilities, women, or some other historically disadvantaged group, will continue to be necessary in most contexts to overcome long-standing forms of inequality and disadvantage. Mainstreaming inclusion, however, goes a step further than focusing on just one group. When we mainstream inclusion, we recognize that people have multiple identities that impact whether or not they experience discrimination, marginalization, or exclusion, in what way and to what extent.

The concept of intersectionality is really key here, which describes overlapping or intersecting forms of discrimination, oppression, and inequality that are linked to particular identity markers or personal characteristics, such as race, class, and gender. These forms of inequality emerge as structural advantages or disadvantages that can shape a person's or group's experience and social opportunities. So someone is not just a woman or a person with a disability. The experience of a 70-year-old woman who is an asylum seeker will be different to a 30-year-old woman who has a settled legal status. The experience of a boy with an intellectual disability who is living in an urban environment will be different to that of a boy who uses a wheelchair and lives in a rural area.

If we focus on just one identity, we might overlook not only other people who might also be marginalized or at risk of exclusion in our project's environment, but we might also miss very important aspects of a person's identity that may complicate our ability to reach them and to ensure that they fully benefit from our intervention. It can also lead to unintentional negative impacts on affected populations. Mainstreaming inclusion therefore means ensuring that all groups, including those who are marginalized or vulnerable, are visible and meaningfully represented in our programming. It also means that we take steps to identify and reduce the barriers to people's representation and participation. When we mainstream inclusion and have a targeted focus on a particular group at the same time, we follow what is called a twin-track approach.

00:07:22 Capturing inclusion concerns in the baseline

Naomi Falkenburg: Having a good understanding of the context that we are working in and the people that we work with is the starting point for planning and implementing an inclusive intervention. Some of the reasons why baseline data is so important are that it gives us a sense of the conditions before our project starts, including any assumptions that we might have about why our project is going to work. It allows us to fine-tune aspects of our M&E system, like our indicators, and provides a reference point to set achievable and realistic targets for these indicators. Baseline data allows us to monitor progress and change so that project implementation can be adjusted if necessary, and it helps us evaluate the impact of an intervention when it is over.

Because collecting baseline data is typically the first M&E activity that we carry out, it can have a lot of influence over subsequent M&E activities and our intervention more generally. It therefore needs to capture inclusion concerns as much as possible, which means considering what forms of discrimination, marginalization, and/or vulnerability are present in the project environment and who is impacted. Ideally, we would do a baseline study after we've already given our M&E plans some thought and before the project starts. However, sometimes we need to go back and reconstruct our baseline data, or rely on other types of assessments.

There are a few things that we can do to make sure that inclusion concerns are captured in our baseline. First, we need to think carefully about representation in the information being collected. I mean this both with regards to data disaggregation, but also with regards to where you get your information and who you ask. This consideration applies whether you're using primary data or secondary data. In the case of primary data, which you collect yourself, we really need to ask whether we are making an effort to represent all groups of people in our sample, including those who are hard to reach.

In the case of secondary data, you need to ask: does it provide a reliable and representative picture of the population and the subgroups within it, especially marginalized groups? National-level statistics and administrative data cannot always be relied on for this, so in situations like that, we might have to resort to informed estimation. For example, if it's impossible to collect primary data and you find the secondary data unreliable regarding persons with disabilities, we know that about 15% of the world's population has a disability, so we can use this as a reasonable estimate.
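The informed-estimation step described above can be sketched as simple arithmetic. The catchment size below is invented for illustration; the 15% figure is the commonly cited global disability prevalence mentioned in the discussion.

```python
def estimate_subgroup(total_population: int, prevalence: float) -> int:
    """Estimate a subgroup's size from a known prevalence rate."""
    return round(total_population * prevalence)

# Illustrative only: a catchment area of 10,000 people and the
# ~15% global estimate for persons with disabilities.
estimated_persons_with_disabilities = estimate_subgroup(10_000, 0.15)
print(estimated_persons_with_disabilities)  # 1500
```

If the estimate is used in a baseline, the transparency principle below applies: document that the figure is an estimate and which prevalence rate was assumed.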

This issue of informed estimates is closely related to the second general principle: be very transparent about how those figures are established. It's generally good practice in any kind of research to be very clear and transparent about our methodology, sampling strategy, and tools. Transparency is especially important for baseline data because ideally, we want to be able to use it further down the line to evaluate an intervention and assess its impact. It therefore becomes really important that we are able to use the same indicators and methods in our baseline and endline studies so that change can be consistently and reliably measured for comparison.

00:13:54 Inclusive indicators

Naomi Falkenburg: Indicators can be considered inclusive when they reflect the different groups of people that we have identified in our project environment. Formulating relevant indicators is a precondition for mainstreaming inclusion. They let us track changes in results for different groups of people, help make a course correction if our monitoring data shows us that's necessary, and help keep ourselves accountable for any inclusion goals that we might have.

We can have both person-related and non-person-related indicators that are inclusive. For person-related indicators, which reflect intended changes among people, I find it useful to distinguish between differentiated, specific, and neutral indicators. We would use differentiated indicators according to different groups of beneficiaries to monitor whether the changes are the same for all. An example of this would be if we had a result indicator that differentiates between women and men, or is subdivided into age cohorts. Specific indicators can be used if we are targeting a specific group in our intervention, such as measuring change for women only. Neutral indicators are appropriate in cases where having a certain identity is not relevant to the changes that are being observed.

Non-person-related indicators reflect changes that are not measured in relation to people, but rather in the context where they live and how it influences their potential inclusion or exclusion. These could be looking at laws, regulations, policies, products, services, or the physical environment. For example, in a disaster risk reduction project, you could have an indicator that says, "Disaster risk information is accessible, understandable, and usable for all communities in a geographic area."

One of the best pieces of advice for formulating useful indicators is to combine both quantitative and qualitative indicators. This gives you an indication of what the substance or quality of the results is. Imagine you're interested in mainstreaming gender equality into a project that has a capacity-building component; you decide to use the female-to-male ratio as a quantitative indicator for the training. Even if you find that the course has a high rate of women participating, it doesn't really tell you whether women and men benefited equally from the training. To better understand the actual impact, you would also have to ask more qualitative questions. Equal participation, which we can numerically measure, might be necessary for gender equality, but it's not sufficient by itself.

Finally, we can also use indicators to set internal inclusion targets for ourselves and to keep ourselves accountable for our goals. These indicators will have more to do with internal processes and performance rather than results. For example, output indicators can be used to monitor participation and quality. Input-level indicators can give us very important clues about whether a stated commitment to inclusion is actually being followed up on through an investment of human or financial resources.

00:20:47 Information systems and impact

Alex Bertram: A lot of times M&E systems can be driven by donor reporting requirements, but there are plenty of examples that show that an information system can actually produce more inclusive outcomes for beneficiaries. What Naomi was saying made me think of an example from eastern Congo, where we deployed ActivityInfo for the Non-Food Item Cluster. The cluster was doing distributions for displaced people that were driven out from their homes into some really difficult situations. Initially, the kits included household staples like cooking equipment and water jerrycans.

One thing that came out of some of the fact-finding about these interventions was that a major need that was being overlooked was hygiene kits for menstruating women. The cluster got together and we had really broad consensus among the NGOs that female hygiene kits needed to be included. But it wasn't until we started tracking it that change happened. My colleague added this as a checkbox to the data collection form of ActivityInfo: "Did you distribute female hygiene kits?"

By looking back to that data two or three months later, it was clear that it just wasn't happening. That's not because people didn't care, but there are a lot of priorities in a difficult situation. This information system allowed the cluster leadership to go back and say, "Look guys, we all agreed that we were going to do this, but we're not there yet." That prompted renewed effort to get the procurement for the kits in place. By month four or five, you started to see more distributions that included these. By month six, you saw that there was a more complete execution on this decision. In that case, a simple checkbox gave leadership a way to support the commitment to these inclusive decisions.

00:24:13 Risks of not disaggregating data

Naomi Falkenburg: Data disaggregation is a fundamental element of inclusive M&E and inclusive programming more generally. The risks of not disaggregating are that, firstly, some groups might become invisible in our data, which means we have no information about whether or not they're being reached and whether they're benefiting from our intervention. Linking back to the concept of intersectionality, if we don't disaggregate, we might fail to see whether there are any intersecting forms of discrimination and inequality that are causing people to be left behind.

Secondly, disaggregated data helps us determine what is most effective in order to include and benefit different groups of people. If we don't do it, we have no information to act on in order to improve our intervention. Disaggregated data gives us some clues about who is facing barriers and what these barriers might be. Finally, if we don't disaggregate our data, we cannot keep track of our own inclusion goals and be accountable for them.

When we're talking about social inclusion, we typically look at dimensions such as sex and gender, age, disability, location, race, income, and migratory status. The level of disaggregation that we choose really depends on what information is most relevant and useful to our intervention and our indicators, and to the particular context that we're working in. It is often not possible to rely on existing statistics or administrative data to get the level of disaggregation that we need. National-level statistics generally only show aggregate descriptions or trends, which might mask different situations for other populations that are harder to reach.

Data disaggregation typically requires a more intensive data collection, which is costly and time-consuming. Our choice of level of disaggregation will always depend on weighing the benefits against the practical constraints. In these circumstances, it can be very useful to refer to existing guidance and minimum standards. For example, in humanitarian interventions, the Sphere Handbook sets out minimum standards to disaggregate by sex, age, and disability (SADD). Disability is disaggregated into six domains according to the short set of Washington Group questions.

Besides the practical constraints, our level of disaggregation will depend on whether you can collect this data following the principle of "doing no harm". This means that data on personal identity characteristics should be collected only if it is really necessary and appropriate to do so, if it's used for the benefit of the groups it describes, if it can be kept safe, and if it doesn't create or reinforce existing discrimination. We should be wary as M&E practitioners about imposing an identity category on a population and wherever possible, we should let people self-identify.

00:31:44 Handling missing breakdowns

Alex Bertram: Sometimes we have to make choices about what to collect disaggregated, or it may take time to roll out the processes that are required to get the data that we need. One of the elements of inclusiveness in the response in Libya is making sure that the migrant populations are provided with assistance along with settled populations. But sorting out how many people are migrants versus not can take time, effort, and money. Some partners were saying, "I don't have this information. I can't complete this form."

One thing you can do is phase in disaggregation by starting to track what information is available. In ActivityInfo, for example, you can ask, "Do you have data available regarding how many of the beneficiaries are migrants?" You want to separate hard data from estimates or cases where data is not available. You can set a relevance rule so that if that data is available, then you ask for it.

Then you can add a final calculation with a formula. Basically, if that data was available and provided, use that; otherwise, use your own estimates. For example, you might say, "Based on available data, 20% of the population in this area are migrants," so we're going to use that as an estimate. You can update those estimates as you go along. You can then compare the hard data you are getting with the estimates partners are providing to see if they are in line. Another good addition to this data collection form would be to ask about the origin of the estimation—how did you arrive at this estimate? This approach allows you to phase in data collection requirements based on the context.
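A minimal sketch of the fallback calculation Alex describes, written in Python rather than ActivityInfo's own formula language. The field names and the 20% default share are assumptions for the example, not values from the Libya response.

```python
from typing import Optional

def migrant_count(reported: Optional[int], total_beneficiaries: int,
                  default_share: float = 0.20) -> int:
    """Use the partner's hard data when available; otherwise fall back
    to an informed estimate (here, an assumed 20% migrant share)."""
    if reported is not None:
        return reported  # hard data provided by the partner
    return round(total_beneficiaries * default_share)  # informed estimate

# Partner A reported a breakdown; Partner B did not.
print(migrant_count(130, 500))   # hard data -> 130
print(migrant_count(None, 500))  # estimate -> 100
```

Keeping the "reported" and "estimated" paths separate, as here, is what lets you later compare partners' hard data against the estimates to see whether they are in line.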

00:38:21 Participatory monitoring and evaluation

Naomi Falkenburg: Involving stakeholders, especially affected communities, in the monitoring and evaluation of our interventions can make them more effective, accountable, and sustainable. We combine the theoretical and methodological expertise of practitioners with the real-world knowledge and experience of participating communities. This can improve our processes, our tools, our data, our findings, and ultimately, our impact.

In terms of the instrumental case for the value of participation, we can identify numerous benefits throughout the phases of a typical M&E cycle. In the planning phase, participation can lead to more relevant and meaningful indicators. In the design stage, it can lead to more useful and culturally relevant tools because we use local terminology. During data collection, participation can lead to greater trust in the process, leading to more representative and valid data. In analysis, it allows us to cross-check data and validate findings using the community's understanding of the local context. Finally, participation enhances our ability to generate insights that are locally relevant and helps us translate these findings into meaningful action.

Beyond these instrumental benefits, participation can also be an empowering process. If our ultimate aim is to support real-world impact and transformative change, then the participation of affected communities is a precondition for that. Empowerment is not something that we can do to other people; it is, in itself, a participatory process. There is also a strong human rights-based and ethical case for it. Participation is a key human rights principle found in many international instruments.

To encourage participation, we need to adjust how we think about knowledge and who can do M&E. There's a belief that knowledge can only be produced by outside experts because they are objective. Participatory approaches argue that everyone has biases, including practitioners, and that you don't need to be professionally trained to make a valuable contribution to knowledge. Generally, the people closest to an issue know the most about it. We can encourage participation by planning for it before we start, using methods designed for participation, and taking time to cultivate relationships of trust.

00:47:18 Q&A: Gender-sensitive data collection

Naomi Falkenburg: Regarding data collection in a gender-sensitive way that includes non-binary responses and youth inclusion, the principle of "do no harm" is very important. Standard questionnaires often only have two choices for sex, which reflects how many people still think about these questions. It is important where possible to allow for self-identification. We shouldn't come in and impose our own standardized way of looking at the world; we need to listen to how people identify themselves and make room in our data collection tools to capture those nuances.

Collecting gender-sensitive data starts at the level of disaggregation, making gender visible, but we also need to think about how we phrase our indicators. We shouldn't impose certain characteristics or separate activities by gender based on assumptions. For example, in a household survey measuring child labor, we should address the questionnaire to the person most knowledgeable about the issue rather than automatically assuming it is the mother or the head of the household. It involves questioning our own assumptions about gender identities and gender roles.

00:52:16 Q&A: Sampling small groups

Alex Bertram: When a representative sample includes a specific group only very lightly, or not at all, because of its small size, deliberately enlarging that group's share of the sample is called oversampling, and it is usually done through stratified sampling. There is definitely a role for it in data collection, particularly when you're interested in inclusion. Stratified sampling means you divide your population into groups that are sampled independently.

If you know that a group, like persons with disabilities or lactating mothers, makes up a small percentage of the population but has distinct needs, you might choose to oversample that group to get useful information. If you draw a sample of 100 people and the subgroup is only 5% of the population, you end up with just five respondents from that group, which is often not enough to draw conclusions. You might instead choose to sample 50 from that group independently. When you zoom out to the whole population for analysis, you must weight those strata by their true population shares.
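The re-weighting step can be sketched as follows. The strata, population shares, and sample means are invented for illustration; the point is that after oversampling, each stratum's result is weighted by its true share of the population, not its share of the sample.

```python
# Weighted overall estimate from a disproportionately sampled design.
# Each stratum: its true population share and the mean of some outcome
# measured within that stratum's (independently drawn) sample.
strata = {
    "general population":        {"pop_share": 0.95, "sample_mean": 0.40},
    "persons with disabilities": {"pop_share": 0.05, "sample_mean": 0.70},
}

# The small stratum was oversampled (e.g. 50 of 150 interviews), so its
# raw share of the sample overstates its share of the population; we
# therefore weight each stratum's mean by its true population share.
overall_mean = sum(s["pop_share"] * s["sample_mean"] for s in strata.values())
print(round(overall_mean, 3))  # 0.415
```

Without the weighting, a naive average over all interviews would let the oversampled group pull the population-level figure away from its true value.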

Naomi Falkenburg: It depends on the context and your data needs. If you are doing a baseline survey to design an intervention and you want it to be inclusive of certain groups, it is important to zero in on those groups even if they make up a small size of the total population. Again, it is very important to be transparent about the strategy you have used regarding stratification and weighting.

Alex Bertram: Stratification is useful for small groups, but we also must consider non-response bias. Even with a perfectly random sample, you will miss people due to certain attributes, such as homeless people or day laborers who are hard to catch at home. This is non-random bias and requires special attention. Oversampling them independently can help, but you may need to think about alternate sampling strategies based on local knowledge.

00:58:30 Conclusion

Fieke: I'm afraid that we don't have time for more questions. I see there is interest about technology for data collection, analysis, and case tracking systems. ActivityInfo can be used to build a case management system, and you can find a lot of resources on our website. I would like to warmly thank Naomi and Alex for their participation in this webinar. I hope that you enjoyed our discussion. As for the questions that were left unanswered, we hope to have the opportunity to answer them in an upcoming webinar. Thank you very much.
