Best practices for use of evidence for M&E professionals and Program Managers
Host: Eliza Avgeropoulou
About this webinar
This Webinar is the second session on the topic of "Evidence based decision making for M&E professionals and Program Managers working in humanitarian or development operations". It is a one-hour session ideal for Monitoring and Evaluation professionals or Program Managers who are interested in learning more about practicing evidence-based decision making and a follow-up to the previous session "Quality and use of evidence for M&E professionals and Program Managers".
During this session, we discuss:
- The pathway from data collection to data use and learning, and how it is associated with the project implementation life cycle
- The importance of planning for data use: the enabling factor at the project design and launch phase
- How MEAL system design enables evidence-based decision making
- How the learning plan and the establishment of a Feedback, Complaint, Response Mechanism (FCRM) contribute to this end
- Project implementation: best practices that improve evidence-based adaptive management, project decision-making and learning
- The importance of a data analysis plan and light monitoring techniques
- The importance of data interpretation workshops and project review meetings
View the presentation slides of the Webinar.
Is this Webinar for me?
- Are you an M&E practitioner or Program Manager who wishes to apply best practices for evidence-based decision-making to improve the quality and impact of your work?
- Are you responsible for leading M&E in your organization, or is that a role you would like to take on, and would you like your practices to focus on reliable, quality evidence?
Then, watch our Webinar!
About the Trainer
Ms Eliza Avgeropoulou earned her BSc from Athens University of Economics and Business, and her MSc degree in Economic Development and Growth from Lund University and Carlos III University, Madrid. She brings eight years of experience in M&E in international NGOs, including CARE, Innovations for Poverty Action and Catholic Relief Services (CRS). For the past five years, she has led the MEAL system design for various multi-stakeholder projects focusing on education, livelihoods, protection and cash. She believes that evidence-based decision making is at the core of high-quality program implementation. She now joins us as our M&E Implementation Specialist, bringing together her experience on the ground and passion for data-driven decision making to help our customers achieve success with ActivityInfo.
Transcript
00:00:00
Introduction
Thanks for the great introduction. As you mentioned, my passion is evidence-based decision making, and this is why we decided to do a series of two webinars. This is the second of the two: the first focused on the quality and use of evidence, and in this second one we focus on best practices for the use of evidence. What we aspire to do is connect this second webinar with the first one.
The general setting for the webinar is that we believe it is crucial to raise awareness of the quality of evidence. We need to be conscious of the fact that in the humanitarian and development sector, given the circumstances, we sometimes accept lower-quality evidence, and we need to be aware of how this may affect the use of that evidence. Second, we want to raise awareness of the tools and best practices we can develop to enable evidence-based decision making, while acknowledging that over the past decades there has been tremendous improvement in how the humanitarian and development sector uses the data it gathers.
In terms of this session's objectives, we have three main objectives, each with sub-objectives. The first is to illustrate the pathway from data collection to data use and learning, how this is associated with the project life cycle, where the MEAL system fits within it, and how it enables evidence-based decision making.
The second is the importance of planning for data use. I consider solid planning for the actual use of evidence to be crucial. With that in mind, we will go through the importance of the MEAL plan or Performance Management Plan (a synonym), the learning questions, the establishment of a Feedback, Complaint, Response Mechanism, the stakeholder communication plan, and how the evaluation plan contributes to the use of information. We will not go into too much detail, because each of these tools is a huge topic in itself, but we aim to illustrate how they contribute in a way that enables us to use the data we gather.
And the third: during project implementation, there are various tools and practices we can use. For today's webinar, we will focus on the importance of data interpretation meetings and project review meetings, the importance of having a data analysis plan in place, using light monitoring techniques when we don't have time or are under pressure, and how we can use technology to our advantage.
00:04:18
Key messages from the previous webinar
I want to highlight some of the key messages from the previous webinar, which focused on the quality and use of evidence. What is evidence? Evidence is information related to a specific proposition which can be used to support or challenge that proposition. What is critical to remember is that our failure to generate and use evidence makes humanitarian and development work less effective, less ethical, and less accountable, and this is the main reason why we want to generate and subsequently use evidence.
The quality of evidence reflects to what extent the information can be trusted and thus used. Quality refers both to the data and to the methods we use to analyze the data. It is important to remember this, and there was a relevant question on it during the previous webinar. One example of how the quality of data can affect the quality of evidence is the incorrect use of a method. We have two broad method categories, quantitative and qualitative, for quantitative and qualitative data respectively. If, by mistake or because of bias, we use qualitative data in order to generalize results, this will lead to incorrect conclusions. That is an example of how the choice of methods can affect the quality of the data and subsequently the evidence that we use.
It is crucial to remember that the quality of evidence is not determined by whether we choose a qualitative, quantitative, or mixed-methods approach; any of these can generate quality evidence. My intention today is not to talk about the quality of evidence; instead, we want to provide you with different tools for how we can best use evidence during strategic planning and implementation. The point for me is that evidence needs to be available at the time of decision-making. Imagine a situation where you generated evidence months ago and it wasn't used; after some months, the context may have shifted and the project may have changed, so the evidence generated six months ago can no longer be used.
While closing the previous session, we referred to five guiding principles that are crucial to mention again in this webinar; as we go through, I will try to show how we follow these guiding principles in practice. The first is to use methodologies that have been previously tested, whether for quantitative or qualitative data collection. Second, we need to ensure that the investments we make in gathering data correspond to the importance of the questions we answer; as I go through, the logframe and the MEAL plan illustrate this, especially when we weigh 'nice to know' against 'need to know' information.
Third, we need cross-team collaboration, internally and externally. Frequently in this sector we face the question of whose responsibility it is: the programming team, the monitoring and evaluation team, or the Information Management team? As we go through, I will try to make this connection; the reality is somewhere in the middle. Generally speaking, when we first start designing our logic models, the programming team leads, because we need their experience. As we move through the process, the Information Management and MEAL teams become more and more prominent, taking the lead instead of the supporting role. Fourth, we need to think long term, and the tools we will go through guide us toward that. And of course the crucial one: throughout the whole process, we need to take in the feedback of the people we want to serve.
00:09:50
The pathway from data collection to data use
Moving to the second section, I want to make a distinction between the project cycle and where the MEAL system design fits. Many of you are familiar with the implementation cycle. Broadly speaking, in an ideal world with linear steps, we start with an assessment and analysis. Then we move to strategic planning: what type of intervention do we need? Then resource mobilization, and then, hopefully, implementation, until the last phase, the closeout of the project, which is frequently where the evaluation takes place.
Now imagine that we also have learning in this process. How does learning fit in? In reality, it fits across all those phases. We do the needs assessment and analysis because we want to learn something for the next phase, and in that next phase we use the learning from the previous one in order to mobilize resources and move into implementation and monitoring. During implementation, we use the findings to adjust and improve our projects. Finally, in the last phase, we want to generate best practices for the next phase of the project and lessons learned for other programs.
So where does the MEAL system fit in? It starts at strategic planning; in reality, as we will see, it starts at the moment we have the idea for the intervention, before we have even written a proposal. And it is crucial to remember what the MEAL system enables us to do at this stage: to count, confirm, check, change, and communicate, the five Cs.
What do we need to keep in mind? For the purpose of this webinar I usually don't differentiate between the humanitarian and development sectors, but it's worth mentioning that there are emergency projects under the humanitarian sector, where we will often hear of the 'good enough' approach. The 'good enough' approach does not mean that we skip steps; it simply means that we find lighter and easier approaches in order to balance time and resources. What we need to keep in mind is that, as I see it, the MEAL system design is the core, the heart, of this decision-making.
In terms of the pathway, I have tried to illustrate it as best as I could. Step one: we collect the data. Step two: we clean the data and perform extra calculations, preparing for the subsequent analysis, whether the data are quantitative or qualitative. Step three: we perform our analysis. The fourth step, which I think is of utmost importance and is sometimes skipped completely due to lack of time, is interpretation. Here we bring both quantitative and qualitative data into a common room with other people's perspectives, in order to validate the findings, give them meaning, triangulate our perspectives, and reduce the biases we each have as individuals in interpreting what we see in a specific context, with, of course, the ultimate objective of using the data.
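To make these steps concrete for quantitative data, here is a minimal sketch in Python with pandas. The file name, column names, and indicator are hypothetical, not from the webinar; the output is the kind of summary table you would bring into the interpretation room, not a finished finding.

```python
import pandas as pd

# Step 1: collect -- load the raw monitoring records (hypothetical file and columns).
raw = pd.read_csv("household_visits.csv")  # household_id, district, sex, food_score

# Step 2: clean and perform extra calculations.
clean = (
    raw.drop_duplicates(subset="household_id")     # remove double entries
       .dropna(subset=["district", "food_score"])  # drop incomplete records
)
clean["food_secure"] = clean["food_score"] >= 4    # derived variable for the indicator

# Step 3: analyze -- indicator value disaggregated by district and sex.
summary = clean.groupby(["district", "sex"])["food_secure"].agg(
    households="count", share_food_secure="mean"
)
summary["share_food_secure"] = (summary["share_food_secure"] * 100).round(1)

# Step 4, interpretation, happens with people rather than code: this table is
# what you bring to the room to validate, explain, and triangulate.
print(summary)
```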
And why do we use the data? Because we want to solve issues as they arise; we want adaptive management. Adaptive management, a term I will use throughout the webinar, has to do with all the learning that takes place during project implementation: we learn something, we adjust; we learn, we adjust. This is how we improve project implementation. We also use data for reporting to donors, of course, and for documenting lessons learned for other projects and other periods.
00:17:31
Importance of planning for data use
Moving forward to the importance of planning for data use, I want to make the link here, because the MEAL system design includes many different components and the question is where they fit when I implement a project. In strategic planning, before implementation has even started, we do the proposal development, and there we have the logic models for the first time: the Theory of Change, the Results Framework, and the logical framework.
Then we start implementation. On day zero, day one, we start planning: we take whatever we have in the proposal, validate it, and start building on it to create the MEAL system. We start with the first learning questions, think through feedback and response, and do the evaluation planning; these always come together. A couple of weeks or months into implementation, we move to design and launch, where we define the data flows very concretely: how we are going to gather information and what ICT4D technology we can use. We put the learning plan in place, construct calendars, incorporate those activities into detailed implementation plans, and refine the whole communication plan. Finally, we orient and train the staff who are going to use all of this.
Later during implementation, we actually start using it: we collect data, use the data, and hold quarterly meeting events. We encourage this use of data internally in order to learn through the process, generate lessons for the next periods, and revise as we go. Imagine this as a cycle repeated multiple times within implementation.
00:19:48
Logic models and evidence-based decision making
So how do the logic models contribute to evidence-based decision making during strategic planning? As mentioned, the logic models are the Theory of Change, the Results Framework, and the Logical Framework. As a reminder of the terminology: the Theory of Change is the initial hypothesis, and it is always in the proposal even when it is not explicit; sometimes it is described as the pathway of change. The Results Framework goes a bit further, identifying the vertical hierarchy and the causal logic of the model, and there we start thinking about the information we actually need. Then comes the logframe, with the means of verification and the assumptions.
Starting from the Theory of Change, what is important to consider is that it comes into existence when we plan what we are going to do, but in reality it builds on the assessments. So data use literally starts there, very actively: we take the assessment data, whatever we have and whatever we know about the context of specific countries. Depending on the intervention, it is advisable to use a specific, already tested conceptual framework, such as the one UNICEF has for nutrition, for example.
We need to include the relevant stakeholders, and here the programming team has the leading role, not the MEAL or Information Management team, because they know the context better: they are the ones out in the field, and they have expectations for specific projects. It is important to treat the Theory of Change and everything that comes into the design as living documents; we need to keep our eyes open to adjust them. Otherwise, we run the risk of a Theory of Change that no longer reflects the context and says nothing, so why have it in the first place?
Next is the logframe. If the MEAL system design is the heart of decision-making, the indicators are the heart of the MEAL system, and we need to start thinking about them early in the process. Some indicators are mandatory, and we include them early in the life of the project, but we can also include other indicators that may be useful to us as programming teams. Here we need to avoid 'nice to know' information; this is the guiding principle I mentioned about balancing the investment. If we include too many indicators that we are not going to use, the effort associated with data collection can become one we cannot undertake.
Use SMART indicators only. As we move through the MEAL system, I will explain more about how and why; the MEAL plan is crucial for evaluating whether indicators are SMART. We also need to identify opportunities to use data we frequently already have, instead of duplicating it or having to go out for new data collection. Here we can start thinking through the means of verification, the actual measurement methods, and here it is all about realistically balancing what we can and cannot do.
00:27:20
Tools for planning data use
Here I am going to go through some tools that I have found to have very high added value in this process, without undermining any other tool within the MEAL system design. The MEAL plan or Performance Management Plan, which I consider my reality check on whether I can actually gather my indicators. The development of learning questions, which we can start tackling relatively early in the process. The Feedback, Complaint, Response Mechanism (FCRM), which we have for accountability reasons, because we need active communication with the beneficiaries, but which is nevertheless a great source of information. The stakeholder communication plan, because we frequently need to focus on the stakeholders and their needs: what information we share with them, in what format, at what frequency, and who shares it. And, last but not least, evaluation planning.
Going to the MEAL plan or Performance Management Plan: we have our result statements coming from the results framework and our indicators coming from the logframe, and then we aim to answer some extra questions: how are the indicators defined, who is responsible for the monitoring activities and data collection, when are we going to implement those activities, how do we analyze the information, and how do we use it. That is why I said it is the reality check: by filling in the MEAL plan, going through each indicator, how to collect it, the resources I need, when I collect it, how I am going to analyze it, and how I am going to use it, I narrow down the type of information that I actually need and will go out and gather.
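As an illustration of that reality check, here is a minimal sketch, assuming the MEAL plan is kept as structured rows; the fields and the example indicator are hypothetical. Any row that cannot name a source, a responsible person, a frequency, an analysis, and a use is a candidate for the 'nice to know' pile.

```python
from dataclasses import dataclass, fields

@dataclass
class MealPlanRow:
    indicator: str    # from the logframe
    definition: str   # how the indicator is calculated
    data_source: str  # means of verification / measurement method
    responsible: str  # who collects the data
    frequency: str    # when the data is collected
    analysis: str     # how the information is analyzed
    use: str          # how the information will be used

def reality_check(row: MealPlanRow) -> list[str]:
    """Return the names of the fields left blank for this indicator."""
    return [f.name for f in fields(row) if not getattr(row, f.name).strip()]

plan = [
    MealPlanRow(
        indicator="% of households with improved food security",
        definition="households scoring >= 4 on the survey scale",
        data_source="quarterly household survey",
        responsible="MEAL officer",
        frequency="quarterly",
        analysis="",  # not yet decided
        use="",       # not yet decided: 'need to know' or only 'nice to know'?
    ),
]

for row in plan:
    missing = reality_check(row)
    if missing:
        print(f"{row.indicator}: incomplete plan, missing {missing}")
```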
The second point is that we start identifying the learning questions as we go down the path of those discussions, validating the data and reviewing the results framework and logframe to create our MEAL plan. It is better if you find them early in the process. And of course, all of these tools remain live documents; they change, and they are relevant at the moment we actually have the discussions. Maybe they were not relevant three months ago, and this is fine.
The third point, which is crucial, is establishing the FCRM, because we need a communication line with those we serve. But it is also crucial to highlight that feedback and response mechanisms generate a great deal of data in any case. So, by that time we have our plans in place, and during early implementation we design the feedback, complaint, and response mechanism flows; because we think through the type of information we are going to need, consolidating this along with the learning plan contributes to the creation of the data flow.
Next we have the stakeholder communication plan. There is frequently the question: with whom, and how frequently, do I share information? If we think it through entirely at the time we design everything (and of course this can change, because it is a live document), we can identify the stakeholders, the information they need at each point in time, how frequently we share it, how we share it, and who is responsible for sharing it. There we can also identify the different means we have for sharing the information.
Last for this section, I think it is crucial to mention evaluation planning. Frequently we do not go through the evaluation design; sometimes the discussions only start once implementation has begun. However, all projects should include some sort of evaluation activity. Small projects may choose a very light, simple evaluation; other projects, especially large multi-year initiatives, commit to more complex evaluations in addition to the regular monitoring activities. If we do not have an evaluation in place, an after-action review can serve the same purpose. It is better to plan early in the process, because an evaluation is also a cost.
00:37:28
Tools and practices during project implementation
Moving forward, there are some tools during implementation that I have found very important, some of which we think through earlier in the process. First, the data interpretation meeting. You can recall the pathway graph from data collection: this is the interpretation step, and it is crucial because it helps mitigate bias and brings everybody into the same room and onto the same page. It ties in very much with the guiding principle that we need to strategically involve stakeholders in the process; project and community teams are of equal importance in reflecting on the data and on what has or has not worked up to that specific moment. Second, the data analysis plan, which helps ensure quality from data collection to data use. Third, light monitoring techniques, because we do not always need to go big with representative samples; maybe we do not have the capacity or the time. And fourth, the use of technology: you have the technology, so it is better to use it to your advantage.
00:38:41
Data interpretation and project review meetings
Moving to the interpretation meeting: as I said, this is a great opportunity to triangulate perspectives, reveal patterns, and give the data real meaning that is as close to reality as possible. The key questions for participants in those interpretation meetings are: What do the data tell us? What factors explain the findings? What factors explain differences across comparison and disaggregation groups? And did we miss any specific information for the questions we want to answer? That last one is important, as it is part of the learning process: frequently we get everything in the same room and realize, 'Okay, I cannot answer the question; I am missing information,' and so we go back.
Next is the project review meeting, which is crucial for adaptive management because there we create an action plan. It is frequently the case that programming teams have access to dashboards and visualizations but do not have time to use them; everything is moving too fast. By taking a step back and putting them in the same room, we have the opportunity to go through what has worked well and what has not, and proceed with action planning. Similarly for the MEAL system: we review what is working in the MEAL system and proceed with action planning there too. The crucial thing for me here is to tie the discussion to the data we gather. Frequently the conversation goes hand in hand with the data: 'I have these data, and the progress they show does not correspond to what you see from the field.' Is it real? Is it not? Does it mean something? The point is to produce something that improves the project.
00:41:14
Light monitoring and data analysis plans
It is important not to forget light monitoring. We have the formal monitoring that we perform and schedule in the MEAL plan, tracking progress against project activities and indicators. There is a scale here: at the top, rigorous monitoring using structured surveys and representative samples. But you can also have light monitoring that collects feedback in a timely way, perhaps incorporated through qualitative means in field visits or simple observations, anything that allows us to take in information and do our reality checks.
Next is the importance of a data analysis plan. The plan puts everything in the same document. Some key components of a data analysis plan: the sampling methodology; the timing and modality of collection; the methods we are going to use; the quality checks; who is doing what in the collection; how I am going to analyze the information; what disaggregation I have; what software I use; and where I am going to save my data at the end of the day. Frequently this does not require a whole formal document, because the information I mentioned is already in the MEAL plan; sometimes, though, we are talking about completely separate data collection activities, and then the design and development start from scratch. The plan is crucial for guaranteeing data quality and quality of evidence, because it makes us think through the whole process and what can impact our results.
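The quality-check component in particular can be automated. Here is a minimal sketch of the kind of checks a data analysis plan might specify before any analysis runs, reusing the hypothetical columns and thresholds from the earlier example:

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Apply the checks named in the data analysis plan; return the problems found."""
    problems = []
    # Completeness: no missing values in the disaggregation columns.
    for col in ("district", "sex"):
        n = int(df[col].isna().sum())
        if n:
            problems.append(f"{n} records missing '{col}'")
    # Uniqueness: one record per household.
    dup = int(df["household_id"].duplicated().sum())
    if dup:
        problems.append(f"{dup} duplicate household_id values")
    # Range: scores must fall on the survey's 1-6 scale.
    bad = int((~df["food_score"].between(1, 6)).sum())
    if bad:
        problems.append(f"{bad} food_score values outside 1-6")
    return problems

df = pd.read_csv("household_visits.csv")
for problem in run_quality_checks(df):
    print("QUALITY CHECK FAILED:", problem)
```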
00:43:13
Using technology for data use
Next is the use of the technology we have, which is a great opportunity. The later you use data, the more you need to revisit the analysis and discuss the data with the relevant stakeholders, so frequently we use technology to speed this process up. The most common threat, not exclusively but most commonly present within humanitarian and development organizations, is the existence of parallel systems.
Even if we do not have parallel systems in place, the use of technology matters in itself. Suppose I simply use Excel files: even with one single, fully streamlined Excel file, moving away from it to a platform that can be used in the field through an interface by multiple people helps us increase the use of real-time data and makes data analysis and sharing easier.
So what do we mean by parallel systems? All too frequently, the MEAL or Information Management person is put in the position of having to reconcile information from multiple data sources for the same indicator. It is a headache; we have all done it in the past. It is much better to streamline the information, and we can use technology to avoid the existence of parallel systems.
What I want to highlight as an example, because for me it is the most admirable way technology helps, is real-time data, which is valuable indeed. I found a case study that we have in ActivityInfo: the UNHCR dashboard in Lebanon in 2020. Information there was updated in real time because the data was coming in through ActivityInfo. This is impressive, and at the same time it saves time on data analysis. The example is taken from that 2020 UNHCR dashboard and its integration of ActivityInfo with Power BI; it is a very good example of how technology can enable the real-time use of information.
00:47:36
Questions and answers
Participant: How can managers really know they are making good use of data for decision-making? There can be situations where the data is good but managers do not know how to use it, or situations where the data might be bad and managers might not easily identify that it is not optimal for decision-making.
Eliza: There are a couple of ways to mitigate this. First, to address the quality issue, which refers to both the data and the method, the MEAL design process itself helps, by making you think through the proposed data analysis plan for this end. When program managers are not familiar or experienced with data and visualization, it is crucial that the Information Management and MEAL teams support the process, in the sense of sharing information, such as data visualizations, in an easy-to-digest way along with some key findings. If you facilitate them during project quarterly meetings and frequently use data visualizations and key findings, you will find that after a while program managers get used to it. So the first option is educating them, together with frequent project meetings where you bring data in different kinds of formats.
Participant: We hear a lot about learning questions. What exactly is the difference between learning questions, evaluation questions, and the monitoring data that we are collecting?
Eliza: That is a very good question. For monitoring, what you have primarily in the MEAL plan is the information associated with the indicators of the results framework. The crucial part of monitoring is timely information, and timely information refers to the lower levels of the results framework: activities, outputs, and outcomes or intermediate results. Evaluation frequently addresses the upper levels. An evaluation is an event that will not take place every day, every month, or every quarter, even for the data collection; it may take place yearly, or as a final evaluation of the project, and it aims to examine the higher-level objectives and the sustainability of the project.
Learning questions, as I see them, are a kind of in-between step between monitoring, which is continuous, and evaluation, which takes place every six months, every year, or at the end of two years. The learning plan comes in there and can serve three different objectives. You may want to generate sector-specific best practices because you have done pilot interventions or something similar. You may want, and this is closer to the activity level, to see whether the activity you are implementing is advancing and contributing towards the next level. Or you may use it for organizational learning.
Participant: Any difference between data interpretation meetings and dissemination workshops?
Eliza: I would say the difference is that in the interpretation meeting, I am not aiming to disseminate results. I have not arrived at the evidence yet; I am a step before the evidence, actually trying to conclude what the evidence is. A very nice example: imagine you have done multiple focus group discussions, so you have qualitative data. We gather in the same room with the question and the objective that sent us out for data collection, and the participants aim to see exactly what the data we received is telling us, how we explain the differences we note in the data, and whether that corresponds to what we know from the field. In a dissemination workshop, if it is what I imagine, you go there with a finding: 'I have found these patterns between male and female participants based on the discussions.' You go there with a conclusion, with evidence.
Participant: Could you mention what is meant by routine statistics used in the graph?
Eliza: Routine statistics are close to existing secondary data: statistics that many services already publish. An example could be Greece, where UNHCR routinely published monthly updates by main site, with the relevant disaggregation, from their database on the whole population and different findings; that could fall under the category of routine statistics.
Participant: How do we ensure the theory of change can best capture resilience indicators, for example food security indicators, during the monitoring and evaluation phase of the project?
Eliza: The theory of change most frequently examines the pathway from the strategic objective to the goal. The goal is usually not in the results framework as something we measure, because it is long term and outside our responsibility; it may materialize a year or more from now, and it gives the direction. So it depends a lot on which level we place those indicators. Resilience indicators we usually place at the objective level. There you need to examine, to the extent possible, how they contribute, although it is quite challenging to confirm this hypothesis, due to the goal's long-term vision. And frequently, at those upper levels, we go outside for the data collection, so it will not be programming data that we use, as it is at the activity or lower levels.
Participant: Can you please tell me about the learning cycle and the headings of the learning register used to capture learning for the project in the field?
Eliza: The learning, if you want to name it a cycle, could indeed be a cycle. If I understood your question correctly: learning is the background across everything, the vehicle, so to speak. I learn throughout the process; it is not a single, specific point in time at which I learn. I learn every day, every time I use information from any process. If you take the project management cycle, I am learning there with each phase; if you take the MEAL system design process, I am learning there across the different steps.
Participant: How do you document learning from the collected data and evidence? Do you produce a learning document for the project's use, or merely discuss the lessons learned in meetings with project staff and stakeholders?
Eliza: It depends a lot on whether something is being implemented for the first time, with no previous experience within the country office or the organization. In that case you need to communicate it and document it, to transfer it to other country offices, other teams, other countries, and other projects. If you are doing something that has been documented previously, then the learning may simply be for internal use, except if something has arisen for the first time. So all the 'firsts' need documentation, as I say; otherwise you can share internally, and then it depends on the purpose and on how you are going to use the learning.
Participant: How do we collect information and determine a course of action for the strategy development?
Eliza: I would say this depends, first of all, on the basis the organization has for determining its strategic goals. Determining the main objectives is the first step, because otherwise you cannot know where the information fits in. Then one of the ways is to determine indicators that can be used across different countries and different projects, depending on the strategic objectives and the areas the organization is affecting; to put minimum criteria on those indicators at their establishment; to create the process for data collection and data analysis that guarantees they are actually collected; and then to use those indicators through processing, analysis, interpretation, or even reporting at a higher level.
Participant: How and when does adaptive management feedback become a basic input for program quality assurance and sustainability?
Eliza: One of the purposes of the use of data is to adapt and improve projects as I go. This means, first of all, improvement and quality assurance, because things can be improved based on the information I get from different stakeholders and from the participants. As I see it, adaptive management is not quite the vehicle, but it enables quality assurance within projects: I acknowledge feedback as part of my everyday work during project implementation and move forward to improve what I have in place, because I want to better serve the people the program intervention is targeting.
Participant: As a MEAL officer, what can I do if I receive a complaint from one of the beneficiaries?
Eliza: First of all, in terms of the FCRM, there are standard operating procedures in place: how I am going to manage complaints, how they are brought in, and whether a complaint is non-sensitive or a sensitive safeguarding one. The process differs a lot between those two categories, and the SOPs describe the next actions by defining the roles and responsibilities. So within the FCRM, which could merit a webinar of its own, there is a specific process: identifying the type of feedback I receive, determining what type of feedback I can address (and please be realistic; some of it cannot be addressed by me or by my organization), and who is doing what within the system. It flows, basically, within the MEAL system design.
Participant: If I understood well, you said that learning sits somewhere between monitoring and evaluation. In that case, when is it optimal to do a capitalization: while the project is still running, at midpoint, or at the end?
Eliza: Going back to the time component of when I am using the information, I would say it is more efficient if you can do it as you go. You may have learning at some lower level that you want to use at that moment, and that is fine. If during implementation you see learning that is worth capitalizing on, it is better to do it then; you save time. Otherwise, it is like interpretation: if you do not capture the results at the moment of the meetings, you will not recall the interpretation, what happened, or how your results were explained, whether for monitoring or for learning questions, when you come back a year after the project to write the lessons learned.
Participant: I was wondering how we can use this in the private sector, because it is not so common there; I would like to know how these practices apply in a private sector setting.
Eliza: That is a very good question. My first response is that some of the tools, if not all of them (I will not single them all out now), are certainly applicable. For instance, the MEAL plan, with the indicators from the logframe used to track progress against goals, is something that applies to the private sector for sure. We call them indicators; in the private sector, they would say 'Key Performance Indicators', KPIs. As long as the specific team in the private sector has gone through the process of identifying 'What is my objective as a private entity, and how am I going to succeed in this objective?', that gives you the statements. Then it is easier to use the information you have from your work, or gather more through surveys and discussions with relevant stakeholders, and monitor your results.