Part 2 of 2
Thursday May 23, 2024

Survey implementation

  • Host
    Victoria Manya
  • Panelist
    Fay Candiliari
About this session

In this second session, we introduce you to basic considerations for survey implementation, covering both the preparation phase and the implementation phase. We look into sector-specific surveys and cross-cutting surveys.

In summary, we explore:

Preparing to implement M&E surveys:

  • Resourcing and logistics
  • Training enumerators and data collectors

Implementing M&E surveys:

  • Data collection methods and tools design
  • Ensuring data quality and reliability
  • Data quality enforcement (examples with ActivityInfo)
  • Validation (examples with ActivityInfo)

Additional topics:

  • Ethical considerations in survey implementation
  • Monitoring results
  • Analysis of results (examples with ActivityInfo)

View the presentation slides of the Webinar.

Is this Webinar series for me?

  • Are you working in projects or programs in which you are called to develop, monitor and analyze surveys?
  • Are you looking for guidance on good practices for survey design and implementation?
  • Do you wish to ask questions about these topics?

Then, watch our webinar!

Other parts of this series

The Monitoring and Evaluation webinar series “Survey Design and Implementation” consists of three live sessions addressed to M&E professionals working in the social sector. Together, these webinars form a course that will give you a comprehensive understanding of all the steps involved in survey design (such as developing questionnaires and ethical considerations) and in survey implementation (such as designing tools and methods for data collection, and monitoring and analyzing results). The third session will bring in real-life examples from organizations that have been developing surveys using ActivityInfo.

The series is addressed to entry- and intermediate-level professionals. We highly recommend that you join, or watch the recordings of, all webinars in consecutive order so as to benefit from the complete course.

Questions and answers

How to clean data before carrying out quantitative analysis?

Indeed, while some of the discrepancies you will find in your database are legitimate, as they reflect variation in the context, others will likely reflect a measurement or entry error. These can range from mistakes due to human error and poorly designed recording systems to incomplete control over the format and type of data imported from external sources. Data cleaning is therefore essential for accurate quantitative analysis, and it involves several key steps. First, understand the data context and comprehend the variables. Inspect the data to identify missing values and outliers, and verify correct data types. Clean the data by removing or imputing missing values, addressing outliers, converting data types, standardizing units and formats, removing duplicates, and correcting inaccuracies. Transform the data by normalizing numerical values and aggregating as necessary. Verify the cleaned data against original sources and use statistical methods for validation. Document all cleaning steps and update metadata. Tools like R and SQL, along with automation through scripts, can facilitate this process.
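The cleaning steps above can be sketched with pandas; the dataset and column names here are hypothetical, and the exact rules (missing-value codes, valid ranges) would come from your own survey documentation.

```python
import pandas as pd
import numpy as np

# Hypothetical survey export with common entry problems
df = pd.DataFrame({
    "household_id": [101, 102, 102, 103, 104],
    "age": [34, -1, -1, 210, 41],            # -1 is a missing-value code; 210 is an outlier
    "district": ["North", "north ", "north ", "South", "SOUTH"],
})

# Remove exact duplicate records
df = df.drop_duplicates()

# Recode the missing-value sentinel and drop impossible ages
df["age"] = df["age"].replace(-1, np.nan)
df.loc[df["age"] > 120, "age"] = np.nan

# Standardize the format of a categorical field
df["district"] = df["district"].str.strip().str.title()

# Impute remaining missing ages with the median
df["age"] = df["age"].fillna(df["age"].median())

print(df)
```

In practice you would document each of these steps (and the thresholds used) so the cleaning is reproducible, as the answer above recommends.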

Can you go back to the example about the size calculator?

You can use this sample size calculator for free.

How do we use Quasi-experimental methods during evaluation?

Remember that quasi-experimental design is used when it is not logistically feasible or ethical to conduct randomized controlled trials. As its name suggests, a quasi-experimental design is almost a true experiment. However, researchers do not randomly assign elements or participants, as methods requiring randomization would. To use the quasi-experimental method, you may consider quasi-experimental designs without control groups, designs that use control groups but no pretest, designs that use both control groups and pretests, and interrupted time-series designs.
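As a minimal illustration of the control-group-with-pretest variant, the effect estimate is the change in the treated group minus the change in the comparison group (a difference-in-differences). The scores below are hypothetical:

```python
def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Effect estimate for a nonequivalent-control-group design with pretests:
    the treated group's change minus the comparison group's change."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean outcome scores before and after an intervention
effect = difference_in_differences(treat_pre=40, treat_post=55, ctrl_pre=42, ctrl_post=47)
print(effect)  # → 10
```

The comparison group's change (here +5) absorbs trends that would have happened anyway, which is what makes this design "almost" a true experiment.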

What is your advice as I plan to change my career to becoming an ME officer?

While we are not a career advisory organization, I would give you the same tips for beginners in M&E. Follow courses on the basics of M&E and ActivityInfo provides beginners with a lot of resources to get started including a 1 month free trial of the software. Explore these resources and keep practicing and following our free M&E webinars.

How do we ensure validity in the survey that we have developed or adopted from other sources?

This is a very good question, because we often conflate “adopting” with “adapting”, and they mean two different things. In adopting, you copy the survey instrument verbatim in both face and content; in adapting, you modify it to reflect a different context. In general, adopting (using the instrument verbatim) is preferable to adapting for a few reasons. First, when the instrument is adopted, the reliability and validity studies that have been conducted on that instrument can be applied to your study, so you do not have to collect validity evidence. However, when an instrument has been adapted, it has been significantly changed, so the reliability and validity evidence will not apply to your study. Second, adopting an instrument links your evaluation to all other evaluations that have used the same instrument. Finally, adopting the instrument saves you the time and energy of making significant changes.

About the Speaker

Victoria Manya has a diverse background and extensive expertise in data-driven impact, project evaluation, and organizational learning. She holds a Master's degree in local development strategies from Erasmus University in the Netherlands and is currently pursuing a Ph.D. at the African Studies Center at Leiden University. With over ten years of experience, Victoria has collaborated with NGOs, law firms, SaaS companies, tech-enabled startups, higher education institutions, and governments across three continents, specializing in research, policy, strategy, knowledge valorization, evaluation, customer education, and learning for development. Her previous roles as a knowledge valorization manager at the INCLUDE platform and as an Organizational Learning Advisor at Sthrive B.V. involved delivering high-quality M&E reports and trainings, ensuring practical knowledge management, and moderating learning platforms, respectively. Today, as a Customer Education Specialist at ActivityInfo, Victoria leverages her experience and understanding of data to assist customers in successfully deploying ActivityInfo.

Transcript

00:00:04 Introduction

Thank you for the introduction and hello everyone. It's really good to have you all join us today. Just to mention that this is our second of three sessions in a series on surveys and we hope you attend the next, which would be on the 27th of June.

Today we'll explore the essential considerations for preparing and implementing your M&E survey. We'll cover both sector-specific and cross-cutting methodologies. Beginning with the preparatory phase, we'll discuss resourcing and logistics management. We would, in the process, emphasize the need for adequate resources and comprehensive training for enumerators and data collectors.

We'll move to the implementation phase, where we'll delve into data collection techniques and tools. While prioritizing data quality and reliability, we'll showcase practical examples using ActivityInfo to enforce data quality and validation during survey implementation. Additionally, we'll touch upon ethical considerations, results monitoring, and data analysis, offering you a holistic insight into enhancing your survey implementation process.

00:01:35 Recap and session overview

In our previous session with Eliza, for those who were there, we discussed the intricacies of designing surveys for quantitative data collection. Now, in this second installment of our three-part series, we would bridge the gap between theory and practice by focusing on how to apply this design knowledge in the real world. Specifically, we would explore the practical steps involved in preparing and administering quantitative surveys to our target audience, translating our design concepts into actionable strategies for effective data collection.

00:02:19 Preparing to implement M&E surveys

Resources and logistics are crucial components in preparing for monitoring and evaluation surveys. There are quite a number of things that we need to consider. We have listed a checklist for you, and there could be more depending on your context. Now, before survey deployment, preparing to implement your survey involves several crucial steps. One key aspect is resourcing and logistics, which encompasses various components essential for survey execution. This includes forming a competent team with clear roles and responsibilities covering survey design, data collection, analysis, and reporting.

Additionally, it's crucial to establish a budget—a budget that covers personnel, transportation, communication, equipment, and data management needs. Developing a realistic timeline is also very key, outlining phases for your planning, training, fieldwork, data analysis, and reporting. Pilot testing is another important aspect because you need to test your survey tools before deployment. It helps you to identify and to address any issues, ensuring that the tools are fit for purpose.

Then you have to conduct comprehensive training for the survey team. This is essential because you would cover things like your survey methodologies, data collection techniques, ethical considerations, safety protocols, and proper usage of survey tools. Here you can select technology-supported tools or ICT4D. We will showcase later in the session the use of ActivityInfo, which is an information management software for humanitarian and development operations. Finally, devising an appropriate sampling strategy is vital, considering factors such as population demography, geographical spread, and the required sample size to achieve statistically significant results.

00:04:52 Risk management and logistics

Now, a critical aspect of your preparation is risk management, and it involves identifying potential risks and challenges that could impact the survey process. It could be conflict, or the situation or the context that has necessitated the survey in the first place, like a humanitarian crisis. You have to have a risk management plan. It could also be adverse weather conditions or logistical constraints. To mitigate these risks, you have to have a contingency plan developed to address all potential disruptions and to ensure that the survey activities can continue uninterrupted.

Additionally, you have to establish a comprehensive communication plan. It's essential to maintain clear and effective communication channels within the survey team and with your stakeholders. This ensures that everyone involved in the survey is informed and updated on important developments throughout the process.

You also need to make sure that your logistical arrangements are in order. This encompasses support for your survey team in terms of transportation, accommodation, meals during fieldwork, and the provision of necessary equipment and supplies like vehicles, GPS devices, smartphones or tablets, and stationery. It reminds me of an experience I had where the logistics wasn't quite planned and we ended up sleeping in a place that wasn't physically conducive. Logistical arrangements are really crucial. By addressing all of this, you can better prepare for the implementation of your M&E survey.

00:07:09 Sampling strategies

Now, let's unpack sampling, starting with what available sampling types we have. We have random and non-random sampling techniques. Under random sampling techniques, we have systematic random sampling. This involves selecting participants from a population at regular intervals. After randomly choosing a starting point near the beginning of the list, you skip a set number of units and select the next participants, repeating this process.

The next is stratified random sampling. Stratification divides the population into subgroups, called strata, based on specific characteristics like age or gender. A simple random sample is then taken from each subgroup. This technique ensures that all relevant subgroups are represented.

The next is cluster random sampling. This involves dividing the population into clusters—geographical areas, for instance—then you randomly select some clusters to include in the survey. All members of the chosen clusters would be surveyed. This method is useful when a sampling frame is not available. You also have multistage random sampling, which combines several sampling methods. It often starts with cluster sampling, and then you apply simple or stratified random sampling within the selected cluster.
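The random sampling techniques above can be sketched in a few lines of Python; the household list and stratum labels are hypothetical:

```python
import random

# Hypothetical sampling frame of 100 households
population = [f"HH-{i:03d}" for i in range(1, 101)]

def systematic_sample(units, n):
    """Systematic random sampling: random start, then a fixed interval."""
    interval = len(units) // n
    start = random.randrange(interval)
    return units[start::interval][:n]

def stratified_sample(strata, n_per_stratum):
    """Stratified random sampling: a simple random sample within each stratum."""
    return {name: random.sample(units, n_per_stratum)
            for name, units in strata.items()}

random.seed(42)
print(systematic_sample(population, 10))  # every 10th household from a random start
strata = {"urban": population[:60], "rural": population[60:]}
print(stratified_sample(strata, 5))       # 5 households per stratum
```

Cluster and multistage sampling follow the same pattern: randomly select clusters first (for example with `random.sample` over a list of villages), then apply one of these techniques within each selected cluster.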

The other part is the non-random sampling techniques. You have purposive sampling, where you select participants based on specific criteria related to the study. It's useful in emergencies or where specific expertise is needed. You also have snowball sampling, which starts with a few participants who then recommend others. Then you have quota sampling, where you ensure a specific number of different subgroups are included non-randomly. Lastly, you have convenience sampling. Here, you select participants based on availability or volunteerism. It's easy and inexpensive, but it may not be representative of the entire population.

00:10:53 Sampling best practices and criteria

Having considered sampling types, we at all times have to adhere to best practices and procedures recommended by our organizations. However, internationally, there are certain criteria that you should consider. First, most surveys should use random probabilistic samples from the general population rather than sampling quotas. This approach ensures that the survey sample is representative of the entire population, which helps to eliminate bias and allows for results that can be generalized.

You also have to consider full geographical coverage. Effort should be made to include all geographical areas to prevent systematic exclusion based on residence. While some exclusion might be accepted for efficiency reasons, it should not exceed 2% of the total population. Next is controlled non-coverage. You have to ensure that key sociodemographic groups are not disproportionately excluded from the sample, and the coverage rate for each subgroup should be at least 95%.

The next thing you need to consider is stratification. Samples should be stratified geographically and by organizational level to ensure proportional representation of urban and rural populations. You also need to ensure that sampling units are selected proportionally to their size within each stratum to ensure fairness. Finally, sampling decisions at all stages should be randomized to avoid bias, using computer algorithms or manual methods to ensure independence from enumerator judgment.

00:14:05 Determining sample size

You must determine your optimal sample size based on your measurement goals and consider precision. Several factors influence your optimal sample size calculation. One is the rarity of the event. Generally, the more prevalent a phenomenon is within a population, the smaller the sample size needed for a reliable estimate. For example, if physical violence is expected to be more prevalent than sexual violence, you require a larger sample size for a reliable estimate for sexual violence. It's recommended that the sample size be calculated for each indicator and the largest calculated sample size should be used.

The next factor is your response rate. A higher anticipated response rate allows for a smaller gross sample size, whereas a lower response rate necessitates a larger sample size. You also need to consider the precision of estimates, or the desired margin of error. A smaller margin of error requires a larger sample size. Finally, you need to consider your available resources. Financial and resource constraints significantly impact your sample size. However, conducting a survey with an insufficient sample size due to budget limitations is not advisable. The optimal sample size must balance these constraints to ensure reliable and precise estimates.
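The trade-off between margin of error and sample size can be made concrete with the standard Cochran formula for proportions, which is what most sample size calculators implement; the finite population correction is applied when the population size is known.

```python
import math

def cochran_sample_size(p, margin_of_error, population=None, z=1.96):
    """Sample size to estimate a proportion p within a given margin of error
    at ~95% confidence (z = 1.96), with optional finite population correction."""
    n0 = z ** 2 * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n0)

# Worst-case prevalence (p = 0.5), 5% margin of error
print(cochran_sample_size(0.5, 0.05))                   # → 385
# Same precision, but for a known population of 5,000
print(cochran_sample_size(0.5, 0.05, population=5000))  # → 357
# A tighter margin of error requires a larger sample
print(cochran_sample_size(0.5, 0.03))                   # → 1068
```

Running this for each indicator's expected prevalence and taking the largest result follows the recommendation above. Remember to inflate the result by the expected non-response rate to get the gross sample size.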

00:16:56 Sampling deployment strategy

Let's consider an example of a sampling deployment strategy tree by the UNHCR for a Results Monitoring Survey. This tool helps determine the most appropriate sampling methods for different population groups like refugees or IDPs. It mandates checking for an up-to-date and complete registration list. It also checks if the population group exceeds 5,000 individuals. If yes, you have to use a probability sampling method. If no, both probability and non-probability sampling methods may be considered based on resources.

Key takeaways include the importance of registration lists. The list must reflect the current population with minimal changes since the last update (usually less than 10%) and include at least 80% of the total population group. If such a list exists, probability sampling methods are preferred. You also have to look at population size, geographical traceability, and the implementation area size.
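The decision tree described above can be sketched as a small function. This is only an illustration of the logic using the thresholds mentioned in the session (at least 80% coverage, less than 10% change, and the 5,000-person cutoff); the actual UNHCR tool includes further checks, such as geographical traceability.

```python
def recommend_sampling(list_coverage, change_since_update, population_size):
    """Sketch of the deployment-strategy decision logic discussed above.
    list_coverage and change_since_update are fractions (e.g. 0.85, 0.05)."""
    # An up-to-date, complete registration list makes probability sampling preferred
    has_usable_list = list_coverage >= 0.80 and change_since_update < 0.10
    if has_usable_list:
        return "probability sampling"
    # Large population groups also mandate a probability method
    if population_size > 5000:
        return "probability sampling"
    # Otherwise either approach may be considered, depending on resources
    return "probability or non-probability sampling, depending on resources"

print(recommend_sampling(0.90, 0.05, 12000))  # → probability sampling
```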

00:20:23 Training enumerators

Training is very important and needs to involve all survey team members. You need to allocate time for thorough instructions—some best practices suggest at least two to four days depending on complexity. You should cover essential aspects like the survey approach, questionnaire structure, and format. Additionally, training should extend beyond the questionnaire to include topics like humanitarian principles, confidentiality, integrity, informed consent, and roles and responsibilities.

It is important to create opportunities for your enumerators to give feedback, ask questions, and share concerns before deployment. Conducting field simulations is very important. This helps enumerators familiarize themselves with the technological aspects of data collection. You also have to ensure that enumerators understand how the collected data will be used. Providing a comprehensive enumerator manual and using a training checklist can be really helpful.

00:22:37 Survey deployment phase

During the survey deployment phase, attention needs to be put on ethical considerations. Adhering to ethical guidelines is paramount. It encompasses respect for participants' rights, confidentiality, obtaining informed consent, and handling sensitive information with care.

In terms of field coordination, you have to maintain regular communication and provide updates on progress, issues, and risks. Quality control measures, such as supervising data collectors, conducting spot checks to verify data accuracy, and using data monitoring tools to identify errors early on, are imperative to maintain data integrity. When you prioritize ethical conduct and implement robust field coordination practices, you enhance the effectiveness and credibility of your survey.

00:24:12 Post-survey activities

Once we are done in the field, attention shifts to post-survey resourcing and logistics activities, with a focus on data management, analysis, and lessons learned. In terms of data management, securing data storage to protect against loss or tampering is critical. Alongside this is conducting data cleaning to review and correct errors, ensuring you have backups.

Moving to data analysis, conducting primary analysis to identify trends and anomalies is done post-deployment. It is also very important that you triangulate by cross-checking with other data sources to enhance validity. Finally, for lessons learned, conduct debriefing to review successes and challenges, document lessons for future reference, and update your M&E plan to ensure future methodology is improved.

00:26:18 Administering the survey

Surveys can be administered in various formats. First, it could be in-person interviews (face-to-face). It could also be phone interviews. The other is the paper-based questionnaire, though this has downsides like loss of data during transportation. This brings us to online questionnaires, which are becoming increasingly popular through ICT4D tools, allowing participants to provide responses electronically.

For survey design, organizations can utilize online survey platforms. Sample size calculators and statistical sampling software aid in determining the necessary sample size. Following data collection, effective data analysis is essential using statistical analysis software, spreadsheet software, or information management tools like ActivityInfo. These tools help gather reliable information and reduce the stress of manual intervention.

00:29:24 Data quality and reliability

Data quality simply translates to fitness for use. The quality of your statistical information can be defined through six dimensions: relevance, accuracy, timeliness, accessibility, interpretability, and coherence.

Relevance reflects the degree to which the information meets the actual needs of the user. Accuracy refers to the degree to which statistical information correctly describes the phenomenon it is designed to measure. Timeliness concerns the delay between the reference period and availability; there is often a trade-off between timeliness and accuracy. Accessibility refers to how easily information can be obtained and understood. Interpretability reflects the availability of supplementary information and metadata. Coherence indicates the degree to which the information can be successfully integrated with other data within a broader analytical framework.

00:32:05 Enforcing quality in sampling

In ActivityInfo, separate calculators are needed for different types of surveys.

00:37:22 Enforcing quality via field types and properties

You can enforce quality using field types and field properties. In ActivityInfo, restrictions such as character limits and validation rules guide users in entering accurate data. Automated calculations reduce manual errors. Location fields benefit from geographical constraints. Required fields mandate the completion of essential information. Input masks guide users in entering data in specific formats.

Relevance rules bring skip logic to life, ensuring respondents only see relevant questions. Key fields prevent duplication. Various field types like text, numeric, dates, and drop-down lists come with specific properties like range limits and predefined options. Through these mechanisms, you can facilitate the maintenance of high quality and data integrity.
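ActivityInfo configures these rules through its form designer. As a tool-agnostic illustration of the same kinds of checks (required fields, range limits, input masks, and skip logic), here is a sketch in Python; the field names and the ID format are hypothetical.

```python
import re

def validate_record(record):
    """Return a list of quality problems, mimicking form-level rules:
    required fields, range limits, input masks, and relevance (skip) logic."""
    errors = []
    # Required field
    if not record.get("household_id"):
        errors.append("household_id is required")
    # Range limit on a numeric field
    if not 0 <= record.get("household_size", -1) <= 30:
        errors.append("household_size out of range 0-30")
    # Input mask: hypothetical ID format like 'AB-1234'
    if record.get("household_id") and not re.fullmatch(r"[A-Z]{2}-\d{4}", record["household_id"]):
        errors.append("household_id does not match the AB-1234 format")
    # Relevance rule (skip logic): follow-up only applies if aid was received
    if record.get("received_aid") is False and "aid_type" in record:
        errors.append("aid_type should be skipped when no aid was received")
    return errors

print(validate_record({"household_id": "XY-0042", "household_size": 5, "received_aid": True}))  # → []
```

Enforcing these rules at the point of entry, rather than during cleaning, is what keeps errors out of the database in the first place.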

00:40:07 Ongoing quality control

You do not want to wait until the end of your timeline to ensure quality control. Real-time data monitoring is crucial. Utilizing digital tools that support mobile data collection allows you to set up dashboards to monitor incoming data continuously. This enables you to identify patterns, inconsistencies, or gaps as data is collected. Regular checking, such as daily briefings with enumerators, is very important.

Data validation techniques involve implementing consistency checks to flag inconsistent records. You can use GPS tracking to verify enumerators' locations. You have to conduct back-check surveys to validate original datasets, while considering respondent fatigue. The use of redundant control questions is also very important.
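A back-check compares a re-collected subset of responses against the originals; mismatches flag possible enumerator or entry errors. A minimal sketch, with hypothetical record fields:

```python
def backcheck_mismatches(original, backcheck, fields):
    """Compare original records to back-check records by id and return
    (id, field) pairs that disagree, plus ids missing from the original."""
    orig_by_id = {r["id"]: r for r in original}
    issues = []
    for r in backcheck:
        base = orig_by_id.get(r["id"])
        if base is None:
            issues.append((r["id"], "missing in original dataset"))
            continue
        for f in fields:
            if base.get(f) != r.get(f):
                issues.append((r["id"], f))
    return issues

original = [{"id": 1, "household_size": 5}, {"id": 2, "household_size": 3}]
backcheck = [{"id": 1, "household_size": 5}, {"id": 2, "household_size": 4}]
print(backcheck_mismatches(original, backcheck, ["household_size"]))  # → [(2, 'household_size')]
```

To limit respondent fatigue, back-checks typically cover only a small random subset of records and a handful of stable, factual fields.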

00:42:25 Analysis and visualization

Survey results analysis using various techniques helps gain insight. Indicator analysis involves examining indicators across different demographics. Cross-tabulation and subgroup analysis identify important subgroups. Statistical analysis is important for descriptive statistics and trend analysis. Visualization and reporting play a crucial role in presenting your findings clearly. Tools like ActivityInfo facilitate comprehensive analysis through dashboards and reports.

00:43:57 ActivityInfo demonstration

Victoria demonstrates the ActivityInfo interface, showing the sample size calculator where changing the population size updates the required sample size. She then shows a Post-Distribution Monitoring survey using calculated fields, date constraints, and validation fields. Finally, she displays a report with indicator tracking, disaggregation by gender, and various visualization options.

00:46:08 Ethical considerations and archiving

Ethical considerations like informed consent, 'do no harm', confidentiality, and privacy must be discussed during training. You must maximize benefits, ensuring the use of findings is directed at the affected population.

One key continuous activity is monitoring and evaluating your operation. Ask how effective current procedures are in ensuring timely data collection. Systematic archiving is also vital. Establish clear archiving rules and criteria for the repository structure, and ensure these procedures are documented and communicated to all teams.

00:48:10 Questions and answers

Could you share some tips on how to best introduce the key points you have presented about preparing for survey implementation? You can use PowerPoint presentations, but also explore the use of manuals. Manuals are very important because people have different learning styles—visual, auditory, etc. Sometimes people even explore the use of short videos.

Can we apply different sampling strategies for the same survey or one unified sampling strategy? You can apply different sampling strategies. Depending on your context, you can use multi-stage sampling. For instance, you can do cluster sampling first and then decide to do one type of random sampling within that cluster sample.

Can you give more information on questionnaire development, especially to measure knowledge gained on a specific subject? It's very important to know what your survey is aiming at achieving. When you know the goal, you can measure learning. Backcasting can be useful—start from where you want the learner to be and work backward to ask milestone-related questions that elicit the goal of the training.

When should randomized controlled trials (RCTs) be used during sampling? RCTs should be used when you need to determine the causal effects of an intervention by minimizing bias. It is common in clinical trials, educational programs, and social science research to evaluate the effectiveness of a new treatment or intervention compared to a control.

How does ActivityInfo differ from other survey tools? ActivityInfo considers the entire data lifecycle. It is not just a survey tool; it is a data management and information management tool. It includes restrictions to enforce quality control, calculated fields, and a database where you can explore various features to deepen your analysis.

Does this principle apply to qualitative survey data collection? Yes, it does. Some principles can be adapted, especially since quantitative surveys often include text. It is important to adapt these principles based on your context.

Does testing come before training, or training first and then testing? Mostly, testing should come before training because you don't want to train people on something you will correct later. However, the training itself is also a testing ground where you can receive feedback from enumerators.

Can you explain the process from data set analysis to visualization? Collecting the data set involves going to the field or having data come into a database. To analyze, you use features like calculated fields, measures, tables, and functions. Visualization speaks to appeal and interpretation—how you present the data.

Can we integrate KoboCollect data sets into ActivityInfo? Yes, you can.

What are the challenges in data collection of surveys? Challenges include people refusing to respond, security issues or crises springing up, or enumerators going AWOL. You need to be prepared with a risk plan and a contingency plan.

What software is most suitable for analysis for a customer satisfaction survey? I would recommend ActivityInfo, but you should explore it to see if it fits your specific use case.

How would you best advise the selection of schools to be sampled (e.g., 32 schools in 3 districts)? Define the objective and scope, stratify the population (by district and school size), determine your sample size using a calculator, and then use random selection within the strata.

How do you deal with respondent fatigue? Keep the survey short and prioritize questions. Ensure a logical flow. Be engaging in your introduction. Mix question types (multiple choice, Likert scales). Use breaks and progress bars to show respondents how far along they are. Pre-testing is also crucial to adjust based on feedback.

If you realize halfway through that you chose an unfavorable methodology, should you stop? If the methodology is flawed, it is generally recommended that you stop. Continuing can result in poor data quality. Although stopping to retrain and adjust is time-consuming, it is costlier to end up with unusable data. Credibility is essential.
