1. Introduction to Humanitarian MEL
- What is the purpose of this chapter?
This chapter of the toolkit provides practical guidance for country office (CO) program and MEL teams on monitoring a humanitarian crisis and response, and on designing an effective and efficient MEL system for the emergency response.
It focuses on the fundamentals of monitoring and evaluation of humanitarian action. For more detailed guidance on related MEAL topics, see the relevant chapters of the CET.
- Intended audiences?
CARE Country Office MEL teams, both in COs with humanitarian experience and in COs with new or no emergency programming. It is also meant to guide CO program teams, including the Assistant Country Director and Program Quality Coordinators.
- The importance of MEAL
Monitoring and evaluation serve a range of purposes in humanitarian programming. Among the most critical are ensuring relevance and inclusivity and enhancing accountability to affected populations, in order to achieve the best possible outcomes for crisis-affected populations from CARE’s humanitarian programming.
Monitoring and evaluation help us understand how the assistance and support that CARE provides affects disaster-affected communities. Any monitoring and evaluation in CARE is therefore expected to give particular attention to gender, age and other elements of intersectionality, and to the related differences in vulnerabilities, capacities and, consequently, needs.
MEAL is a critical part of CARE’s Humanitarian Accountability Framework (HAF). It allows us to compare the results of our humanitarian actions with our strategic intent (e.g. CARE Vision 2030, the Humanitarian Impact Strategy), with technical standards (such as the Core Humanitarian Standard, Sphere and its companion standards) and with expected outcomes and benchmarks for the response (from the response strategy, proposal frameworks, etc.).
Equally important is for a humanitarian MEAL system to support alignment with the three key pillars of Accountability to Affected People (AAP): timely and adequate information sharing (transparency), engagement of crisis-affected people in decision making (participation), and their involvement in reviewing the performance of the response (feedback and complaints).
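One concrete way to operationalise the feedback-and-complaints pillar is a simple case register. The minimal sketch below (in Python) shows the kind of record such a register might hold; the field names, channels and categories are illustrative assumptions, not a CARE- or CHS-mandated schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative sketch only: field names, channels and categories are
# assumptions for this example, not a CARE-mandated schema.
@dataclass
class FeedbackRecord:
    received_on: date        # when the feedback or complaint was received
    channel: str             # e.g. "hotline", "help desk", "community meeting"
    category: str            # e.g. "request for information", "sensitive complaint"
    summary: str             # brief, non-identifying description
    status: str = "open"     # "open", "referred" or "closed"
    closed_on: Optional[date] = None

def days_open(record: FeedbackRecord, today: date) -> int:
    """Days a case has been open: a simple timeliness indicator."""
    end = record.closed_on or today
    return (end - record.received_on).days

# Example: a complaint received a week ago and still open
case = FeedbackRecord(date(2024, 1, 1), "hotline",
                      "request for information", "Asked about ration schedule")
print(days_open(case, date(2024, 1, 8)))  # -> 7
```

Tracking how long cases stay open, and whether a response was given, is one simple way to evidence the transparency and feedback commitments in practice.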
Efficient decision making and evidence-based learning depend heavily on the quality and timeliness of monitoring and evaluation. A Monitoring, Evaluation, Accountability and Learning (MEAL) system for a humanitarian response should be adaptable to the scope, scale and pace of the crisis while at the same time giving the response team a clear indication of the objectives and activities of the response.
Many team members will have responsibility for monitoring and evaluation activities in a humanitarian response, in particular project managers and field officers who collect data on response activities. It is therefore important that a member of the response team is assigned to coordinate monitoring and evaluation activities; this is usually a full-time position. It is critical that the CO’s M&E unit is closely involved with the response team from the very outset of the humanitarian crisis. Ideally, the response M&E team should be led from the onset (including during needs assessments) by the CO’s M&E coordinator. Where this capacity does not exist, the CO should appoint or recruit an M&E Coordinator specifically for the humanitarian operation as quickly as possible. During a fast-onset or large-scale emergency, the CARE emergency response roster can identify and mobilize additional capacity, especially during the surge and scale-up phases. In certain cases the M&E Coordinator function can be combined with the function of leading Accountability and Learning initiatives under the MEAL (Monitoring, Evaluation, Accountability and Learning) approach (see CARE’s commitment to Quality and Accountability in Humanitarian Programming).
| Position | Key responsibilities |
| --- | --- |
| CO Monitoring and Evaluation Coordinator | |
| RRT and RED roster M&E Specialists | |
| Emergency Team Leader/Senior Management Team (SMT) | |
| Lead Member Quality and Accountability Focal Point | |
| Regional Humanitarian Coordinator | |
| Crisis Coordination Group | |
| CI Monitoring, Evaluation & Accountability Coordinator | |
The key responsibilities of the Monitoring and Evaluation Coordinator in relation to the emergency response programme are to:
- help establish appropriate indicators at the outset of the emergency response (drawing on benchmarks, Sphere and other available tools)
- establish and coordinate monitoring systems, including data collection, analysis and review (a minimal sketch of such a review follows this list)
- work closely with the CO Information Manager to prepare specific data collection methods and tools
- coordinate monitoring activities and inputs required of other team members
- anticipate, plan and support reporting requirements
- ensure information gathered through monitoring activities is shared quickly and in an appropriate format with senior managers so that any problems arising can be addressed
- organise evaluation activities in line with CARE’s learning policy (refer to Annex 9.1 Policy on Learning for Humanitarian Action).
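For teams that track activity data in spreadsheets or simple scripts, the minimal sketch below (in Python) illustrates the kind of target-versus-actual review described above. The indicator names, figures and the 75% alert cut-off are illustrative assumptions, not CARE benchmarks.

```python
# Minimal sketch, assuming monitoring data arrives as simple activity tallies.
# Indicator names, figures and the alert threshold are illustrative only.

targets = {
    "water_containers_distributed": 5000,
    "participants_trained": 300,
}

actuals = {
    "water_containers_distributed": 3100,
    "participants_trained": 290,
}

ALERT_THRESHOLD = 0.75  # flag indicators below 75% of target (assumed cut-off)

for indicator, target in targets.items():
    achieved = actuals.get(indicator, 0)
    progress = achieved / target
    flag = "  <-- flag for senior managers" if progress < ALERT_THRESHOLD else ""
    print(f"{indicator}: {achieved}/{target} ({progress:.0%}){flag}")
```

In practice the same comparison can be run per site or per reporting period, so that shortfalls reach senior managers while there is still time to adjust the response.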
Monitoring and evaluation advisors are available through the RED roster for deployment to emergencies, to assist with setting up monitoring and evaluation systems and with related training.
Annex 9.2 Job Description for Monitoring and Evaluation Coordinator
| Term | What is measured | Definition |
| --- | --- | --- |
| Baseline | Indicators at the start of the project | Information about the situation a project is trying to affect, showing what it is like before the intervention(s) |
| Benchmark | Standard of achievement | A standard of achievement against which a project’s results can be compared |
| Bias | | A tendency to make errors in one direction. For example, is there potential for error because not all key stakeholder groups have been consulted? Are there incentives that reward incorrect information? Does reporting a death in the family mean that food ration levels might be reduced? |
| Outcomes | Effectiveness | Use of outputs and sustained benefits, e.g. how many litres of clean water are available in each household, how many participants show evidence of using their training |
| Outputs | Effort | Implementation of activities, e.g. the number of water containers distributed, the number of participants trained |
| Impact | Change (positive or negative) | Difference from the original problem situation. At its simplest, impact measurement means asking the people affected, ‘What difference are we making?’ Examples of impact may be a significant reduction in the incidence of water-borne disease, or evidence that what trainees have learned is having a tangible effect on project/programme delivery |
| Milestone | Performance at a critical point | A well-defined and significant step towards achieving a target, output, outcome or impact, which allows people to track progress |
| Qualitative information | Performance indicators | Describes characteristics in terms of quality (as opposed to quantity) and often includes people’s opinions, views and other subjective assessments. It draws on qualitative assessment tools such as focus groups, key informant interviews, stakeholder mapping, ranking, analysis of secondary data and observation. These tools require skill to obtain a credible and relatively unbiased assessment; the key question is whether they provide reliable and valid data of sufficient quantity and quality |
| Quantitative information | Performance indicators | Information about the number of things someone is doing, providing or achieving, the extent of those things, or the number of times they happen |
| Triangulation | Consistency between different sets of data | Use of three or more sources or types of information to verify an assessment |
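The relationship between baseline, milestone and target in the table above can be made concrete with a small worked example. The sketch below (in Python; all values invented for illustration) uses the common MEL convention that progress is the share of the intended change achieved so far.

```python
# Minimal sketch of the baseline -> milestone -> target logic in the glossary.
# All values are invented for illustration.

def progress_toward_target(baseline: float, current: float, target: float) -> float:
    """Share of the intended change achieved so far.

    Uses the common MEL convention:
        progress = (current - baseline) / (target - baseline)
    """
    return (current - baseline) / (target - baseline)

# Example: litres of clean water available per household per day (outcome indicator)
baseline = 5.0    # situation before the intervention
milestone = 10.0  # agreed mid-response step
target = 15.0     # assumed end-of-response target for this example
current = 9.0     # latest monitoring round

print(f"Progress to target: {progress_toward_target(baseline, current, target):.0%}")
print(f"Milestone reached:  {current >= milestone}")
```

Here the indicator has moved from 5 to 9 litres against a target of 15, i.e. 40% of the intended change, and the 10-litre milestone has not yet been reached.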