3.1. Basics of MEL in humanitarian contexts

Monitoring and evaluation serve a range of purposes in humanitarian programming. Among the most critical for ultimately achieving the best outcomes for crisis-affected populations are:

  • ensuring relevance and inclusivity of humanitarian interventions: who needs what – how urgently – where, expressed in realistic targets and timelines;
  • measuring outputs and outcomes of humanitarian interventions against a meaningful and manageable set of quantitative and qualitative indicators and/or questions;
  • understanding the wider effects of assistance and support provided by CARE and others on disaster-affected communities and people.

Therefore, any monitoring and evaluation in CARE is expected to give particular attention to aspects of gender, age and other elements of intersectionality, and to the related differences in vulnerabilities, capacities and thus needs.

Efficient decision making and evidence-based learning depend heavily on the quality and timeliness of monitoring and evaluation. A Monitoring, Evaluation and Learning (MEL) system for humanitarian responses should be adaptable to the scope, scale and pace of the crisis, while at the same time providing the response team with a clear indication of the objectives and activities of the response.


Roles and Responsibilities of a CARE Response ME(A)L Coordinator

• Ensure an appropriate ME(A)L system is in place and is functioning satisfactorily:
- appropriate indicators at the outset of the emergency response (drawing on CARE global indicators, Sphere standards, among others)
- tools and plans for data collection, analysis and review
- reporting formats and calendar
• Periodically review and revise the system so that it is adapted appropriately to changing operating contexts.
• Ensure relevant and timely information regarding response progress, outputs and results is provided in user-friendly formats to key stakeholders, including CO response / senior management team for decision making, beneficiary communities, and donors.
• Provide training and mentoring for CO staff, especially the ME(A)L team.
• Act as a focal point to organise and manage regular response reviews (RTR, RAR, AAR), external evaluations and multi-agency reviews/evaluations.

  • Overall responsibilities

Many team members will have responsibility for monitoring and evaluation activities in a humanitarian response, whether full or part time; some may already hold positions with MEL functions, while others will be newly recruited. To establish an efficient MEL unit and a coherent M&E system, it is therefore important that a member of the response team is assigned to the function of MEL Coordinator or Manager, considered a critical position in the management structure of a CARE response. This is usually a full-time position, responsible for bringing all staff members involved in M&E together into an efficient unit.

It is critical that the CO’s MEAL unit is closely involved with the response team from the very outset of the humanitarian crisis. Ideally, therefore, the response MEL team should be led from the outset (including during needs assessments) by an experienced MEL Coordinator. Where this capacity does not exist, it is important for the CO to appoint or recruit a MEL Coordinator specifically for the humanitarian operation as quickly as possible. The CARE emergency response roster can identify and mobilise additional capacity, especially during surge and fast scale-up or adaptation phases.

With adequate expertise and within an appropriate set-up, the MEL Coordinator function can be combined with the function of leading accountability initiatives, in order to lead a full MEAL (Monitoring, Evaluation, Accountability and Learning) approach.

For the roles and responsibilities of a CO response ME(A)L Coordinator or Manager, please see the box above.

A generic MEAL Manager job description (JD) and a sample M&E Manager JD can be found in the RESOURCES section of this chapter.

Other ME(A)L responsibilities of key positions involved in CARE humanitarian responses include:

Emergency Team Leader / Senior Management Team (SMT), Lead Member, Quality and Accountability Focal Point
  • Monitor implementation of MEAL systems for the emergency response and organise technical assistance and support as necessary.

Regional Humanitarian Coordinator (RHC)
  • Promote and guide quality in the emergency programme, and ensure critical gaps are identified and addressed.

Crisis Coordination Group (CEG)
  • Determine the crisis/emergency type and the response model, and
  • agree on the relevant MEAL benchmarks throughout the response management cycle, such as the need, timing, scale and scope for CARE monitoring visit(s), response reviews and external evaluations.

CI Humanitarian Monitoring, Evaluation & Accountability Coordinator (MEAC)
Provide technical support to COs related to:
  • compliance with CARE’s humanitarian standards (CHS, HAF);
  • enhancing the efficiency of MEAL systems for humanitarian programming;
  • the organisation and management of response reviews and evaluations;
  • ‘learning in’, whereby lessons learned from previous humanitarian programmes, especially regarding MEAL, are applied in CARE’s emergency responses; and
  • ‘learning out’, whereby lessons learned from the review of ongoing emergencies are captured and shared beyond the CO.


  • Specific responsibilities: see stages (TBD) and specific sectors.


Term | What is measured | Definition

Baseline | Initial conditions / needs before the intervention | Information about the situation a project is trying to affect, showing what it is like before the intervention(s); mostly related to needs and outcome- or impact-level indicators.

Benchmark | Standard of achievement | A standard of achievement that a project is expected to achieve / has achieved, which it can compare with other achievements.

Milestone | Performance at a critical point | A well-defined and significant step towards achieving a target, output, outcome or impact, which allows people to track progress.

Bias | A tendency to make errors in one direction | For example: is there potential for error because not all key stakeholder groups have been consulted? Are there incentives that reward incorrect information? Does reporting a death in the family mean that food ration levels might be reduced?

Monitoring | Activities, inputs, outputs | Monitoring is usually continuous – or at least periodic and frequent – and internal, and is as concerned with activities and their immediate results as it is with systems and processes.

Evaluation | Outputs – outcomes – impact | Evaluation tends to be an episodic – and often external – assessment of performance, and can look at the whole of the results chain, from inputs to impact.

Activities | Implementation | The scale, scope, tools and timing of response delivery: e.g. number of distributions, training sessions.

Inputs | Resources | The financial, human and material resources used to deliver the response: e.g. water containers, trainers, physical facilities.

Outputs | Products and services | Combined results of inputs and activities: e.g. the number of water containers distributed to targeted households, the number of participants trained in a specific set of topics/skills, etc.

Outcomes (can be positive or negative, direct or indirect, intended or unintended) | Change in capacities, behaviour, practice, knowledge, attitudes, policies | Use of outputs and sustained benefits (or drawbacks): e.g. how many litres of clean water are available/used (and for which purpose) in each household; how many participants show evidence of training content (topics, skills) being understood/applied.

Impact (can be positive or negative, direct or indirect, intended or unintended) | Change in state, conditions, wellbeing | Difference from the original problem situation. At its simplest, impact measurement means asking the people affected, ‘What difference are we making?’: e.g. reduction in the incidence of water-borne disease; evidence that what trainees have learned is having a tangible impact on their livelihoods, etc.

Qualitative information | Performance indicators | Describes characteristics according to quality (as opposed to quantity) and often includes people’s opinions, views and other subjective assessments. Uses qualitative assessment tools such as focus groups, key informant interviews, stakeholder mapping, ranking, analysis of secondary data and observation. Qualitative data collection tools require skill to obtain a credible and relatively unbiased assessment; the key question is: do they provide reliable and valid data of sufficient accuracy and confidence?

Quantitative information | Performance indicators | Information about the number of things someone is doing, providing or achieving, or the length of those things, or the number of times they happen.

Triangulation | Consistency between different sets of data | Use of three or more sources or types of information to verify an assessment.

Checklist

  • Assess CO capacity for monitoring and accountability.
  • Establish monitoring and evaluation systems from the very outset of the emergency response.
  • Use CARE’s Humanitarian Accountability Framework to inform the development of monitoring and evaluation systems.
  • Establish appropriate objectives and indicators at the individual project level as well as the overall emergency programme level during the design phase of the response.
  • Ensure that the monitoring and accountability system in place is capable of delivering real-time information on what is happening in emergency response conditions.
  • Plan for data collection and analysis. Double-check that the information to be gathered is going to give a realistic picture of what is actually happening.
  • Plan for reporting, feedback and use of results.
  • Ensure that the Monitoring and Evaluation Coordinator coordinates data collection and analysis for monitoring purposes across the programme.
  • Employ a range of appropriate and participatory data collection methods.
  • Confirm that all monitoring data collected is analysed and presented in a timely and user-friendly way.
  • Ensure that appropriate managers review and act on monitoring data.
  • Include resources for monitoring and evaluation activities in project budgets.
  • Ensure monitoring includes feedback to communities.