1. Role and responsibilities of monitoring and evaluation in humanitarian programming

Monitoring and evaluation serve a range of purposes in humanitarian programming, but the critical one is this: better outcomes for crisis-affected populations from CARE’s humanitarian programming, achieved through accountability and learning.

Monitoring and evaluation help us understand how the assistance and support that CARE provides to disaster-affected communities affects them. They are therefore a critical part of CARE’s Humanitarian Accountability Framework (HAF). They allow us to compare the results of our humanitarian actions with our strategic intent (e.g. the CARE Program Strategy, the Humanitarian & Emergency Strategy and the Core Humanitarian Standard), with technical standards (such as Sphere and its companion standards), and with the expected outcomes and benchmarks for the response (from the response strategy, proposal frameworks, etc.). Efficient decision-making and evidence-based learning depend heavily on the quality and timeliness of monitoring and evaluation.

Many team members have responsibility for monitoring and evaluation activities in a humanitarian response, so it is important that a member of the response team is assigned to coordinate them. This is usually a full-time position. It is critical that the CO’s M&E unit is closely involved with the response team from the very outset of the humanitarian crisis; ideally, the response M&E team should be led from the onset (including during needs assessments) by the CO’s M&E Coordinator. Where this capacity does not exist, the CO should appoint or recruit an M&E Coordinator specifically for the humanitarian operation as quickly as possible. During a rapid-onset or large-scale emergency, the CARE emergency response roster can identify and mobilise additional capacity, especially during the surge and scale-up phases. In certain cases the M&E Coordinator function can be combined with leading accountability and learning initiatives under the MEAL (Monitoring, Evaluation, Accountability and Learning) approach (see CARE’s commitment to Quality and Accountability in Humanitarian Programming).

Positions and key responsibilities
CO Monitoring and Evaluation Coordinator
  • Ensure an appropriate monitoring and evaluation (M&E) system is in place and is functioning satisfactorily. Periodically review and revise the system so that it is adapted appropriately to changing operating contexts.
  • Ensure relevant and timely M&E information is provided in user-friendly formats to key stakeholders, including beneficiary communities, CARE senior management and donors.
  • Provide training and mentoring for CO staff.
  • Act as a focal point to organise and manage monitoring reviews, evaluations and/or After Action Reviews (AARs).
RRT and RED roster M&E Specialists
  • Provide temporary support to the CO to establish baselines and set up M&E systems suited to the operating context.
  • Provide training and mentoring for CO staff.
  • May participate in a monitoring review, evaluation and/or facilitate an AAR.
Emergency Team Leader/Senior Management Team (SMT)
  • Ensure application of CARE’s Humanitarian Accountability Framework.
  • Ensure adequate resources are allocated in project budgets to cover M&E-related activities, including monitoring reviews, external evaluations and AARs.
Lead Member Quality and Accountability Focal Point
  • Monitor implementation of M&E systems for the emergency response and provide technical advice where necessary.
Regional Emergency Coordinator
  • Promote and guide quality in the emergency programme, and ensure critical gaps are identified and addressed.
Crisis Coordination Group
  • Determine whether the incident is a Type 2, 3 or 4 emergency, in which case the CO is required to fund and organise an AAR.
  • Agree on the need for an external evaluation and/or CARE monitoring visit(s).
CI Coordinator for Quality and Accountability
  • Provide technical support to COs to help them comply with CARE’s humanitarian benchmarks.
  • Support ‘learning in’ (where lessons learned are applied in CARE’s emergency responses) and ‘learning out’ (where lessons learned from new emergencies are captured and shared beyond the CO).

In practice, much of the day-to-day monitoring data in an emergency response is collected by project managers and field officers. Overall responsibility for coordinating these activities rests with the Monitoring and Evaluation Coordinator described above.

The key responsibilities of the Monitoring and Evaluation Coordinator in relation to the emergency response programme are to:

  • help establish appropriate indicators at the outset of the emergency response (drawing on benchmarks, Sphere and other available tools)
  • establish and coordinate monitoring systems, including data collection, analysis and review
  • work closely with the CO Information Manager to prepare specific data collection methods and tools
  • coordinate monitoring activities and inputs required of other team members
  • anticipate, plan for and support reporting requirements
  • ensure information gathered through monitoring activities is shared quickly, and in an appropriate format, with senior managers so that any problems arising can be addressed
  • organise evaluation activities in line with CARE’s learning policy (refer to Annex 9.1 Policy on Learning for Humanitarian Action).

Monitoring and evaluation advisors are available through the RED roster for deployment to emergencies, to assist with setting up monitoring and evaluation systems and training staff in their use.

Annex 9.2        Job Description for Monitoring and Evaluation Coordinator

Key monitoring and evaluation terms

Baseline
  What is measured: indicators at the start of the project
  Definition: information about the situation a project is trying to affect, showing what it is like before the intervention(s).
Benchmark
  What is measured: standard of achievement
  Definition: a reference standard of achievement against which a project’s performance can be compared.
Bias
  Definition: a tendency to make errors in one direction. For example, is there potential for error because not all key stakeholder groups have been consulted? Are there incentives that reward incorrect information? Does reporting a death in the family mean that food ration levels might be reduced?
Outcomes
  What is measured: effectiveness
  Definition: use of outputs and sustained benefits, e.g. how many litres of clean water are available in each household, or how many participants show evidence of applying their training.
Outputs
  What is measured: effort
  Definition: implementation of activities, e.g. the number of water containers distributed or the number of participants trained.
Impact
  What is measured: change (can be positive or negative)
  Definition: difference from the original problem situation. At its simplest, impact measurement means asking the people affected, ‘What difference are we making?’ Examples of impact may be a significant reduction in the incidence of water-borne disease, or evidence that what trainees have learned is having a tangible effect on project/programme delivery.
Milestone
  What is measured: performance at a critical point
  Definition: a well-defined and significant step towards achieving a target, output, outcome or impact, which allows people to track progress.
Qualitative information
  What is measured: performance indicators
  Definition: information that describes characteristics according to quality (as opposed to quantity), often including people’s opinions, views and other subjective assessments. It is collected using qualitative tools such as focus groups, key informant interviews, stakeholder mapping, ranking, analysis of secondary data and observation. These tools require skill to produce a credible and relatively unbiased assessment; the key question is whether they provide reliable and valid data of sufficient quantity and quality.
Quantitative information
  What is measured: performance indicators
  Definition: numerical information about what is being done, provided or achieved, how much, or how often.
Triangulation
  What is measured: consistency between different sets of data
  Definition: use of three or more sources or types of information to verify an assessment.