For Help Contact:
CARE Emergency Group (CEG) Programme Quality and Accountability Coordinator
Tel: +41 22 795 1035

9. Monitoring and Evaluation

Monitoring and evaluation have a range of purposes in humanitarian programming, but the critical one is achieving better outcomes for crisis-affected populations from CARE’s humanitarian programming through accountability and learning.

Monitoring and evaluation help in understanding how the assistance and support that CARE provides to disaster-affected communities affects them. It is therefore a critical part of CARE’s Humanitarian Accountability Framework (HAF). It allows us to compare the results of our humanitarian actions with our strategic intent (e.g. the CARE Program Strategy, Humanitarian & Emergency Strategy and Core Humanitarian Standards), with technical standards (such as Sphere and its companion standards), and with expected outcomes and benchmarks for the response (from the response strategy, proposal frameworks, etc.). Efficient decision-making and evidence-based learning depend heavily on the quality and timeliness of monitoring and evaluation.

1.1 Roles and responsibilities of monitoring and evaluation

1.2 Role of the Monitoring and Evaluation Coordinator in emergency team

1.3 Definition of key terms relating to monitoring and evaluation


  • Assess CO capacity for monitoring and accountability.
  • Establish monitoring and evaluation systems from the very outset of the emergency response.
  • Use CARE’s Humanitarian Accountability Framework to inform the development of monitoring and evaluation systems.
  • Establish appropriate objectives and indicators at the individual project level as well as the overall emergency programme level during the design phase of the response.
  • Ensure that the monitoring and accountability system in place is capable of delivering real-time information on what is happening in emergency response conditions.
  • Plan for data collection and analysis. Double-check that the information to be gathered is going to give a realistic picture of what is actually happening.
  • Plan for reporting, feedback and use of results.
  • Ensure that the Monitoring and Evaluation Coordinator coordinates data collection and analysis for monitoring purposes across the programme.
  • Employ a range of appropriate and participatory data collection methods.
  • Confirm that all monitoring data collected is analysed and presented in a timely and user-friendly way.
  • Ensure that appropriate managers review and act on monitoring data.
  • Include resources for monitoring and evaluation activities in project budgets.
  • Ensure monitoring includes feedback to communities.


  • Design an appropriate monitoring and evaluation system for the emergency response.
  • Ensure monitoring and evaluation systems consider all aspects of the response management.
  • Establish appropriate objectives and indicators at the design phase.

3.1 Putting a monitoring and evaluation system in place

3.1.1 Steps in designing a monitoring and evaluation system

3.2 Aspects of the response to consider

3.2.1 What to look for when monitoring an emergency response programme

3.3 Objectives and indicators

3.3.1 Checklist for indicators


  • Coordinate data collection and analysis responsibilities across the programme.
  • Select a range of appropriate and participatory data collection methods.
  • Conduct timely data analysis.
  • Ensure timely management review of monitoring results and correct any issues arising.

4.1 Data collection and analysis responsibilities

4.2 Data collection methods

4.2.1 Participatory data collection methods

4.3 Data analysis

4.3.1 Quality of information

4.4 Management review of monitoring results

4.5 Proposal tracking and documentation

The term ‘accountability monitoring’ is used to mean the monitoring of our performance on accountability as described by the third pillar of the CARE Humanitarian Accountability Framework (HAF). In Chapter 32 of the CET you’ll find a more detailed description of CARE’s Quality and Accountability commitments for humanitarian programming.

Accountability monitoring can help CARE to:

  • Check that the accountability systems that have been set up are working effectively.
  • Focus our monitoring on approach, processes, relationships and behaviours, quality of work, satisfaction as well as outputs and activities.
  • Prioritise listening to the views of disaster-affected people to assess our impact and identify improvements.
  • Provide a feedback opportunity for staff, communities and other key stakeholders to comment on our response and how we are complying with our standards and benchmarks.

Accountability monitoring contributes to CARE’s overall monitoring and evaluation activities. Aspects can be integrated into other project monitoring tools, or carried out as a specific activity, e.g. a beneficiary satisfaction survey, or a focus group discussion (FGD) to solicit feedback and complaints from specific groups among the crisis-affected population, people with specific vulnerabilities, or isolated communities, as part of a formal complaints mechanism. Ideally, accountability mechanisms and the monitoring of their effectiveness should be built into project proposals from the outset.

Accountability data (including complaints data) needs to be incorporated into monitoring reporting, alongside monitoring of project progress.

Rapid Accountability Review (RAR)

The Rapid Accountability Review (RAR) is the central tool for accountability monitoring in CARE’s humanitarian programmes.

What is a Rapid Accountability Review?

A RAR is a rapid performance assessment of an emergency response against CARE’s HAF that takes place within the first few months of the response. It generates findings and recommendations that are used to make immediate adjustments to the response, and it is a key source for any response review and performance management process. It usually entails interviews with CARE management, staff, communities and other key external stakeholders, and is led by an independent team leader.

What is the purpose of a RAR?

The overall goal of the RAR is to improve the quality of CARE’s response by assessing its compliance with established good accountability practice. More specifically, the RAR:

  • Provides a real-time assessment of HAF compliance early in a humanitarian response.
  • Ensures that the views of our key stakeholders are taken into account in making adjustments to our response and in drawing lessons learned.
  • Identifies good practices and highlights gaps (including gaps in capacity) and areas for improvement.
  • Makes recommendations to CARE management (CO, CI and CARE Members) for immediate action related to the ongoing response.

When does it take place?

  • Ideally, a RAR is conducted within 2 months of the start of an emergency event, and feeds into the general response review and performance management process
  • A similar process can also be repeated at later stages of response in order to take stock of HAF compliance and improvements made, or to feed into a particular event such as a response evaluation, an emergency strategy review, or EPP event

How to conduct a RAR?

Detailed information about how to conduct a RAR can be found in Annex 9.6: RAR Guidance.

In summary a Rapid Accountability Review should ideally include:

  • A self-assessment by CARE staff and partners against relevant indicators (see Annex 9.7: staff engagement)
  • Focus group discussions with affected populations (see Annexes 9.8a, 9.8b and 9.9)
  • Key informant interviews
  • A synthesis and analysis meeting/workshop to review the results of the above and prepare lessons and recommendations for the ongoing response and for an After Action Review (AAR; see Section 8: Learning and evaluation activities)

Further examples of accountability monitoring tools can be found in Annex 9.9a: Sample of accountability monitoring tools, including:

  • Checklists.
  • Simple questionnaires.
  • Focus group discussion tools.
  • Staff review tools.
  • A monitoring tool to help research local communities’ views.

The RAR Summary (annex 9.6a) provides a tool that facilitates the synthesis of information collected during the accountability monitoring in an organised way against the 9 commitments of the HAF. It also identifies which key performance criteria are relevant at what stage of the response or for what level of accountability review (light, basic, comprehensive).

Depending on the methodology and format of the RAR, there can be different reporting formats. Here are a few examples:


CRITICAL: Sensitive Issues

Handling and investigations of sensitive complaints (e.g. fraud, corruption, abusive behavior, sexual exploitation or child abuse) require individuals with specific expertise and must be managed according to the specific procedures and standards defined by the CARE Member responsible for managing the office and programmes. They should be escalated to the designated manager or committee which has the authority to investigate. Separation of Duties and full confidentiality need to be observed at all steps of the process. The protection of whistleblowers, complainants and other people affected must have the highest priority.

Most staff will have experience of meeting people who are not fully happy with the work or behaviour of CARE or partners in their community or region. Most such feedback and complaints are received informally, e.g. people approach staff who are visiting the community, or visit CARE’s office in search of assistance or resolution to their problems or grievances. It is also not unusual for staff of one agency to receive a complaint about another agency. Receiving feedback, suggestions and complaints about our work is normal and important, and should be welcomed.

Whilst there are occasions where complaints are handled well by field staff, there are many examples where they are not. At times, staff who are already overwhelmed with day-to-day emergency activities may find it difficult to manage the informal feedback and complaints they receive, might not prioritise complaints, or might forget or lose them. Tensions can also arise when a complaint is received about a member of staff and it is not clear how, and by whom, the complaint will be dealt with.

To improve this, CARE offices should put in place a more formalised system of soliciting, receiving, processing and responding to the feedback and complaints we receive. These systems should aim to provide a safe, non-threatening and easily accessible mechanism that enables even the most powerless to make a suggestion or complaint. On the part of CARE, this requires us to address and respond to all feedback and complaints, and to be timely and transparent in our decisions and actions.

6.1 What is a Feedback and Complaints Mechanism?

[amendable page]

A feedback and complaint mechanism (FCM) is a set of procedures and tools formally established (ideally across programs and linked to other monitoring processes) which:

  1. solicits and listens to, collates and analyzes feedback and complaints from members of the community where CARE works about their experience of an intervention provided by CARE and its partners;
  2. solicits and listens to, collates and analyzes feedback and complaints from partners about their experience of working with CARE;
  3. triggers action, influences decision-making at the appropriate level in the organization and/or prompts a referral to other relevant stakeholders if necessary and appropriate;
  4. provides a response back to the feedback or complaint provider and if appropriate, the wider community.

In some contexts, particularly in humanitarian responses or when working in consortia, an inter-agency or joint mechanism may exist. It is always preferable for CARE and partners to utilize joint mechanisms where they exist rather than setting up a separate FCM. CARE must ensure that joint mechanisms meet CARE’s minimum standards, which may require following up with the lead agency (if not CARE) and may necessitate that the minimum standards are included in agreements with other agencies.

6.2 Key Definitions

[amendable page]

Feedback is a positive or negative statement, a concern or a suggestion on a non-sensitive issue about an intervention provided by CARE or its partners or the behavior of CARE or partner staff.

A complaint is a specific grievance from anyone who has been negatively affected by an organisation’s action or who believes that an organisation has failed to meet a stated commitment. Complaints can be about either non-sensitive issues (such as dissatisfaction with activities) or sensitive issues (such as fraud, corruption, abusive behavior or sexual exploitation).

Feedback and complaints can be shared by any member of the communities where we work, such as project participants, other crisis-affected populations, local traditional or administrative authorities, suppliers and even CARE and partner staff. Crucially, all community members should be able to access the FCM, regardless of age, gender and ability, including the most marginalized, and the FCM should be designed and managed in a way that does not cause harm.

6.3 Why do we need Feedback and Complaints Mechanisms?

[amendable page]

If operated effectively, a feedback and complaint mechanism supports CARE and its partners to meet the organization’s goals, values and commitments by ensuring that:

  • Initial steps are taken towards redressing power imbalances and we are accountable to those we work with and for – by providing opportunities for participants (of all ages, genders and abilities) and partners to participate in and influence decision-making.
  • Our interventions are relevant and appropriate to participants’ needs and aspirations – by identifying changing needs and inappropriate activities and taking appropriate action.
  • Our interventions are implemented in a way which respects communities and protects their well-being and safety – by identifying activities or behavior which are causing harm and taking appropriate action.
  • The integrity of our interventions is upheld – by identifying situations in which assistance is being diverted for personal or political gain and taking appropriate action.
  • Gender equality and women’s voice are supported – by identifying what is working and not working for women, men, boys and girls and providing opportunities for marginalized community members to voice their opinions and feed into decision-making.
  • Trust with community members is built and maintained – facilitating implementation and creating a solid relationship with the community upon which to intervene at a deeper level in the future.
  • Actual and potential cases of sexual harassment, exploitation and abuse are identified and addressed – acting as an early warning system and allowing us to respond to and prevent further sexual misconduct or other sensitive issues.

6.4 How to set up and operate a Feedback and Complaints Mechanism?

[amendable page]

Experience shows that feedback and complaints mechanisms can have enormous benefits both for communities and for CARE staff. On the other hand, setting up a mechanism that does not function well (for example, if complaints are not followed up) may contribute to frustration and worsening relationships with communities and local stakeholders, and can thus be harmful.

CARE divides the process of setting up and operating a feedback and complaint mechanism (FCM) into three main stages with specific steps in each stage:

  • PLAN: Step 1 (COMMIT), Step 2 (UNDERSTAND), Step 3 (CONSULT)
  • ACT: Step 4 (DESIGN), Step 5 (PROCESS), Step 6 (MAKE SENSE)
  • IMPROVE: Step 7 (RESPOND), Step 8 (ADAPT), Step 9 (LEARN)

These steps do not represent a linear progression but instead each step reinforces the others in a circular fashion. Frequently these steps will be conducted in parallel or previous steps will be revisited with the new understanding gained from the other steps.

See the full FCM Guidance for further details about each of these steps.


6.5 Roles & responsibilities

[amendable page]

For each stage and related steps defined in the minimum operating standards, specific roles and responsibilities need to be clearly assigned. Roles and responsibilities should be assigned to appropriate staff members working in the following types of functions:

  • Senior Management are ultimately accountable for the establishment and performance of the FCM, as well as ensuring organisational commitment and encouraging a culture supportive of accountability.
  • The Program Director should be a champion of accountability and hold the Program Team to account for effectively operating the FCM and using data in decision-making. This person is responsible for the inclusiveness and effectiveness of the FCM with a focus on learning and improvement.
  • Program Team / Project Managers provide programmatic-level support to the implementation and operation of the FCM. They should create a demand for feedback and complaints data and use that data in decision-making.
  • The MEAL Manager in the country office (or whoever leads on MEAL) has oversight of the running of the FCM across all areas on a day-to-day basis, including coordinating with field office and partner staff to ensure consistent understanding, to build capacity and provide tools and guidance. This person takes a lead on quality control and data analysis.
  • MEAL/Accountability staff at the field office (which could include Officers or Community Engagement staff) are responsible for the day-to-day operation of the various channels, including directly receiving and processing feedback and complaints.

Sensitive complaints such as sexual exploitation and abuse should be escalated to the designated manager for appropriate handling and response.

When working with partners, CARE’s role should cover the following:

  • Ensuring partners have a clear understanding of CARE’s expectations and minimum standards;
  • Providing technical support, capacity building, resources and tools as required (e.g. training on PSHEA);
  • Providing quality control during field monitoring visits;
  • Regularly reviewing analysis of feedback and complaints data and supporting partners to use this for decision-making;
  • Regularly participating in reviewing the effectiveness of the FCM and contributing to efforts around learning and improvement.

Feedback to communities on our monitoring and evaluation results (including complaints data and results from monitoring of our accountability) should be part of CARE’s overall information sharing with communities. An important part of this is to make reports reader-friendly and share them as widely as possible with all staff. Publish key complaints data in public places, e.g. on websites and community noticeboards.

Providing such information to affected communities in an accurate and timely way is a fundamental ingredient of building trust. Trust is in turn a fundamental ingredient of participation. People will only engage meaningfully with individuals or institutions that they believe they can trust.

The following chart demonstrates how a flow of communication should work in this environment.

See Annex 9.19 – Information provision to affected communities for further detail.


  • Organise an After Action Review.
  • Conduct an evaluation when required.

CARE’s Policy on Evaluations is available in Annex 9.1. This policy highlights CARE’s commitment to learning from humanitarian responses with a view to improving our practices and policies for future responses. All CARE COs are required to comply with this learning policy. CEG can provide support and advice on learning activities.

8.1 Organising an After Action Review

8.2 Commissioning and managing an evaluation


  • Include monitoring and evaluation line items in project budgets.

9.1 Monitoring and evaluation costs

Lessons learned from previous CARE emergency operations have found that CARE COs often lack the capacity to design and implement monitoring and evaluation systems during emergency responses. In particular, COs have difficulty in adapting their regular monitoring and evaluation systems (used for longer-term programming) to more unpredictable emergency situations that are changing rapidly.

As both a relief and development agency, CARE has determined that programme and project standards should apply to all CARE programming, including emergencies, post-conflict rehabilitation and development, whether CARE is directly providing assistance, working with or through partners, or conducting advocacy campaigns. 

CARE’s Quality & Accountability policies & standards should be used to inform the development of monitoring and evaluation systems in order to be consistent with the Core Humanitarian Standards (CHS) as well as the Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in Disaster Relief.

CARE’s Evaluation Policy describes CARE’s commitments to using evaluations to promote systematic reflective practice and organisational learning, and accountability to help contribute to significant and sustainable changes in the lives of people we serve.

Active Learning Network for Accountability and Performance in Humanitarian Action

ALNAP 2006. The participation handbook. Oxfam.

ALNAP-Training materials for Evaluation of Humanitarian Action

ALNAP-Summaries of lessons from previous disaster types

Digital Resources for Evaluators

ECB Project-Accountability and Impact Measurement

ECB Joint Needs Assessment/Evaluation Database-Summaries of key learning from evaluations and AARs

Humanitarian Accountability Partnership 2007. A guide to the HAP Standard.

IFRC-International Federation of Red Cross and Red Crescent Societies 2002. Handbook for monitoring and evaluation.

IFRC-International Federation of Red Cross and Red Crescent Societies 2005. Guidelines for emergency assessment.

ListenFirst – a draft set of tools and approaches that NGOs can use to make themselves more accountable to the people they serve. It includes a list of 25 examples of putting accountability into practice.

MandE NEWS-Monitoring and Evaluation NEWS