3.3 Humanitarian Accountability System

All humanitarian programme monitoring and evaluation is part of CARE’s commitment to accountability and connected to CARE’s HAF Accountability System. The HAF Accountability System is designed to monitor how well CARE performs against both its Humanitarian Quality and Accountability Commitments and its Humanitarian Performance Targets within each emergency response. Furthermore, the system allows individual response performances to be compared with one another, across the globe and over time.

CARE’s Programme Information and Impact Reporting System (PIIRS) and the CHS verification process will support the synthesis and analysis of CARE’s organization-wide performance against HAF targets and commitments, which will be summarized and presented in the annual Humanitarian Performance Metrics reports.

Key HAF related reporting outputs include:

AT RESPONSE LEVEL:

AT GLOBAL LEVEL:

Monitoring, RAR and AAR outputs, as well as the RPS, are shared internally within CARE through the Crisis Coordination Group and ERWG in order to allow for immediate management response action. Core data from these sources are also stored in CARE’s database for humanitarian crises and responses. The database is currently under construction and will ultimately allow the visualization of real-time performance data for all stakeholders in CARE, for enhanced management efficiency, transparency and mutual accountability.

Humanitarian Performance Metrics reports are compiled each year for the CARE International Senior Leadership Team (Humanitarian & Operations). All Performance Metrics reports, evaluation reports and CHS verification outputs are, for the sake of accountability, shared via CARE International’s Electronic Evaluation Library (EEL). Synthesized results of the CHS verifications will be made public together with the related improvement plans, as required by the statutes of the CHS Alliance.

For fuller guidance on all monitoring and evaluation, please refer to the section below on Accountability Monitoring (incl. Rapid Accountability Reviews – RAR). For AAR reports, Performance Metrics reports and other useful documents, please refer to the Annexes in this chapter.


The term ‘accountability monitoring’ is used to mean the monitoring of our performance on accountability as described by the third pillar of the CARE Humanitarian Accountability Framework (HAF). In Chapter 32 of the CET you’ll find a more detailed description of CARE’s Quality and Accountability commitments for humanitarian programming.

Accountability monitoring can help CARE to:

  • Check that the accountability systems that have been set up are working effectively.
  • Focus our monitoring on approach, processes, relationships and behaviours, quality of work, satisfaction as well as outputs and activities.
  • Prioritise listening to the views of disaster-affected people to assess our impact and identify improvements.
  • Provide a feedback opportunity for staff, communities and other key stakeholders to comment on our response and how we are complying with our standards and benchmarks.

Accountability monitoring contributes to CARE’s overall monitoring and evaluation activities. Aspects can be integrated into other project monitoring tools, or carried out as a specific activity, e.g. a beneficiary satisfaction survey or focus group discussion (FGD) to solicit feedback and complaints from specific groups among the crisis-affected population (such as those with specific vulnerabilities or in isolated communities) as part of a formal complaints mechanism. Ideally, accountability mechanisms and the monitoring of their effectiveness should be built into project proposals from the outset.

Accountability data (including complaints data) needs to be incorporated into monitoring reports, alongside monitoring of project progress.

Rapid Accountability Review (RAR)

The Rapid Accountability Review (RAR) is the central tool for accountability monitoring in CARE’s humanitarian programmes.

What is a Rapid Accountability Review?

A RAR is a rapid performance assessment of an emergency response against CARE’s HAF that takes place within the first few months of the response. It generates findings and recommendations that are used to make immediate adjustments to the response. It is also a key source for any response review and performance management process. It usually entails interviews with CARE management, staff, communities and other key external stakeholders, and is led by an independent team leader.

What is the purpose of a RAR?

The overall goal of the RAR is to improve the quality of CARE’s response by assessing its compliance with established good accountability practice. More specifically, the RAR:

  • Provides a real time assessment of HAF compliance early during a humanitarian response
  • Ensures that the views of our key stakeholders are taken into account in making adjustments to our response and in drawing lessons learned.
  • Identifies good practices and highlights gaps (including gaps in capacity) and areas for improvement
  • Makes recommendations to CARE management (CO, CI and CARE Members) for immediate action related to the ongoing response

When does it take place?

  • Ideally, a RAR is conducted within 2 months of the start of an emergency event, and feeds into the general response review and performance management process
  • A similar process can also be repeated at later stages of response in order to take stock of HAF compliance and improvements made, or to feed into a particular event such as a response evaluation, an emergency strategy review, or EPP event

How to conduct a RAR?

Detailed information about how to conduct a RAR can be found in Annex 9.6: RAR Guidance.

In summary, a Rapid Accountability Review should ideally include:

  • A self-assessment by CARE staff and partners against relevant indicators (see Annex 9.7: staff engagement)
  • Focus group discussions with affected populations (see Annexes 9.8a, 9.8b and 9.9)
  • Key informant interviews
  • A synthesis and analysis meeting/workshop to review the results of the above and prepare lessons and recommendations for the ongoing response and for an After Action Review (AAR – see section 8, Learning and Evaluation activities, below)

Other examples of accountability monitoring tools can be found in Annex 9.9a, Sample of accountability monitoring tools, including:

  • Checklists.
  • Simple questionnaires.
  • Focus group discussion tools.
  • Staff review tools.
  • Monitoring tool to help research into local communities’ views.

The RAR Summary (Annex 9.6a) provides a tool that facilitates the synthesis of information collected during accountability monitoring in an organised way against the 9 commitments of the HAF. It also identifies which key performance criteria are relevant at what stage of the response, or for what level of accountability review (light, basic, comprehensive).

Depending on the methodology and format of the RAR, there can be different reporting formats. Here are a few examples:

 

Checklist

  • Organise an After Action Review.
  • Conduct an evaluation when required.

CARE’s Policy on Evaluations is available at Annex 9.1. This policy highlights CARE’s commitment to learning from humanitarian response with a view to improving our practices and policies for future responses. All CARE COs are required to comply with this learning policy. CEG can provide support and advice for learning activities.

8.1 Organising an After Action Review

CARE’s policy requires COs to hold an AAR for each large-scale (Type 2 and Type 4) humanitarian crisis. Country Offices responding to smaller (Type 1) crises are also encouraged to conduct a brief ‘lessons learned’ exercise following the response.

What is an After Action Review (AAR)?

An AAR is an internal performance review and lessons learned exercise that takes place within the first 3–4 months of a crisis response. It usually takes a workshop format and brings together key staff who have been involved in the response from the CO, CARE Lead and other parts of CARE. It is independently facilitated (i.e. by someone external to the response) and takes into account external and internal feedback collected before the workshop. An AAR draws both positive and negative lessons, and leads to recommendations to CARE management for improving humanitarian response policy and practice.

What is the purpose of an AAR?

The overall goal of the AAR is to contribute to CARE’s understanding of its crisis response performance and to help to promote learning and accountability throughout CARE International. More specifically, the AAR:

  • Provides a space for staff to capture key learning at a critical juncture of a humanitarian crisis response
  • Generates lessons learned that can be shared across CI
  • Makes recommendations to CARE management (CO, CI and CARE Members) for improving humanitarian response policy and practice

 

The following annexes can assist with organising an After Action Review:

Annex 9.10      Practical guidance for organising an AAR

Annex 9.10a      Terms of Reference for AAR facilitator

Annex 9.10b     Sample AAR agenda

Annex 9.10c      Sample AAR report

8.2 Commissioning and managing an evaluation

An evaluation can be defined as ‘…a systematic and impartial examination of humanitarian action intended to draw lessons to improve policy and practice and enhance accountability’ (ALNAP, 2001). CARE’s current policy states that external evaluations are optional but will usually be carried out where the response meets at least one of the following conditions:

  • involves a large-scale commitment of resources
  • has strategic implications for CARE
  • has piloted innovative approaches that could become standard good practice in future emergency responses.

An evaluation should be led by an external, independent facilitator. The terms of reference for the evaluation should describe how the results will be used.

As with the AAR, the CO has the primary responsibility to identify funding, and to organise and manage the evaluation. An external evaluation led by a professional evaluator typically costs USD 20,000–35,000. In some circumstances, a joint evaluation with partner agencies may be more appropriate. While an evaluation (with the exception of a real-time evaluation) usually does not take place until several months after the emergency event, there are a few planning and budgeting steps that need to be taken during the early stages of an emergency response. For more details, see the following annexes.

Annex 9.14      Sample TOR for an evaluation

Annex 9.15      Sample format for an evaluation report

Annex 9.16      ALNAP’s evaluation quality proforma