In 2006, at the landmark La Mancha meeting, MSF agreed to seek active transparency and accountability to improve the relevance, effectiveness and quality of its interventions. The result has been an increased investment in developing evaluation capacity in the organisation. 

Assessing potentials and limitations

There are two overriding purposes of evaluation in MSF. Most importantly, evaluations are a tool for assessing the potentials and limitations of medical humanitarian action; in this way, the evaluation process enhances the effectiveness of medical humanitarian action. Through in-depth analysis, evaluations can help explain why some activities are successful while others are not, and this information can be used to improve the approaches and methods applied in MSF’s work.

The other main purpose of evaluation is to provide MSF governance, donors and beneficiaries with documentation about the use and the results of MSF’s work. In this way, evaluations contribute to accountability within MSF.

Learning from action requires reflection 

MSF is renowned for its rapid action in emergencies; being fast and responsive is the organisation's strength. Systematic and objective evaluation processes are important opportunities to reflect on, explore and capture the many experiences teams have in the challenging contexts in which MSF works. Evaluations are therefore a much-needed tool for organisational learning.

“Humanitarian Evaluation is the systematic and objective assessment of an on-going or completed humanitarian intervention, its design, implementation and results.” (OECD/DAC 2012)

Independent specialised units

In 2005, MSF established an independent and specialised Evaluation Unit in Vienna; since then, similar units have been set up in Stockholm and Paris. Following the La Mancha Agreement and its commitment to accountability in 2006, formal evaluation processes have become an integral part of MSF’s work.

Professional competence 

Ideally, MSF evaluations are carried out by a mixed team of external experts and experienced MSF staff, selected on the basis of their professional competence, independence and experience in the relevant field and in conducting evaluations.

Evaluations complement less resource-demanding monitoring activities such as end-of-cycle reports or reviews. Evaluations are primarily field-focused and may cover one or more field projects, strategies or policies. An evaluation can, however, also focus on headquarters projects, strategies, themes or policies.

What is evaluation?

An evaluation process generally has four phases: Preparation, Implementation, Utilisation and Follow-up.

Preparation

When the general topic of an evaluation has been identified, the Evaluation Unit defines Terms of Reference in conjunction with the evaluation ‘owner’ (the entity requesting the evaluation). The Terms of Reference specify the background for the evaluation, its overall purpose, methodological requirements, the geographical and thematic scope, specific evaluation questions, requirements concerning the composition of the evaluation team etc. The draft Terms of Reference are forwarded to the identified stakeholders for consultation, and their comments are taken into account in the final Terms of Reference.

The Evaluation Unit then releases a call for proposals to both the MSF evaluation network and the wider humanitarian evaluation community via the prescribed portals. The Evaluation Unit is responsible for contracting the evaluation team; however, the evaluation owner and/or the evaluation steering group must also agree to the proposed selection.

Implementation

Selection of evaluators

Implementation begins with the selection and contracting of an evaluation team, which usually consists of evaluation and/or medical humanitarian experts. The selected team receives background information from the Evaluation Unit and other relevant departments within MSF (e.g. Operational Desks, Medical Department, etc.) and develops an operational plan for the evaluation.

Inception Report

The inception report (or ‘operational plan’) helps to further sensitise the stakeholders of the evaluation and forms the basis for the upcoming field visit(s). It typically sets out an extensive review of existing documents; further development of the approach and methodology; field work with interviews, focus group discussions and/or questionnaire surveys among stakeholders in the field; analysis of the collected data; and the final reporting.

Report writing

On the basis of the data analysis, the evaluation team prepares a first draft of the evaluation report, including conclusions and recommendations, which is shared with the Evaluation Unit. The Evaluation Unit reviews the draft in terms of methodological quality and comments on factual information, methods, conclusions and recommendations. The draft is then circulated to relevant stakeholders for feedback.

Prior to finalisation, evaluation outcomes are shared and discussed with as many of the relevant stakeholders and MSF management members as possible at a presentation workshop. The evaluation team considers all comments and feedback received at the presentation workshop when preparing the final report, but it retains the right to draw independent conclusions and has sole responsibility for the final conclusions of the evaluation.

Roles and responsibilities

The Evaluation Unit serves as manager for the evaluation, either alone or, for joint evaluations, together with one or more of the other evaluation units. For larger evaluations, a steering group is established, with a composition that reflects the topic and purpose of the evaluation. The steering group advises and provides feedback to the evaluation team throughout the entire evaluation process, mainly at the level of evaluation content.

The Evaluation Unit monitors the evaluation process in order to ensure, among other things, that the evaluation is undertaken in accordance with the Terms of Reference, MSF’s evaluation guidelines and other relevant quality standards. The division of responsibility between the Evaluation Unit, the evaluation team and other stakeholders is defined and communicated at the start of every evaluation.

Further information about how evaluations are implemented can be found in MSF’s evaluation guidelines, which are available under Resources.

Evaluation utilisation and follow-up

MSF is committed to the implementation of agreed evaluation outcomes and the systematic follow-up of all evaluations through the relevant departments within the organisation. 

Ownership and utilisation

Ownership of an evaluation is key to the subsequent utilisation of its outcomes. That is why preliminary outcomes are shared and cross-checked with the field team at the end of a field visit, draft reports are shared with the relevant stakeholders for additional feedback, and stakeholders and managers have a final opportunity to give input during the presentation workshop. Through these mechanisms, evaluation outcomes are explained and discussed, fostering ownership of the key conclusions and recommendations.

When an evaluation has been completed, a management response is requested from those who initially commissioned the evaluation – the evaluation ‘owners’. This ensures that evaluation outcomes are systematically capitalised on.

Internal reflection & change

Dissemination of the evaluation to the broader MSF movement takes place through a systematic report mail-out to the evaluation distribution list consisting of Heads of Missions, operational decision-makers and department managers within MSF. Reports are also posted on the internal associative and executive MSF websites. In addition, a regular Evaluation Newsletter alerts the movement to the latest reports.

Periodically, meta-evaluations are conducted to screen individual evaluations for recurring issues of institutional concern. Such meta-reviews also help to maintain an overview of which evaluations have been followed up, and how.

Evaluations thus contribute to knowledge sharing within the movement and help develop and improve overall MSF policies and procedures. It is important to develop and maintain a ‘culture’ of evaluation, which is a prerequisite for meeting the La Mancha aspirations of internal reflection and accountability.

Further information about MSF’s approach to evaluation of medical humanitarian action, including follow-up procedures, can be found in the MSF Evaluation Manual available under Resources.

Qualitative research

Qualitative methods are commonly used in MSF for assessing needs, understanding problems and evaluating interventions. However, there is considerable scepticism about the validity of qualitative data, not least because the methods are often applied unsystematically and with uncertainty.

The Evaluation Units frequently apply qualitative research methods for both evaluation and socio-cultural research. We would like to encourage the use of qualitative methods in MSF programmes. The Vienna Evaluation Unit, in particular, offers training sessions on qualitative methods and also provides support for qualitative study design.

Qualitative methods are needed to understand the behaviour and perceptions of affected communities.

Unlike quantitative methods, which provide measurable answers to questions such as how many, how often, etc., qualitative research answers the questions of why and how. The purpose is to learn about aspects of the social world and to generate new understandings that can be used by that social world.

Qualitative information can be useful before, during or after an intervention.

Before the start of a programme, qualitative methods are used to identify needs and vulnerabilities of the target population. During a project, patient satisfaction, health seeking behaviour, reasons for success or failure (in particular cultural dimensions of misunderstandings) can be assessed. Quantitative data may be explained through qualitative methods (reasons for use or non-use, for defaulters, etc.). Evaluations after an intervention frequently require a mix of quantitative and qualitative methods.

Qualitative research takes place in the natural world.

Researchers go to the people wherever possible in order to interview them in their everyday worlds. They try to understand how people make sense of their worlds through multiple interactive and humanistic methods: talking, looking, listening and reading.

Researchers are sensitive to their personal biographies and how they shape their studies.

Qualitative research is interpretive: the qualitative researcher assumes that understanding (analysing and interpreting) and representing (interpreting and writing about) what has been learned are filtered through her/his own personal biography.

Recommendations for further reading

  • Bloor, Michael and Fiona Wood (2006): Keywords in qualitative methods. A vocabulary of research concepts. Sage Publications, New York.
  • Burgess, Robert G. (1984): An introduction to field research. George Allen & Unwin, London, Boston, Sydney.
  • Corbin, Juliet and Anselm Strauss (2008): Basics of qualitative research. Sage Publications, 3rd edition, Los Angeles, New Delhi, Singapore.
  • Coreil, Jeannine (1995): Group interview methods in community health research. In: Medical Anthropology 16(3), pp. 193-210.
  • Flick, Uwe (2009): An introduction to qualitative research. Sage Publications, 4th edition, London.
  • Green, Judith and Nicki Thorogood (2004): Qualitative methods for health research. Sage Publications, London.
  • Holloway, Immy ed. (2005): Qualitative research in health care. Open University Press, Berkshire.
  • Janesick, Valerie J. (1998): “Stretching” exercises for qualitative researchers. Sage Publications, London.
  • Padgett, Deborah K. (2008): Qualitative methods in social work research. Sage Publications, London.
  • Patton, Michael Q. (2002): Qualitative research & evaluation methods. Sage Publications, 3rd edition, Thousand Oaks, California.
  • Rossman, Gretchen B. and Sharon F. Rallis (1998): Learning in the field. An introduction to qualitative research. Sage Publications, London.

Retrospects

Retrospects are an alternative form of learning, increasingly offered by the Stockholm Evaluation Unit.

In contrast to a full evaluation, a retrospect is an internal learning exercise designed to capture the learning of a project team after a piece of work has been completed.

Importantly, the retrospect can be a powerful tool for bringing about closure within a team, which is often the most significant deliverable, especially if the project experience has been difficult on an emotional or psychological level.

The retrospect meeting lasts from a couple of hours to a couple of days and is facilitated by a member of the evaluation team, although it is important to note that no independent judgement is made about the topics discussed. As such, retrospects are internal exercises, designed to bring out the key knowledge and experience developed by a project team and to turn it into actions and resources for the benefit of future projects.

A retrospect meeting revisits the objectives and deliverables of the project, asks what went well and what could have gone better, and why.

A retrospect should normally take place as soon as possible after a project is completed. Its length varies with the number of participants and the duration and complexity of the project.

In the final stage, the facilitator produces a short “jargon-free” report synthesising the lessons identified from the retrospect. Again, no judgement is made; the report simply provides a record of the discussion, categorising the learning themes and highlighting any actions to be taken.
