What do we do to promote quality and accountability?

In 2006, at the landmark La Mancha meeting, MSF agreed to seek active transparency and accountability to improve the relevance, effectiveness and quality of its interventions. The result has been an increased investment in developing evaluation capacity in the organisation. 

Evaluation

“Humanitarian Evaluation is the systematic and objective assessment of an on-going or completed humanitarian intervention, its design, implementation and results.” (OECD/DAC 2012) 

Evaluations provide an appropriate means to assess the quality of our operations, not only in terms of medical and operational standards but also with respect to our core humanitarian and medical mandate and principles. Additionally, evaluations are a means for MSF to ensure more transparency and accountability at different levels: internally at movement level, and externally in the field with partners and patients, and in our home societies towards our supporters and partners (i.e. donors, the public and the media).

Systematic and objective evaluations help to reflect on, explore and capture the many experiences teams have in the challenging contexts MSF works in, and are a much-needed tool for organisational learning.

Quality

Evaluations are a tool for assessing the quality of medical humanitarian action, including its potential and limitations, and thereby for enhancing it. By means of in-depth analysis, evaluations may help explain why some activities are successful while others are not, and this information can be used to improve the approaches and methods applied in MSF’s work.

Accountability

The other main purpose of evaluation is to provide MSF’s patients, partners, governance and donors with documentation of the use and results of MSF’s work. In this way, evaluations contribute to accountability within MSF.

MSF must continuously ask itself: “Was it valuable? Was it good? Was it successful? Were there any challenges? Was it relevant?” in order to account for what has been done, to learn from it, and to take better decisions. Only after such a judgment has been made can we reflect on and analyse what was valuable, and why and how it was valuable, in order to learn.

MSF does not receive funding from bilateral donors in Europe or the US, so our evaluations are very rarely motivated by such external accountability. Yet evaluations are openly available externally and are thus one way in which we realise our commitment to transparency, underscoring our accountability towards external stakeholders, including patients and communities, partner organisations and donors.

How we evaluate

MSF has established independent, specialised Evaluation Units in Vienna (2005), Paris (2009) and Stockholm (2010). Since their launch, the three units have developed in different ways and today do not function identically. The way of managing evaluations described below applies mainly to the Stockholm and Vienna units (SEU and VEU); OCP’s RIDER, meanwhile, is mostly engaged in producing Lessons Learned exercises (see below).

For evaluations specifically, the Evaluation Unit serves as manager of the evaluation, either alone or, for joint evaluations, together with one or more of the other evaluation units. Often, a steering or consultation group is established, with a composition that reflects the topic and purpose of the evaluation. This group advises and provides feedback to the evaluation team throughout the evaluation process, mainly on evaluation content. The evaluation owner, or commissioner, also forms part of this group. A specific focal point can be identified to provide additional support, helping the evaluation team navigate resources and contacts.

The Evaluation Unit monitors the evaluation process in order to ensure that the evaluation is undertaken in accordance with the Terms of Reference, MSF’s evaluation guidelines, and other relevant quality standards. The division of responsibility between the Evaluation Unit, the evaluation team and other stakeholders is defined and communicated at the start of every evaluation.  

Further information about how evaluations are implemented can be found in MSF’s evaluation guidelines, which are available under Resources.

Evaluators

MSF evaluations are carried out by experts, selected on the basis of their professional competence, independence, and experience both in the relevant field and in conducting evaluations, according to the needs of the specific evaluation. Evaluators can be internal or external to MSF; in some cases, external experts work closely with experienced MSF staff.

What does MSF evaluate?

Evaluations complement ongoing monitoring work as well as less resource-demanding activities such as end-of-cycle reports or reviews. Evaluations are primarily field-focused and may cover entire projects or components thereof, strategies, or policies. An evaluation can also cover headquarters projects, strategies, themes or policies.


How does MSF evaluate?

MSF most commonly implements evaluations in six steps across three major phases: Preparation (scoping, preparation), Implementation (inception, data collection, report writing), and Utilisation and Follow-up (use, dissemination). The different units apply the steps with a similar approach, though some variation exists, both between the units and between individual evaluations.

Preparation

When the general topic of an evaluation has been identified, the Evaluation Unit defines the Terms of Reference in conjunction with the evaluation commissioner (the entity requesting the evaluation) and other key stakeholders. The Terms of Reference specify the background of the evaluation, its overall purpose, intended use, suggested methodological requirements, the geographical and thematic scope, specific evaluation questions, and requirements concerning the composition of the evaluation team. The draft Terms of Reference are forwarded to the identified stakeholders for consultation, and their comments are integrated into the final Terms of Reference.

The Evaluation Unit then releases a call for proposals to both the MSF evaluation network and the general humanitarian evaluation community via prescribed portals. The Evaluation Unit is responsible for contracting the evaluation team. 

Implementation

Selection of evaluators

Implementation begins with the selection and contracting of an evaluation team, which usually consists of evaluation and/or medical humanitarian experts. The selected team receives background information from the Evaluation Unit and other relevant departments within MSF (e.g. Operational Desks, Medical Department, etc.) and develops an operational plan for the evaluation, as a part of the inception report.

Inception Report

The inception report helps to further sensitise the stakeholders of the evaluation and forms the basis for the upcoming field visit(s). It typically covers a review of existing documents and some initial interviews; further development of the approach, methodology and methods; plans for data collection through interviews, focus group discussions and/or surveys; and potential limitations and ethical considerations.

Report writing

On the basis of the data analysis, the evaluation team prepares a first draft of the evaluation report, presenting findings and conclusions, which is shared with the Evaluation Unit. The Evaluation Unit reviews the draft and then circulates it to relevant stakeholders (with the consultation group, if one has been established) for feedback.

Prior to finalisation, evaluation outcomes are shared and discussed with the relevant stakeholders at a meeting. Recommendations are either drafted by the evaluators and discussed at such a meeting, or co-created with the relevant stakeholders. The evaluation team considers all comments and feedback received at this presentation workshop when preparing the final draft, but it retains the right to draw independent conclusions and has sole responsibility for the final conclusions of the evaluation.

Evaluation utilisation and follow-up


MSF is committed to the implementation of agreed evaluation outcomes and the systematic follow-up of all evaluations through the relevant departments within the organisation. 

Dissemination of the evaluation to the broader MSF movement takes place through a systematic report mail-out to the evaluation distribution list. Reports are also posted on the internal associative and executive MSF websites, and webinars can be organised to present evaluation findings.

When an evaluation has been completed, a management response can be prepared to ensure that evaluation outcomes are systematically capitalised on.

Internal reflection & change

Ownership of an evaluation is key to the subsequent utilisation of its outcomes. That is why preliminary outcomes are shared and cross-checked with the field team at the end of a field visit; draft reports are shared with the relevant stakeholders for additional feedback; and stakeholders and managers have a final opportunity to give input during the presentation workshop. Through these mechanisms, evaluation outcomes are explained and discussed, fostering ownership of the key conclusions and recommendations.

In addition to such project-based use, evaluations contribute to knowledge sharing within the movement and help develop and improve MSF policies and procedures. It is important to develop and maintain a ‘culture’ of evaluation, which is a prerequisite for meeting the La Mancha aspirations towards internal reflection and accountability.

Further information about MSF’s approach to evaluation of medical humanitarian action, including follow-up procedures, can be found under Resources.

Lessons Learned

There are several definitions of Lessons Learned. For the way MSF’s RIDER (based at OCP) applies the practice, Pierre de Zutter’s definition is the clearest and most accurate: “it is the passage from experience to shareable knowledge”.

As with evaluation, a Lessons Learned exercise can be defined as the analysis of a past or present intervention, conducted at a distance and in a more or less collective manner, guided by a specific question or problematisation, and supported by an objectivation process mobilising qualitative and/or quantitative methodologies. Unlike evaluation, Lessons Learned exercises do not necessarily seek to produce a judgement.

Lessons Learned exercises can also be led by practitioners involved in the action itself (implying critical reflection on their own practices).

How does MSF conduct Lessons Learned exercises? 

The following is based on the approach implemented by MSF OCP’s RIDER; the process can vary when led by other evaluation units.

The first step is to formulate an initial question, explaining the context, the primary question the proposed study would address, who the target audience would be, how the results would be used, and suggested restitution formats and dissemination strategies.

Then, RIDER’s members discuss and clarify the question and the objectives with the project initiator. Is it a question about the overall context (What is happening? What is the situation?), or about MSF practices (for example: Did we do well? How might we do better?)? Is it to document an experience (for example, a specific project, innovation, practice or incident) and transform it into shareable knowledge? To justify or promote a decision, a practice or a policy? To evaluate an innovation in real time?

The goal of this preliminary phase is to test the relevance of the question and to discuss with the initiator the best way to answer it: a more in-depth multi-disciplinary project, or perhaps a “lighter” process (organized debriefing, brainstorming, workshop, literature review, synthesis of archival documents, etc.).  

If, after reflection, the project initiator wishes to undertake a full study, (s)he will, with the RIDER members, write a work plan that includes:  

  • The objectives and questions being asked;
  • Recommended methods and required resources, including methodological know-how, the level of involvement of RIDER, human resource needs, and the involvement of the project initiator in project management and implementation;
  • Strategies for dissemination and use of results: reporting formats (including written reports, workshops, slide decks and other audiovisual formats), dissemination policy (including internal, donors, partners, etc.) and change management strategies (e.g. a support system for recommendations).

 

Recommendations for further reading 

  • Villeval, Philippe (Handicap International) and Philippe Lavigne Delville (Gret) (2004): Capitalisation d’expériences... expérience de capitalisations. Comment passer de la volonté à l’action? October 2004.

Anthropological Studies

MSF has expertise in anthropological studies, based on qualitative methodologies.

The Evaluation Units frequently apply qualitative research methods for both evaluation and socio-cultural research, and we encourage the use of qualitative methods in MSF programmes. The Vienna Evaluation Unit, in particular, offers training sessions on qualitative methods and provides support for qualitative study design.

Qualitative methods are needed to understand the behaviour and perceptions of affected communities.

Unlike quantitative methods, which provide measurable answers to questions such as how many or how often, qualitative research answers the questions of why and how. The purpose is to learn about aspects of the social world and to generate new understandings that can be used by that social world.

Qualitative information can be useful before, during or after an intervention.

Before the start of a programme, qualitative methods are used to identify needs and vulnerabilities of the target population. During a project, patient satisfaction, health seeking behaviour, reasons for success or failure (in particular cultural dimensions of misunderstandings) can be assessed. Quantitative data may be explained through qualitative methods (reasons for use or non-use, for defaulters, etc.). Evaluations after an intervention frequently require a mix of quantitative and qualitative methods.

Qualitative research takes place in the natural world.

Researchers go to the people wherever possible in order to interview them in their everyday worlds. They try to understand how people make sense of their worlds through multiple interactive and humanistic methods: talking, looking, listening and reading.

Researchers are sensitive to their personal biographies and how they shape their studies.

Qualitative research is interpretive: the qualitative researcher assumes that understanding (analysing and interpreting) and representing (interpreting and writing about) what has been learned are filtered through her/his own personal biography.

Recommendations for further reading

  • Bloor, Michael and Fiona Wood (2006): Keywords in qualitative methods. A vocabulary of research concepts. Sage Publications, New York.
  • Burgess, Robert G. (1984): An introduction to field research. George Allen & Unwin, London, Boston, Sydney.
  • Corbin, Juliet and Anselm Strauss (2008): Basics of qualitative research. Sage Publications, 3rd edition, Los Angeles, New Delhi, Singapore.
  • Coreil, Jeannine (1995): Group interview methods in community health research. In: Medical Anthropology 16(3), pp. 193-210.
  • Flick, Uwe (2009): An introduction to qualitative research. Sage Publications, 4th edition, London.
  • Green, Judith and Nicki Thorogood (2004): Qualitative methods for health research. Sage Publications, London.
  • Holloway, Immy (ed.) (2005): Qualitative research in health care. Open University Press, Berkshire.
  • Janesick, Valerie J. (1998): “Stretching” exercises for qualitative researchers. Sage Publications, London.
  • Padgett, Deborah K. (2008): Qualitative methods in social work research. Sage Publications, London.
  • Patton, Michael Q. (2002): Qualitative research & evaluation methods. Sage Publications, 3rd edition, Thousand Oaks, California.
  • Rossman, Gretchen B. and Sharon F. Rallis (1998): Learning in the field. An introduction to qualitative research. Sage Publications, London.
