• 1. Why evaluations in MSF?

    Learning from action requires reflection

    MSF is all about action: being fast and reactive is the organisation's strength. Systematic and objective evaluation processes are important opportunities to reflect on, explore and capture the many experiences teams gain in the challenging contexts in which MSF works. Evaluations are therefore a much-needed tool for organisational learning.

    A critical element for improving quality

    Being able to evaluate effectiveness and to identify factors for success or failure is critical to further improving the quality of MSF's work. Just as important is the integration of evaluation results into ongoing operations.

    Giving account of intentions and actions

    Since the La Mancha process*, the internal push for more accountability has resulted in a stronger call for evaluations. Evaluations are a means for MSF to improve accountability towards its stakeholders at all levels: within each operational centre, in the movement, in the field towards partners and patients, and in our home societies vis-à-vis donors, the media and the public.

    The privilege to choose

    Humanitarian aid evaluation has become a mainstream activity only in the last two decades, and to this day evaluations are often driven by major donors. As MSF is financially independent, it faces very little outside pressure to evaluate. MSF therefore has the privilege of choosing what to evaluate based on internal priorities and learning needs.

    * An internal reflection and review process in 2005/2006

  • 2. How to initiate evaluations?

    Evaluations can be initiated by MSF field teams, the coordination teams or headquarters. Simply contact us for support! The team will help you define your objectives and start the evaluation process.

  • 3. How does the Evaluation Unit work?

    The Evaluation Unit offers the following services:

    • Support in writing Terms of Reference
    • Clarification of whether and how the identified issues can be evaluated
    • Identification and contracting of evaluators
    • Provision of methodological advice, frameworks and tools
    • Advice for and coaching of evaluators before, during and after evaluations
    • Coordination and organisation of all required (field) trips and meetings
    • Information and communication with all stakeholders about processes and outcomes
    • Ensuring dissemination and debate of evaluation findings


    Created as part of the Operational Centre Geneva (OCG), the Unit today offers its services to the entire MSF movement.

  • 4. Who are the evaluators?

    The Vienna Evaluation Unit has created a pool of potential evaluators that is continuously expanding; professionals interested in evaluation are invited to send their CVs. Specific requirements are defined for each new evaluation, and the Terms of Reference are sent to potential evaluators. Finally, interviews are conducted to select the evaluation team.

    The ideal evaluator is an analytical thinker, has excellent communication and writing skills as well as sound knowledge of and/or experience in assessments, research or evaluation, and is – of course – humorous and open-minded.

  • 5. Evaluation Definitions

    Definitions of the different learning and accountability exercises in use in MSF. Each type of exercise is given with its definition, followed by its typical purpose.




    Evaluation

    Definition: The assessment, as systematic and objective as possible, of an ongoing or completed project, programme or policy, covering its design, implementation and results. The aim is to determine relevance, appropriateness, effectiveness (timeliness, reactivity), efficiency, continuity and/or impact.[i]

    Purpose: Evidence base for important decisions (re-orientation, change); understanding the effects/outcomes of (pilot) programmes and the reasons for success or failure; organisational learning and accountability.



    Review

    Definition: An assessment of the performance of a project/programme; similar to an evaluation, but by choice less comprehensive, and it may compromise on methodological rigour.

    Purpose: Rapid overview for the management.

    Peer review

    Definition: A review conducted by a team in an equal position, with comparable know-how and experience.

    Purpose: Learning from peers for programme improvement.


    Meta-evaluation

    Definition: A comparison/synthesis of different evaluations on a common subject, in order to draw overall conclusions and lessons learnt and/or to compare the approach, methodology and quality of the evaluations.

    Purpose: Concise overview/synthesis of findings on a specific subject.


    Lessons learned exercise / After Action Review

    Definition: A structured working session with key people involved in an (emergency) intervention, in order to critically reflect on the successes and shortcomings of the intervention. It may be complemented by a preparatory analysis of key issues.

    Purpose: Rapid documentation of lessons learnt in a particular setting and definition of recommendations to improve the next response; joint/reinforced learning for the team involved.


    Capitalisation

    Definition: Structured reflection on experiences and lessons learnt at the end of a project.

    Purpose: Documentation and transfer of knowledge obtained in an intervention.



    Needs assessment

    Definition: Analysis of a situation to identify (health and humanitarian) problems, their sources and their consequences, in order to determine the best way to respond. The focus is on the current and future needs of the population.

    Purpose: Evidence base for a decision on an intervention.

    Socio-cultural assessment / study

    Definition: An examination of the socio-cultural characteristics of a society or group of people. This includes everything related to a social group that shares values, norms, knowledge, etc. Culture is seen as a complex, dynamic process that can change constantly.

    Purpose: Understanding socio-cultural determinants before the start of a project, or mid-term, to inform operational decisions on culturally adapted activities.


    Audit

    Definition: Objective assessment of either compliance with applicable statutes and regulations ("regularity audit") or relevance, economy, efficiency and effectiveness ("performance audit").

    Purpose: Internal or external accountability.

    Quality of care assessment

    Definition: A cross-sectional study design to assess the structural, process and outcome quality of (health) care, primarily at hospital level.

    Purpose: Comprehensive picture of the quality of care in a health structure (or project), including scores for different performance levels.


    The choice of methodology for each assignment depends on the questions to be answered. In most cases it is a mix of qualitative methods (in-depth interviews, focus group discussions) and quantitative methods (numerical data analysis, surveys, structured observation).


    [i] Evaluations may be of different kinds according to:

    • their timing: prior to the action (ex ante), during it (mid-term), or after completion (ex post);

    • their doer: those involved in the action itself (self or internal evaluation), people from the organisation but not involved in the programme (semi-external), or external consultants (external);

    • their focus: on accountability (summative) or on learning and improving performance (formative);

    • their scope: at project or programme level, or transversal/thematic (across projects) on a specific subject.


    MSF Evaluation Units, 2016