Evaluating peacebuilding: not yet all it could be
In: Advancing conflict transformation: the Berghof Handbook II, pp. 459-482
"Whether an engineer, a doctor, a politician or a mother, everyone practices evaluation every day of their life. One evaluates whether the rice has finished cooking, whether the proposed law will address the problem at hand, which diagnostic tests to order in response to the patient's symptoms and whether the bridge design is adequate for extreme weather conditions. Each of these evaluative acts has a purpose in mind, requires information, and assesses that information against the context and against standards (explicit and implicit) in order to catalyze an action. Although everyone has this inherent familiarity with evaluation, in a professional setting it often becomes rife with misunderstandings, complexities and challenges.

Therefore, it is useful to start with first principles: what is evaluation? Definitions abound within the professional evaluation field, with many of the major evaluation theorists having developed their own variations (Patton 2008; Rossi et al. 1999). Almost all of these have at their core a set of common characteristics: the systematic nature of the process, competent data collection methodology, and assessment or valuing of the findings. Overlaying these characteristics with peacebuilding, one can define evaluation for our purposes as the use of social science data collection methods (including participatory processes) to investigate the quality and value of programming that addresses the core driving factors and actors of violent conflict or supports the driving factors and actors of peace (Church 2008). In this definition, 'quality' refers to the caliber of the implementation, including the conflict analysis from which a peacebuilding strategy is derived, the planning, and the implementation process (a blend of logistics, tactics and peacebuilding technique).
'Value', on the other hand, inquires into the changes associated with the intervention and their significance to the target population in terms of stopping violence or building peace. Both quality and value are essential components of program evaluation.

This chapter explores the state of the art of evaluation in peacebuilding. After reviewing recent developments (section 2) and current practice (section 3), it proposes that peacebuilding evaluations are generally not delivering accountability and learning in the manner they should, for two primary reasons. First, the average evaluation is not grounded in the basics of good evaluation practice: significantly more attention is given to responding to peacebuilding's perceived 'distinctiveness', and the challenges this distinctiveness raises, than to ensuring that the basics are covered (see section 4). Second, the core drivers of evaluation, accountability and learning, are rarely held at the heart of the process (see section 5). Section 6 offers recommendations for improving evaluation in the peacebuilding field, followed by a short conclusion.

Methodological challenges, of which there are many, are not covered in this chapter. While the field needs to address these challenges in a thoughtful manner, the issues of quality and the motivations behind evaluation lie at the foundation of evaluation practice. No improvements in methodology will fundamentally change the contribution of evaluations if these issues are not adequately addressed." (excerpt)