The book aims to show both the relevance of practical work for the advancement of social theory, and the need for a theoretical attitude, based on a capacity to be surprised, for the strengthening of democratic professionalism and the attainment of sound results.
"Recent developments in policy evaluation have focused on new notions of process and use or, notably, "influence." But this debate among evaluators on how evaluations are used has been essentially a closed one: evaluators talking only among themselves. The debate has gone on seemingly oblivious to fundamental changes in the intellectual landscape of public management, organizational theory, information technology, and knowledge management. New realities demand a different approach toward evaluation. The current era is characterized by the emergence of an increasingly global set of pressures for governments to perform effectively, not just efficiently, and to demonstrate that their performance is producing desired results. Information technology allows enormous quantities of information to be stored, sorted, analyzed, and made available at little or no cost. The result for those in the evaluation community is that, while individual evaluations are still conducted and reported upon, they are a rapidly diminishing source of information. In the new environment, ever accelerating political and organizational demands and expectations are reframing thinking about the definition of what, fundamentally, constitutes evaluation and what we understand as its applications. In this twelfth volume in the Comparative Policy Evaluation series, authors from fourteen nations address these issues from multiple vantage points. From Studies to Streams is an essential tool for policymakers, government officials, and scholars interested in the contemporary status of evaluation."--Provided by publisher.
In light of the challenges of the Anthropocene, it is common today for evaluators to propose a systems rather than a program perspective. The program perspective is reproached for its simplicity and linearity, while the systems perspective is praised for its ability to account for complexity and emergence. This article argues that even before the systems perspective appeared on the horizon, the story of program evaluation had been characterized by a confrontation between "simplifiers" and "complexifiers." It enquires into the lessons that the complexifiers of program evaluation have handed on to evaluators facing the current challenges of sustainable development. At the same time, when analyzing what a systems perspective contributes, the article is alert to the risk that a "holistic" view may ignore the lessons complexifiers have to offer. This argument is supported by considering the way in which a systems perspective is understood in evaluation practice. The conclusions invite readers to overcome a misleading opposition between the two perspectives and indicate possible cross-fertilization across different units of analysis and approaches.
This article discusses the work of Judith Tendler, a development economist, consultant, and evaluator, from the perspective of her contribution to evaluation research. It argues for her role as a forerunner of many contemporary evaluative concerns and for the enduring vitality of her unconventional views. Her reflective practice was built on the doubt and surprise that emerge during fieldwork. It aimed to surface what worked in adverse conditions, and the lessons that could be learned across development interventions and across different domains. The article hinges on the many consequences of searching for "how projects could have worked well": from challenging accepted theories, to developing specific research methods, to being an ethically oriented evaluator.
In today's turbulent world the ethical contribution of evaluators needs to be strengthened. Evaluators have been constrained in making judgments that take account of values and morality by the fact/value dichotomy and other positivist social science precepts. Too often evaluation confines itself to assessing whether objectives set by program designers are achieved. In evaluation, valuing has to support improvement and learning that encourages societal development and equips citizens and stakeholders with knowledge that supports autonomy. This Platform essay argues that past efforts have over-emphasized the ethical conduct of the evaluator. There has been some extension to include broader societal concerns, e.g. the public good and democratic accountability, and more recently considerations of evaluators' ethical and moral expertise. However, there has been less attention to judging what constitutes good public policy, including the relevant criteria and their use. Evaluator judgments must of course include facts as well as values, for the two are intertwined. This opens up the question of how others have understood the prerequisites of a "moral social science."
This essay is a brief introduction to Ray Pawson's work. His contribution is considered from the viewpoint of the development of the realistic evaluation approach and the in-depth study of its two pillars: context and mechanism. In particular, the theme of mechanism is addressed in light of its broad meaning, which includes mechanisms that at first glance seem counterintuitive, but in fact correspond to the difficult path of program realization.
Current debates on impact evaluation have addressed the question 'what works and what doesn't?' mainly by focussing on methodology failures in providing evidence of impact. In order to answer that question, this article contrasts different approaches to evaluation in terms of the way they address different kinds of possible failures. First, there is more to be debated than simply methodological failures: there are also programme theory failures and implementation failures. Moreover, not all methodological failures are a simple matter of selection bias. Second, the article reviews issues that have recently been raised within different approaches relative to each failure. For programme theory failure, it is a matter of complexity and providing rival explanations; for implementation failure, how to use guidelines and how to take context into account; and for methodology failure, how to move from internal to external validity, and to syntheses, within the framework of 'situational responsiveness'. All these issues disclose a terrain for potential exchange between the protagonists of different approaches to impact evaluation.
Recent debates on Evidence-Based Policy have discussed its methodological assumptions about how to establish evidence, that is, the hierarchy of robustness of the methods used to evaluate policy effects. Less attention has been paid to the way a policy is conceived within this approach, namely as a treatment administered to a patient (target) in order to obtain a specific result, with the further aim of generalization. This conception is expressed within an epidemiological disciplinary framework, and presupposes counterfactual analysis as the gold standard, which ignores decision-making and implementation processes and the role of stakeholders. Both this way of conceiving policy and this way of addressing the different phases of the policy cycle contrast with other methodologies based on different disciplinary and theoretical premises, largely drawn from political science. Approaches based on "synoptic rationality" regard policies as interventions following a goals-means-results logic, treat the three phases as separate moments, and adopt goal-oriented evaluation. Approaches based on "incremental rationality" view policies as a set of complex relations that develop during implementation, in the course of which policies are redefined. The different approaches are compared on the basis of how they consider what happens during decision-making, programme implementation, and evaluation; what evaluating effects means (causality); how they address context; and how they allow for the participation of beneficiaries.
The US federalist system has been at the origin of evaluation in many ways: providing a laboratory for the experimentation of innovative policies, and requiring evaluation of the growing federal intervention from the War on Poverty onwards. Evaluation approaches have been developed that took into consideration the multiple forms of collaboration activated by the federalist system. This experience offers a benchmark for considering the evaluation of EU policies and programmes, as it has been elicited by the complex system of EU governance, which encompasses federalist trends both towards concentration of powers at the centre and towards autonomy at the lower levels. First, the article analyses the effects of the centralistic administrative culture of the EU on the way evaluations are conducted at the different levels. Second, it focuses on how European value added is assessed in the evaluation of two main mechanisms of EU governance that imply a partnership between different jurisdictions: the Structural Funds programmes and the Open Method of Coordination. To do this, it discusses a growing literature of evaluations that utilize theories of power relations, from multilevel governance and networks to principal-agent theory.