Finn Østrup, Jørgen Jørgensen and Jesper Zwisler: Fornyelse eller kollaps? En kritik og gentænkning af offentlig styring [Renewal or collapse? A critique and rethinking of public governance]. Frederiksberg: Samfundslitteratur (2020), 398 pages, DKK 398 (reviewed by Hanne Foss Hansen)
Systemic evaluation governance, defined as governance systems anchored in evaluation and focused on developing performance by comparing actors in organisational fields, has been increasingly institutionalised at international as well as national levels in recent years. Systemic evaluation governance takes several forms, e.g. indicator systems, benchmarking, accreditation, certification and initiatives for evidence-based professional practice. On the basis of a discussion of the concept of systemic evaluation governance, the article develops a typology of the different forms of this specific governance type and analyses and discusses the logics and premises in which they are anchored. The analysis is illustrated with empirical examples.
This article analyses the content and processes of reforms in the university sector in Denmark. It reveals radical reforms combining governance reforms, research policy reforms and educational policy reforms anchored in New Public Management ideas. The reforms introduce values that are alien to prevailing university values. They change decision-making processes and may have problematic constitutive effects on academic practice. The challenge to political science lies in the difficulty of documenting accountability, while still meeting the demand for economic value.
The purpose of this article is to shed light on the twinned development of public‐sector reforms and evaluation in Denmark. While there is a substantial literature both on public‐sector reforms and on evaluation, attention has only recently turned to the relations between these phenomena, and the case of Denmark has not been included in the international comparison. This article therefore discusses the case of Denmark, comparing it briefly to Finland, Norway and Sweden. The article analyses both 'evaluation in' and 'evaluation of' public‐sector reform. 'Evaluation in' refers to the question of the extent to which evaluation (alongside other elements such as privatization and performance management) has been part of the content of reforms. 'Evaluation of' refers to the questions of whether and how reforms themselves have been evaluated. The analysis shows that although the rhetoric of reforms has been similar in the Nordic countries, reform actions, and especially the twinned development of reforms and evaluation, have differed between the countries.
A variety of evaluation models are presented in the evaluation literature. These mostly fall into the following categories: results models, process models, system models, economic models, actor models, and programme theory models. This raises the question: 'How can evaluation sponsors and evaluators decide how to design an evaluation with so many models to choose from?' In this article, several mutually incompatible recommendations are discussed: design should be determined by the purpose of the evaluation, by the object of evaluation, or by the problem to be solved by the evaluated programme or agency. The recommendations are based on different rationales: goals-means, context-based values and programme theory. Furthermore, in practice other logics may influence evaluation design processes. Four hypotheses concerning such logics are proposed: negotiation, appropriateness, 'routine' and projection of competence.
In Denmark, the public sector is currently undergoing modernization. New organizational structures, new forms of control and changed economic conditions are the challenges to public organizations in the 1980s. In this article the barriers to the implementation of modernization are discussed in relation to the higher‐educational system. The conclusion is that the most prominent barrier to the modernization process is the planning philosophy of general public‐sector policy and educational policy of the 1970s, and not institutional resistance. Thus, implementation processes need to be discussed not only from top‐down and bottom‐up perspectives but also from a more horizontal perspective.
The evidence movement and the idea of systematic reviews, defined as summaries of the results of existing evaluation and research projects, have gained considerable support in recent years as many international as well as national evidence-producing organizations have been established. This article analyses how the idea is practised in the areas of health, social welfare and education and shows that evidence-producing organizations work differently: some subscribe to a hierarchy of evidence, others to a typology of evidence. The consequences of these variations are discussed.
The local adaptation of research evaluation at faculty and departmental level is analysed in terms of the processes through which an evaluation standard is stipulated and enacted. Three aspects of the interplay of the standard with the local evaluation context are discussed: conflict regulation; activity coordination; and processing of local problems and issues. A 'soft' standard appears as a compromise between different interested parties, which allows for some coordination between loosely coupled actors, and which functions as an open garbage can for departmental problems and issues. The evaluation standard is discussed in relation to local sense-making processes, the introduction of 'managerialism' in academia, and organization development and exploration.
Evaluation as an organizational element has spread across countries and organizational fields. Yet little attention has been paid to the comparative analysis of adoption, construction and implementation processes of evaluation within different organizational fields. The article is based on an empirical study of research evaluation and evaluation of education within the Danish university sector. It conceptualizes and explains similarities and differences in adoption processes and constructed evaluation standards within the two subfields. Three models are used for explaining differences and similarities: a stakeholder model, an institutional field model and an institutional heritage model. The article shows how evaluation as an organizational element is used simultaneously for processes of change and processes of reproduction of norms and values.
In the 1990s and 2000s, the evidence wave in evaluation practice gained momentum, first in medicine and later in other policy fields. The evidence idea inspired actors in Scandinavia, where evidence-producing organizations were established. The aim of our study is to better understand these organizations and to explore their characteristics in Denmark and Norway. We compare their organizational structures, stances towards ministries, resources and governance frameworks, and analyse organizational change over time. Although the ideas of evidence-based policy and practice are common to the two countries, the organizations and the context for evidence-producing activities differ. Furthermore, in policy fields outside medicine the evidence movement is struggling to maintain support and resources.