Despite the abundance of research and analyses conducted, their actual use in the design and implementation of public policy remains quite limited. Recent research indicates that knowledge brokering is an effective strategy for strengthening the use of research findings in public policy practice. The article presents an educational innovation that enables the teaching of knowledge brokering through practice: a workshop based on a simulation game. Experience to date with using the Knowledge Brokers (Brokerzy wiedzy) game to train public policy analysts from Poland, the United States, and Canada confirms that the game helps them: (1) understand the role of research and analysis findings in public policy, (2) master the six key skills of a knowledge broker, and (3) understand the broker's limitations in influencing decision-making processes. Public administration institutions can use Knowledge Brokers for the practical training of their analytical staff and for raising their organizational capacity to pursue evidence-based public policy.
Despite the plethora of research and analyses conducted, their actual use in the design and implementation of public policies is quite limited. The latest research indicates that knowledge brokering is an effective strategy for strengthening the use of public policy research results in practice. The article demonstrates the use of an educational innovation enabling the teaching of knowledge brokering through practice: a simulation game-based workshop. Past experience with using the "Knowledge Brokers" game to teach public sector professionals in Poland, the United States, and Canada has confirmed that the game helps them to: (1) understand the role of public policy research and analysis results, (2) master the six key skills of knowledge brokers, and (3) understand the broker's limitations in influencing the decision-making process. Public administration institutions can make use of "Knowledge Brokers" for the practical training of their analytical personnel and for raising their organizational capacity to carry out evidence-based public policies.
Evaluation in the Post-Truth World explores the relationship between the nature of evaluative knowledge, the increasing demand in decision-making for evaluation and other forms of research evidence, and the post-truth phenomena of anti-science sentiments combined with the illiberal tendencies of the present day. Rather than offer a checklist on how to deal with post-truth, the experts found herein wish to raise awareness and reflection throughout policy circles on the factors that influence our assessment and policy-related work in such a challenging environment. Journeying alongside the editor and contributors, readers benefit from three guiding questions that help identify specific challenges, as well as tools to deal with such challenges: How are policy problems conceptualized in the current political climate? What is the relationship between expertise and decision-making in today's political circumstances? How complex has evaluation become as a social practice? Evaluation in the Post-Truth World will benefit evaluation practitioners at the program and project levels, as well as policy analysts and scholars interested in applications of evaluation in the public policy domain.
Evaluation practice is essential for the accountability and learning of administrations implementing complex policies. This article explores the relationships between the structures of evaluation systems and their functions. The findings are based on a comparative analysis of six national systems responsible for evaluating the European Union Cohesion Policy. The study identifies three types of evaluation system structure: centralized with a single evaluation unit, decentralized with a coordinating body, and decentralized without a coordinating body. These systems differ in terms of the thematic focus of evaluations and the targeted users. Decentralized systems focus on internal users of knowledge and mainly produce operational studies; their primary function is inward-oriented learning for smooth programme implementation. Centralized systems fulfil a more strategic function, recognizing the external audience and external accountability for effects.

Points for practitioners
Practitioners who design multi-organizational evaluation systems should bear in mind that their structure and functions are interrelated. If both accountability and learning are desired, the evaluation system needs a minimum degree of decentralization on the one hand, and the presence of an active and independent coordinating body on the other.
In 2017, the U.S. Commission on Evidence‐Based Policymaking recommended that federal agencies produce strategic plans focused on research and evaluation, referred to as learning agendas. This requirement was later incorporated into the Foundations for Evidence‐Based Policymaking Act of 2018 (Evidence Act) for the 24 largest federal agencies. Prior to the Evidence Act, only a few federal agencies had experimented with learning agendas, a relatively new concept in the evaluation literature. Learning agendas hold potential for supporting organizational strategic planning that focuses on the generation of relevant knowledge for decision‐makers, organizational leaders, and stakeholders. An inclusively‐ and strategically‐developed learning agenda provides a list of important questions as well as plans for addressing the questions, balancing the interests, informational needs, and time horizons of different organizational decision‐makers. We draw upon the policy design and evaluation capacity building literature, our analysis of existing learning agendas, and interviews with the federal evaluation leaders who guided their development to describe how the process of developing a learning agenda can support intentional learning and impactful evaluation practice within public agencies. Our work should contribute to the development of both theory and practice regarding the implementation of the new expectation that federal agencies produce learning agendas, thereby contributing to the increased use of evaluation and evidence in policymaking.
Evaluation practice is vital for the accountability and learning of administrations implementing complex policies. This article explores the relationships between the structures of evaluation systems and their functions. The findings are based on a comparative analysis of six national systems executing evaluation of the European Union Cohesion Policy. The study identifies three types of evaluation system structure: centralized with a single evaluation unit, decentralized with a coordinating body, and decentralized without a coordinating body. These systems differ in terms of the thematic focus of evaluations and the targeted users. Decentralized systems focus on internal users of knowledge and produce mostly operational studies; their primary function is inward-oriented learning about smooth programme implementation. Centralized systems fulfil a more strategic function, recognizing the external audience and external accountability for effects.

Points for practitioners
Practitioners who design multi-organizational evaluation systems should bear in mind that their structure and functions are interrelated. If both accountability and learning are desired, the evaluation system needs at least a minimum degree of decentralization on the one hand and the presence of an active and independent coordination body on the other.
Evaluation units, located within public institutions, are important actors responsible for the production and dissemination of evaluative knowledge in complex programming and institutional settings. The current evaluation literature does not adequately explain their role in fostering better evaluation use. The article offers an empirically tested framework for the analysis of the role of evaluation units as knowledge brokers. It is based on a systematic, interdisciplinary literature review and empirical research on evaluation units in Poland within the context of the European Union Cohesion Policy, with complementary evidence from the US federal government and international organizations. In the proposed framework, evaluation units are to perform six types of brokering activities: identifying knowledge users' needs, acquiring credible knowledge, feeding it to users, building networks between producers and users, accumulating knowledge over time and promoting an evidence-based culture. This framework transforms evaluation units from mere buyers of expertise and producers of isolated reports into animators of reflexive social learning that steer streams of knowledge to decision makers.