European agencies have been created at a rapid pace in recent years in a multitude of diverse fields from pharmaceuticals to financial supervision. This book examines how the accountability system of these agencies operates formally and in practice, studying the legal provisions in place and their de facto implementation.
Artificial intelligence (AI) algorithms govern in subtle yet fundamental ways the way we live and are transforming our societies. The promise of efficient, low‐cost, or "neutral" solutions harnessing the potential of big data has led public bodies to adopt algorithmic systems in the provision of public services. As AI algorithms have permeated high‐stakes aspects of our public existence—from hiring and education decisions to the governmental use of enforcement powers (policing) or liberty‐restricting decisions (bail and sentencing)—this necessarily raises important accountability questions: What accountability challenges do AI algorithmic systems bring with them, and how can we safeguard accountability in algorithmic decision‐making? Drawing on a decidedly public administration perspective, and given the current challenges that have thus far become manifest in the field, we critically reflect on and map out in a conceptually guided manner the implications of these systems, and the limitations they pose, for public accountability.
EU agencies are increasingly subject to a flurry of stakeholder bodies. Despite their prevalence, and the considerable variation in structures formally professed to serve the same purpose, we know little about the actor preferences driving the set-up of such structures or the potential implications of specific institutional design choices. We systematically map structural variations across EU agencies and analyse to what extent the establishment and the design of stakeholder bodies is principal-imposed or agency-initiated. Do stakeholder structures enhance political control serving to broadly legitimise agencies, or to the contrary, do they reflect preferences of the bureaucratic actors they are meant to control, and with what implications? We find that, for the most part, weak principal control and steering leaves it to the agencies themselves to design stakeholder bodies as they see fit. This has the potential to introduce unsanctioned biases in favour of specific groups, potentially depleting rather than bolstering legitimacy. A major implication of EU agencies' stakeholder engagement is that the agency model is currently in flux, moving away from the classic insulated agency towards greater politicization in regulatory policy.
Accountability is said to be about the management of expectations. Empirical studies reveal considerable variation in organizational interest, intensity, and investment in accountability relationships. Less is known, however, about what explains these observed variations. Drawing on the accountability and reputation‐concerned literatures, this article argues that a reputation‐based perspective on accountability offers an underlying logic that explains how account‐giving actors and account‐holding forums actually manage these expectations, and how organizations make sense of and prioritize among accountability responsibilities. Reputational considerations act as a filtering mechanism for external demands and help account for variations in degrees of interest in, and intensity of, accountability. Accountability outcomes are thus coproduced by the reputational investment of both account‐giver and account‐holder, yielding distinct accountability constellations and outcomes.