Advanced AI Governance: A Literature Review of Problems, Options, and Proposals
In: AI Foundations Report 4
In: AI Foundations Report 4
SSRN
In: AI Foundations Report 2
SSRN
In: AI Foundations Report 3
SSRN
In: PhD Dissertation
SSRN
In: Justin Bullock, Baobao Zhang, Yu-Che Chen, Johannes Himmelreich, Matthew Young, Anton Korinek & Valerie Hudson (eds.), Oxford Handbook on AI Governance (Oxford University Press, forthcoming 2022).
SSRN
SSRN
In: Journal of International Humanitarian Legal Studies, vol. 10, issue 1, pp. 129-157
ISSN: 1878-1527
Amidst fears over artificial intelligence 'arms races', much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems (LAWS). Yet 'killer robots' hardly exhaust the potentially problematic capabilities that innovation in military AI (MAI) is set to unlock. Governance initiatives narrowly focused on preserving 'meaningful human control' over LAWS therefore risk being bypassed by the technological state-of-the-art. This paper departs from the question: how can we formulate 'innovation-proof governance' approaches that are resilient or adaptive to future developments in military AI? I develop a typology for the ways in which MAI innovation can disrupt existing international legal frameworks. This includes 'direct' disruption – as new types of MAI capabilities elude categorization under existing regimes – as well as 'indirect' disruption, where new capabilities shift the risk landscape of military AI, or change the incentives or values of the states developing them. After discussing two potential objections to 'innovation-proof governance', I explore the advantages and shortcomings of three possible approaches to innovation-proof governance for military AI. While no definitive blueprint is offered, I suggest key considerations for governance strategies that seek to ensure that military AI remains lawful, ethical, stabilizing, and safe.
In: Contemporary Security Policy, vol. 40, issue 3, pp. 285-311
ISSN: 1743-8764
In: Journal of International Humanitarian Legal Studies 10, no. 1 (2019): 129–57. https://doi.org/10.1163/18781527-01001006.
SSRN
In: AI and Ethics, vol. 1, issue 3, pp. 261-271
ISSN: 2730-5961
Abstract: Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize 'near-term' concerns and those focusing on 'long-term' concerns and corresponding policy measures. In this paper, we seek to examine this alleged 'gap', with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an 'incompletely theorized agreement' to bridge some underlying disagreements, in the name of important cooperation on addressing AI's urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.
In: AI Foundations Report 1
SSRN
In: Futures, vol. 126, art. no. 102672
In: Futures 126 (February 1, 2021): 22. https://doi.org/10.1016/j.futures.2020.102672.
SSRN
Working paper
In: AI and Ethics
ISSN: 2730-5961
Abstract: Disruptive technologies can have far-reaching impacts on society. They may challenge or destabilize cherished ethical values and disrupt legal systems. There is a convergent interest among ethicists and legal scholars in such "second-order disruptions" to norm systems. Thus far, however, ethical and legal approaches to technological norm-disruption have remained largely siloed. In this paper, we propose to integrate the existing 'dyadic' models of disruptive change in the ethical and legal spheres, and shift focus to the relations between and mutual shaping of values, technology, and law. We argue that a 'triadic' values-technology-regulation model—"the technology triad"—is more descriptively accurate, as it allows a better mapping of second-order impacts of technological changes (on values and norms, through changes in legal systems—or on legal systems, through changes in values and norms). Simultaneously, a triadic model serves to highlight a broader portfolio of ethical, technical, or regulatory interventions that can enable effective ethical triage of—and a more resilient response to—such Socially Disruptive Technologies. We illustrate the application of the triadic framework with two cases, one historical (how the adoption of the GDPR channeled and redirected the evolution of the ethical value of 'privacy' when that had been put under pressure by digital markets), and one anticipatory (looking at anticipated disruptions caused by the ongoing wave of generative AI systems).
In: SJ Beard, Martin Rees, Catherine Richards & Clarissa Rios-Rojas (eds.), The Era of Global Risk (Open Book Publishers, 2023).
SSRN