Accounting For Violent Conflict Risk In Planetary Defense Decisions
In: Acta Astronautica, vol. 178 (January), pages 15-23, 2020
In: Philosophy & Technology (forthcoming), DOI 10.1007/s13347-020-00416-5
In: Futures, vol. 123 (October 2020), article 102608, DOI 10.1016/j.futures.2020.102608
Working paper
In: AI & Society, vol. 35, no. 1 (March 2020)
In: James S.J. Schwartz and Tony Milligan (editors), 2016, The Ethics of Space Exploration. Berlin: Springer, pages 109-123.
In: Contemporary Security Policy, vol. 36, no. 2 (August)
In: Futures, vol. 72 (September)
In: Contemporary Security Policy, vol. 36, no. 1 (April)
In: Futures: The Journal of Policy, Planning and Futures Studies, vol. 162, article 103432
ISSN: 1873-6378
In: Risk Analysis: An International Journal
ISSN: 1539-6924
Abstract: This article presents a risk analysis of large language models (LLMs), a type of "generative" artificial intelligence (AI) system that produces text, commonly in response to textual inputs from human users. The article is specifically focused on the risk of LLMs causing an extreme catastrophe in which they do something akin to taking over the world and killing everyone. The possibility of LLM takeover catastrophe has been a major point of public discussion since the recent release of remarkably capable LLMs such as ChatGPT and GPT-4. This arguably marks the first time that actual AI systems (and not hypothetical future systems) have sparked concern about takeover catastrophe. The article's analysis compares (A) characteristics of AI systems that may be needed for takeover, as identified in prior theoretical literature on AI takeover risk, with (B) characteristics observed in current LLMs. This comparison reveals that the capabilities of current LLMs appear to fall well short of what may be needed for takeover catastrophe. Future LLMs may be similarly incapable due to fundamental limitations of deep learning algorithms. However, divided expert opinion on deep learning and the surprising capabilities found in current LLMs suggest some risk of takeover catastrophe from future LLMs. LLM governance should monitor for changes in takeover characteristics and be prepared to proceed more aggressively if warning signs emerge. Unless and until such signs emerge, more aggressive governance measures may be unwarranted.
In: AI and Ethics
ISSN: 2730-5961
In: Risk Analysis: An International Journal, vol. 42, no. 9, pages 2122-2124
ISSN: 1539-6924
In: Futures: The Journal of Policy, Planning and Futures Studies, vol. 123, article 102608
In: Philosophy & Technology, vol. 34, no. S1, pages 45-63
ISSN: 2210-5441