The belief in the possibility of artificial intelligence (AI), given present computers, is the belief that everything essential to human intelligence can be formalized. AI has not fulfilled early expectations in pattern recognition and problem solving. These tasks cannot be formalized; they necessarily involve a nonformal form of information processing that is possible only for embodied beings, where being embodied does not merely mean being able to move and to operate manipulators. The human world, with its recognizable objects, is organized by human beings using their embodied capacities to satisfy their embodied needs. There is no reason to suppose that a world organized in terms of the body should be accessible by other means.
1. Introduction -- 2. What is Artificial Intelligence? -- 2.1. Definitions -- 2.2. History -- 2.3. State of play and future prospects -- 3. Bioethical inquiries about artificial intelligence -- 3.1. Bioethical issues common to weak and strong artificial intelligence -- 3.2. Bioethical issues resulting from strong artificial intelligence -- 3.2.1. Ontological discussions -- 3.2.2. Consequential discussions -- 4. Medicine and artificial intelligence -- 4.1. Use of artificial intelligence in health services -- 4.2. Main challenges in medical ethics -- 4.2.1. Confidentiality and privacy -- 4.2.2. Compassion, veracity and fidelity -- 4.2.3. Communication skills and case-based approach -- 5. Conclusion.
"If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, RIsks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The books evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals." -- From back cover
This book presents the reader with a comprehensive and structured understanding of the ethics of Artificial Intelligence (AI). It describes the main ethical questions that arise from the use of AI in different areas, as well as the contribution of various academic disciplines, such as legal policy, environmental sciences, and philosophy of technology, to the study of AI. AI has become ubiquitous and is significantly changing our lives, in many cases for the better, but it comes with ethical challenges. These challenges include the possibility and consequences of autonomous AI systems, privacy and data protection, the development of a surveillance society, problems with the design of these technologies, and inequalities in access to AI technologies. This book offers specialists an instrument for developing a rigorous understanding of the main debates in emerging ethical questions around AI. It will be of great relevance to experts in applied and technology ethics and to students pursuing degrees in applied ethics and, more specifically, in AI ethics.
How, from a Kantian perspective, are Artificial Intelligence (AI) and the strong claims of its philosophical proponents to be understood and evaluated? Conversely, what can be learned from AI and the way it works about the validity claims of Kantian philosophy? To answer these questions, various aspects are brought into focus, such as the self, mind, self-consciousness, ethics, law, and aesthetics.
"Featuring seventeen original essays on the ethics of Artificial Intelligence (AI) by some of the most prominent AI scientists and academic philosophers today, this volume represents the state-of-the-art thinking in this fast-growing field and highlights some of the central themes in AI and morality such as how to build ethics into AI, how to address mass unemployment as a result of automation, how to avoiding designing AI systems that perpetuate existing biases, and how to determine whether an AI is conscious. As AI technologies progress, questions about the ethics of AI, in both the near-future and the long-term, become more pressing than ever. Should a self-driving car prioritize the lives of the passengers over the lives of pedestrians? Should we as a society develop autonomous weapon systems that are capable of identifying and attacking a target without human intervention? What happens when AIs become smarter and more capable than us? Could they have greater than human moral status? Can we prevent superintelligent AIs from harming us or causing our extinction? At a critical time in this fast-moving debate, thirty leading academics and researchers at the forefront of AI technology development come together to explore these existential questions, including Aaron James (UC Irvine), Allan Dafoe (Oxford), Andrea Loreggia (Padova), Andrew Critch (UC Berkeley), Azim Shariff (Univ. of British Columbia), Carrick Flynn (Oxford), Cathy O'Neil (O'Neil Risk Consulting & Algorithmic Auditing), Eliezer Yudkowsky (Machine Intelligence Research Institute), Eric Schwitzgebel (UC Riverside), Frances Kamm (Rutgers), Francesca Rossi (IBM), Hanna Gunn (UC Merced), Iyad Rahwan (MIT), Jessica Taylor (Median Group), JF Bonnefon (Centre National de la Recherche Scientifique), K. 
Brent Venable (Tulane), Kate Devlin (King's College London), Mara Garza (UC Riverside), Nicholas Mattei (Tulane), Nick Bostrom (Oxford), Patrick LaVictoire (Lyft), Peter Asaro (The New School), Peter Railton (Michigan), S. Matthew Liao (NYU), Shannon Vallor (Santa Clara), Stephen Wolfram (Wolfram Research), Steve Petersen (Niagara), Stuart Russell (UC Berkeley), Susan Schneider (Univ. of Connecticut), Wendell Wallach (Yale)"--