Search Results
5 results
Challenges of Aligning Artificial Intelligence with Human Values
In: Acta Baltica historiae et philosophiae scientiarum: ABHPS, Vol. 8, No. 2, pp. 54-72
ISSN: 2228-2017
As artificial intelligence (AI) systems become increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI 'value alignment problem' faces two kinds of challenges, a technical one and a normative one, which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: which values, and whose values, should artificial intelligence align with? My concern is that AI developers underestimate the difficulty of answering the normative question. They hope that we can easily identify the purposes we really desire and that they can then focus on designing for those objectives. But how are we to decide which objectives or values to instill in AI, given that there is a plurality of values and moral principles and that our everyday life is full of moral disagreements? In my paper I will show that although it is not realistic to reach an agreement on what we humans really want, since people value different things and seek different ends, it may be possible to agree on what we do not want to happen, considering the possibility that intelligence equal to our own, or even exceeding it, can be created. I will argue for pluralism (and not for relativism!), which is compatible with objectivism. In spite of the fact that there is no uniquely best solution to every moral problem, it is still possible to identify which answers are wrong. And this is where we should begin the value alignment of AI.
From Identity Verification to Behavior Prediction: Ethical Implications of Second Generation Biometrics
In: Review of policy research, Vol. 29, No. 1, pp. 21-36
ISSN: 1541-1338
In this article, we compare ethical issues raised by first and second generation biometrics. First generation biometrics use characteristics readily visible to the naked eye to ensure that the person identified is the person he claims to be, whereas second generation biometrics focus on behavioral patterns with the aim of predicting suspicious behavior or hostile intentions. While the collection of biometric features for identification is visible to the person involved, capturing biometric features from a distance may go unnoticed. Our study of a range of U.S. and European projects of second generation biometrics, particularly of Future Attribute Screening Technology and Automatic Detection of Abnormal Behaviour and Threats in crowded Spaces, shows that if data subjects are not aware of the processing of their data, and if behavioral patterns are interpreted without any knowledge of the subject's will and motives, there are risks of discrimination and stigmatization. Thus, second generation biometrics raise new ethical concerns beyond issues of integrity, privacy, and data protection, and further underscore the importance of the principle of informed consent in order to maintain public trust.