The autumn of humanity -- Part I. Automation: The case for technological unemployment -- Why you should hate your job -- Giving techno-pessimism its due -- Part II. Utopia: In search of utopia -- The cyborg utopia -- The virtual utopia -- The unending quest.
Abstract: What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each shares the view that technology plays a key role in ensuring that the good prevails over the bad. Whatever its strength, to defend this stance, one must flesh out an argument with four key premises. Each of these premises is highly controversial and can be subjected to a number of critiques. The paper discusses five such critiques in detail (the values critique, the treadmill critique, the sustainability critique, the irrationality critique and the insufficiency critique). The paper also considers possible responses from the techno-optimist. Finally, it is concluded that although strong forms of techno-optimism are not intellectually defensible, a modest, agency-based version of techno-optimism may be defensible.
Abstract: There is a concern that the widespread deployment of autonomous machines will open up a number of 'responsibility gaps' throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on 'plugging' or 'dissolving' the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions and it is impossible to perfectly balance these considerations. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic (illusionism); sometimes we delegate the tragic choice to others (delegation); sometimes we make the choice ourselves and bear the psychological consequences (responsibilisation). Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced-cost form of delegation. However, we only gain the advantage of this reduced cost if we accept that some techno-responsibility gaps are virtuous.
Abstract: Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency, they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people's ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.
Abstract: AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risks, it will be difficult to manage them properly, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.
Abstract: Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate 'ethics of X' or 'X ethics' for each and every subtype of technology or technological property—e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics. Specific technologies might have specific impacts, but we argue that they are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well-defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is, before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers and engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which will, in combination with the hierarchy, promote the leveraging of existing knowledge and help us to avoid an exaggerated proliferation of tech ethics.
Perspectives from philosophy, psychology, religious studies, economics, and law on the possible future of robot-human sexual relationships. Sexbots are coming. Given the pace of technological advances, it is inevitable that realistic robots specifically designed for people's sexual gratification will be developed in the not-too-distant future. Despite popular culture's fascination with the topic, and the emergence of the much-publicized Campaign Against Sex Robots, there has been little academic research on the social, philosophical, moral, and legal implications of robot sex. This book fills the gap, offering perspectives from philosophy, psychology, religious studies, economics, and law on the possible future of robot-human sexual relationships. Contributors discuss what a sex robot is, whether such robots exist, why we should take the issue seriously, and what it means to "have sex" with a robot. They make the case for developing sex robots, arguing for their beneficial nature, and the case against it, on religious and moral grounds; they consider the subject from the robot's perspective, addressing such issues as consent and agency; and they ask whether it is possible for a human to form a mutually satisfying, loving relationship with a robot. Finally, they speculate about the future of human-robot sexual interaction, considering the social acceptability of sex robots and the possible effect on society.
Abstract: Assume that a researcher uncovers a major problem with how social media are currently used. What sort of challenges arise when they must subsequently decide whether or not to use social media to create awareness about this problem? This situation routinely occurs as ethicists navigate choices regarding how to effect change and potentially remedy the problems they uncover. In this article, challenges related to new technologies and what is often referred to as 'Big Tech' are emphasized. We present what we refer to as the AI ethicist's dilemma, which emerges when an AI ethicist has to consider how their own success in communicating an identified problem is associated with a high risk of decreasing the chances of successfully remedying the problem. We examine how the ethicist can resolve the dilemma and arrive at ethically sound paths of action through combining three ethical theories: virtue ethics, deontological ethics and consequentialist ethics. The article concludes that attempting to change the world of Big Tech only using the technologies and tools they provide will at times prove to be counter-productive, and that political and other more disruptive avenues of action should also be seriously considered by ethicists who want to effect long-term change. Both strategies have advantages and disadvantages, and a combination might be desirable to achieve these advantages and mitigate some of the disadvantages discussed.
In: Sætra, H.S., Coeckelbergh, M. & Danaher, J. The AI ethicist's dilemma: fighting Big Tech by supporting Big Tech. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00123-7
This article argues that access to meaningful sexual experience should be included within the set of the goods that are subject to principles of distributive justice. It argues that some people are currently unjustly excluded from meaningful sexual experience and it is not implausible to suggest that they might thereby have certain claim rights to sexual inclusion. This does not entail that anyone has a right to sex with another person, but it does entail that duties may be imposed on society to foster greater sexual inclusion. This is a controversial thesis and this article addresses this controversy by engaging with four major objections to it: the misogyny objection; the impossibility objection; the stigmatisation objection; and the unjust social engineering objection.