When many people are involved in an activity, it is often difficult, if not impossible, to pinpoint who is morally responsible for what, a phenomenon known as the 'problem of many hands.' This term is increasingly used to describe problems with attributing individual responsibility in collective settings in such diverse areas as public administration, corporate management, law and regulation, technological development and innovation, healthcare, and finance. This volume provides an in-depth philosophical analysis of this problem, examining the notion of moral responsibility and distinguishing
Abstract This commentary is a response to "Contextual Integrity as a General Conceptual Tool for Evaluating Technological Change" by Elizabeth O'Neill (Philosophy & Technology, 2022). It argues that while contextual integrity (CI) might be a useful addition to the toolkit of approaches for ethical technology assessment, a CI approach might not be able to uncover all morally relevant impacts of technological change. Moreover, the inherent conservatism of a CI approach might be problematic in cases in which we encounter new kinds of morally problematic situations, such as climate change, or when technology reinforces historically grown injustices.
Abstract We propose a pragmatist account of value change that helps to understand how and why values sometimes change due to technological developments. Inspired by John Dewey's writings on value, we propose to understand values as evaluative devices that carry over from earlier experiences and that are to some extent shared in society. We discuss the various functions that values fulfil in moral inquiry and propose a conceptual framework that helps to understand value change as the interaction between three manifestations of value distinguished by Dewey, i.e., "immediate value," "values as the result of inquiry" and "generalized values." We show how this framework helps to distinguish three types of value change: value dynamism, value adaptation, and value emergence, and we illustrate these with examples from the domain of technology. We argue that our account helps to better understand how technology may induce value change, namely through the creation of what Dewey calls indeterminate situations, and we show how our account can integrate several insights on (techno)moral change offered by other authors.
The ongoing energy transition toward more sustainable energy systems implies a change in the values for which such systems are designed. The energy transition, however, is not just about sustainability but also about values like energy security and affordability, and we witness the emergence of new values like energy justice and energy democracy. How can we understand such value changes, and how can or should they affect the design of future energy systems? This introduction to the special section on value change in energy systems introduces the main themes and questions. It discusses different understandings of values and value change, explains why the topic is important, and shows how it can be studied methodologically.
Abstract Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that 'disembody' the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and values that are respected by the design, to ensure outcomes that not only do no harm but also contribute to good; and (3) extending the VSD process to encompass the whole life cycle of an AI technology, to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.
Abstract The notion of responsible innovation suggests that innovators carry additional responsibilities (to society, stakeholders, users) beyond those commonly suggested. In this paper, we will discuss the meaning of these novel responsibilities, focusing on two philosophical problems of attributing such responsibilities to innovators. The first is the allocation of responsibilities to innovators: innovation is a process that involves a multiplicity of agents and unpredictable, far-reaching causal chains from innovation to social impacts, which creates great uncertainty. A second problem is constituted by possible trade-offs between different kinds of responsibility; attributing backward-looking responsibility for product failures, for example, may diminish the willingness to learn about such defects and to take forward-looking responsibility. We will argue that these problems can be overcome by elaborating what exactly it is that innovators are responsible for. First, we will distinguish more clearly between holding responsible and taking responsibility, which opens a space for 'supererogatory' responsibilities. Second, we will argue that both innovation processes and outcomes can be objects of innovators' responsibility. Third, we will analyze different kinds of responsibility (blameworthiness, accountability, liability, obligation and virtue) and show that the functions of their attribution are not necessarily contradictory. Based on this conceptual refinement, we will argue that accountability, responsibility-as-virtue and the willingness to take responsibility are crucial for responsible innovation.
In: Journal of Risk Research: The Official Journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan, Vol. 18, No. 3, pp. 267-272
In this article, we develop an approach for the moral assessment of research and development (R&D) networks on the basis of the reflective equilibrium approach proposed by Rawls and Daniels. The reflective equilibrium approach aims at coherence between moral judgments, principles, and background theories. We use this approach because it takes seriously the moral judgments of the actors involved in R&D, while it also leaves room for critical reflection about these judgments. It is shown that two norms, namely 'reflective learning' and 'openness and inclusiveness', which are used in the literature on policy and technological networks, contribute to achieving a justified overlapping consensus. We apply the approach to a case study about the development of an innovative sewage treatment technology and show how, in this case, the two norms are or could be instrumental in achieving a justified overlapping consensus on relevant moral issues.
Engineering ethics and science and technology studies (STS) have until now developed as separate enterprises. The authors argue that they can learn a lot from each other. STS insights can help make engineering ethics open the black box of technology and help discern ethical issues in engineering design. Engineering ethics, on the other hand, might help STS to overcome its normative sterility. The contributions in this special issue show in various ways how the gap between STS and engineering ethics might be overcome. In this editorial introduction, the authors discuss the various contributions briefly and delve into the way the various authors conceptualize the engineering design process and the consequences of those conceptualizations for what ethical issues become visible. They also discuss the implications for the responsibility of engineers for technological development.
The authors explore whether the need for ethical reflection on the part of designing engineers is dependent on the type of design process. They use Vincenti's (1992) distinction between normal and radical design and different levels of design hierarchy. These two dimensions are coupled with the concept of ill-structured problems, which are problems in which possible solutions cannot be ordered on a scale from better to worse. Design problems are better structured at lower hierarchical levels and in cases of normal design. Better structured design problems require less ethical reflection on the part of designing engineers if such situations are characterized by the existence of generally accepted normative frameworks. Engineers could then deal with moral problems within the bounds of such frameworks and without the need for further reflection. On the basis of a number of empirical cases, the authors explore whether these ideas hold water. They discuss four cases ranging from a high-level radical design process to a low-level normal design process.
It is well over a decade since John Fischer and Mark Ravizza - and before them, Jay Wallace and Daniel Dennett - defended responsibility from the threat of determinism. But defending responsibility from determinism is a potentially endless and largely negative enterprise; it can go on for as long as dissenting voices remain, and although such work strengthens the theoretical foundations of these theories, it won't necessarily build anything on top of those foundations, nor will it move these theories into new territory or explain how to apply them to practical contexts. To this end, the papers
Abstract Algorithms and AI tools are becoming increasingly influential artefacts in commercial and governance contexts. Algorithms and AI tools are not value neutral; to some extent, they must be rendered knowable and known as objects, and in their implementation and deployment, if we are to see clearly and understand their implications for moral values, determine what actions can be undertaken to optimise them in their design and use towards ethical goals, and judge whether they are even suitable for particular goals. Transparency is a term with variable uses and interpretations, a problem which can challenge its use in design and policy. Here, we attempt to further clarify transparency. We argue that transparency is the state of affairs that obtains when relevant and understandable information about some X is available and accessible to some target audience (A), so that this information is sufficient for A for the purpose (P). Moreover, we connect this conceptualisation with transparency's moral value, where P is to provide an account of X's supportive or conflicting relationship with relevant values and goals. Such teleological ends in our context can be the ability to account for the degree to which an algorithm, process or organisation respects certain values and is conducive to (social) goals.
Abstract The collection and use of personal data on citizens in the design and deployment of algorithms in the domain of justice and security is a sensitive topic. Values like fairness, autonomy, privacy, accuracy, transparency and property are at stake. Negative examples of algorithms that propagate or exacerbate biases, inequalities or injustices have received ample attention, both in academia and in popular media. To supplement this view, we will discuss two positive examples of Responsible Innovation (RI): the design and deployment of algorithms in decision support, with good intentions and careful approaches. We then explore potential, unintended, undesirable, higher-order effects of algorithms: effects that may occur despite good intentions and careful approaches. We do that by engaging with anticipation and responsiveness, two key dimensions of Responsible Innovation. We close the paper by proposing a framework and a series of tentative recommendations to promote anticipation and responsiveness in the design and deployment of algorithms in decision support in the domain of justice and security.
"The development and introduction of a new technology to society can be viewed as an experimental process, full of uncertainties, which are only gradually reduced as the technology is employed. Unexpected developments may trigger an experimental process in which society must find new ways to deal with the uncertainties posed. This book explores how the experimental perspective determines what ethical issues new technologies raise and how it helps morally evaluate their introduction. Expert contributors highlight the uncertainties that accompany the process, identify the social and ethical challenges they give rise to, and propose strategies to manage them. Focusing on the introduction of new technologies and experimentation as ways to perceive new developments and changing contexts, a key theme of the book is how to approach the moral issues raised by new technology and understand the role of experimentation in exploring these matters. "--Provided by publisher.