Apprehending AI moral purpose in practical wisdom
In: AI & society: the journal of human-centred systems and machine intelligence, Vol. 39, Issue 3, pp. 1335-1348
ISSN: 1435-5655
In: State Government: journal of state affairs, Vol. 12, pp. 117-118
ISSN: 0039-0097
In: The annals of the American Academy of Political and Social Science, Vol. 183, Issue 1, pp. 189-198
ISSN: 1552-3349
In: State Government: journal of state affairs, pp. 12-13
ISSN: 0039-0097
In: The annals of the American Academy of Political and Social Science, Vol. 129, Issue 1, pp. 16-20
ISSN: 1552-3349
In: Proceedings of the Academy of Political Science in the City of New York, Vol. 11, Issue 1, p. 95
In: Philosophy & technology, Vol. 35, Issue 2
ISSN: 2210-5441
In: AI and ethics, Vol. 2, Issue 4, pp. 801-814
ISSN: 2730-5961
Abstract: In the past few years, machine learning (ML) tools have been successfully implemented in the medical context. However, several practitioners have raised concerns about the lack of transparency of many of these tools at the algorithmic level, and solutions from the field of explainable AI (XAI) have been seen as a way to open the 'black box' and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London's position. In particular, we make two claims. First, we claim that London's solution to the problem of trust can potentially address another problem: how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London's views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes, and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from the philosophy of technology and the philosophy of science, we elaborate a framework showing what an explanation of the training processes of ML tools in medicine should look like.
In: Philosophy & technology, Vol. 34, Issue 4, pp. 1819-1846
ISSN: 2210-5441
Abstract: In the past few years, the ethical ramifications of AI technologies (in particular data science) have been at the center of intense debates. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values have to shape it. In this context, ethics and moral responsibility have been mainly conceptualized as compliance with widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing on microethics and the virtue theory tradition, in this paper we formulate a different approach to ethics in data science, one based on a different conception of "being ethical" and, ultimately, of what it means to promote a morally responsible data science. First, we develop the idea that, rather than mere compliance, ethical decision-making consists in using certain moral abilities (e.g., virtues), which are cultivated by practicing and exercising them in the data science process. An aspect of virtue development that we discuss here is moral attention, the ability of data scientists to identify the ethical relevance of their own technical decisions in data science activities. Next, by elaborating on the capability approach, we define a technical act as ethically relevant when it impacts one or more of the basic human capabilities of data subjects. Therefore, rather than "applying ethics" (which can be mindless), data scientists should cultivate ethics as a form of reflection on how technical choices and ethical impacts shape one another. Finally, we show how this microethical framework concretely works, by dissecting the ethical dimension of the technical procedures involved in the understanding and preparation of electronic health records.
Codified at Salisbury Cathedral in southern England during the twelfth and thirteenth centuries, the Use of Salisbury, or Sarum Use, was an influential model of church structure and liturgy across England until the middle of the sixteenth century. Within this model, certain distinctive features are observable, including patterns of ritual observance in such liturgies as the Office of Compline, prayed at the end of the day. This study examines the structure of the Sarum Compline Office, and the phenomenon of polyphonic music created for its observance, during the forty-day penitential season of Lent. This thesis specifically explores ways in which composers of the early Tudor era, working in the shadow of its turbulent politics, treated the ritual musically in terms of the creation and enactment of polyphonic music within the context of normative plainchant, in turn considering how the ritual format, architectural style, and acoustical characteristics of spaces such as Salisbury Cathedral shaped the performative contexts of this composed music. The study focuses on the following questions: (1) Based on ritual indications from both the printed sources of the period ca. 1500 and the older manuscript sources of Salisbury Cathedral, how was the chanted Office of Compline enacted liturgically, musically, and spatially during the Lenten season? (2) What items of composed polyphony exist specifically for performance in the Sarum Use for Compline during Lent? (3) How did the use of this composed polyphony inform the ritual and liturgical ceremonial discussed above? (4) What do the ritual rubrics of the Sarum Use tell us about the performance practices of the polyphony? (5) What do the local contexts of the parish church, cathedral church, university/collegiate chapel, and household chapel(s) bring to bear on the performance practices of this ritual music?
The study concludes with recommendations for further research, and a performance edition of the plainchant for the Compline Office during the Lenten season, as well as transcriptions of selected polyphonic compositions (by John Taverner, John Sheppard, Robert White, Philip Alcock, John Norman, Philip ap Rhys, and John Redford) associated with the Compline ritual from the period ca. 1485-1558.
This book provides an integrative interdisciplinary view of how intellectual and moral virtues are understood in two separate practices, science and music. The authors engage with philosophical and psychological accounts of virtue to understand scientists' and musicians' understandings of intellectual and moral virtues. They present empirical evidence substantiating the MacIntyrean claim that traditions and practices are central to understanding the virtues.
In: Palgrave pivot
Timothy S. Reilly is Assistant Professor of Psychology at Ave Maria University. He studies virtue, purpose, and moral development. His work draws on moral, developmental, personality, and educational psychology to understand how individuals and groups realize their potential. Keke Kaikhosroshvili is a learning scientist and project manager with Zavvy, an employee enablement platform. Stefanie Israel de Souza is Assistant Professor of Sociology and Criminology and Justice Studies at Cal Poly Humboldt, US. Mark Graves, a former research assistant professor at the University of Notre Dame, US, holds a PhD in computer science and has completed fellowships in genomics, moral psychology, and moral theology. Darcia Narvaez is Professor Emerita of Psychology at the University of Notre Dame.