Social Justice and Market Metaphysics: A Critical Discussion of Philosophical Approaches to Digital Divides
In: Information Technology and Social Justice
In: Information Technology and Social Justice
In: Information Technology and Social Justice, pp. 148-170
In: Electronic Government Strategies and Implementation, pp. 1-19
In: International journal of information management, Vol. 74, p. 102700
ISSN: 0268-4012
In: AI and ethics, Vol. 3, No. 3, pp. 745-767
ISSN: 2730-5961
Abstract
Ethical and social concerns are a key obstacle to the adoption of artificial intelligence (AI) in the life sciences and beyond. The discussion of these issues has intensified in recent years and led to a number of approaches, tools and initiatives. Key amongst them is the idea of ex-ante impact assessments that aim to identify issues at the early stages of development. One prominent example of such ex-ante impact assessment is the European Union's (EU) Assessment List for Trustworthy AI (ALTAI). This article uses the findings of a large-scale application of the ALTAI to a large neuro-informatics project as an exemplar to demonstrate the effectiveness and limitations of the ALTAI in practice. The article shows that ex-ante impact assessments have the potential to help identify and address ethical and social issues. However, they need to be understood as part of a broader socio-technical ecosystem of AI. For ALTAI and related approaches to be useful in biomedical research, they should be interpreted from a systems theory perspective, which allows for their integration into the rich set of existing tools, legislation and approaches. The paper argues that ex-ante impact assessments have the best chance of being successful if applied in conjunction with other approaches in the context of the overall AI ecosystem.
In: Journal of global information technology management: JGITM, Vol. 7, No. 4, pp. 47-65
ISSN: 2333-6846
In: SpringerBriefs in Research and Innovation Governance
Chapter 1. Introduction -- Chapter 2. Why Case Studies? -- Chapter 3. Discrimination -- Chapter 4. Privacy -- Chapter 5. Surveillance capitalism -- Chapter 6. Manipulation -- Chapter 7. Life, Liberty and Security of Person -- Chapter 8. Dignity -- Chapter 9. Unemployment -- Chapter 10. International Justice -- Chapter 11. Conclusion -- Index.
In: Science and public policy: journal of the Science Policy Foundation, Article scw069
ISSN: 1471-5430
In: Social epistemology: a journal of knowledge, culture and policy, Vol. 34, No. 4, pp. 382-394
ISSN: 1464-5297
In: Policy and society, Vol. 40, No. 2, pp. 158-177
ISSN: 1839-3373
ABSTRACT
This paper examines how governance in AI policy documents has been framed as a way to resolve public controversies surrounding AI. It draws on studies of the governance of emerging technologies, the concept of policy framing, and an analysis of 49 recent policy documents dedicated to AI, prepared in the context of a technological hype that expects fast advances of AI that will fundamentally change economy and society. The hype about AI is accompanied by major public controversy about its positive and negative effects. Against the backdrop of this policy controversy, governance emerges as one of the frames that diagnoses problems and offers prescriptions. Accordingly, the current governance arrangement, characterized by an oligopoly of a small number of large companies, is identified as one of the reasons for problems such as a lack of consideration of societal needs and concerns. To address these problems, the governance frame in AI policy documents assigns more active and collaborative roles to the state and society. Amid public controversies, the state is assigned the roles of promoting and facilitating AI development while at the same time acting as a guarantor of risk mitigation and an enabler of societal engagement. High expectations are placed on public engagement with multiple publics as a way to increase diversity, representation and equality in AI development and use. While this governance frame may have normative appeal, it is not specific about addressing some well-known challenges of the proposed governance mode, such as the risk of capture by vested interests or the difficulty of achieving consensus.
The increasing use of information and communication technologies (ICTs) to help facilitate neuroscience adds a new level of complexity to the question of how ethical issues of such research can be identified and addressed. Current research ethics practice, based on ethics reviews by institutional review boards (IRBs) and underpinned by ethical principlism, has been widely criticized. In this article, we develop an alternative way of approaching ethics in neuro-ICT research, based on discourse ethics, which implements Responsible Research and Innovation (RRI) through dialogues. We draw on our work in Ethics Support, using the Human Brain Project (HBP) as empirical evidence of the viability of this approach.
Author Contributions: BS led the development of the ethics support work package and processes, wrote the first draft of the article and coordinated the contributions. SA worked on legal review and data protection and was part of the ethics check process. BF led the data governance and was involved in compliance management. MG led the ethics rapporteur programme and the PORE process. WK led the ethics compliance process. IU led the ethics awareness work. All authors worked closely together on the work described in the article. All have contributed to the text related to their area of expertise and to the overall article, including through reviews and revision of earlier drafts.
Funding: This research has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 720270 (HBP SGA1) and No. 785907 (Human Brain Project SGA2), and under Framework Partnership Agreement No. 650003 (HBP FPA).
In: Futures, Vol. 102, pp. 114-124
In: Science and public policy: journal of the Science Policy Foundation, Vol. 47, No. 5, pp. 720-732
ISSN: 1471-5430
Abstract
Science–policy organisations are expected to be reflexive about their political influence on research and society. In this long-standing discourse on institutional reflexivity, formal organisations have largely been considered as a whole, from a structural or systemic perspective, whereas much less is known about everyday organisational practices: how individual organisational members reflect on and act upon their own as well as their organisation's limits of knowledge and pre-commitments, if at all. We address this gap through an analysis of qualitative interviews with staff of one national funding institution who oversee funding for research into information and communication technologies. We develop a bridging concept between individual and institutional reflexivity, which we call 'collaborative reflexivity'. Through collaborative reflexive processes, individual employees contribute to the entire organisation's institutional reflexivity. Our findings help to better understand 'responsible' behaviour in funding processes as part of the growing international movement of Responsible Research and Innovation.
Drawing on more than eight years of work implementing Responsible Research and Innovation (RRI) in the Human Brain Project, a large EU-funded research project that brings together neuroscience, computing, social sciences, and the humanities, and one of the largest investments in RRI in a single project, this article offers insights on RRI and explores its possible future. We focus on the question of how RRI can have long-lasting impact and persist beyond the time horizon of funded projects. For this purpose, we suggest the concept of 'responsibility by design', which is intended to encapsulate the idea of embedding RRI in research and innovation in a way that makes it part of the fabric of the resulting outcomes, in our case a distributed European Research Infrastructure.
Funding: European Union's Horizon 2020 Framework Programme for Research and Innovation, Grant Agreements No. 720270, 785907, 945539 and 650003.
Published version (publisher's final version).