Digital apps using Bluetooth to log proximity events (henceforth, digital contact tracing) are increasingly supported by technologists and governments. By and large, the public debate on this matter focuses on privacy, with experts from both law and technology offering very concrete proposals and participating in a lively debate. Far less attention is paid to effective incentives and their fairness. This paper aims to fill this gap by offering a practical, workable solution for a promising incentive, justified by the ethical principles of non-maleficence, beneficence, autonomy, and justice. This incentive is a free phone optimised for running such an app.
Abstract: Algorithmic predictions promise to let insurance companies develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: algorithms for estimating risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to people unless special efforts are made to avoid them. Insurance is no exception. In this paper, we provide a thorough analysis of algorithmic fairness in the case of insurance premiums. We ask what "fairness" might mean in this context and how the fairness of a premium system can be measured. For this, we apply the established fairness frameworks of the fair machine learning literature to the case of insurance premiums and show which of the existing fairness criteria can be applied to assess the fairness of insurance premiums. We argue that two of the often-discussed group fairness criteria, independence (also called statistical parity or demographic parity) and separation (also known as equalized odds), are not normatively appropriate for insurance premiums. Instead, we propose the sufficiency criterion (also known as well-calibration) as a morally defensible alternative that allows us to test for systematic biases in premiums towards certain groups based on the risk they bring to the pool. In addition, we clarify the connection between group fairness and different degrees of personalization. Our findings enable insurers to assess the fairness properties of their risk models, helping them avoid reputational damage resulting from potentially unfair and discriminatory premium systems.
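The sufficiency (well-calibration) criterion defended above has a simple operational reading: within each band of predicted risk, the observed outcome rate should be the same for every protected group. As a rough illustration only (the binning scheme and function below are our own sketch, not the paper's formalism), such a check might look like:

```python
from collections import defaultdict

def sufficiency_gaps(scores, outcomes, groups, n_bins=10):
    """For each predicted-risk bin, compare observed outcome rates
    across groups. Sufficiency (well-calibration) holds when, within
    each bin, every group has the same outcome rate (all gaps near 0)."""
    by_bin_group = defaultdict(list)
    for s, y, g in zip(scores, outcomes, groups):
        b = min(int(s * n_bins), n_bins - 1)  # bin the score in [0, 1]
        by_bin_group[(b, g)].append(y)
    gaps = {}
    for b in range(n_bins):
        rates = [sum(v) / len(v)
                 for (bb, g), v in by_bin_group.items() if bb == b]
        if len(rates) >= 2:  # gap is defined only if 2+ groups fall in the bin
            gaps[b] = max(rates) - min(rates)
    return gaps
```

A premium system violating sufficiency would show large gaps in some bin, i.e., members of one group with the same predicted risk experience systematically different outcome rates than another.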
Abstract: Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting are different from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Here we focus on the trade-off in the extent to which one can pursue indirect non-discrimination versus predictive accuracy. The moral assessment of this trade-off is related to the context of application—to the consequences of inaccurate risk predictions in the insurance domain.
In: Michele Loi and Markus Christen, "Choosing How to Discriminate: Navigating Ethical Trade-Offs in Fair Algorithmic Design for the Insurance Sector," Philosophy & Technology, March 13, 2021, https://doi.org/10.1007/s13347-021-00444-9.
In: Definitive (and shortened) version published in Italian as a book chapter in "Per cosa lottare. Le frontiere del progressismo", edited by Enrico Biale and Corrado Fumagalli, Fondazione Giangiacomo Feltrinelli, Milano, 2019. ISBN: 9788868353476.
Chapter 1. Introduction -- Part 1. Foundations -- Chapter 2. Basic Concepts and Models of Cybersecurity (Endre Bangerter) -- Chapter 3. Core Values in Cybersecurity (Ibo van de Poel) -- Chapter 4. Ethical Frameworks for Cybersecurity (Mary Manjikian) -- Chapter 5. Legislative Framework for Cybersecurity (Gloria Gonzalez Fuster, Lina Jasmontaite) -- Part 2. Problems -- Chapter 6. Cybersecurity in Health (Karsten Weber, Nadine Kleine) -- Chapter 7. Cybersecurity in Business (Gwenyth Morgan, Bert Gordijn) -- Chapter 8. Cybersecurity of Critical Infrastructures (Emad Yaghmaei, TU Delft, Michele Loi) -- Chapter 9. Cybersecurity and Cyberwar (Mariarosario Taddeo) -- Chapter 10. Cybersecurity and the State (Harald Zwingelberg, Eva Schlehahn) -- Chapter 11. Cybersecurity and Democracy -- Chapter 12. Ethical and Unethical Hacking (David-Olivier Jaquet-Chiffelle) -- Chapter 13. Ethical Dilemmas for Cybersecurity Service Providers -- Chapter 14. Cybersecurity and the Individual -- Part 3. Solutions -- Chapter 15. Requirements on the Policy Level to Ensure Cyberpeace -- Chapter 16. Policies for Data Sharing and Data Protection for Security Software Providers (Alexey Kirichenko) -- Chapter 17. A Framework for Ethical Cyber-Defence for Companies -- Chapter 18. Privacy-Protecting Cybersecurity Technologies (Josep Domingo-Ferrer) -- Chapter 19. Guidelines for Medical Professionals to Ensure Cybersecurity in Digital Healthcare -- Chapter 20. Conclusion.
This open access book provides the first comprehensive collection of papers that provide an integrative view on cybersecurity. It discusses theories, problems and solutions on the relevant ethical issues involved. This work is sorely needed in a world where cybersecurity has become indispensable to protect trust and confidence in the digital infrastructure whilst respecting fundamental values like equality, fairness, freedom, or privacy. The book has a strong practical focus as it includes case studies outlining ethical issues in cybersecurity and presenting guidelines and other measures to tackle those issues. It is thus not only relevant for academics but also for practitioners in cybersecurity such as providers of security software, governmental CERTs or Chief Security Officers in companies.
Abstract: Trust and monitoring are traditionally antithetical concepts. Describing trust as a property of a relationship of reliance, we introduce a theory of trust and monitoring, which uses mathematical models based on two classes of functions, including q-exponentials, and relates the levels of trust to the costs of monitoring. As opposed to several accounts of trust that attempt to identify the special ingredient of reliance and trust relationships, our theory characterizes trust as a quantitative property of certain relations of reliance that can be quantified and expressed as a scalar quantity. Our theory is applicable to both human–human and human–artificial agent interactions, as it is agnostic with respect to the concrete realization of trustworthiness properties, and is compatible with many views differing on which properties contribute to trust and trustworthiness. Finally, as our mathematical models make the quantitative features of trust measurable, they provide empirical studies on trust with a rigorous methodology for its measurement.
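The abstract names q-exponentials as one of the two function classes used in the models; the paper's exact parameterization is not reproduced here. As a purely illustrative sketch, the standard Tsallis q-exponential, together with a hypothetical mapping from monitoring cost to a scalar trust level (the parameters k and q and the mapping itself are our assumptions, not the paper's model), could look like:

```python
import math

def q_exponential(x: float, q: float) -> float:
    """Tsallis q-exponential e_q(x); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0  # usual cutoff convention when the base is non-positive
    return base ** (1.0 / (1.0 - q))

def trust_level(monitoring_cost: float, k: float = 1.0, q: float = 2.0) -> float:
    """Hypothetical illustration only: trust as a scalar in (0, 1] that
    decays q-exponentially as the cost of monitoring grows; with q = 2
    this gives the heavy-tailed decay 1 / (1 + k * cost)."""
    return q_exponential(-k * monitoring_cost, q)
```

For q slightly above 1 the decay is close to exponential; larger q gives heavier tails, so some residual trust persists even at high monitoring costs.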
Abstract: Machine learning (ML) models and algorithms, the real engines of the artificial intelligence (AI) revolution, are nowadays embedded in many services and products around us. As a society, we argue, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human–human and human–AI interactions. Starting with a quick overview of the existing accounts of trust, with special attention to Taddeo's concept of "e-trust," we discuss all the components of the proposed model and the reasons to trust in human–AI interactions, using an example of relevance for business organizations. We end this contribution with an analysis of the epistemic and pragmatic reasons for trust in human–AI interactions and with a discussion of the kinds of normativity involved in the trustworthiness of AIs.