Equal Societies, Autonomous Lives: Reconciling social equality and relational autonomy
In: Journal of social philosophy
ISSN: 1467-9833
4 results
In: Public Affairs Quarterly (PAQ): philosophical studies of public policy issues, Vol. 34, No. 4, pp. 340-367
ISSN: 2152-0542
Abstract: The status of indirect discrimination is ambiguous in the current literature. This paper addresses two contemporary and related debates. First, for some, indirect discrimination is not truly a distinct kind of discrimination but simply a legal construct designed to address distributive inequalities between groups. Second, even if one accepts that indirect discrimination is a distinct type of discrimination, the connection between the two kinds, direct and indirect, is debated. For some, they are distinct act-types; for others, indirect discrimination should be conceived as a side effect of prior cases of direct discrimination. In this paper, I argue that indirect discrimination is a distinct act-type that can occur without being connected to prior instances of direct discrimination.
In: AI and Ethics, Vol. 3, No. 4, pp. 1255-1269
ISSN: 2730-5961
Abstract: The use of predictive machine learning algorithms to guide, or even make, decisions is increasingly common in both public and private settings. Their use is touted by some as a potentially useful method of avoiding discriminatory decisions since they are, allegedly, neutral and objective, and can be evaluated in ways no human decision can. By (fully or partly) outsourcing a decision process to an algorithm, organizations should be able to clearly define the parameters of the decision and, in principle, remove human biases. Yet, in practice, the use of algorithms can still be the source of wrongful discriminatory decisions based on at least three of their features: the data-mining process and the categorizations they rely on can reproduce human biases; their automaticity and predictive design can lead them to rely on wrongful generalizations; and their opaque nature is at odds with democratic requirements. We highlight that the latter two aspects of algorithms, and their significance for discrimination, are too often overlooked in the contemporary literature. Though these problems are not all insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines to properly constrain the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and decisions reached using an algorithm should always be explainable and justifiable.
In: French Politics, Vol. 18, No. 3, pp. 293-313
ISSN: 1476-3427