The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law
In: Tulane Law Review 97 (2022): 149
SSRN
In: Wachter, S. How Fair AI Can Make Us Richer. European Data Protection Law Review. Volume 7, Issue 3 (2021). pp. 367 - 372. DOI: https://doi.org/10.21552/edpl/2021/3/5
SSRN
In: Berkeley Technology Law Journal, Volume 35, Issue 2
SSRN
In: Computer Law & Security Review, Issue 3, pp. 436-449
SSRN
Working paper
SSRN
Working paper
In: https://doi.org/10.7916/d8-g10s-ka92
Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people's privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ). This Article shows that individuals are granted little control or oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively "economy class" personal data in the General Data Protection Regulation (GDPR). Data subjects' rights to know about (Articles 13–15), rectify (Article 16), delete (Article 17), object to (Article 21), or port (Article 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Article 9) or remedies to challenge inferences or important decisions based on them (Article 22(3)). This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. 
Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe's new Copyright Directive and Trade Secrets Directive also fail to close the GDPR's accountability gaps concerning inferences. This Article argues that a new data protection right, the "right to reasonable inferences," is needed to help close the accountability gap currently posed by "high risk inferences," meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.
BASE
In: Columbia Business Law Review, 2019(2)
SSRN
SSRN
In: Regulation & Governance, Volume 18, Issue 1, pp. 3-32
ISSN: 1748-5991
Abstract: In its AI Act, the European Union chose to understand trustworthiness of AI in terms of the acceptability of its risks. Based on a narrative systematic literature review on institutional trust and AI in the public sector, this article argues that the EU adopted a simplistic conceptualization of trust and is overselling its regulatory ambition. The paper begins by reconstructing the conflation of "trustworthiness" with "acceptability" in the AI Act. It continues by developing a prescriptive set of variables for reviewing trust research in the context of AI. The paper then uses those variables for a narrative review of prior research on trust and trustworthiness in AI in the public sector. Finally, it relates the findings of the review to the EU's AI policy. The EU's prospects of successfully engineering citizens' trust are uncertain, and there remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI.
In: Michigan Technology Law Review (2023)
SSRN
SSRN
In: Common Market Law Review, Volume 58, Issue 3, pp. 719-750
ISSN: 0165-0750
Online behavioural advertising (OBA) relies on inferential analytics to target consumers based on data about their online behaviour. While the technology can improve the matching of adverts with consumers' preferences, it also poses risks to consumer welfare as consumers face offer discrimination and the exploitation of their cognitive errors. The technology's risks are exacerbated by the market power of ad intermediaries. This article shows how the Unfair Commercial Practices Directive (UCPD) can protect consumers from behavioural exploitation by incorporating market power analysis. Drawing on current research in economic theory, it argues for applying a stricter average consumer test if the market for ad intermediaries is highly concentrated. This stricter test should neutralize negative effects of behavioural targeting on consumer welfare. The article shows how OBA can amount to a misleading action and/or a misleading omission under Articles 6 and 7 UCPD, as well as an aggressive practice under Article 8 UCPD. It further considers how the recent legislative proposals by the European Commission to enact a Digital Markets Act (DMA) and a Digital Services Act (DSA) may interact with the UCPD and the suggested stricter average consumer test.
In: Computer Law & Security Review, 43 (2021): 105613.
SSRN
In: West Virginia Law Review, Volume 123, Issue 3
SSRN
In: Harvard Journal of Law & Technology, 31 (2), 2018
SSRN
Working paper