How Fair AI Can Make Us Richer
In: European data protection law review: EdpL, Volume 7, Issue 3, pp. 367-372
ISSN: 2364-284X
In the Internet of Things (IoT), identification and access control technologies provide essential infrastructure to link data from a user's devices to unique identities and to provide seamless, linked-up services. At the same time, profiling methods based on linked records can reveal unexpected details about users' identity and private life, which can conflict with privacy rights and lead to economic, social, and other forms of discriminatory treatment. A balance must be struck between the identification and access control required for the IoT to function and users' rights to privacy and identity. Striking this balance is not an easy task because of weaknesses in cybersecurity and anonymisation techniques. The EU General Data Protection Regulation (GDPR), set to come into force in May 2018, may provide essential guidance to achieve a fair balance between the interests of IoT providers and users. Through a review of academic and policy literature, this paper maps the inherent tension between privacy and identifiability in the IoT. It focuses on four challenges: (1) profiling, inference, and discrimination; (2) control and context-sensitive sharing of identity; (3) consent and uncertainty; and (4) honesty, trust, and transparency. The paper then examines the extent to which several standards defined in the GDPR will provide meaningful protection for privacy and control over identity for users of the IoT. The paper concludes that, in order to minimise the privacy impact of the conflicts between data protection principles and identification in the IoT, GDPR standards urgently require further specification and implementation into the design and deployment of IoT technologies.
In: https://ora.ox.ac.uk/objects/uuid:c755300a-a03f-4900-a69a-1ca359794255
Online behavioural advertising ('OBA') relies on inferential analytics to target consumers based on data about their online behaviour. While the technology can improve the matching of adverts with consumers' preferences, it also poses risks to consumer welfare as consumers face offer discrimination and the exploitation of their cognitive errors. The technology's risks are exacerbated by the market power of ad intermediaries. This article shows how the Unfair Commercial Practices Directive (UCPD) can protect consumers from behavioural exploitation through incorporating market power analysis. By drawing on current research in economic theory, it argues for applying a stricter average consumer test if the market for ad intermediaries is highly concentrated. This stricter test should neutralize negative effects of behavioural targeting on consumer welfare. The article shows how OBA can amount to a misleading action and/or a misleading omission according to Articles 6 and 7 UCPD as well as an aggressive practice according to Article 8 UCPD. It further considers how the recent legislative proposals by the European Commission to enact a Digital Markets Act (DMA) and a Digital Services Act (DSA) may interact with the UCPD and the suggested stricter average consumer test.
In its attempt to better regulate the platform economy, the European Commission recently proposed a Digital Markets Act (DMA) and a Digital Services Act (DSA). While the DMA addresses worries about digital markets not functioning properly, the DSA is concerned with societal harms stemming from the dissemination of (illegal) content on platforms. Both proposals focus on the relative size of platforms. The DMA applies to 'gatekeeper' platforms and the DSA has a special regime of scrutiny for 'very large online platforms' (VLOPs). Focusing on size, however, can have negative consequences for the enforcement of the DSA: First, risks disseminated by platforms below the VLOP-threshold reside in a regulatory blind spot. Second, VLOPs may leverage their market power against their new mandatory auditors and risk assessors, a threat theorised as 'audit capture' in this article. As a result, societal risks may remain undiscovered or downplayed and consumers and citizens may be harmed. This article traces the origin of the size criteria in the legislative history of the DMA and DSA proposals. It argues for safeguards against audit capture and adverse incentive structures in the DSA. The article draws on the debate on audit reform in the aftermath of the global financial crisis of 2007–2008 to provide blueprints for fixing the regulatory gap.
In: https://ora.ox.ac.uk/objects/uuid:0c4cc51d-b2d3-4843-82ad-928e3b33e119
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality. In this paper we make three contributions. First, we assess the compatibility of fairness metrics used in machine learning against the aims and purpose of EU non-discrimination law. We show that the fundamental aim of the law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to 'level the playing field' and achieve substantive rather than merely formal equality. Based on this, we then propose a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of non-discrimination law. Specifically, we distinguish between 'bias preserving' and 'bias transforming' fairness metrics. Our classification system is intended to bridge the gap between non-discrimination law and decisions around how to measure fairness in machine learning and AI in practice. Finally, we show that the legal need for justification in cases of indirect discrimination can impose additional obligations on developers, deployers, and users that choose to use bias preserving fairness metrics when making decisions about individuals because they can give rise to prima facie discrimination. To achieve substantive equality in practice, and thus meet the aims of the law, we instead recommend using bias transforming metrics. To conclude, we provide concrete recommendations including a user-friendly checklist for choosing the most appropriate fairness metric for uses of machine learning.
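To make the distinction between the two families of metrics concrete, the following minimal Python sketch (an illustration added here, not code from the paper; the synthetic data, variable names, and the pairing of demographic parity with equal opportunity as example metrics are assumptions) shows how a metric that conditions on observed labels can report parity even when the labels themselves carry historical bias, while a metric that ignores the labels flags the disparity.

import numpy as np

# Illustrative sketch: compare a metric computed against observed labels
# (equal opportunity gap, treated here as "bias preserving" because the labels
# may encode historical bias) with one that ignores the labels
# (demographic parity gap, treated here as "bias transforming").
rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, n)             # 0 = advantaged, 1 = disadvantaged
# Hypothetical historically biased labels: equally qualified members of the
# disadvantaged group are recorded as "positive" only 70% of the time.
qualified = rng.random(n) < 0.5
label = qualified & ((group == 0) | (rng.random(n) < 0.7))
# A model trained on such labels tends to reproduce them.
prediction = label.copy()

def selection_rate(pred, g, value):
    return pred[g == value].mean()

def true_positive_rate(pred, y, g, value):
    mask = (g == value) & y
    return pred[mask].mean()

# Demographic parity gap: difference in selection rates between groups.
dp_gap = selection_rate(prediction, group, 0) - selection_rate(prediction, group, 1)

# Equal opportunity gap: difference in true positive rates, computed against
# the (possibly biased) observed labels.
eo_gap = true_positive_rate(prediction, label, group, 0) - true_positive_rate(prediction, label, group, 1)

print(f"demographic parity gap: {dp_gap:.3f}")   # clearly non-zero: flags the disparity
print(f"equal opportunity gap:  {eo_gap:.3f}")   # zero here: the biased labels mask it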
In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in artificial intelligence (AI) and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses this critical gap between legal, technical, and organisational notions of algorithmic fairness. Through analysis of EU non-discrimination law and jurisprudence of the European Court of Justice (ECJ) and national courts, we identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. A clear gap exists between statistical measures of fairness as embedded in myriad fairness toolkits and governance mechanisms and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the ECJ; we refer to this approach as "contextual equality." This Article makes three contributions. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Many of the concepts fundamental to bringing a claim, such as the composition of the disadvantaged and advantaged group, the severity and type of harm suffered, and requirements for the relevance and admissibility of evidence, require normative or political choices to be made by the judiciary on a case-by-case basis. We show that automating fairness or non-discrimination in Europe may be impossible because the law, by design, does not provide a static or homogenous framework suited to testing for discrimination in AI systems. Second, we show how the legal ...
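To illustrate the kind of statistical measure that fairness toolkits embed, the short Python sketch below (an added illustration; the records, attribute names, and the choice of a stratum-weighted disparity statistic are assumptions, not the Article's own proposal) computes a selection-rate gap between an advantaged and a disadvantaged group within strata of a factor hypothetically treated as legitimate. The arithmetic is trivially automatable; the contextual judgments the Article points to, such as how the compared groups are composed, which factors count as legitimate, and how large a gap amounts to prima facie discrimination, remain outside the computation.

from collections import defaultdict

# Illustrative sketch: a toolkit-style disparity statistic computed within
# strata of a factor assumed (hypothetically) to be a legitimate ground for
# differentiation. Records are (protected_group, stratum, selected).
records = [
    ("A", "junior", 1), ("A", "junior", 0), ("A", "senior", 1), ("A", "senior", 1),
    ("B", "junior", 0), ("B", "junior", 0), ("B", "senior", 1), ("B", "senior", 0),
]

def counts_by_stratum(rows):
    """Count selected and total per (stratum, group)."""
    counts = defaultdict(lambda: [0, 0])          # (stratum, group) -> [selected, total]
    for group, stratum, selected in rows:
        counts[(stratum, group)][0] += selected
        counts[(stratum, group)][1] += 1
    return counts

def conditional_disparity(rows, advantaged="A", disadvantaged="B"):
    """Within-stratum gap in selection rates, weighted by stratum size.
    Assumes both groups appear in every stratum."""
    counts = counts_by_stratum(rows)
    strata = {s for s, _ in counts}
    total = len(rows)
    disparity = 0.0
    for s in strata:
        adv_sel, adv_n = counts[(s, advantaged)]
        dis_sel, dis_n = counts[(s, disadvantaged)]
        gap = adv_sel / adv_n - dis_sel / dis_n
        weight = (adv_n + dis_n) / total
        disparity += weight * gap
    return disparity

print(f"weighted within-stratum disparity: {conditional_disparity(records):.2f}")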
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a 'right to explanation' of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a 'right to be informed'. Further, the ambiguity and limited scope of the 'right not to be subject to automated decision-making' contained in Article 22 (from which the alleged 'right to explanation' stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To contribute to filling this gap, in the conclusion we suggest a two-pronged approach.
In: Taylor, P. (ed.), Allpress, S. (ed.), Carr, M., Lupu, E., Norton, J., Smith, L., Blackstock, J., Boyes, H., Hudson-Smith, A., Brass, I., Chizari, H., Cooper, R., Coulton, P., Craggs, B., Davies, N., De Roure, D., Elsden, M., Huth, M., Lindley, J., Marple, C., B., M., R., N., J., N., R., P., Radanliev, P., Rashid, A., D., S., Skatova, A., Taddeo, M., Tanczer, L., Vieira-Steiner, R., Watson, J.D.M., Wachter, S., Wakenshaw, S., Carvalho, G., Thompson, R.J. & Westbury, P.S. 2018, Internet of Things: realising the potential of a trusted smart world. London.
This report examines the policy challenges for the Internet of Things (IoT) and raises a broad range of issues that need to be considered if policy is to be effective and the potential economic value of the IoT is to be harnessed. It builds on the Blackett review, The Internet of Things: making the most of the second digital revolution, adding detailed knowledge based on research from the PETRAS Cybersecurity of the Internet of Things Research Hub and input from Fellows of the Royal Academy of Engineering. The report targets government policymakers, regulators, standards bodies and national funding bodies, and will also be of interest to suppliers and adopters of IoT products and services.