On December 8, 2022, the Grand Chamber of the Court of Justice of the European Union (CJEU) delivered its judgment (the Judgment) in a case concerning a de-referencing request made to an internet search engine operator on the basis of allegedly inaccurate information. The Judgment interprets the right to erasure (right to be forgotten) under Article 17 of the EU General Data Protection Regulation 2016/679 (GDPR), and the rights of access and objection under Articles 12 and 14 of the EU Directive 95/46/EC (the Directive), in light of the fundamental rights to privacy, protection of personal data, and freedom of expression and information under Articles 7, 8, and 11 of the Charter of Fundamental Rights of the European Union (the Charter). The case establishes that the burden of proof is on the person requesting de-referencing to show that the information is manifestly inaccurate, but they are not required to seek a judicial remedy against the website publisher before making the request. Although the search engine operator is obliged to carry out checks to confirm the merits of the request, it has no obligation to investigate the facts or probe further with the website publisher. It must, however, de-reference where relevant and sufficient evidence is submitted to show manifest inaccuracy, and it must display a warning where it is made aware of judicial proceedings.
On November 23, 2022, the Supreme Court of the United Kingdom delivered its judgment on whether the Scottish Parliament has legislative competence to introduce a Scottish Independence Referendum Bill (the Bill) to hold a referendum in Scotland asking the question, "Should Scotland be an independent country?", or whether this relates to matters reserved to the Union of the Kingdoms of Scotland and England and to the Parliament of the United Kingdom under Schedule 5 to the Scotland Act 1998 (the Act). The Court unanimously concluded that the proposed Bill relates to reserved matters and cannot lawfully be legislated by the Scottish Parliament. The case raises issues about the extent of legislative competence of a UK devolved nation, and whether such a nation can lawfully exercise the right to self-determination under international law when the constitutional framework does not recognize legislative competence to hold an independence referendum.
Military investment in robotics technology is leading to the development and use of robot weapons, which are machines with varying degrees of autonomy in targeting, attacking, and inflicting lethal harm (i.e. injury, suffering, or death). Examples of robot weapons include automated weapons systems, unmanned armed aerial vehicles (UAVs), remotely controlled robotic soldiers, bio-augmentation, and 3D-printed weapons. Robot weapons generally fall into one of two categories: semi-autonomous, involving levels of automation and remotely controlled human input (e.g. UAVs or "drones"); and autonomous, involving higher levels of independent thinking as regards acquiring, tracking, selecting, and attacking targets, without the need for human input (e.g. the US Navy X-47B UAV with autonomous take-off, landing, and aerial refuelling capability). The trend is clearly towards developing autonomous weapons. Development of new weapons aimed at reducing costs and casualties is not a new phenomenon in warfare. Technological advances have created greater distance between the soldier and the battlefield. A bullet fired from a rifle handled by a human has been superseded by a missile fired from a remotely controlled or autonomous machine. So what makes robot weapons different? What particular challenge do they pose to international law? Although autonomous weapons may be employed to attack non-human targets, such as state infrastructure, here I am primarily concerned with their use for lethal attacks against humans. In this paper I focus on autonomous weapons and their impact on human dignity under two of Kant's conceptual strands: 1) human dignity as a status entailing rights and duties; and 2) human dignity as respectful treatment. Under the first strand I explore how use of autonomous weapons denies the right of equality of persons and diminishes the duty not to harm others.
In the second strand I consider how replacing human combatants with autonomous weapons debases human life and does not provide respectful treatment. Reference is made to contemporary development of Kant's conceptual strands in ICJ and other international jurisprudence recognising human dignity as part of "elementary considerations of humanity" in war and peace.
In: Ulgen, Ozlem (2019) Can public and voluntary acts of consent confer legitimacy on the EU? In: The Euro-Crisis as a Multi-Dimensional Systemic Failure of the EU: the Crisis Behind the Crisis. Cambridge University Press, UK.
Legitimacy is essential for any polity that seeks to exert law-making authority over its people. Although the EU is not a single state, it is a polity that has to obtain legitimacy for its power to make laws affecting some 500 million people across 28 member states (soon to be 27 pending UK exit). And yet in the eyes of EU citizens the Eurozone crisis and Brexit vote call into question the EU's legitimacy, as it cannot guarantee prosperity for all its peoples or shield against economic and political uncertainty. There is growing unease and disaffection among southern EU states' voters, and divisions between core and peripheral member states, with emerging alternative popular representation structures (e.g. Podemos in Spain) and reappraisal of the EU among pro-EU politicians (e.g. the British left wing). In this context, "core" member state refers to the advanced economies and strong democracies, including the original founding members (Belgium, France, Germany, Italy, Luxembourg, and the Netherlands) and new members from the enlargement period 1973 to 1995 (Denmark, Ireland, the UK, Greece, Portugal, Spain, Austria, Finland, Sweden). "Peripheral" member state denotes the southern, central, and eastern European states that joined from 2004 onwards (Cyprus, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, Slovenia, Bulgaria, Romania, Croatia). Added to this is a lack of debate and public awareness of what the EU stands for and its practical benefits. It is, therefore, important to understand how legitimacy is (or is not) created and maintained in the EU. A form of legitimacy may be conferred by EU citizens engaging in public and voluntary acts of consent. These are acts taken by individuals and groups which may confer legitimacy on the law-making authority of an entity (e.g. voting in European Parliamentary elections; national referendums on EU matters).
Political theorists, such as Beetham, have explored public and voluntary acts of consent in the context of understanding how state power is legitimised. Using Beetham's "normative structure of legitimacy", especially the third component of "expressed consent", this chapter considers whether public and voluntary acts of consent may confer legitimacy on the EU. Part I explores the relevance of Beetham's "normative structure of legitimacy" under the criteria of rule-based validity, justifiability of power rules, and expressed consent. Part II evaluates expressed consent in three types of public and voluntary acts of consent: national referendums, with particular reference to the UK Brexit referendum of 23 June 2016, and Greek bailout referendum of 5 July 2015; the European Citizens' Initiative under Article 11(4) TEU; and civil society engagement under Article 11(2) TEU.
Seventy years after the adoption of the four Geneva Conventions on 12 August 1949, the changing character of warfare is influenced by, among other things, technological innovations such as artificial intelligence and robotics. States are integrating new technologies into the military sphere for both defensive and offensive capabilities. This impacts on military doctrines, weaponry, and operational strategies. Under the auspices of the 1980 UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems is currently deliberating on the legal and ethical issues regarding autonomous weapons, and whether new legally binding or non-legally binding rules should be established regarding their use, restriction, or prohibition. In this context, it is worth reviewing the role and significance of Geneva law provisions in relation to technological innovations in methods and means of warfare.
In: Ulgen, Ozlem (2017) Pre-deployment Common Law Duty of Care and Article 36 Obligations in relation to Autonomous Weapons: Interface between Domestic Law and International Humanitarian Law? The Military Law and The Law of War Review, 55 (1). pp. 1-15. ISSN 0020-5893
In an age of high-tech and remotely controlled warfare new weapons are being developed to operate autonomously (i.e. without human input or oversight) in relation to critical functions of acquiring, tracking, selecting, and attacking targets. While debates continue as to whether autonomous weapons are ethical or legal, pre-deployment obligations in relation to their use exist in both domestic law and international humanitarian law. First, in Smith and Others v MOD (2013) the UK Supreme Court held that combat immunity does not apply in circumstances where military operations or acts take place before actual deployment or active combat. This raises interesting questions about types of pre-deployment activities which may be subject to a duty of care on the part of States towards members of their own armed forces and potentially others. Would Government decisions in the development, testing, and procurement of autonomous weapons attract a duty of care and fall outside the doctrine of combat immunity? Could this duty of care extend to civilians injured as a result of inadequate pre-deployment due diligence of autonomous weapons? Second, Article 36 of Additional Protocol I to the Geneva Conventions (API) provides a pre-deployment review procedure for new weapons. At the pre-deployment stage where a new weapon is being developed and tested, Article 36 requires continuous assessment of whether normal or expected use of the weapon is prohibited by API or any other rule of international law, including principles of legitimate targeting, proportionality and unnecessary suffering. Could this be a duty of due diligence? What does it entail? How is it to be enforced? This paper explores the interface between a pre-deployment common law duty of care and Article 36 pre-deployment review procedure in relation to autonomous weapons. It considers whether State and private actors (e.g. 
manufacturers of autonomous weapons; and telecommunications companies) involved in pre-deployment activities may owe a duty of care to combatants and civilians. Part II examines the UK Supreme Court's judgment in Smith and Others v MOD to identify the basis upon which a pre-deployment common law duty of care can be established and extended to pre-deployment activities relating to autonomous weapons. Part III attempts to map out the nature of a pre-deployment duty of care in relation to autonomous weapons and the content of specific duties. Part IV then moves on to examine the nature of the pre-deployment review procedure under Article 36 API, and the extent to which it establishes a legally enforceable obligation in relation to autonomous weapons. Although Article 36 does not provide for international supervision it does require domestic implementation of the obligation to review. Consideration is given as to whether it can be enforced at the domestic level by combatants and civilians.
Artificial intelligence and robotics are pervasive in daily life and set to expand to new levels, potentially replacing human decision-making and action. Self-driving cars, home and healthcare robots, and autonomous weapons are some examples. A distinction appears to be emerging between potentially benevolent civilian uses of the technology (e.g. unmanned aerial vehicles delivering medicines) and potentially malevolent military uses (e.g. lethal autonomous weapons killing human combatants). Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. Aside from technical challenges of ensuring ethical conduct in artificial intelligence and robotics, there are moral questions about the desirability of replacing human functions and the human mind with such technology. How will artificial intelligence and robotics engage in moral reasoning in order to act ethically? Is there a need for a new set of moral rules? What happens to human interaction when it is mediated by technology? Should such technology be used to end human life? Who bears responsibility for wrongdoing or harmful conduct by artificial intelligence and robotics? Whilst Kant may be familiar to international lawyers for setting restraints on the use of force and rules for perpetual peace, his foundational work on ethics provides an inclusive moral philosophy for assessing ethical conduct of individuals and states and, thus, is relevant to discussions on the use and development of artificial intelligence and robotics. His philosophy is inclusive because it incorporates justifications for morals and legitimate responses to immoral conduct, and applies to all human agents irrespective of whether they are wrongdoers, unlawful combatants, or unjust enemies.
Humans are at the centre of rational thinking, action, and norm-creation so that the rationale for restraints on methods and means of warfare, for example, is based on preserving human dignity as well as ensuring conditions for perpetual peace among states. Unlike utilitarian arguments which favour use of autonomous weapons on the basis of cost-benefit reasoning or the potential to save lives, Kantian ethics establish non-consequentialist and deontological rules which are good in themselves to follow and not dependent on expediency or achieving a greater public good. Kantian ethics make two distinct contributions to the debate. First, they provide a human-centric ethical framework whereby human existence and capacity are at the centre of a norm-creating moral philosophy guiding our understanding of moral conduct. Second, the ultimate aim of Kantian ethics is practical philosophy that is relevant and applicable to achieving moral conduct. I will seek to address the moral questions outlined above by exploring how core elements of Kantian ethics relate to use of artificial intelligence and robotics in the civilian and military spheres. Section 2 sets out and examines core elements of Kantian ethics: the categorical imperative; autonomy of the will; rational beings and rational thinking capacity; and human dignity and humanity as an end in itself. Sections 3-7 consider how these core elements apply to artificial intelligence and robotics with discussion of fully autonomous and human-machine rule-generating approaches; types of moral reasoning; the difference between 'human will' and 'machine will'; and respecting human dignity.
Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. Aside from technical challenges of ensuring ethical conduct in artificial intelligence and robotics, there are moral questions about the desirability of replacing human functions and the human mind with such technology. How will artificial intelligence and robotics engage in moral reasoning in order to act ethically? Is there a need for a new set of moral rules? What happens to human interaction when it is mediated by technology? Should such technology be used to end human life? Who bears responsibility for wrongdoing or harmful conduct by artificial intelligence and robotics? This paper seeks to address some ethical issues surrounding the development and use of artificial intelligence and robotics in the civilian and military spheres. It explores the implications of fully autonomous and human-machine rule-generating approaches, the difference between "human will" and "machine will", and the difference between machine logic and human judgment.
This article considers whether greater accountability for EU supranational decision-making can be achieved through a combination of member states' legislative processes and EU treaty-based mechanisms. The EU is formed by member states' national consent through treaty ratification and a system of domestic pre-legislative controls on consent (parliamentary approval, public consultation, and referendum) which operates to limit the nature and extent of EU law. Using the UK as an example to compare with other member states, the article contends that such domestic controls are prerequisites to national incorporation of EU law and strengthen democratic accountability. Consent alone, however, does not provide an adequate basis for accountability of supranational decisions; EU constitutional principles of citizenship, democracy, and political rights illustrate how the EU fulfills a role as protector of rights. The article further argues that the EU's protector role represents partial legitimacy and accountability for supranational decisions. Greater legitimacy and accountability derives from national parliaments' pre-legislative controls under EU law: scrutinizing legislation, monitoring subsidiarity, and exercising veto powers. The article concludes that if these controls are exercised properly, they represent powerful accountability mechanisms.