The AI Special Issue: An Introduction
In: The RUSI journal: publication of the Royal United Services Institute for Defence and Security Studies, Volume 164, Issue 5-6, p. 10-11
ISSN: 1744-0378
Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either completed before deployment or allowed to continue during it. Because an AI system can optimise its behaviour, a unit's behaviour can diverge from its factory model after release, often at the perceived expense of safety, reliability and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.

Trust has no universally accepted definition, but [Rousseau 1998] define it as "a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behaviour of another". Trust is thus an attitude that an agent will behave as expected and can be relied upon to reach its goal. It breaks down after an error or a misunderstanding between the agent and the trusting individual. The psychological state of trust in AI is an emergent property of a complex system, usually involving many cycles of design, training, deployment, measurement of performance, regulation, redesign and retraining.

Trust matters, especially in critical sectors such as healthcare, defence and security, where duty of care is foremost. Trustworthiness must therefore be planned for rather than treated as an afterthought. We can trust in AI, such as when a doctor uses algorithms to screen medical images [NHS-X 2021]. We can also trust with AI, such as when journalists reference a social network algorithm to analyse the sources of a news story [WeVerify 2021].
Growing adoption of AI into institutional systems relies on citizens trusting these systems and having confidence in the way they are designed and regulated. Regional approaches for managing trust in AI have recently emerged, leading to different regulatory regimes in the United States, the European region and China; we review these regulatory divergences. Within the European region, research programmes are examining how trust affects user acceptance of AI. Examples include the UKRI Trustworthy Autonomous Systems Hub, the French Confiance.ai project and the German AI Breakthrough Hub. Europe appears to be developing a "third way" alongside the United States and China [Morton 2021].

Healthcare contains many examples of AI applications, including online harm risk identification [ProTechThem 2021], mental health behaviour classification [SafeSpacesNLP 2021] and automated blood testing [Pinpoint 2021]. In defence and security, examples include combat management systems [DSTL 2021] and the use of machine learning to identify chemical and biological contamination [Alan Turing Institute 2021]. There is a growing awareness within critical sectors [Kerasidou 2020] [Taddeo 2019] that AI systems need to address a "public trust deficit" by building reliability into the perception of AI.

In the next two sections, we discuss research highlights around two key trends: building safer and more reliable AI systems to engender trust, and putting humans in the loop with regard to AI systems and teams. We conclude with a discussion of applications and what we consider the future outlook to be in this area.