In defense of the Turing test
In: AI & society: the journal of human-centred systems and machine intelligence, Volume 35, Issue 4, pp. 819-827
ISSN: 1435-5655
In: Philosophy & technology, Volume 37, Issue 3
ISSN: 2210-5441
Abstract: In this paper, I raise the question whether an artificial intelligence can act morally. I first sketch and defend a general picture of what is at stake in this question. I then sketch and defend a behavioral test, known as the Moral Turing Test, as a good sufficiency test for an artificial intelligence acting morally. I end by discussing some general anticipated objections.
In: Information & Communications Technology Law, Volume 21, Issue 2 (June 2012)
SSRN
Blog: American Enterprise Institute – AEI
A recent study by UC San Diego researchers looked at the degree to which several prominent AI models could convincingly mimic human conversation, an application of the so-called Turing test for identifying when a computer program has reached human-level intelligence. The results were telling.
From the post "Did AI Just Pass the Turing Test?" on the American Enterprise Institute (AEI) blog.
In: Wisconsin Law Review, Vol. 2020
SSRN
Working paper
In: Synthese: an international journal for epistemology, methodology and philosophy of science, Volume 104, Issue 1, pp. 59-69
ISSN: 1573-0964
In: AI & society: the journal of human-centred systems and machine intelligence, Volume 39, Issue 3, pp. 1445-1446
ISSN: 1435-5655
In: Postmodern culture, Volume 20, Issue 3
ISSN: 1053-1920
In popular culture and in artificial intelligence, the Turing test has been understood as a means to distinguish between human and machine. Through a discussion of Richard Powers's Galatea 2.2: A Novel, Joseph Weizenbaum's computer program therapist ELIZA, and Emily Short's interactive fiction Galatea, this essay argues that our continued fascination with the Turing test can also be understood through Turing's introduction of the very possibility of misidentifying human for machine, and machine for human. This spectre of misidentification can open up potential recalibrations of human-machine interactivities, as well as the very categories of human and machine. Reading these literary and computational works alongside theoretical discussions of the Turing test, the essay attends to anthropomorphization as a productive metaphor in the Turing test. Anthropomorphization is a significant cultural force that shapes and undergirds multiple discursive spaces, operating varyingly therein to articulate conceptions of the human that are not reified and inviolable, but that continuously re-emerge through dynamic human-machine relations.
In: Transformative Works and Cultures: TWC, Volume 39
ISSN: 1941-2258
This autoethnography traces Susan Stryker's articulation of transgender rage through the monstrous cyborg figure of the catfish (people who pretend to be someone else online), examining the passing politics of the Turing Test and its trans foundations. The author's disidentification with catfish characters in Glee, Pretty Little Liars, and Gossip Girl allows these characters to transmit and produce transgender rage, illustrating the strengths and weaknesses of taking a disidentificatory approach to transphobic texts.
In: Phenomenology and the cognitive sciences, Volume 8, Issue 3, pp. 361-370
ISSN: 1572-8676
SSRN
"Since its inception, Artificial Intelligence (AI) has been nurtured by the dream - cherished by some scientists while dismissed as unrealistic by others - that it will lead to forms of intelligence similar or alternative to human life. However, AI might be more accurately described as a range of technologies providing a convincing illusion of intelligence - in other words, not so much the creation of intelligent beings as the creation of technologies that are perceived by humans as such. Deceitful Media argues that AI resides also and especially in the perception of human users. Exploring the history of AI from its origins in the Turing Test to contemporary AI voice assistants such as Alexa and Siri, Simone Natale demonstrates that our tendency to project humanity into things shapes the very functioning and implications of AI. He argues for a recalibration of the relationship between deception and AI that helps recognize and critically question how computing technologies mobilize specific aspects of users' perception and psychology in order to create what we call "AI." Introducing the concept of "banal deception," which describes deceptive mechanisms and practices that are embedded in AI, the book shows that deception is as central to AI's functioning as the circuits, software, and data that make it run. Delving into the relationship between AI and deception, Deceitful Media thus reformulates the debate on AI on the basis of a new assumption: that what machines are changing is primarily us, humans. If 'intelligent' machines might one day revolutionize life, the book provocatively suggests, they are already transforming how we understand and carry out social interactions."
In: AI & society: the journal of human-centred systems and machine intelligence, Volume 35, Issue 1, pp. 283-287
ISSN: 1435-5655
In: RESPOL-D-23-01791
SSRN
In: New media & society: an international and interdisciplinary forum for the examination of the social dynamics of media and information change, Volume 25, Issue 8, pp. 2277-2280
ISSN: 1461-7315