71 results
In: European journal of risk regulation: EJRR ; at the intersection of global law, science and policy, Volume 9, Issue 4, pp. 655-676
ISSN: 2190-8249
The field of robotics is remarkably wide, with many social settings now entailing and increasingly requiring the use of robots to support a variety of human activities. Unsurprisingly, robots' form and shape, their level of intelligence and intended purpose can vary significantly depending on the relevant industry. Domestic robots are already a reality in a growing number of family homes. They include both humanoid robots which support those in need (such as the elderly, people with disabilities or children) and robots for household consumer markets, including domestic vacuum cleaners and grass-trimmers. Humanoid robots only account for a small fraction of the industry, with robotic arms for industrial automation being instead widespread.
In: Social and cultural studies of robots and AI
In: Social and Cultural Studies of Robots and AI Ser.
Intro; Dedication; Foreword; Preface; Contents
Chapter 1: Introduction: Challenging Sociality
  1.1 Autism and the Machine
  1.2 Absence of the Social
  1.3 Pronouns and Relationality
  1.4 Machine Sociality
    Theory of Mind and Mindblindness
    Extreme Male Brain and Systemising
  Bibliography
Chapter 2: Autism, Social Attachment and Things
  2.1 Autism Sociality
  2.2 299.00 Autistic Disorder
  2.3 Autism and Robots
  2.4 Theatre and Robots
  2.5 Anthropology and the Social
  Bibliography
Chapter 3: The Experiment: The Effectiveness of a Humanoid Robot for Helping Children Develop Social Skills
  3.1 Between the Social and Asocial
  3.2 Robots
  3.3 The School
  3.4 The Protocols
    Method
    Analysis
    Play Assessment, Session Content and Schedule
  3.5 The Experiment
  3.6 The Results
  Bibliography
Chapter 4: Echo and Narcissus and Pronoun-Reversal
  4.1 The Myth of the Egocentric Child
  4.2 Narcissus: From Myth to Sexual Pathology to Politics of the Individual
  4.3 Echolalia, Pronoun-Reversal and I-You Interrelatedness
  4.4 Robots to Humans as the Lake Is to Narcissus
  Bibliography
Chapter 5: Attachment Theory and Autism
  5.1 Bonds and Ties
  5.2 Mother-Love to Mother-Hate
  5.3 Children of the Wild
  5.4 The Machine as Surrogate Mother
  5.5 Attachment to Products
  Bibliography
Chapter 6: Autism and the Machine
  6.1 The Artificial Human
  6.2 Man as a Machine
  6.3 Autism as a Machine-State
  6.4 Robot as Analogy to Autism
  Bibliography
Chapter 7: Critical Autism Studies and Robot Therapy
  7.1 A New Horizon
  7.2 Labels and Language of Autism
  7.3 It and Things
  7.4 Robot Therapy
  7.5 Recognition
  Bibliography
Chapter 8: Conclusion: Terminating the Machine
  8.1 The Social and Cultural Studies of Robots
  8.2 I and Thou
19th International Workshop of Physical Agents (WAF), Madrid (22-23 November 2018) ; ABSTRACT: This paper presents a personalized contingency feedback adaptation system that aims to encourage infants aged 6 to 8 months to gradually increase the peak acceleration of their leg movements. The ultimate challenge is to determine whether a socially assistive humanoid robot can guide infant learning using contingent rewards, where the reward threshold is personalized for each infant using a reinforcement learning algorithm. The model was learned from data captured by wearable inertial sensors measuring infant leg movement accelerations in an earlier study. Each infant generated a unique model that determined the behavior of the robot. The presented results were obtained from the distributions of the participants' acceleration peaks and demonstrate that the resulting model is sensitive to the degree of differentiation among the participants; each participant (infant) should have his/her own learned policy. ; This work was supported by NSF award 1706964 (PI: Smith, Co-PI: Matarić). In addition, this work was developed during an international mobility program at the University of Southern California and was also partially funded by the European Union ECHORD++ project (FP7-ICT-601116), the LifeBots project (TIN2015-65686-C5) and the THERAPIST project (TIN2012-38079).
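The per-infant reward-threshold idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the percentile-based rule, function names and the sample data are all illustrative assumptions; the paper personalizes the threshold with a reinforcement learning algorithm.

```python
def personalized_threshold(peaks, percentile=75):
    """Place the reward threshold at a percentile of the infant's own
    observed peak accelerations (an illustrative rule, not the paper's
    reinforcement learning policy)."""
    ranked = sorted(peaks)
    k = int(len(ranked) * percentile / 100)
    return ranked[min(k, len(ranked) - 1)]

def run_session(peaks, threshold):
    """The robot rewards each leg movement whose peak acceleration
    meets or exceeds the personalized threshold."""
    return [p >= threshold for p in peaks]

# Two hypothetical infants with different movement profiles
infant_a = [0.8, 1.1, 0.9, 1.4, 1.0]
infant_b = [2.0, 2.4, 1.9, 2.8, 2.2]

t_a = personalized_threshold(infant_a)
t_b = personalized_threshold(infant_b)
assert t_a < t_b  # each infant gets an individually calibrated policy
```

The point the sketch makes is the one the abstract emphasizes: a single shared threshold would under-challenge one infant and over-challenge another, so each participant needs a policy fitted to his or her own acceleration distribution.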
BASE
In 2017, Saudi Arabia declared Sophia, the humanoid robot, an official Saudi citizen. This event has highlighted gender inequality issues in Saudi Arabia and has also led to many debates concerning human rights as well as women's rights, given that the Kingdom is known for still being conservative and traditionally patriarchal, having strong religious values and still not regarding women as political persons or as equal to Saudi men. In other words, this case is particularly peculiar because of the active denial of women's human rights, their lack of full citizenship and the simultaneous concession of citizenship to a female-looking nonhuman being. Accordingly, through the analysis of the existing literature, news articles, laws and the conducted interview, the present project intends to study this unique decision and is based on an explorative and evaluative approach to the reasoning behind the citizenship process as well as to the comparison between Sophia's and Saudi women's rights in the Kingdom. The aims herein are to demonstrate why the robot was granted Saudi citizenship, to explore how women's rights have evolved over the course of the last years and to bridge the current gap in the literature that consists of the comparison between Sophia and Saudi women in different domains. In this way, the research enabled the demonstration of a lack of coherence between the liberties granted to a robot and to Saudi women, as well as the identification of a citizenship process that was not well thought out and of different motives for Sophia's citizenship.
BASE
An Army Colonel feels sorry for a robot that defuses landmines on a trial basis and declares the test inhumane (Garreau, 2007). Robots receive military promotions, funerals and medals of honor (Garreau, 2007; Carpenter, 2013). A turtle robot is being developed to teach children to treat robots well (Ackermann, 2018). The humanoid robot Sophia recently became a Saudi Arabian citizen and there are now debates about whether robots should have rights (Delcker, 2018). These and similar developments already show the importance of robots and the emotional impact they have. Nevertheless, these emotional reactions seem to take place on a different level, judging by comments in internet forums alone: most often, emotional reactions towards robots are questioned, if not denied altogether. In fact, from a purely rational point of view, it is difficult to explain why people should empathize with a mindless machine. However, not only the reports mentioned above but also initial scientific studies (e.g. Rosenthal-von der Pütten et al., 2013) bear witness to the emotional influence of robots on humans. Despite the importance of researching emotional reactions towards robots, there are few scientific studies on this subject. In fact, Kappas, Krumhuber and Küster (2013) identified the systematic analysis and evaluation of social reactions towards robots as one of the major challenges of affective Human-Robot Interaction (HRI). According to Scherer (2001; 2005), emotions consist of the coordination and synchronization of different components that are linked to each other.
These include motor expression (facial expressions), subjective experience, action tendencies, physiological and cognitive components. To fully capture an emotion, all these components would have to be measured, but such a comprehensive analysis has never been performed (Scherer, 2005). Primarily, questionnaires are used (cf. Bethel & Murphy, 2010) but most of them only capture subjective experiences. Bakeman and Gottman (1997) even state that only about 8% of psychological research is based on behavioral data, although psychology is traditionally defined as the 'study of the mind and behavior' (American Psychological Association, 2018). The measurement of other emotional components is rare. In addition, questionnaires have a number of disadvantages (Austin, Deary, Gibson, McGregor, Dent, 1998; Fan et al., 2006; Wilcox, 2011). Bethel and Murphy (2010) as well as Arkin and Moshkina (2015) argue for a multi-method approach to achieve a more comprehensive understanding of affective processes in HRI. The main goal of this dissertation is therefore to use a multi-method approach to capture different components of emotions (motor expression, subjective feeling component, action tendencies) and thus contribute to a more complete and profound picture of emotional processes towards robots. To achieve this goal, three experimental studies were conducted with a total of 491 participants. With different levels of 'apparent reality' (Frijda, 2007) and power/control over the situation (cf. Scherer & Ellgring, 2007), the extent to which the intensity and quality of emotional responses to robots change were investigated as well as the influence of other factors (appearance of the robot, emotional expressivity of the robot, treatment of the robot, authority status of the robot). Experiment 1 was based on videos showing different types of robots (animal-like, anthropomorphic, machine-like) in different situations (friendly treatment of the robot vs. 
mistreatment) while being either emotionally expressive or not. Self-reports of feelings as well as the motoric-expressive component of emotion, facial expressions (cf. Scherer, 2005), were analyzed. The Facial Action Coding System (Ekman, Friesen, & Hager, 2002), the most comprehensive and most widely used method for objectively assessing facial expressions, was utilized for this purpose. Results showed that participants displayed facial expressions (Action Unit [AU] 12 and AUs associated with positive emotions as well as AU 4 and AUs associated with negative emotions) as well as self-reported feelings in line with the valence of the treatment shown in the videos. Stronger emotional reactions could be observed for emotionally expressive robots than for non-expressive robots. Most pity, empathy, negative feelings and sadness were reported for the animal-like robot Pleo while watching it being mistreated, followed by the anthropomorphic robot Reeti, and least for the machine-like robot Roomba. Most antipathy was attributed to Roomba. The findings are in line with previous research (e.g., Krach et al., 2008; Menne & Schwab, 2018; Riek et al., 2009; Rosenthal-von der Pütten et al., 2013) and show facial expressions' potential for a natural HRI. Experiment 2 and Experiment 3 transferred Milgram's classic experiments (1963; 1974) on obedience into the context of HRI. Milgram's obedience studies were deemed highly suitable to study the extent of empathy towards a robot in relation to obedience to a robot. Experiment 2 differed from Experiment 3 in the level of 'apparent reality' (Frijda, 2007): based on Milgram (1963), a purely text-based study (Experiment 2) was compared with a live HRI (Experiment 3).
While the dependent variables of Experiment 2 consisted of self-reports of emotional feelings and assessments of hypothetical behavior, Experiment 3 measured subjective feelings and real behavior (reaction time: duration of hesitation; obedience rate; number of protests; facial expressions) of the participants. Both experiments examined the influence of the factors "authority status" (high / low) of the robot giving the orders (Nao) and the emotional expressivity (on / off) of the robot receiving the punishments (Pleo). The subjective feelings of the participants from Experiment 2 did not differ between the groups. In addition, only a few participants (20.2%) stated that they would definitely punish the "victim" robot. Milgram (1963) found a similar result. However, the real behavior of participants in Milgram's laboratory experiment differed from the estimates of hypothetical behavior of participants to whom Milgram had only described the experiment. Similarly, comments from participants in Experiment 2 suggest that the scenario described may have been considered fictitious and that assessments of hypothetical behavior may not provide a realistic picture of real behavior towards robots in a live interaction. Therefore, another experiment (Experiment 3) was performed with a live interaction with a robot as an authority figure (high authority status vs. low) and another robot as "victim" (emotionally expressive vs. non-expressive). Group differences were found in questionnaires on emotional responses. More empathy was shown for the emotionally expressive robot, and more joy and less antipathy were reported than for a non-expressive robot. In addition, facial expressions associated with negative emotions could be observed while subjects executed Nao's command and punished Pleo. Although subjects tended to hesitate longer when punishing an emotionally expressive robot and when the order came from a robot with low authority status, this difference did not reach significance.
Furthermore, all but one subject were obedient and punished Pleo as commanded by the Nao robot. This result stands in stark contrast to the self-reported hypothetical behavior of the participants from Experiment 2 and supports the assumption that assessments of hypothetical behavior in a Human-Robot obedience scenario are not reliable indicators of real behavior in a live HRI. Situational variables, such as obedience to authorities, even to a robot, seem to be stronger than empathy for a robot. This finding is in line with previous studies (e.g. Bartneck & Hu, 2008; Geiskkovitch et al., 2016; Menne, 2017; Slater et al., 2006), opens up new insights into the influence of robots, but also shows that the choice of a method to evoke empathy for a robot is not a trivial matter (cf. Geiskkovitch et al., 2016; Milgram, 1965). Overall, the results support the assumption that emotional reactions to robots are profound and manifest both at the subjective level and in the motor component. Humans react emotionally to a robot that is emotionally expressive and looks less like a machine. They feel empathy and negative feelings when a robot is abused, and these emotional reactions are reflected in facial expressions. In addition, people's assessments of their own hypothetical behavior differ from their actual behavior, which is why video-based or live interactions are recommended for analyzing real behavioral responses. The arrival of social robots in society leads to unprecedented questions, and this dissertation provides a first step towards understanding these new challenges. ; Are there emotional reactions towards social robots? Could you love a robot? Or, put the other way round: could you mistreat a robot, tear it apart and sell it? The media report people honoring military robots with funerals, mourning the "death" of a robotic dog, and the humanoid robot Sophia being granted citizenship. But how profound are these reactions?
Three experiments take a closer look at emotional reactions towards social robots by investigating the subjective experience of people as well as the motor-expressive level. Contexts of varying degrees of Human-Robot Interaction (HRI) sketch a nuanced picture of emotions towards social robots that encompasses conscious as well as unconscious reactions. The findings advance the understanding of affective experiences in HRI. They also turn the initial question into: can emotional reactions towards social robots even be avoided?
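The FACS-based valence analysis described for Experiment 1 can be illustrated with a toy tally. The AU numbers are real FACS codes, but the AU-to-valence mapping, the scoring rule and the sample frames below are illustrative assumptions, not the dissertation's actual coding procedure:

```python
# Illustrative mapping of FACS Action Units to valence (assumed, simplified)
POSITIVE_AUS = {6, 12}   # cheek raiser, lip corner puller (smiling)
NEGATIVE_AUS = {1, 4}    # inner brow raiser, brow lowerer (distress)

def valence_score(frames):
    """Sum +1 for each positive AU and -1 for each negative AU
    observed across the coded video frames."""
    score = 0
    for aus in frames:
        score += sum(1 for au in aus if au in POSITIVE_AUS)
        score -= sum(1 for au in aus if au in NEGATIVE_AUS)
    return score

# Hypothetical frames coded while watching friendly vs. abusive treatment
friendly = [{6, 12}, {12}, set()]
abusive = [{1, 4}, {4}, {4, 1}]
assert valence_score(friendly) > 0 > valence_score(abusive)
```

The sketch mirrors the reported pattern: AU 12 and related units track the friendly-treatment videos, while AU 4 and related units track the mistreatment videos.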
BASE
In: AI & SOCIETY
Abstract A humanoid robot named 'Sophia' has sparked controversy since it was given citizenship and has given media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence (AI). Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia's citizenship and intelligence, and going beyond recent discussions on the moral status or legal personhood of AI robots, we analyse the performativity of Sophia from the perspective of what we call 'political choreography', drawing on phenomenological approaches to performance-oriented philosophy of technology. This paper proposes to interpret and discuss the world tour of Sophia as a political choreography that boosts the rise of the social robot market, rather than a statement about robot citizenship or artificial intelligence. We argue that the media performances of the Sophia robot were choreographed to advance specific political interests. We illustrate our philosophical discussion with media material of the Sophia performance, which helps us to explore the mechanisms through which the media spectacle functions hand in hand with advancing the economic interests of technology industries and their governmental promoters. Using a phenomenological approach and attending to the movement of robots, we also criticize the notion of 'embodied intelligence' used in the context of social robotics and AI. In this way, we put the discussions about the robot's rights or citizenship in the context of AI politics and economics.
Peer reviewed
BASE
Emotional expressivity can boost trust in human-human and human-machine interaction. As a multimodal phenomenon, previous research argued that a mismatch in the expressive channels provides evidence of joint audio-video emotional processing. However, while previous work studied this from the point of view of emotion recognition and processing, not much is known about what effect a multimodal agent would have on a human-agent interaction task. Agent appearance could influence this interaction too. Here we manipulated the agent's multimodal emotional expression ("smiling face" and "smiling voice", or both) and agent type (photorealistic or cartoon-like virtual human) and assessed people's trust toward this agent. We measured trust using a mixed-methods approach, combining behavioural data from a survival task, questionnaire ratings and qualitative comments. These methods gave different results: while people commented on the importance of emotional expressivity in the agent's voice, this factor had limited influence on trusting behaviours; and while people rated the cartoon-like agent higher on several traits than the photorealistic one, the agent's style was also not the most influential feature on people's trusting behaviour. These results highlight the contribution of a mixed-methods approach in human-machine interaction, as both explicit and implicit perception and behaviour will contribute to the success of the interaction. ; Funding: The research was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 713567, and by the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2016) and is co-funded by the European Regional Development Fund. The second author received funding from the Science Foundation Ireland Game Face (13/CDA/2135) project.
Funding text #2 The research was funded by the European Union?s Horizon 2020 research and innovation programme under the Marie Sk?odowska-Curie grant agreement No. 713567, and by the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2016) and is co-funded by the European Regional Development Fund. The second author received funding from the Science Foundation Ireland, Game Face (13/CDA/2135) project.
Cover -- Title page -- Copyright page -- List of Contributors -- Contents -- Chapter 1 - Africa's Postcolonial Challenge: The Politics and Resilience of Underdevelopment -- Introduction -- Understanding underdevelopment in Africa -- A journey through the book -- References -- Chapter 2 - Hearing the Footfalls of Humanoid Robots: Technoscience, (Un-)employment and the Future of "Development" in Twenty-First Century Africa -- Introduction -- Humanoid robots, transhumanism, posthumanism and "fated development" in Africa -- Humanoid robots, (un-) employment, zvikwambo and discourses of witchcraft -- Conclusion -- References -- Chapter 3 - China for Africa, Africa for China or Africa and China for Each Other? Assessing Chinese Development Collaboration for African Transformations -- Introduction: China and Africa -- Africa and China: An emerging relationship -- The China-Africa relationship: Critics and supporters -- Foreign Direct Investment in sub-Saharan Africa: China's investments -- China's aid assistance to sub-Saharan Africa -- SEZs and manufactures in Africa -- Understanding China's rapid industrial development: Unravelling the China Model for African contexts -- China's development success story -- Reform, State planning and organisation: The secret development factor? -- Applying the Chinese development model: China-Africa industrial development strategies -- References -- Chapter 4 - Logical Fallacies in African Development Debates -- Introduction -- Conceptual Analysis -- Fallacy of composition: Conceptualising development as exclusive economic or scientific growth -- Fallacy of Hasty generalisation about African conditions and situations -- Fallacy of false dichotomy: Tradition versus modernity -- Fallacy of False cause: External versus internal development problems -- Fallacy of irrelevant appeal to authority of the gun.
"Love between human and machine is nothing new. For the first time, however, it can also be lived out sexually. More and more electronic and digital sex dolls are pushing onto the market and changing the conditions of human sexuality. What does it mean, then, to desire a machine? The literary scholar Sophie Wennerscheid argues for a relaxed approach to the technology" (The Best Non-Fiction Books of March 2019, no. 9). Sexuality and Technology -- When Your Sex Toy Winks at You -- Self-Sex as Problem and Possibility -- Strange Desire -- From the Turn to Affect and Its Commercialisation -- Freed from the Compulsion of Nature -- Cupid on New Paths: Assisted Reproduction -- Artificial Wombs, Robot Children and Reborn Babies -- Unrest in the Human Zoo -- Clone Fantasies, or: On the Eroticism of the Same -- SEX-TECH -- Love and Passion at the Push of a Button? -- Objects of Desire: Sex Toys and Prosthetic Sex -- Control Fantasies: From Helmsman to Cybersex -- Sexual Violence in the Virtual World -- ENCOUNTERING THE OTHER -- Creation Myths and Fantasies of Devotion -- On Being Other, or: In the Darkroom of the Self -- The Doll as Partner? -- The Uncanny -- Touch Between Human and Machine -- The Robots Are Coming -- Social Robots -- Sex Robots: Pro and Contra Sex Machines -- Affect and Touch in Science-Fiction Film -- Galatea's Sisters: The New Heroines? -- Why Sex with a Robot That Is Always Ready Is Boring -- Human, All Too Human -- Violence Against Robots and the Robot Uprising -- Machine Love -- Strange Kinship
According to the Swedish science fiction TV series Äkta människor (Real Humans, SVT and Matador film 2012-2014), humanoid robots called "hubots" are replacing the human workforce in care work and assembly line industries. Against the backdrop of current debates about immigration and citizenship in the Nordic countries, this article does a close, contextual reading of the series, exploring how the hubots influence work and family life. We are particularly interested in how hubots tie in with the cultural circulation of affect in relation to Otherness and how responses towards the "not-quite" human or dehumanized Other are negotiated in the present-day Nordic cultural imaginaries. What kinds of affects are at stake in how Äkta människor takes up and interacts with debates about immigrant workers and the "not-quite" human? To answer these questions, the article develops the notion of "affective imaginaries" as an analytical tool for understanding the exchange between popular culture and political debate.
In: Information, technology & people, Band 32, Heft 1, S. 47-67
ISSN: 1758-5813
Purpose
The purpose of this paper is to investigate the publicity around the implementation of the Zora robot in elderly-care services in Lahti, Finland. The aim is to discover opinions concerning the use of robots in elderly care as well as the arguments and justifications behind those opinions. Zora is a humanoid robot intended to promote mobility and rehabilitation. The Lahti pilot was the first Zora pilot in Finland in public elderly-care services. It received much publicity, both regionally and nationally.
Design/methodology/approach
This study is based on an empirical case study on the implementation of the Zora robot in elderly-care services. The data consist of interviews with personnel who operated Zora and comments from the general public about the Zora robot. Two data sources were used: 107 comments were collected from online and print media, and the personnel (n=39) who worked with Zora were interviewed. The data were analysed by means of interpretative content analysis.
Findings
The results show that public opinion is mainly negative, but that the commentators apparently have little information about the robot and its tasks. The personnel had more positive views; they saw it as a recreational tool, not as a replacement for their own roles.
Originality/value
There is clearly a need for more information, for a better-informed discussion on how robots can be used in elderly care, and for ways to involve the general public in this discussion constructively.
In: Design Ecologies, Band 6, Heft 1, S. 12-31
ISSN: 2043-0698
Abstract
The phrase 'terminal identity' has been used by theorists of cyber- and cypher-punk to explain the constitution of posthuman identity as projected by works of science fiction. I will now consider its application and potential adaptation to the complex and emerging identity constitution of non-fictive, material, humanoid robots developed in present-day research facilities around the world. In particular, I refer herein to the realistic androids or 'very humanlike robots' produced by the Japan-based projects in 'android science'. To generate humans' affinity with these aesthetically very humanlike androids, engineers currently rely primarily on the display architecture of the human. Thus, 'very humanlike robots' can perhaps be gainfully thought of as 'terminal emulators'.