In: in Nikos Koutras and Niloufer Selvadurai (eds), Recreating Creativity, Reinventing Inventiveness - International Perspectives on AI and IP Governance (Routledge, 2023, Forthcoming)
This commentary reflects on how creativity is dehumanised (and rehumanised), and how its labour aspects are hindered (and highlighted), in three recent developments in our understanding of arts, culture and creativity: the creative industries; AI creativity; and creativity in everyday life. The creative industries discourse instrumentalises and dehumanises creativity by hiding labour perspectives and treating creativity as human capital and a generator of IP. Meanwhile, contemplating AI creativity helps us look beyond the economic paradigm and consider key traits of human creativity and the creation process, some aspects of which are successfully emulated by AI. Yet we also observe how AI dissociates creativity from human agency and how its cost-cutting effect can challenge human creators in many sectors. Finally, the idea of everyday creativity effectively rehumanises and democratises creativity; however, it not only lacks labour perspectives but also hinders them.
Abstract Peer feedback has been introduced as an effective learning strategy, especially in large classes where teachers face high workloads. However, for complex tasks such as writing an argumentative essay, peers may not provide high-quality feedback without support, since doing so requires a high level of cognitive processing, critical thinking skills, and a deep understanding of the subject. With the promising developments in Artificial Intelligence (AI), particularly since the emergence of ChatGPT, there is a global debate about whether AI tools can serve as a new source of feedback for complex tasks. The answer to this question is not yet clear, as studies are limited and our understanding remains constrained. In this study, we used ChatGPT as a source of feedback for students' argumentative essay writing tasks and compared the quality of ChatGPT-generated feedback with peer feedback. The participant pool consisted of 74 graduate students from a Dutch university. The study unfolded in two phases: first, students' essay data were collected as they composed essays on one of the given topics; subsequently, peer feedback and ChatGPT-generated feedback data were collected by engaging peers in a feedback process and using ChatGPT as a feedback source. Two coding schemes, one for essay analysis and one for feedback analysis, were used to measure the quality of essays and feedback. A MANOVA was then employed to determine any distinctions between the feedback generated by peers and by ChatGPT. Additionally, Spearman's correlation was used to explore potential links between essay quality and the feedback generated by peers and ChatGPT. The results showed a significant difference between feedback generated by ChatGPT and by peers.
While ChatGPT provided more descriptive feedback, including information about how the essay was written, peers provided feedback that focused on identifying problems in the essay. An overarching look at the results suggests a potential complementary role for ChatGPT and students in the feedback process. Regarding the relationship between the quality of the essays and the quality of the feedback provided by ChatGPT and peers, we found no overall significant relationship. These findings imply that essay quality does not affect the quality of either ChatGPT or peer feedback. The implications of this study are valuable, shedding light on the prospective use of ChatGPT as a feedback source, particularly for complex tasks like argumentative essay writing. We discuss the findings and delve into the implications for future research and practical applications in educational contexts.
Abstract Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment of US respondents to investigate the persuasiveness of news articles written by foreign propagandists compared to content generated by GPT-3 davinci (a large language model). We found that GPT-3 can create highly persuasive text as measured by participants' agreement with propaganda theses. We further investigated whether a person fluent in English could improve propaganda persuasiveness. Editing the prompt fed to GPT-3 and/or curating GPT-3's output made GPT-3 even more persuasive, and, under certain conditions, as persuasive as the original propaganda. Our findings suggest that propagandists could use AI to create convincing content with limited effort.
1. AI, Popular Culture, Semiotics -- 2. AI-Generated Literature -- 3. AI-Generated Cinema -- 4. AI-Generated Music -- 5. AI in the Mass Media -- 6. AI-Generated Art -- 7. AI in Marketing and Advertising -- 8. AI and Gaming Culture -- 9. Simulacrum Culture -- 10. AI and the Future of Popular Culture.
Abstract With the advent of large language models (LLMs) that can generate qualified content, such as images based on text prompts, the threat posed by AI creativity has become imminent. Courts and institutions in the US and China have provided different answers with regard to the copyrightability of prompt-based images. This article argues that the judicial recognition of text-to-image copyrightability at the current stage is dangerous. Not only is the practice not in accordance with our traditional understanding of originality and the author-work relationship; it also fails to consider the collateral consequences of such a finding. Perfectly regulating AI-generated content (AIGC) may be beyond the judiciary's capacity, as the solutions are provided within an ill-suited framework. It would be preferable for legislators to engage in thorough discussions with stakeholders to develop a considered regulatory plan first, which does not necessarily have to revolve around copyright.
Creative artificial intelligence (AI) has received a lot of attention in recent years. Artworks introduced as generated by AI (rather than by a human artist), however, are often evaluated negatively. Integrating extant research, we suggest that AI is ascribed less mind (i.e., agency and experience), which is responsible for this effect. In two experiments (N = 176 and N = 381) we observed negative indirect effects of artist information (AI vs human artist) on the appreciation of visual artworks. The AI is consistently ascribed less agency and less experience than a human artist. Higher levels of experience and agency ascribed to an artist are, in turn, associated with higher appreciation of a piece of art. In both experiments the total effect of artist information on appreciation was not significant. Artist information did not predict whether the artwork deviated positively from the expectations viewers developed before the actual artwork was encountered.