Search Results
12 results
Algorithmic indifference: The dearth of news recommendations on TikTok
In: New Media & Society
ISSN: 1461-7315
The role of recommendation systems in news consumption has been hotly contested. From one perspective, the combination of personalized recommendations and practically limitless content diminishes news consumption, as people turn to more entertaining fare. From another, algorithmic systems and social networks heighten incidental exposure, raising opportunities for news consumption regardless of explicit individual interest. In this work, we examine the potential for algorithmic exposure to news on TikTok, a massively popular social network built around short-form video. In the context of US-based news audiences, we examine the accounts TikTok recommends, the videos it shows new users, and its trending hashtags. We find almost no evidence of proactive news exposure on TikTok's behalf. We also find that, while TikTok's algorithms respond slightly to active signals of news interest from simulated users, that response does not lead to increased exposure to credible news content. These findings highlight a lack of algorithmic news distribution on TikTok.
Predicting COVID: Understanding audience responses to predictive journalism via online comments
In: New Media & Society, S. 146144482211356
ISSN: 1461-7315
The COVID-19 pandemic triggered a global health crisis that prompted journalists to frame their stories around predictive models and forecasts of the pandemic's future course. This article examines the audience response to predictive journalism by qualitatively analyzing readers' comments on articles covering COVID that were published in a small sample of mainstream media. Based on a thematic analysis of readers' comments, this research contributes a typology of audience response types to the models incorporated in such predictive journalism. We elaborate on each of three primary themes, reflecting affective, action-oriented, and evaluative responses, and discuss the implications of our findings. We also highlight the importance of further research into the role of predictive journalism in shaping affective responses, encouraging action-oriented responses and collective planning around responsibility for taking future actions, and examining how the supportive and critical comments triggered by the models may be harnessed to improve communication.
Highlighting High-quality Content as a Moderation Strategy: The Role of New York Times Picks in Comment Quality and Engagement
In: ACM Transactions on Social Computing, Vol. 4, Issue 4, pp. 1-24
ISSN: 2469-7826
News commenting is a prevalent form of online interaction, but it is fraught with issues, such as the low quality of discussion that often takes place. While various moderation methods can be used to maintain online discussion quality, one moderation strategy that is underexplored is for professional moderators to mark high-quality posts that are further highlighted in the interface. In this work, we look at the impact of New York Times (NYT) Picks. We present an analysis of more than 13 million NYT comments, examining the quality and frequency of commenting on the site in response to NYT Picks. The findings offer evidence that NYT Picks are associated with an increase in the quality of first-time receivers' next approved comment, as well as the commenting frequency during commenters' early tenure on the site. The quality boost associated with receiving a Pick attenuates after subsequent Picks and diminishes over time as the user continues commenting, but remains higher than that of commenters who do not receive Picks. Visible comment quality has a relatively small but significant positive correlation with the quality of the next comment, and exposure to Pick badges is also positively correlated with subsequent higher-quality approved comments, albeit to a lesser extent. Our results underscore the potential for news organizations to adopt the moderation strategy of highlighting professionally selected high-quality comments to improve overall community quality. We discuss the implications of our findings and offer design opportunities for comment sections that could further enhance quality in online discourse.
Anticipating and addressing the ethical implications of deepfakes in the context of elections
In: New Media & Society, Vol. 23, Issue 7, pp. 2072-2098
ISSN: 1461-7315
New media synthesis technologies are rapidly advancing and becoming more accessible, allowing users to make video and audio clips (i.e. deepfakes) of individuals doing and saying things they never did or said. Deepfakes have significant implications for the integrity of many social domains including that of elections. Focusing on the 2020 US presidential election and using an anticipatory approach, this article examines the ethical issues raised by deepfakes and discusses strategies for addressing these issues. Eight hypothetical scenarios are developed and used as the basis for this analysis, which identifies harms to voters who view deepfakes, candidates and campaigns that are the subjects of deepfakes, and threats to electoral integrity. Four potential forms of intervention are discussed with respect to multi-stakeholder responsibility for addressing harms, including education and media literacy, subject defense, verification, and publicity moderation.
Optimizing Content with A/B Headline Testing: Changing Newsroom Practices
In: Media and Communication, Vol. 7, Issue 1, pp. 117-127
Audience analytics are an increasingly essential part of the modern newsroom as publishers seek to maximize the reach and commercial potential of their content. On top of a wealth of audience data collected, algorithmic approaches can then be applied with an eye towards predicting and optimizing the performance of content based on historical patterns. This work focuses specifically on content optimization practices surrounding the use of A/B headline testing in newsrooms. Using such approaches, digital newsrooms might audience-test as many as a dozen headlines per article, collecting data that allows an optimization algorithm to converge on the headline that is best with respect to some metric, such as the click-through rate. This article presents the results of an interview study which illuminate the ways in which A/B testing algorithms are changing workflow and headline writing practices, as well as the social dynamics shaping this process and its implementation within US newsrooms.
Negotiated Autonomy: The Role of Social Media Algorithms in Editorial Decision Making
In: Media and Communication, Vol. 8, Issue 3, pp. 27-38
Social media platforms have increasingly become an important way for news organizations to distribute content to their audiences. As news organizations relinquish control over distribution, they may feel the need to optimize their content to align with platform logics to ensure economic sustainability. However, the opaque and often proprietary nature of platform algorithms makes it hard for news organizations to truly know what kinds of content are preferred and will perform well. Invoking the concept of algorithmic 'folk theories,' this article presents a study of in-depth, semi-structured interviews with 18 U.S.-based news journalists and editors to understand how they make sense of social media algorithms, and to what extent this influences editorial decision making. Our findings suggest that while journalists' understandings of platform algorithms create new considerations for gatekeeping practices, the extent to which it influences those practices is often negotiated against traditional journalistic conceptions of newsworthiness and journalistic autonomy.
Anticipating impacts: using large-scale scenario-writing to explore diverse implications of generative AI in the news environment
In: AI and ethics
ISSN: 2730-5961
The tremendous rise of generative AI has reached every part of society, including the news environment. There are many concerns about the individual and societal impact of the increasing use of generative AI, including issues such as disinformation and misinformation, discrimination, and the promotion of social tensions. However, research on anticipating the impact of generative AI is still in its infancy and mostly limited to the views of technology developers and/or researchers. In this paper, we aim to broaden the perspective and capture the expectations of three stakeholder groups (news consumers; technology developers; content creators) about the potential negative impacts of generative AI, as well as mitigation strategies to address these. Methodologically, we apply scenario-writing and use participatory foresight in the context of a survey (n = 119) to delve into cognitively diverse imaginations of the future. We qualitatively analyze the scenarios using thematic analysis to systematically map potential impacts of generative AI on the news environment, potential mitigation strategies, and the role of stakeholders in causing and mitigating these impacts. In addition, we measure respondents' opinions on a specific mitigation strategy, namely transparency obligations as suggested in Article 52 of the draft EU AI Act. We compare the results across different stakeholder groups and elaborate on different expected impacts across these groups. We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
Data-driven storytelling
In: A K Peters visualization series
Artificial Intelligence and Journalism
In: Journalism & Mass Communication Quarterly (JMCQ), Vol. 96, Issue 3, pp. 673-695
ISSN: 2161-430X
Countering Algorithmic Bias and Disinformation and Effectively Harnessing the Power of AI in Media
In: Journalism & Mass Communication Quarterly (JMCQ), Vol. 99, Issue 4, pp. 887-907
ISSN: 2161-430X
Responsible media technology and AI: challenges and research directions
In: AI and Ethics, Vol. 2, Issue 4, pp. 585-594
ISSN: 2730-5961
The last two decades have witnessed major disruptions to the traditional media industry as a result of technological breakthroughs. New opportunities and challenges continue to arise, most recently as a result of the rapid advance and adoption of artificial intelligence technologies. On the one hand, the broad adoption of these technologies may introduce new opportunities for diversifying media offerings, fighting disinformation, and advancing data-driven journalism. On the other hand, techniques such as algorithmic content selection and user personalization can introduce risks and societal threats. The challenge of balancing these opportunities and benefits against their potential for negative impacts underscores the need for more research in responsible media technology. In this paper, we first describe the major challenges, both for societies and the media industry, that come with modern media technology. We then outline various places in the media production and dissemination chain where research gaps exist, where better technical approaches are needed, and where technology must be designed in a way that can effectively support responsible editorial processes and principles. We argue that a comprehensive approach to research in responsible media technology, leveraging an interdisciplinary approach and a close cooperation between the media industry and academic institutions, is urgently needed.