Sünje Paasch-Colberg examines the role of media agenda-setting effects in voting decisions and interweaves communication research on media effects with political science research on electoral behavior more closely than before. Her starting points are the agenda-setting and priming approaches as well as the major theoretical currents of electoral research (sociological approaches, the social-psychological model, and the rational choice perspective). The result of this theoretical work is an interdisciplinary model of the voting decision, which is tested empirically in a secondary analysis of the 2009 German federal election.
Contents: Topics, mass media, and elections; media agenda-setting effects: agenda setting and priming; voting decisions and political issues.
Target groups: Lecturers and students of communication science and political science; journalists and practitioners in the field of politics, as well as citizens interested in elections.
The author: Sünje Paasch-Colberg is a research associate at the Division for Media Analysis/Research Methods at the Institute for Media and Communication Studies of Freie Universität Berlin. Her research focuses on political communication and media effects research, mass media and social integration, and the development of research methods.
In recent years, many newsrooms have taken measures against abusive user contributions, insults, and hate speech in the comment sections and discussion forums of their websites. Besides formulating community guidelines, moderating comments manually, and deploying monitoring software, the "discourse architecture", that is, the technological design of these comment spaces, plays a major role. This includes, for example, various forms of registration, the sorting of comment threads, and different degrees of anonymization. Previous research on the topic has mostly examined such discourse architectures comparatively in case studies in order to trace possible effects of the technological environment on commenting behavior. The individual components of discourse architectures, however, were usually not differentiated analytically. The present article addresses this gap and presents a study in which all 361 editorially managed German news sites listed by the IVW were analyzed in detail with respect to ten different features. The analysis shows that the 173 news sites that offer comment sections at all fall far short of exhausting their options for technological regulation. Using a hierarchical cluster analysis, five distinct types of discourse architectures in comment sections were finally identified, which can be used for classification in future studies. ; For some years now, news sites around the world have increasingly been confronted with abusive user comments in their respective comment sections and discussion forums. While these spaces were long seen as promising instruments of democratic participation, they now have a reputation as spaces full of insults and hate speech.
Since this not only poses a threat to social cohesion but can also compromise the image of a news site, many platforms have taken measures to regulate the comments on their sites. Some have published community guidelines, hired moderation teams, and implemented monitoring software. As an additional measure, many have adapted the technological design and the features of their comment spaces to gain more control over the posted comments. This includes, for example, requiring commenters to register with the site, sorting comment threads, and applying various degrees of anonymization. Many authors refer to this technological design of comment spaces as "discourse architecture." The theoretical argument behind this term is that the way comment spaces are "built" influences how commenters behave within them. This perspective is particularly interesting from the point of view of journalism research, since the relationship between editorial staff and audience is manifested in such technological architectures. Several studies have analyzed and compared various discourse architectures in order to investigate possible effects on commenting behavior. However, a systematic analysis of this field is still lacking: apart from individual case studies, there are no findings on the diversity of discourse architectures that would shed light on the technical conditions of audience participation on the Internet. On the theoretical basis of the discourse architecture approach, this study investigates two research questions: How are the discourse architectures of German news sites designed (RQ1)? And what types of discourse architectures can we identify (RQ2)? To answer these questions, we conducted a standardized analysis of 361 German news sites, which produced three key findings. Firstly, with regard to RQ1, we found that 173 of these 361 news sites offer comment sections, whereas only 24 offer discussion forums.
In contrast, almost all sites in the sample have an additional Facebook page. Although we have not checked whether these pages actually contain posts and comments, against this background we can nevertheless assume that the discourse architecture of Facebook has become the most important technological infrastructure for commenting on news articles in Germany. Given the low deliberative quality of user discussions on Facebook revealed by earlier studies, this would be quite problematic with regard to social integration. Secondly, the detailed analysis of the comment sections showed that most news sites do not exhaust the possibilities of using technical discourse architectures to gain more control over user discussions. Overall, the technological design of the comment sections is quite inclusive, not very restrictive, and only weakly regulated. The most popular features are required registration, rating of comments, opportunities to report comments, and the restriction of comment sections to certain topics. Thirdly, with regard to RQ2, five distinct types of discourse architectures for comment sections could be identified within the sample. They differ in terms of their combinations of features as well as in terms of their reach. Additionally, we found a significant correlation between the reach of the news sites and the number of features that strengthen editorial control over the comments.
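The typology described above rests on a hierarchical cluster analysis over binary feature codings. The following is a minimal sketch of how such a clustering works, using Jaccard distance and average linkage; the sites, features, and number of clusters are hypothetical toy data, not the study's actual codings or method settings.

```python
# Agglomerative (hierarchical) clustering sketch over binary feature codings.
# Assumption: Jaccard distance with average linkage; the real study may have
# used different distance and linkage settings.

def jaccard(a, b):
    """Jaccard distance between two binary feature vectors."""
    union = sum(1 for x, y in zip(a, b) if x or y)
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 0.0 if union == 0 else 1.0 - inter / union

def cluster(rows, k):
    """Repeatedly merge the two closest clusters until k clusters remain."""
    clusters = [[i] for i in range(len(rows))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average linkage: mean pairwise distance between clusters.
                d = sum(jaccard(rows[a], rows[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Rows: news sites; columns: presence (1) / absence (0) of a feature such as
# required registration, comment rating, a report option, or a forum.
sites = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
]
print(cluster(sites, 2))  # → [[0, 1, 4], [2, 3]]
```

The resulting groups collect sites with similar feature combinations, which is the kind of output that can then be interpreted as discourse-architecture types.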
Archived here are the methods documentation and the codebook of an analysis of the discourse architectures of 361 German news sites (Methodendokumentation.pdf, Codeplan.pdf), the processed and cleaned data set with all codings, annotations, and the IVW data of the examined websites (Datensatz_final.sav, Datensatz_final.csv, Datensatz_final_label.csv), an SPSS syntax documenting this data preparation and cleaning (Datenaufbereitung.sps), and a further SPSS syntax documenting and replicating the data analysis based on the data set made available here (Datenanalyse.sps). The results of the analysis are documented in the publication cited below.
This book is the result of a conference that could not take place. It is a collection of 26 texts that address and discuss the latest developments in international hate speech research from a wide range of disciplinary perspectives. This includes case studies from Brazil, Lebanon, Poland, Nigeria, and India, theoretical introductions to the concepts of hate speech, dangerous speech, incivility, toxicity, extreme speech, and dark participation, as well as reflections on methodological challenges such as scraping, annotation, datafication, implicitness, explainability, and machine learning. As such, it provides a much-needed forum for cross-national and cross-disciplinary conversations in what is currently a very vibrant field of research.
In recent debates on offensive language in participatory online spaces, the term 'hate speech' has become especially prominent. Originating from a legal context, the term usually refers to violent threats or expressions of prejudice against particular groups on the basis of race, religion, or sexual orientation. However, due to its explicit reference to the emotion of hate, it is also used more colloquially as a general label for any kind of negative expression. This ambiguity leads to misunderstandings in discussions about hate speech and challenges its identification. To meet this challenge, this article provides a modularized framework to differentiate various forms of hate speech and offensive language. On the basis of this framework, we present a text annotation study of 5,031 user comments on the topic of immigration and refuge posted in March 2019 on three German news sites, four Facebook pages, 13 YouTube channels, and one right-wing blog. An in-depth analysis of these comments identifies various types of hate speech and offensive language targeting immigrants and refugees. By exploring typical combinations of labeled attributes, we empirically map the variety of offensive language in the subject area ranging from insults to calls for hate crimes, going beyond the common 'hate/no-hate' dichotomy found in similar studies. The results are discussed with a focus on the grey area between hate speech and offensive language.
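The mapping "beyond the hate/no-hate dichotomy" described above rests on exploring which annotated attributes co-occur on the same comment. A minimal sketch of that exploration step follows; the label names and the tiny annotation set are hypothetical illustrations, not the study's actual coding scheme or data.

```python
# Sketch: counting co-occurring annotation labels per comment to map
# typical combinations of attributes. Labels and data are illustrative.
from collections import Counter
from itertools import combinations

# Each annotated comment carries a set of attribute labels.
annotations = [
    {"insult"},
    {"insult", "dehumanization"},
    {"dehumanization", "call_for_violence"},
    {"insult", "dehumanization"},
    {"negative_stereotype"},
]

# Count how often each pair of labels is assigned to the same comment.
pair_counts = Counter()
for labels in annotations:
    for pair in combinations(sorted(labels), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(2))
# → [(('dehumanization', 'insult'), 2), (('call_for_violence', 'dehumanization'), 1)]
```

Frequent pairs (or larger combinations, via longer `combinations`) indicate recurring patterns of offensive language, which can then be interpreted qualitatively.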
Hate speech - communication that attacks a person or a group on the basis of identity factors, such as gender, race, or religion - is one of the main digital threats to democracy. Hate speech has manifold, empirically evidenced consequences for targeted individuals and groups experiencing systematic discrimination and for social cohesion as a whole. Yet, while the rise of social media has put the concept in the spotlight, such attention has also structurally transformed its meaning, turning hate speech from a concept with clear defining properties into a family resemblance comprising all kinds of online abuse. This process is far from being a merely academic issue. It also sidesteps historical oppression as a defining property and as the reason for which one is targeted by hate speech. Thus, the process has been belittling public animosity against historically oppressed groups, reducing hate speech merely to a matter of offensive language on social media. This chapter shows how and why this conceptual change has taken place and the consequences it unleashes. It specifically addresses the problems of concept stretching, concept shrinking, and the inflation of concepts. Finally, it concludes that such conceptual issues jeopardize the potential that digital media research on hate speech has to provide guidance to a broad range of social actors.
Building on the growing concerns around hate speech and harmful content on social media, this chapter analyzes the processes by which content is moderated on leading social media platforms. The outsourcing practices of platform operators or social media companies to acquire content moderation services from third-party companies have been acknowledged in the public discourse. Details regarding these outsourcing relationships and power mechanisms remain obfuscated, however. Using empirical data from India, this chapter presents a global value chain perspective on the mechanisms by which US-based social media monopolies source content moderation services from Indian information technology business process outsourcing (IT BPO) supplier companies. The agreements established between the two parties direct the content moderation labor process through which Indian workers' labor power is transformed into productive labor.
For as long as we have attempted to sanction untoward speech, others have devised strategies for expressing themselves while dodging such sanctions. In this intervention, I review the arms race between technological filters designed to curb hate speech and evasive language practices designed to avoid detection by these filters. I argue that, following important advances in the detection of relatively overt uses of hate speech, further advances will need to address hate speech that relies on culturally or situationally available contextual knowledge and linguistic ambiguities to convey its intended offenses. Resolving such forms of hate speech not only places increasingly unreasonable demands on available data and technologies, but does so for limited, uncertain gains, as many evasive uses of language effectively defy unique valid classification.
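One round of the arms race described above can be sketched in a few lines: a naive keyword filter, an obfuscated spelling that slips past it, and a normalization step that restores detection. The blocklisted word (a placeholder, not a real slur), the regex, and the leetspeak mapping are illustrative assumptions, not a real moderation system; context-dependent evasion of the kind discussed above would defeat both variants.

```python
# Sketch of filter evasion and counter-evasion via character normalization.
# BLOCKLIST and LEET are hypothetical toy examples.
import re

BLOCKLIST = {"bigot"}  # placeholder term standing in for a slur

def naive_filter(text):
    """Flag a comment if any blocklisted word appears verbatim."""
    words = re.findall(r"[a-z0-9@$]+", text.lower())
    return any(w in BLOCKLIST for w in words)

# Undo simple character substitutions ("leetspeak") before matching.
LEET = str.maketrans({"1": "i", "0": "o", "3": "e", "@": "a", "$": "s"})

def normalized_filter(text):
    """Normalize evasive spellings, then apply the naive filter."""
    return naive_filter(text.translate(LEET))

print(naive_filter("you b1g0t"))       # False: the evasive spelling passes
print(normalized_filter("you b1g0t"))  # True: normalization catches it
```

Each counter-measure invites the next evasion (spacing, homoglyphs, coded references), which is why purely lexical approaches plateau well short of the implicit, context-dependent cases.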