Disinformation by Design: The Use of Evidence Collages and Platform Filtering in a Media Manipulation Campaign
In: Political communication: an international journal, Band 37, Heft 2, S. 194-214
ISSN: 1091-7675
In: ACM transactions on social computing, Band 3, Heft 1, S. 1-20
ISSN: 2469-7826
Studying cultural variation in recollections of sociopolitical events is crucial for achieving diverse understandings of such events. To date, most studies in this area have focused on analyzing variation in texts describing events. Here, we analyze variation in image usage across Wikipedia language editions to understand whether, like text, visual narratives reflect distinct perspectives in articles about culturally tethered events. We focus on articles about coups d'état as an example of highly contextual sociopolitical events likely to display such variation. The key challenge in examining variation in images is that there is no existing framework to use as a basis for comparison. To address this challenge, we use an iterative inductive coding process to arrive at a 46-item typology for categorizing the content of images relating to contested sociopolitical events, and a typology of network motifs that characterizes structural patterns of image use. We apply these typologies in a large-scale quantitative analysis that establishes clusters of image themes, two detailed qualitative case studies comparing Wikipedia articles on coups d'état in Soviet Russia and Egypt, and four quantitative analyses clustering image themes by language usage at the article level. These analyses document variation in imagery around particular events and variation in tendencies across cultures. We find substantial cultural variation in both content and network structure. This study presents a novel methodological framework for uncovering culturally divergent perspectives on political crises through imagery on Wikipedia.
Motivated by the extensively documented disparate harms of artificial intelligence (AI), many recent practitioner-facing reflective tools have been created to promote responsible AI development. However, the internal use of such tools by technology development firms treats responsible AI as an issue of closed-door compliance rather than a matter of public concern. Recent advocate and activist efforts frame AI as a public policy problem, prompting a growing number of cities to pass bans or other ordinances on AI and surveillance technologies. In support of this broader ecology of political actors, we present a set of reflective tools intended to increase public participation in technology advocacy for AI policy action. To this end, the Algorithmic Equity Toolkit (the AEKit) provides a practical policy-facing definition of AI, a flowchart for assessing technologies against that definition, a worksheet for decomposing AI systems into constituent parts, and a list of probing questions that can be posed to vendors, policy-makers, or government agencies. The AEKit carries an action orientation toward political encounters between community groups and their representatives, opening up the work of AI reflection and remediation to multiple points of intervention. Unlike current reflective tools available to practitioners, our toolkit carries with it a politics of community participation and activism.