Search results
31 results
Resilience to Online Censorship
In: Annual review of political science, Volume 23, Issue 1, p. 401-419
ISSN: 1545-1577
To what extent are Internet users resilient to online censorship? When does censorship influence consumption of information and when does it create backlash? Drawing on a growing literature on Internet users' reactions to censorship, I posit that awareness of censorship, incentives to seek out information, and resources to circumvent censorship are essential to resilience to censorship. I describe how authoritarian regimes have adapted their strategies of censorship to reduce both awareness of censorship and demand for uncensored information.
What is Political Methodology?
In: PS: political science & politics, Volume 51, Issue 3, p. 597-601
ISSN: 1537-5935
Introduction to the Virtual Issue: Recent Innovations in Text Analysis for Social Science
In: Political analysis, Volume 24, Issue V10, p. 1-5
ISSN: 1476-4989
The authoritarian data problem
In: Journal of democracy, Volume 34, Issue 4, p. 141-150
ISSN: 1086-3214
How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It
In: Political analysis, Volume 23, Issue 2, p. 159-179
ISSN: 1476-4989
"Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article is software that implements the methods we describe.
Acquiescence Bias Inflates Estimates of Conspiratorial Beliefs and Political Misperceptions
In: Political analysis, Volume 31, Issue 4, p. 575-590
ISSN: 1476-4989
Scholars, pundits, and politicians use opinion surveys to study citizen beliefs about political facts, such as the current unemployment rate, and more conspiratorial beliefs, such as whether Barack Obama was born abroad. Many studies, however, ignore acquiescence-response bias, the tendency for survey respondents to endorse any assertion made in a survey question regardless of content. With new surveys fielding questions asked in recent scholarship, we show that acquiescence bias inflates estimated incidence of conspiratorial beliefs and political misperceptions in the United States and China by up to 50%. Acquiescence bias is disproportionately prevalent among more ideological respondents, inflating correlations between political ideology, such as conservatism, and endorsement of conspiracies or misperception of facts. We propose and demonstrate two methods to correct for acquiescence bias.
How Sudden Censorship Can Increase Access to Information
In: American political science review, Volume 112, Issue 3, p. 621-636
ISSN: 1537-5943
Conventional wisdom assumes that increased censorship will strictly decrease access to information. We delineate circumstances when increases in censorship expand access to information for a substantial subset of the population. When governments suddenly impose censorship on previously uncensored information, citizens accustomed to acquiring this information will be incentivized to learn methods of censorship evasion. These evasion tools provide continued access to the newly blocked information—and also extend users' ability to access information that has long been censored. We illustrate this phenomenon using millions of individual-level actions of social media users in China before and after the block of Instagram. We show that the block inspired millions of Chinese users to acquire virtual private networks, and that these users subsequently joined censored websites like Twitter and Facebook. Despite initially being apolitical, these new users began browsing blocked political pages on Wikipedia, following Chinese political activists on Twitter, and discussing highly politicized topics such as opposition protests in Hong Kong.
How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument
In: American political science review, Volume 111, Issue 3, p. 484-501
ISSN: 1537-5943
The Chinese government has long been suspected of hiring as many as 2 million people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called 50c party posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of most posts openly accused on social media of being 50c. Yet almost no systematic empirical evidence exists for this claim or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program and suggest how they may change our broader theoretical understanding of "common knowledge" and information control in authoritarian regimes.
Computer‐Assisted Keyword and Document Set Discovery from Unstructured Text
In: American journal of political science, Volume 61, Issue 4, p. 971-988
ISSN: 1540-5907
The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend on this choice, researchers usually pick keywords in ad hoc ways that are far from optimal and usually biased. Most seem to think that keyword selection is easy, since they do Google searches every day, but we demonstrate that humans perform exceedingly poorly at this basic task. We offer a better approach, one that also can help with following conversations where participants rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; industry and intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated or human-only) statistical approach that suggests keywords from available text without needing structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and summarizing results with easy-to-understand Boolean search strings. We illustrate how the technique works with analyses of English texts about the Boston Marathon bombings, Chinese social media posts designed to evade censorship, and others.
How Censorship in China Allows Government Criticism but Silences Collective Expression
In: American political science review, Volume 107, Issue 2, p. 326-343
ISSN: 1537-5943
We offer the first large scale, multiple source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future—and, as such, seem to clearly expose government intent.