Raising the Floor or Closing the Gap? How Media Choice and Media Content Impact Political Knowledge
In: Political communication: an international journal, Volume 37, Issue 5, p. 719-740
ISSN: 1091-7675
In: The public opinion quarterly: POQ, Volume 83, Issue S1, p. 280-288
ISSN: 1537-5331
Rising rates of nonresponse are one of the most-debated issues in contemporary survey research. While early survey research regularly achieved response rates close to 100 percent, contemporary telephone interviewing methods in the United States regularly obtain response rates below 10 percent, due to a mix of noncontact and refusals. Existing research has examined a number of factors that explain variation in response rates, yet almost all such work considers survey response as an isolated, independent event. This note aims to stimulate debate by suggesting that a paradigm shift in theorizing nonresponse is needed. I diagnose the problem of nonresponse not only as an individual-level, survey-specific phenomenon, but as something larger and more collective: namely, as a common pool resource (CPR) problem. Because researchers acting independently might each seek to maximize their response rate and achieve intended sample sizes, the common pool resource of human respondents can be prone to overextraction. In addition to thinking about "benefit-cost" explanations for why respondents might respond to a specific survey, considering responses as a shared resource focuses attention on cross-level theory on how the survey field might collectively govern responses from human populations. Rather than testing CPR theory directly, I instead describe why thinking of nonresponse as a CPR problem may be useful, use the United States as a case study to demonstrate the possible scale of response extraction, and leverage findings from CPR studies to suggest directions for future research into nonresponse.
In: Oxford Research Encyclopedia of Politics
"The Search for Real-World Media Effects on Political Decision Making" published by Oxford University Press.
In: PS: political science & politics, Volume 51, Issue 3, p. 602-606
ISSN: 1537-5935
In: Journal of experimental political science: JEPS, Volume 4, Issue 1, p. 21-33
ISSN: 2052-2649
Ecological validity is vital to experimental research because designs that are too artificial may not speak to any real-world political phenomenon. One such concern is treatment of self-selection: if individuals in the real world self-select treatments, such as political communications, how well does the sample average treatment effect estimate the effects of message exposure for those individuals who would—if given the choice—opt in to and out of receiving treatment? This study shows that randomization masks effect heterogeneity between individuals who would select different messages if given the choice. Yet, such selections are themselves complex, revealing additional challenges for realistically studying treatments prone to self-selection. The evidence of effect heterogeneity raises questions about the appropriateness of random assignment experiments for studying political communication, and the results more broadly advance our understanding of citizens' selection into and responses to communications when, as is often the case, they have a choice over which messages to receive.
In: The public opinion quarterly: POQ, Volume 80, Issue 3, p. 796-799
ISSN: 1537-5331
While there are now only six weeks remaining before the 2016 presidential election, the election campaign has been underway for more than 18 months for some candidates. Thomas Leeper writes that despite this long lead time, voters do not seem particularly interested in the election, with many put off by its negativity. He argues that the day-to-day activities of campaigns (including gaffes) often do little to shift the needle towards one candidate or another; this campaign is little different, with Hillary Clinton having been the favored candidate to win since the beginning.
In: The public opinion quarterly: POQ, Volume 78, Issue 4, p. 974-983
ISSN: 1537-5331
In: The public opinion quarterly: POQ, Volume 78, Issue 1, p. 27-46
ISSN: 1537-5331
In: Leeper, T. J., & Robison, J. (2020). More Important, but for What Exactly? The Insignificant Role of Subjective Issue Importance in Vote Decisions. Political Behavior, 42, 239-259. https://doi.org/10.1007/s11109-018-9494-0
The nature of democratic governance is intimately connected with how citizens respond to candidate position taking. But when will a generally uninformed public base its vote choices on candidate positions? Since Converse, scholars have argued that citizens should place greater weight on candidate positions on issues they consider personally important. However, this claim has received mixed empirical support. We revisit this question with compelling new evidence. First, we expand the limited temporal focus of existing work in our first study, where we analyze all available ANES data on importance and issue voting between 1980 and 2008. We then overcome endogeneity concerns through a nationally representative conjoint experiment in which we randomize two candidates' positions on five issues. Results from both studies demonstrate that there is scant evidence that subjective issue importance consistently moderates the relationship between candidate positions and vote choices. We discuss the implications of these results for "issue public" theories of political engagement, for research on voting behavior, and for political representation.
In: Journal of experimental political science: JEPS, Volume 7, Issue 3, p. 209-217
ISSN: 2052-2649
Political scientists rely heavily on survey research to gain insights into public attitudes and behaviors. Over the past decade, survey data collection has moved away from personal face-to-face and telephone interviewing towards a model of computer-assisted self-interviewing. A hallmark of many online surveys is the prominent display of the survey's sponsor, most often an academic institution, in the initial consent form and/or on the survey website itself. It is an open question whether these displays of academic survey sponsorship could increase total survey error. We measure the extent to which sponsorship (by a university or marketing firm) affects data quality, including satisficing behavior, demand characteristics, and socially desirable responding. In addition, we examine whether sponsor effects vary depending on the participant's experience with online surveys. Overall, we find no evidence that response quality is affected by survey sponsor or by past survey experience.
In: Political behavior, Volume 42, Issue 1, p. 239-259
ISSN: 1573-6687