Abstract As public confidence in polling has waned in the wake of recent elections (Narea, N. 2016. After 2016, Can We Ever Trust the Polls Again? New Republic), many researchers have sought to diagnose the shortcomings in these data (Gelman, A., and J. Azari. 2017. "19 Things We Learned from the 2016 Election." Statistics and Public Policy 4 (1): 1–10; Kennedy, C., M. Blumenthal, S. Clement, J. D. Clinton, C. Durand, C. Franklin, K. McGeeney, L. Miringoff, K. Olson, D. Rivers, L. Saad, G. E. Witt, and C. Wlezien. 2018. "An Evaluation of the 2016 Election Polls in the United States." Public Opinion Quarterly 82 (1): 1–33; Mercer, A., C. Deane, and K. McGeeney. 2016. Why 2016 Election Polls Missed Their Mark. Pew Research Center. Also available at https://www.pewresearch.org/fact-tank/2016/11/09/why-2016-election-polls-missed-their-mark/). One conjecture stems from observed differences in polling results attributable to the methodological choice between live and non-live modes of survey administration (Enten, H. 2015. The Future of Polling May Depend on Donald Trump's Fate. FiveThirtyEight. Also available at https://fivethirtyeight.com/features/the-future-of-polling-may-depend-on-donald-trumps-fate/). While it has become commonplace to discuss "mode effects" in surveys, the idea reemerged in the political zeitgeist as the "Shy Trump" supporter hypothesis leading up to the 2016 U.S. Presidential Election (Edsall, T. B. 2016. How Many People Support Trump but Don't Want to Admit It. New York Times). Motivated by the conflicting evidence for (Enns, P. K., J. Lagodny, and J. P. Schuldt. 2017. "Understanding the 2016 US Presidential Polls: The Importance of Hidden Trump Supporters." Statistics, Politics, and Policy 8 (1): 41–63) and against (Coppock, A. 2017. "Did Shy Trump Supporters Bias the 2016 Polls? Evidence from a Nationally-Representative List Experiment." Statistics, Politics, and Policy 8 (1): 29–40) this hypothesis, we built a complex statistical model that pools results across multiple pollsters and throughout the election cycle while accounting for the nuances of these data. Specifically, we explored election data for the presence of a mode effect using time series fit with a generalized additive mixed model (GAMM). We estimated the mode effect at the state and national levels to perform statistical mode adjustments, which we then compared to observed election results. In this paper, we utilized polling results from the United States Presidential Elections in 2016 (4208 polls) and 2020 (4133 polls). Using these data, we identified spatial trends and areas where the mode effect was statistically significant at the 0.05 level. In summary, we make three contributions to the literature on mode effect adjustment in the poll aggregation setting. First, we present a straightforward and flexible statistical approach to estimating mode effects from time series data. In doing so, we help to bridge the gap between theory-focused statistical work and the social sciences. Second, we apply this method to two recent presidential elections, providing insight into the significance of the mode effect. Third, we provide evidence for spatial mode effect trends suggesting regional voting behaviors that future scholars can explore.
Abstract Product labels communicate the presence of valuable traits to consumers but, when a single label represents multiple traits, communication can be hindered by consumer misinformation. Redundant labels can address this problem by explicitly indicating included qualities within a comprehensive label. We use data from a field experiment on willingness to pay for redundant labels in the US organic fluid milk market when consumers are either uninformed or informed of the redundancy. Market share simulations demonstrate the market impacts and effectiveness of introducing a redundant label as a response strategy to recapture market share lost to increasingly prevalent individual labels.