In: New Media & Society: An International and Interdisciplinary Forum for the Examination of the Social Dynamics of Media and Information Change, Vol. 25, No. 10, pp. 2663-2683
This study explores voters' populist alternative news use during (different types of) democratic elections and investigates starting points for preventing potentially harmful effects. We draw on two combined data sets of web-tracking and survey data collected during the 2017 German Bundestag campaign (1,523 participants) and the 2019 European Parliament election campaign in Germany (1,009 participants). Results indicate that while populist alternative news outlets drew more interest during the first-order election campaign, they reached only 16.5% of users even then. Moreover, most users visited their websites rather infrequently. Nonetheless, our data suggest that alternative news exposure is strongly linked to voting for (right-wing) populist parties. Regarding the origins of exposure, our analyses underscore the role of platforms in referring users to populist alternative news: in both data sets, about 40% of website visits originated from Facebook alone, and another third came from search engines. This raises questions about algorithmic accountability.
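The referrer breakdown reported above can be made concrete with a small sketch. The following Python snippet classifies tracked visits by referrer domain; the DataFrame layout, the column name `referrer_url`, and the domain lists are illustrative assumptions, not the study's actual coding scheme.

```python
# Minimal sketch: classifying tracked website visits by referrer domain.
# Column name and domain lists are assumptions for illustration only.
from urllib.parse import urlparse

import pandas as pd

SOCIAL = {"facebook.com", "m.facebook.com", "l.facebook.com"}
SEARCH = {"google.com", "www.google.com", "www.bing.com", "duckduckgo.com"}

def classify_referrer(url: str) -> str:
    """Map a referrer URL to a coarse origin category."""
    if not isinstance(url, str) or not url:
        return "direct/unknown"
    host = urlparse(url).netloc.lower()
    if host in SOCIAL:
        return "facebook"
    if host in SEARCH:
        return "search engine"
    return "other website"

visits = pd.DataFrame({"referrer_url": [
    "https://m.facebook.com/", "https://www.google.com/search?q=news", ""]})
visits["origin"] = visits["referrer_url"].map(classify_referrer)
print(visits["origin"].value_counts(normalize=True))  # share per origin
```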
Panel survey participation can bring about unintended changes in respondents' behavior and/or reporting of behavior. Using administrative data linked to a large panel survey, we analyze changes in respondents' labor market behavior. We estimate the causal effect of panel participation on the take-up of federal labor market programs using instrumental variables. Results show that panel survey participation leads to a decrease in respondents' take-up of these measures. These results suggest that panel survey participation not only affects the reporting of behavior, as previous studies have demonstrated, but can also alter respondents' actual behavior.
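The instrumental-variables step can be illustrated with a toy simulation. Below is a minimal sketch of the two-stage least squares logic on invented data; the variable names, instrument, and effect size are hypothetical and do not reflect the PASS study. Note that second-stage standard errors from this manual approach are not valid; dedicated IV routines correct them.

```python
# Minimal 2SLS sketch on simulated data; all quantities are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
z = rng.binomial(1, 0.5, n)                 # instrument, e.g. random invitation
u = rng.normal(size=n)                      # unobserved confounder
participation = (0.8 * z + u + rng.normal(size=n) > 0.5).astype(float)
takeup = -0.3 * participation + u + rng.normal(size=n)  # true effect: -0.3

# Stage 1: predict the endogenous regressor from the instrument.
stage1 = sm.OLS(participation, sm.add_constant(z)).fit()

# Stage 2: regress the outcome on the first-stage prediction.
stage2 = sm.OLS(takeup, sm.add_constant(stage1.fittedvalues)).fit()
print(stage2.params)  # slope should recover roughly -0.3
```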
Online panel surveys are often criticized for their inability to cover the offline population, potentially resulting in coverage error. Previous research has demonstrated that non-internet users in fact differ from online individuals on several sociodemographic characteristics. In attempts to reduce coverage error due to missing the offline population, several probability-based online panels equip offline households with an internet connection and a simple computer or tablet. However, the question remains whether the recruitment of offline individuals for an online panel leads to substantial changes in survey estimates. That is, it is unclear whether estimates derived from the survey data are affected by the differences between the groups of online and offline individuals. Against this background, we investigate how the inclusion of the previously offline population into the German Internet Panel affects various survey estimates such as voting behavior and social engagement. Overall, we find little evidence for the claim that equipping otherwise offline individuals with online access affects the estimates derived from previously online individuals only.
Many surveys aim to achieve high response rates to keep bias due to nonresponse low. However, research has shown that the relationship between the nonresponse rate and nonresponse bias is small. In fact, high response rates may lead to measurement error, if respondents with low response propensities provide survey responses of low quality. In this paper, we explore the relationship between response propensity and measurement error, specifically, motivated misreporting, the tendency to give inaccurate answers to speed through an interview. Using data from four surveys conducted in several countries and modes, we analyze whether motivated misreporting is worse among those respondents who were the least likely to respond to the survey. Contrary to the prediction of our theoretical model, we find only limited evidence that reluctant respondents are more likely to misreport.
Statistical profiling of job seekers is an attractive option for guiding the activities of public employment services. Many hope that algorithms will improve both the efficiency and the effectiveness of employment services' activities, which have so far often been based on human judgment. Against this backdrop, we evaluate regression and machine-learning models for predicting job seekers' risk of becoming long-term unemployed using German administrative labor market data. While our models achieve competitive predictive performance, we show that training an accurate prediction model is just one element in a series of design and modeling decisions, each having notable effects that span beyond predictive accuracy. We observe considerable variation in the cases flagged as high risk across models, highlighting the need for systematic evaluation and transparency of the full prediction pipeline if statistical profiling techniques are to be implemented by employment agencies.
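The key observation above, that models with similar accuracy can flag different people as high risk, is easy to demonstrate. Below is a hedged sketch on simulated data; the features, the two model families, and the top-10% flagging rule are assumptions, not the paper's specification.

```python
# Sketch: compare which cases two models flag as high risk; data simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(4_000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=4_000) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = {
    "logit": LogisticRegression(max_iter=1_000).fit(X_tr, y_tr),
    "gbm": GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr),
}

flags = {}
for name, model in models.items():
    risk = model.predict_proba(X_te)[:, 1]  # predicted risk scores
    flags[name] = set(np.where(risk >= np.quantile(risk, 0.9))[0])  # top 10%

overlap = len(flags["logit"] & flags["gbm"]) / len(flags["logit"])
print(f"Share of high-risk flags common to both models: {overlap:.2f}")
```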
Filter questions are used to administer follow-up questions to eligible respondents while allowing respondents who are not eligible to skip those questions. Filter questions can be asked in either the interleafed or the grouped format. In the interleafed format, the follow-ups are asked immediately after each filter question; in the grouped format, follow-ups are asked after the block of filter questions. Underreporting can occur in the interleafed format because respondents seek to reduce the burden of the survey, a phenomenon called motivated misreporting. Because smartphone surveys are more burdensome than web surveys completed on a computer or laptop, owing to the smaller screen size, longer page-loading times, and more distraction, we expect motivated misreporting to be more pronounced on smartphones. Furthermore, we expect that misreporting occurs not only in the filter questions themselves but also extends to data quality in the follow-up questions. We randomly assigned 3,517 respondents of a German online access panel to complete the survey on either a PC or a smartphone. Our results show that while both PC and smartphone respondents trigger fewer filter questions in the interleafed format than in the grouped format, we did not find differences between PC and smartphone respondents in the number of triggered filter questions. However, smartphone respondents provide lower data quality in the follow-up questions, especially in the grouped format. We conclude with recommendations for web survey designers who intend to incorporate smartphone respondents in their surveys.
Response probabilities are used in adaptive and responsive survey designs to guide data collection efforts, often with the goal of diversifying the sample composition. However, if response probabilities are also correlated with measurement error, this approach could introduce bias into survey data. This study analyzes the relationship between response probabilities and data quality in grid questions. Drawing on data from the probability-based GESIS Panel, we find that low-propensity cases produce item nonresponse and nondifferentiated answers more frequently than high-propensity cases. However, this effect was observed only among long-time respondents, not among those who joined more recently. We caution that using adaptive or responsive techniques to reduce the risk of nonresponse bias may simultaneously increase measurement error.
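The two quality indicators named above, item nonresponse and nondifferentiation, can be operationalized in a few lines. This is a toy sketch; the grid layout and the strict "all answers identical" straightlining rule are illustrative, not the GESIS Panel's coding.

```python
# Sketch: item nonresponse and straightlining in a toy grid question.
import numpy as np
import pandas as pd

grid = pd.DataFrame({  # rows = respondents, columns = grid items
    "item1": [1, 3, np.nan, 5],
    "item2": [1, 4, 2, 5],
    "item3": [1, 2, np.nan, 5],
})

item_nonresponse = grid.isna().mean(axis=1)             # share of skipped items
straightlined = grid.nunique(axis=1, dropna=True) == 1  # all answers identical
print(pd.DataFrame({"nonresponse": item_nonresponse,
                    "straightlined": straightlined}))
```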
The COVID-19 pandemic has spotlighted the importance of high-quality data for empirical health research and evidence-based political decision-making. To leverage the full potential of these data, a better understanding of the determinants and conditions under which people are willing to share their health data is critical. Building on the privacy theory of contextual integrity, the privacy calculus, and previous findings regarding different data types and recipients, we argue that established social norms shape the acceptance of novel practices of data collection and use. To investigate the willingness to share health data, we conducted a preregistered vignette experiment. The scenarios experimentally varied the vignette dimensions by data type, recipient, and research purpose. While some findings contradict our hypotheses, the results indicate that all three dimensions affected respondents' data sharing decisions. Additional analyses suggest that institutional and social trust, privacy concerns, technical affinity, altruism, age, and device ownership influence the willingness to share health data.
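The factorial structure of such a vignette experiment is simple to reproduce. A minimal sketch follows; the level labels are invented examples and do not reproduce the study's wording.

```python
# Sketch: full factorial vignette universe over three dimensions.
from itertools import product

data_types = ["fitness-tracker data", "medical records"]
recipients = ["university researchers", "a public health agency"]
purposes = ["pandemic monitoring", "basic research"]

vignettes = [
    f"Your {d} are shared with {r} for {p}."
    for d, r, p in product(data_types, recipients, purposes)
]
for v in vignettes:  # 2 x 2 x 2 = 8 scenarios
    print(v)
```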
These data come from several web surveys among the German population aged 18 years and older, living in Germany and eligible to vote in the 2021 federal election. Respondents were recruited from the German nonprobability online panel of Respondi/Bilendi.
In August 2021, members of the online panel were invited through a survey-router system. For Wave 1 (30.8.2021-7.9.2021), 3,530 people were invited to the survey, of whom 2,221 completed it. For Wave 1B (8.9.2021-14.9.2021), only respondents from Wave 1 who reported owning a smartphone were invited; 1,803 completed the survey. For Wave 2 (14.9.2021-20.9.2021), 3,761 individuals were invited, of whom 2,451 completed the survey. For Wave 3 (27.9.2021-4.10.2021), 3,565 individuals were invited and 2,261 completed the survey. For Wave 4 (1.12.2021-16.12.2021), 1,945 individuals were invited and 1,092 completed the survey. The full data set includes 2,756 individuals who completed at least one of the four survey waves. For a detailed view of the longitudinal response structure, the data set includes variables indicating participation in each wave.
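Given the wave-participation indicators mentioned above, the longitudinal response structure can be tabulated directly. A minimal sketch, assuming hypothetical indicator columns wave1 through wave4:

```python
# Sketch: tabulating wave-participation patterns; column names are assumed.
import pandas as pd

df = pd.DataFrame({  # 1 = completed the wave, 0 = did not
    "wave1": [1, 1, 0, 1],
    "wave2": [1, 0, 1, 1],
    "wave3": [1, 0, 1, 0],
    "wave4": [0, 0, 1, 0],
})
waves = ["wave1", "wave2", "wave3", "wave4"]
df["pattern"] = df[waves].astype(str).apply("".join, axis=1)
print(df["pattern"].value_counts())  # e.g. "1110" = dropped out after Wave 3
```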
Quotas for gender, age, and state (Bundesland) were employed to generate a sample with sufficient diversity.
The questionnaire included items on political attitudes, voting preferences, political identities, online and offline news media consumption, and smartphone and social media use. The questionnaire was programmed in EFS Survey. Respondents could complete it on a PC, tablet, or smartphone.
BACKGROUND: The COVID-19 pandemic is the greatest public health crisis of the last 100 years. Countries have responded with various levels of lockdown to save lives and stop health systems from being overwhelmed. At the same time, lockdowns entail large socioeconomic costs. One exit strategy under consideration is a mobile phone app that traces the close contacts of those infected with COVID-19. Recent research has demonstrated the theoretical effectiveness of this solution in different disease settings. However, concerns have been raised about such apps because of the potential privacy implications. This could limit the acceptability of app-based contact tracing in the general population. As the effectiveness of this approach increases strongly with app uptake, it is crucial to understand public support for this intervention. OBJECTIVE: The objective of this study is to investigate the user acceptability of a contact-tracing app in five countries hit by the pandemic. METHODS: We conducted a large-scale, multicountry study (N=5995) to measure public support for the digital contact tracing of COVID-19 infections. We ran anonymous online surveys in France, Germany, Italy, the United Kingdom, and the United States. We measured intentions to use a contact-tracing app across different installation regimes (voluntary installation vs automatic installation by mobile phone providers) and studied how these intentions vary across individuals and countries. RESULTS: We found strong support for the app under both regimes, in all countries, across all subgroups of the population, and irrespective of regional-level COVID-19 mortality rates. We investigated the main factors that may hinder or facilitate uptake and found that concerns about cybersecurity and privacy, together with a lack of trust in the government, are the main barriers to adoption. CONCLUSIONS: Epidemiological evidence shows that app-based contact tracing can suppress the spread of COVID-19 if a high enough proportion of the population uses the app and that it can still reduce the number of infections if uptake is moderate. Our findings show that the willingness to install the app is very high. The available evidence suggests that app-based contact tracing may be a viable approach to control the diffusion of COVID-19.