"Mixed-mode designs are receiving increased interest as a possible way to save costs in panel surveys, although their lasting effects on data quality are unknown. To better understand the effects of mixed-mode designs on panel data, we examine their impact on random and systematic error and on estimates of change. The SF-12 health scale from the Understanding Society Innovation Panel is used for the analysis. Results indicate that only one variable out of 12 shows systematic differences due to the mixed-mode design. In addition, four of the 12 items overestimate the variance of change over time in the mixed-mode design. We conclude that using a mixed-mode approach leads to minor measurement differences, but it can result in the overestimation of individual change compared to a single-mode design." (author's abstract)
Mixed-mode designs are increasingly important in surveys, and large longitudinal studies are progressively moving to, or considering, such a design. In this context, our knowledge of the impact of mixing modes on data quality indicators in longitudinal studies is sparse. This study aims to ameliorate this situation by taking advantage of a quasi-experimental design in a longitudinal survey. Using quasi-simplex models, which estimate reliability for repeated measures, 33 variables are analyzed by comparing a single-mode CAPI design to a sequential CATI-CAPI design. Results show no differences in reliabilities and stabilities across the two designs, either in the wave when the mode switch was made or in subsequent waves. Implications and limitations are discussed.
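The quasi-simplex logic mentioned above can be illustrated with the classic three-wave path-analytic identity (Heise, 1969) on which such models build: assuming equal reliability across waves, the reliability of the middle measurement equals r12·r23/r13. A minimal sketch, using hypothetical wave-to-wave correlations rather than the study's actual data:

```python
# Three-wave reliability estimate underlying quasi-simplex models
# (Heise, 1969). All correlations below are hypothetical illustrations,
# not values from the paper.

def heise_reliability(r12: float, r23: float, r13: float) -> float:
    """Reliability of the middle wave from three observed wave-to-wave
    correlations, assuming equal reliability at every wave."""
    return r12 * r23 / r13

# Hypothetical observed correlations between waves 1-2, 2-3, and 1-3:
rel = heise_reliability(r12=0.56, r23=0.54, r13=0.42)
print(round(rel, 3))
```

With these hypothetical correlations the estimate is roughly 0.72. In practice, quasi-simplex models are fitted as structural equation models over all waves, which also yields the stability estimates mentioned above.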
Abstract Adaptive survey design has been proposed as a solution to the decreasing response rates and rising costs associated with surveys. Much of the adaptive survey design research to date focuses on cross-sectional surveys, yet there seems to be greater potential for this approach within longitudinal surveys, as data on respondents and their survey experience build with each wave. We consider the short- to medium-term impacts of modifications to fieldwork processes in the context of two household panels. We use waves 11–16 of the Household, Income and Labour Dynamics in Australia (HILDA) Survey and waves 1–6 of the UK Household Longitudinal Study (also known as Understanding Society) to simulate eight alternative follow-up strategies. Cases are targeted based on their likelihood to improve sample balance (representativeness), as measured by the R-indicator, or to provide a response, or a combination of these components. Both individual- and household-level targeting are considered. We assess the extent to which these adjustments to fieldwork efforts affect response rates, sample representativity, and cost. We find that if the follow-up fieldwork effort were reduced by 25 percent, the least detrimental strategy is one in which the best households, in terms of improving the R-indicator or the response rates, are issued to the field for follow-up. This approach resulted in the same sample balance as full follow-up practice and saved between 17 and 25 percent of follow-up calls, but lowered the full balanced panel response rate over four years by 8–12 percentage points.
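For readers unfamiliar with the R-indicator used above as the sample-balance criterion: it is defined as R(ρ) = 1 − 2·S(ρ), where S(ρ) is the standard deviation of the estimated response propensities (Schouten, Cobben, and Bethlehem, 2009), so R runs from 0 (maximally unbalanced) to 1 (perfectly balanced response). A minimal sketch with hypothetical fitted propensities:

```python
# Sketch of the R-indicator: R = 1 - 2 * S(rho), where S(rho) is the
# standard deviation of estimated response propensities. The propensity
# values below are hypothetical, not from either panel study.
from statistics import pstdev

def r_indicator(propensities):
    """1.0 means every unit responds with the same probability
    (perfect balance); values near 0 indicate strong imbalance."""
    return 1.0 - 2.0 * pstdev(propensities)

rhos = [0.62, 0.58, 0.71, 0.55, 0.64]  # hypothetical fitted propensities
print(round(r_indicator(rhos), 3))
```

In a real application, the propensities are predictions from a response model fitted on frame and paradata, so R is only as good as that model's covariates.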
Abstract Collecting sensitive data through surveys is one of the most challenging tasks facing survey methodologists, as people may answer questions untruthfully to present themselves in a positive light. In 2014, Mneimneh et al. proposed mixed Rasch models to detect socially desirable answering behaviors. This approach combines item response theory models with latent class analysis to differentiate substantive and biased answering patterns. Their results identified two latent classes, one of which was consistent with socially desirable answering. Our aim is to expand their approach to detecting social desirability by using a mixture confirmatory factor analysis (CFA) in round 7 of the European Social Survey. First, we attempt to estimate social desirability in three constructs separately (RQ1): the effect of immigration on the country, allowing people to come into the country, and social connection, using a mixture CFA. We then extend the analysis by (RQ2) introducing constraints between the latent classes, (RQ3) combining different constructs in one model, and (RQ4) comparing results in Belgium and the United Kingdom. In contrast with the paper published by Mneimneh et al. in 2014, the models with two latent classes do not have the best model fit. In addition, validation against the presence of a third person, the respondent's reluctance to give answers, and personality traits is not systematically in line with our expectations. A small simulation shows that the method would work if the data behaved as we expect, with social desirability being the main factor influencing answering patterns. We conclude that a mixture CFA might not be able to identify social desirability in different survey contexts, especially in complex data such as those originating from cross-national social surveys.
Abstract Time and cost pressures, the availability of alternative sources of data, and societal changes are leading to a move from traditional face-to-face surveys to web or mixed-mode data collection. While we know that there are mode differences between web and face-to-face (presence of an interviewer or not, type of stimuli, etc.), it is not clear to what extent these differences could threaten the comparability of data collected in face-to-face and web surveys. In this article, we investigate the differences in measurement quality between the European Social Survey (ESS) Round 8 and the CROss-National Online Survey (CRONOS) panel. We address three main research questions: (1) Do we observe differences in terms of measurement quality across face-to-face and web for the same people and questions? (2) Can we explain individual-level differences in data quality using respondents' characteristics? and (3) Does measurement equivalence (metric and scalar) hold across the ESS Round 8 and the CRONOS panel? The results suggest that: (1) in terms of data quality, the measurement mode effect between web and face-to-face as implemented in the ESS (i.e., using show cards) is not very large, (2) none of the variables considered consistently explain individual differences in mode effects, and (3) measurement equivalence often holds for the topics studied.
Abstract: Motivation and burden are two of the most important factors influencing response rates and dropout in online surveys. We therefore focus our analyses on how pictures and Best-Worst Scaling (BWS), one proposed solution for each of these problems, interact in the Web medium. We use an experimental design that compares BWS with pictures (the experimental group) to BWS without pictures (the control group). Results show that pictures influence the measurement of BWS in six out of 16 items. We also observe that Couper's (2001) conclusion, that concordant text and images have an accentuation effect while a discordant relationship between the two has an interference effect, is partly borne out in our data. Eight of the 16 items are at least partially influenced by the concordant/discordant variable, while four fully conform to this model. We conclude by discussing the implications and limitations of our findings.
Longitudinal data are essential for understanding how the world around us changes. Most theories in the social sciences and elsewhere focus on change, be it of individuals, countries, organisations, or systems, and this is reflected in the myriad of longitudinal data collected through large panel surveys. This type of data collection has been made easier in the age of Big Data and with the rise of social media. Yet our measurements of the world are often imperfect, and longitudinal data are vulnerable to measurement errors that can lead to flawed and misleading conclusions. This book tackles the important issue of how to investigate change in the context of imperfect data.
In: Anuarul Institutului de Cercetări Socio-Umane "C. S. Nicolăescu-Plopșor": "C. S. Nicolăescu-Plopșor" Institute for Research in Social Studies and Humanities Yearbook, issue XXII/2022, pp. 149-159
"The end of the 19th century, more precisely the year 1890, gave the people of Craiova the opportunity to witness a new royal visit, this time a very special one, because for the first time Crown Prince Ferdinand visited the city of Craiova together with his uncle, King Carol I. The visit lasted three days, 1–3 October, and was perhaps one of the most eventful visits, at least in terms of its very busy schedule. Many schools, both girls' and boys', were visited, and hospitals and prisons were also among the objectives of the visit. Craiova society was involved in the big event, with demonstrations taking place in which many people participated. There was also criticism in the anti-dynastic press of the time, with many aspects of the organization of the visit coming under attack. The visit remains a reference point for Craiova, the magnitude of the event echoing in the press and documents of the time."
Abstract Longitudinal surveys are a key data collection tool used to estimate social change. Recent developments have accelerated the move from traditional single-mode longitudinal designs to mixed-mode designs. Nevertheless, there are concerns that mixing survey modes may affect coefficients of change at the individual level. We investigate the impact of mixing survey modes on estimates of change using a quasi-experimental design implemented in a long-running UK panel study. Two types of comparisons are carried out: a single-mode (face-to-face) design versus a sequential mixed-mode (Web–face-to-face) design, and Web versus face-to-face. Across 41 variables, we find no differences in estimates of individual-level change across modes (designs). However, correlations between intercepts and slopes, an estimate of the convergence of respondents, were significantly different for most variables, which led to some biases in estimates of change. Applied researchers are encouraged to perform sensitivity checks to ensure their results are robust to mode effects.
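The intercept-slope correlation discussed above comes from growth modelling of repeated measures: each respondent has a starting level (intercept) and a rate of change (slope), and the correlation between the two indicates whether respondents converge or diverge over time. A rough sketch on simulated data, using per-respondent OLS fits rather than the latent growth models a real analysis would use; all numbers are hypothetical:

```python
# Per-respondent intercepts and slopes across four waves, and the
# correlation between them. Simulated data only; the paper itself uses
# latent growth models, not per-person OLS.
import numpy as np

rng = np.random.default_rng(0)
waves = np.arange(4)                          # four waves: 0, 1, 2, 3
intercepts, slopes = [], []
for _ in range(200):                          # 200 hypothetical respondents
    b0_true = rng.normal(50, 5)               # person-specific starting level
    b1_true = rng.normal(1, 0.5)              # person-specific rate of change
    y = b0_true + b1_true * waves + rng.normal(0, 2, size=4)  # noisy outcome
    b1_hat, b0_hat = np.polyfit(waves, y, 1)  # per-person OLS slope, intercept
    intercepts.append(b0_hat)
    slopes.append(b1_hat)

r = np.corrcoef(intercepts, slopes)[0, 1]
print(f"intercept-slope correlation: {r:.2f}")
```

Note that measurement error at early waves pushes a fitted intercept and slope in opposite directions, so per-person OLS estimates tend to correlate negatively even when the true values are independent; this is one reason latent growth models, which separate error variance, are preferred for such comparisons.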
Biological measurements (or biomeasures) are increasingly being collected in large longitudinal biosocial surveys, enabling researchers to combine the advantages of social science data with objective health measures to better understand how health and social behaviour interact over time. However, not all survey respondents are willing to take part in the biomeasure component of biosocial surveys, even when the measures are administered by certified medical professionals, such as nurses. Thus, understanding the factors that affect participation in biomeasure collection is essential for making valid biosocial inferences about the population. Previous research has shown that interviewer continuity can be useful for optimizing longitudinal survey participation, but it is not yet known whether nurse continuity affects the likelihood of participation in biomeasure collection. We investigated the impact of nurse continuity on nonresponse to biomeasure collection in waves 4 and 6 of the English Longitudinal Study of Ageing (ELSA). Using cross-classified multilevel models, we find that switching nurses between waves does not negatively affect participation in biomeasure collection and can sometimes even be beneficial, particularly for previous-wave nonrespondents. The practical implication is that biosocial surveys may not need to employ strict nurse continuity protocols to maximize participation in subsequent waves of biomeasure data collection.
Abstract Cross-national surveys run the risk of differential survey errors, where data collected vary in quality from country to country. Responsive and adaptive survey designs (RASDs) have been proposed as a way to reduce survey errors, by leveraging auxiliary variables to inform fieldwork efforts, but have rarely been considered in the context of cross-national surveys. Using data from the European Social Survey, we simulate fieldwork in a repeated cross-national survey using RASD where fieldwork efforts are ended early for selected units in the final stage of data collection. Demographic variables, paradata (interviewer observations), and contact data are used to inform fieldwork efforts. Eight combinations of response propensity models and selection mechanisms are evaluated in terms of sample composition (as measured by the coefficient of variation of response propensities), response rates, number of contact attempts saved, and effects on estimates of target variables in the survey. We find that sample balance can be improved in many country-round combinations. Response rates can be increased marginally and targeting high propensity respondents could lead to significant cost savings associated with making fewer contact attempts. Estimates of target variables are not changed by the case prioritizations used in the simulations, indicating that they do not impact nonresponse bias. We conclude that RASDs should be considered in cross-national surveys, but that more work is needed to identify suitable covariates to inform fieldwork efforts.
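The coefficient of variation of response propensities used above as the sample-composition measure is simply CV(ρ) = S(ρ)/ρ̄, the standard deviation of the estimated propensities divided by their mean; lower values indicate a more balanced response. A minimal sketch with hypothetical fitted propensities:

```python
# Sketch of the coefficient of variation of response propensities:
# CV = S(rho) / mean(rho). The propensity values below are hypothetical,
# not from the European Social Survey simulations.
from statistics import mean, pstdev

def cv_propensities(propensities):
    """Lower CV means response propensities vary less across units,
    i.e. a more balanced (less selective) response."""
    return pstdev(propensities) / mean(propensities)

rhos = [0.62, 0.58, 0.71, 0.55, 0.64]  # hypothetical fitted propensities
print(round(cv_propensities(rhos), 3))
```

Unlike the R-indicator, the CV scales the propensity variation by the overall response level, which makes it comparable across countries and rounds with different response rates.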
Abstract Biosocial surveys increasingly use interviewers to collect objective physical health measures (or "biomeasures") in respondents' homes. While interviewers play an important role, their high involvement can lead to unintended interviewer effects on the collected measurements. Such interviewer effects add uncertainty to population estimates and have the potential to lead to erroneous inferences. This study examines interviewer effects on the measurement of physical performance in a cross-national and longitudinal setting using data from the Survey of Health, Ageing and Retirement in Europe. The analyzed biomeasures exhibited moderate-to-large interviewer effects on the measurements, which varied across biomeasure types and across countries. Our findings demonstrate the necessity to better understand the origin of interviewer-related measurement errors in biomeasure collection and account for these errors in statistical analyses of biomeasure data.
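Interviewer effects of the kind reported above are typically summarized as an intraclass correlation (ICC): the share of total variance in a measurement attributable to interviewers. The sketch below uses the one-way ANOVA estimator on simulated, balanced interviewer workloads; the paper itself relies on multilevel models, and all numbers here are hypothetical:

```python
# One-way ANOVA estimate of the interviewer intraclass correlation (ICC)
# for a biomeasure, on simulated data with equal workloads. The ICC is the
# proportion of total variance attributable to interviewers.
import numpy as np

def interviewer_icc(groups):
    """ICC(1) for equally sized groups: (MSB - MSW) / (MSB + (k-1)*MSW)."""
    k = len(groups[0])                                  # respondents per interviewer
    means = np.array([np.mean(g) for g in groups])
    msb = k * np.var(means, ddof=1)                     # between-interviewer mean square
    msw = np.mean([np.var(g, ddof=1) for g in groups])  # within-interviewer mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
# 20 hypothetical interviewers, 30 respondents each; interviewer-level
# shifts (sd 1.5) against respondent-level noise (sd 3) imply a true
# ICC of 1.5**2 / (1.5**2 + 3**2) = 0.20.
groups = [rng.normal(25 + u, 3, size=30) for u in rng.normal(0, 1.5, size=20)]
icc = interviewer_icc(groups)
print(f"estimated interviewer ICC: {icc:.2f}")
```

Even moderate ICCs inflate the variance of survey estimates roughly by a factor of 1 + (workload − 1) × ICC, which is why the abstract stresses accounting for interviewer error in analyses of biomeasure data.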
Mixed-mode surveys are popular as they can save costs and maintain (or improve) response rates relative to single-mode surveys. Nevertheless, it is not yet clear how design decisions like survey mode or questionnaire length impact measurement quality. In this study, we compare measurement quality in an experiment of three distinct survey designs implemented in the German sample of the European Values Study: a single-mode face-to-face design, a mixed-mode mail/web design, and a shorter (matrix) questionnaire in the mixed-mode design. We compare measurement quality in different ways, including differences in distributions across several data quality indicators as well as equivalence testing over 140 items in 25 attitudinal scales. We find similar data quality across the survey designs, although the mixed-mode survey shows more item nonresponse compared to the single-mode survey. Using equivalence testing we find that most scales achieve metric equivalence and, to a lesser extent, scalar equivalence across the designs.