Introduction to online surveys -- Developing the survey : questions and answers -- Ethical considerations -- Sampling -- Using a panel in your research -- Comparative survey research -- Incentives for respondents -- Selecting survey software -- Programming the survey -- Fieldwork -- Processing and cleaning the data -- Weighting survey data -- Reporting survey results -- Making data available to others -- The future of Web surveys
Research on mixed devices in web surveys is in its infancy. Using a randomized experiment, we investigated device effects (desktop PC, tablet, and mobile phone) for six response formats and four different numbers of scale points. N = 5,077 members of an online access panel participated in the experiment. We tested measurement invariance (using an exact test) and examined composite reliability. The results showed full data comparability across devices and formats, with the exception of the continuous Visual Analog Scale (VAS), but only limited comparability across different numbers of scale points. Device effects on reliability emerged in the interactions with formats and number of scale points: the VAS, mobile phones, and five-point scales consistently yielded lower reliability. We suggest technically less demanding implementations as well as a unified design for mixed-device surveys.
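For reference, the composite reliability examined above is conventionally computed from standardized factor loadings as CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal Python sketch, using illustrative loadings rather than estimates from this study:

    # Composite reliability from standardized factor loadings:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2.
    def composite_reliability(loadings):
        lam_sum = sum(loadings)
        error_var = sum(1 - lam ** 2 for lam in loadings)
        return lam_sum ** 2 / (lam_sum ** 2 + error_var)

    # Illustrative loadings for a five-item scale (not the study's estimates).
    print(round(composite_reliability([0.72, 0.68, 0.75, 0.70, 0.66]), 3))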
"Straightlining, an indicator of satisficing, refers to giving the same answer in a series of questions arranged on a grid. We investigated whether straightlining changes with respondents' panel experience in two open-access Internet panels in the Netherlands: the LISS and Dutch Immigrant panels. Specifically, we considered straightlining on 10 grid questions in LISS core modules (7 waves) and on a grid of evaluation questions in both the LISS panel (150+ waves) and the Dutch immigrant panel (50+ waves). For both core modules and evaluation questions we found that straightlining increases with respondents' panel experience for at least three years. Straightlining is also associated with younger age and non-western 1st generation immigrants. Where straightlining was a plausible set of answers, prevalence of straightlining was much larger (15-40%) than where straightlining was implausible (<2% in wave 1)." (author's abstract)
The goal of this research was to determine the best way to present mixed-device surveys. We investigated the effects of survey method (messenger versus regular survey), answer scale, device used, and personal characteristics such as gender, age, and education on break-off rates, substantive answers, completion times, and respondents' evaluations of the survey. Our results do not suggest that a messenger-style survey benefits mixed-device surveys. Further research is needed to establish how best to present mixed-device surveys so as to increase participation and data quality.
Pictures used to supplement survey questions can systematically influence the answers obtained. Respondents react to the content of the image, giving higher-frequency reports when pictures of high-frequency events are shown and lower-frequency reports when pictures of low-frequency events are shown. The effects of pictures on responses are similar to those of verbal instructions (i.e., they produce an assimilation effect). Our results show that verbal and visual language have independent effects and also interact with each other. Verbal instructions have stronger effects than the visual cues produced by pictures, however, and can be used to counteract visual context effects. We find that respondents pay more attention to verbal instructions when the verbal and visual cues are inconsistent with each other. This article provides evidence for a hierarchy of features that respondents attend to, with verbal language taking precedence over visual cues such as pictures. Effective question writing, with verbal instructions making the question clear to respondents, reduces visual context effects. We found little evidence that conditions with pictures were evaluated more favorably than conditions without them.
The increasing use of smartphones opens up opportunities for novel ways of collecting survey data, but it also poses new challenges. Collecting more, and more varied, types of data means that studies can become increasingly intrusive, and we risk over-asking participants, leading to nonresponse. This study documents nonresponse and nonresponse bias in a smartphone-only version of the Dutch Time Use Survey (TUS). Respondents from the Dutch LISS panel were asked to perform five sets of tasks to complete the whole TUS: 1) accept the invitation to participate in the study and install an app, 2) fill out a questionnaire on the web, 3) complete the time use diary on their smartphone, 4) answer pop-up questions, and 5) give permission to record sensor data (GPS locations and call data). Results show that 42.9% of invited panel members responded positively to the invitation to participate in the smartphone survey. However, only 28.9% of these willing panel members completed all stages of the study. The predictors of nonresponse differ somewhat at every stage. In addition, respondents who completed all smartphone tasks differ from those who dropped out at some stage or did not participate at all. Using data collected in previous waves, we show that this nonresponse leads to nonresponse bias in estimates of time use. We conclude by discussing the implications for using smartphone apps in survey research.
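The two stage-wise percentages compound. Assuming the 28.9% is conditional on the 42.9% who accepted the invitation, as the abstract's wording suggests, the implied overall completion rate can be checked quickly:

    # Figures taken from the abstract; the conditioning is an assumption.
    accepted = 0.429                  # responded positively to the invitation
    completed_given_accepted = 0.289  # completed all stages, among the willing
    overall = accepted * completed_given_accepted
    print(f"{overall:.1%} of all invited panel members completed every stage")
    # -> roughly 12.4%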