Mixed-mode surveys have been in use since the late 1980s. Over the past thirty years, major technological and societal changes have reshaped data collection and survey methodology. Throughout this period, mixed-mode strategies have remained part of everyday survey practice, although the type of mix implemented has followed the changes in technology and data collection methods. In this paper, I summarize the state of the art in traditional mixed-mode surveys and discuss implications for mixed-device surveys.
"One of the methods for evaluating online panels in terms of data quality is comparing the estimates that the panels provide with benchmark sources. For probability-based online panels, high-quality surveys or government statistics can be used as references. If differences between the benchmark and the online panel estimates are found, these can have several causes. First, the question wordings can differ between the sources, which can lead to differences in measurement. Second, the reference and the online panel may not be comparable in terms of sample composition. Finally, since the reference estimates are usually collected face-to-face or by telephone, mode effects might be expected. In this article, we investigate mode system effects, an alternative to mode effects that does not focus solely on measurement differences between the modes but also incorporates survey design features into the comparison. The data from a probability-based, offline-recruited online panel are compared to the data from two face-to-face surveys with almost identical recruitment protocols. In the analysis, a distinction is made between factual and attitudinal questions. We report both the effect sizes and the significance of the differences. The results show that the online panel differs from the face-to-face surveys in both attitudinal and factual measures. However, the reference surveys differ only in attitudinal measures and show no significant differences for factual questions. We attribute this to the instability of attitudes and thus show the importance of triangulation and of using two surveys of the same mode for comparison." (author's abstract)
With the decline of landline phones over the last decade, telephone survey methodologists face a new challenge in overcoming coverage bias. In this study, we investigate coverage error for telephone surveys in Europe over time and compare two situations: classical surveys that rely on landlines only versus surveys that also include mobile phones. We analyzed Eurobarometer data, which are collected by means of face-to-face interviews and contain information on ownership of landline and mobile phones. We show that for the period 2000-2009, time has a significant effect on both mobile phone penetration and coverage bias. In addition, a country's level of development significantly affects the pace of these changes.
Declining response rates worldwide have stimulated interest in understanding what may be influencing this decline and how it varies across countries and survey populations. In this paper, we describe the development and validation of a short 9-item survey attitude scale that measures three constructs thought by many scholars to be related to decisions to participate in surveys: survey enjoyment, survey value, and survey burden. The survey attitude scale is based on a literature review of earlier work by multiple authors. Our overarching goal with this study is to develop and validate a concise and effective measure of how individuals feel about responding to surveys that can be implemented in surveys and panels to understand willingness to participate and to improve survey effectiveness. The research questions concern the factor structure, measurement equivalence, reliability, and predictive validity of the survey attitude scale. The data came from three probability-based panels: the German GESIS and PPSM panels and the Dutch LISS panel. The survey attitude scale proved to have a replicable three-dimensional factor structure (survey enjoyment, survey value, and survey burden). Partial scalar measurement equivalence was established across the three panels, which employed two languages (German and Dutch) and three measurement modes (web, telephone, and paper mail). For all three dimensions of the survey attitude scale, the reliability of the corresponding subscales (enjoyment, value, and burden) was satisfactory. Furthermore, the scales correlated with survey response in the expected directions, indicating predictive validity.
In: Nonresponse in survey research : proceedings of the Eighth International Workshop on Household Survey Nonresponse, 24-26 September 1997, S. 173-185
In telephone interviews, interviewers have far less time than in face-to-face interviews to persuade a respondent to cooperate. Because they can only hear their conversation partner, they also lack the information needed to adjust their own behavior optimally. Nevertheless, experienced telephone interviewers have a repertoire of tactical variants at their disposal. This contribution describes tactics for countering refusals that are used by experienced telephone interviewers at Statistics Netherlands. (ICEÜbers)
In: Nonresponse in survey research : proceedings of the Eighth International Workshop on Household Survey Nonresponse, 24-26 September 1997, S. 239-248
Refusal to respond threatens the validity of conclusions drawn from survey data. In general, two counterstrategies are employed. The first is to reduce the share of refusals as far as possible; the second attempts a statistical correction for nonresponse. Surveys are still routine in official statistics, the social sciences, and market research in the Netherlands, and interviewers are an important factor in combating refusals. The authors examine interviewers' attitudes toward refusals and toward their own role in persuading people to take part in an interview. Statistics Netherlands carried out a project to redesign its continuous survey of living conditions (POLS); in this context, data on the interviewers were also collected. The results show that interviewers' attitudes and response rates are correlated: interviewers with a positive attitude toward strategies for persuading prospective respondents achieve higher response rates. No differences emerged in the "doorstep" behavior patterns described by the interviewers. (ICEÜbers)