"The quality of questionnaire translations is central to the comparability of data in international survey projects. Special methods are applied to ensure translation quality. Using a case study from the International Social Survey Programme (ISSP), this work examines the team approach: translations produced in parallel were discussed in a team of translators and survey experts. Recording and transcribing the discussion made a process analysis possible, focusing on problems and their solutions, the team's assessment of the method, and its understanding of quality. This work is aimed at translators, translation scholars, and survey experts working in international survey research." (author's abstract)
In this study, we explore to what extent the visual presentation of open-ended probes, in connection with different prior probing exposure, impacts response quality. We experiment with two text box sizes for a specific immigrant probe (Which type of immigrants were you thinking of when you answered the question?). On the one hand, we use a standard size equal to the other text box sizes in the survey but oversized for the specific response task. On the other hand, we use a smaller text box which fits the response task. The other probes in the survey that use the standard text box are mainly category-selection probes that ask for a reasoning for the chosen answer value. Due to randomized rotation of questions, respondents receive different numbers of category-selection probes prior to the immigrant probe, resulting in different degrees of exposure to category-selection probing prior to the immigrant probe. For the immigrant probe, we find that respondents who receive the standard text box and who have had a high exposure to category-selection probing are more likely to provide mismatching answers: The mismatch consists of not answering the specific immigrant probe but rather providing a reasoning answer as typically expected for a category-selection probe. Thus, previous experience with probing in the questionnaire can override the actual probe wording. This problem can be minimized by considering possible carryover effects of prior probes and using an appropriate survey design strategy.
This article focuses on assessing item comparability in cross-national surveys by asking probing questions in Web surveys. The "civil disobedience" item from the "rights in a democracy" scale of the International Social Survey Program (ISSP) serves as a substantive case study. Identical Web surveys were fielded in Canada (English-speaking), Denmark, Germany, Hungary, Spain, and the U.S. A category-selection and a comprehension probe, respectively, were incorporated into the Web surveys after the closed-ended "civil disobedience" item. Responses to the category-selection probe reveal that notably in Germany, Hungary, and Spain the detachment of politicians from the people and their lack of responsiveness are deplored. Responses to the comprehension probe show that mainly in the U.S. and Canada violence and/or destruction are associated with civil disobedience. These results suggest reasons for the peculiar statistical results found for the "civil disobedience" item in the ISSP study. On the whole, Web probing proves to be a valuable tool for identifying interpretation differences and potential bias in cross-national survey research.
This article examines the use of probing techniques in web surveys to identify validity problems of items. Conventional cognitive interviewing is usually based on small sample sizes and thus precludes quantifying the findings in a meaningful way or testing small or special subpopulations characterized by their response behavior. This article investigates probing in web surveys as a supplementary way to look at item validity. Data come from a web survey in which respondents were asked to give reasons for selecting a response category for a closed question. The web study was conducted in Germany, with respondents drawn from online panels (n = 1,023). The usefulness of the proposed approach is shown by revealing validity problems with a gender ideology item.
Cognitive interviewing is a well-established method for evaluating and improving a questionnaire prior to fielding. However, its present implementation brings with it some challenges, notably in terms of small sample sizes or the possibility of interviewer effects. In this study, the authors test web surveys through nonprobability online panels as a supplemental means to implement cognitive interviewing techniques. The overall goal is to tackle the above-mentioned challenges. The focus in this article is on methodological features that pave the way for an eventual successful implementation of category-selection probing in web surveys. The study reports on the results of 1,023 respondents from Germany. In order to identify implementation features that lead to a high number of meaningful answers, the authors explore the effects of (1) different panels, (2) different probing variants, and (3) different numbers of preceding probes on answer quality. The overall results suggest that category-selection probing can indeed be implemented in web surveys. Using data from two panels - a community panel where members can actively get involved, for example, by creating their own polls, and a "conventional" panel where answering surveys is the members' only activity - the authors find that high community involvement does not increase the likelihood of answering probes or producing longer statements. Testing three probing variants that differ in wording and provided context, the authors find that presenting the context of the probe (i.e., the probed item and the respondent's answer) produces a higher number of meaningful answers. Finally, the likelihood of answering a probe decreases with the number of preceding probes. However, the word count of those who eventually answer the probes slightly increases with an increasing number of probes. (author's abstract)
Survey documentation is an integral part of methodically sound survey research. These guidelines aim at providing the persons coordinating survey translations (e.g., researchers responsible for survey translation in a larger study, or those wishing to translate and adapt an existing instrument for their own research) with a framework within which they can plan and document survey translations both for internal and for external purposes (publications or technical reports). The guidelines summarize different aspects of translation documentation and review elements to be included in such a documentation.
The panel study "Labor Market and Social Security" (PASS), established by the Institute for Employment Research, is a central data source for research on unemployment, poverty, and the welfare state in Germany. Since 2007, approximately 12,000 persons in more than 8,000 households have been interviewed annually. The study's design particularly allows for evaluating the situation of recipients of basic income support, called unemployment benefits II (UB II) in Germany. With the influx of refugees to Germany, the structure of UB II recipients changed, and as a consequence Arabic-speaking persons needed to be integrated into PASS. In 2016, that is, in PASS wave 10, we were faced with several challenges to meet the needs of this new population. This paper focuses on fieldwork issues: How did we access the special population of Arabic-speaking respondents, and which contact strategies were necessary to reach refugees? And how did we manage to conduct the interviews given the respondents' different cultural backgrounds and their special living conditions as refugees in Germany? The sampling structure of PASS allows for comparing the refugee sample and the general UB II recipients. A great willingness to participate and high motivation were apparent among the refugees. It was possible to obtain access to the refugee households both by telephone and face-to-face. Whereas non-response occurred to a larger extent in the general refresher sample due to refusals, in the refugee sample it was caused by lower reachability.
Quantitative questionnaires are increasingly applied across different languages and cultures. They play an important role in research on immigrant communities as well as in surveys of the general population of which immigrants are a significant part. Because immigrants often have limited proficiency in the language of the host country, questionnaires in their first language ("mother tongue") are usually needed. For this purpose, it is common to use questionnaires developed or adapted for the population of the respective countries from which the immigrants originate. The applicability of these questionnaires, however, may be limited because of differences between the two populations in terms of language usage, resulting, among other things, from language attrition. This can be illustrated, for example, by means of the Revised Illness Perception Questionnaire (IPQ-R) applied to Turks in Turkey and Turkish immigrants residing in Germany. Questionnaires, therefore, must be thoroughly tested and often re-adapted to the language style and level of language proficiency of immigrants to avoid different forms of bias. An analytical framework of questionnaire equivalence can guide the process of testing and re-adaptation.
The IAB-BAMF-SOEP Survey of Refugees is one of the first large-scale quantitative surveys in Germany focusing exclusively on refugees. It provides valuable insights on the recent cohort of refugees who arrived in Germany as of the year 2013. However, because most respondents of the target population are not proficient in German, the research partners who conducted the survey faced several obstacles. One crucial aspect in this regard was the written and audio translation of field instruments. This paper therefore gives some insights into the translation and fielding procedures and presents figures on the use of written and audio translations in the IAB-BAMF-SOEP Survey of Refugees. We found that the written translation in particular was very helpful, but the audio files also proved to facilitate participation in the survey. However, because these tools are new and innovative, further research on their effects on survey quality needs to be carried out.
The survey is part of a larger research project, funded by the German Research Foundation, that aims to better understand translation procedures in cross-national studies, notably the initial translation step prior to any review checks. This survey is part of the first project phase: Between 6 March and 11 May 2023, we conducted a web survey among 153 participating country teams of 13 large-scale survey programs to study the status quo, including the diversity, of translation procedures in different social science survey programs throughout the world. Among other things, we asked: Who translated in the most recent round of a study, and why were these translators selected? What was the quality of the (initial) translations? Which review and checking steps were implemented? What kind of translation tools were used, if any? What kind of support material was provided to translators? What are general attitudes and views towards translation and professional translators?
Due to the growing significance of international studies, the need for tools to assess the equivalence of items is pressing. Web probing, which implements verbal probing techniques traditionally used in cognitive interviewing within online surveys, is a method to complement quantitative techniques to establish equivalence of items in cross-cultural research. We illustrate this approach by assessing the question of 'how important it is that government authorities respect and protect the rights of minorities', which was originally used in the International Social Survey Program, for respondents in five countries (Germany, Britain, the U.S., Mexico, and Spain). First, participants answered this question using a 7-point Likert scale. Then they wrote freely what types of minorities they had thought of. Whether country differences in the response patterns can be interpreted substantively depends partially on how similarly the term 'minorities' is understood across these five contexts. Our results show that people in the participating countries have slightly different kinds of 'minorities' in mind.