Envisioning the survey interview of the future
In: Wiley series in survey methodology
16 results
In: The public opinion quarterly: POQ, Volume 64, Issue 1, pp. 1-28
ISSN: 1537-5331
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Volume 64, Issue 1, pp. 1-28
ISSN: 0033-362X
Contrasts two interviewing techniques that reflect different tacit assumptions about communication. With strictly standardized interviewing, interviewers leave the interpretation of questions up to respondents (Rs). With conversational interviewing, interviewers say whatever it takes to make sure that questions are interpreted uniformly & as intended. A US national sample of 227 adults was interviewed twice regarding housing & recent purchases. The first interview was strictly standardized; the second was standardized for 50% of Rs & conversational for the others. Rs in a second conversational interview answered differently than in the first interview more often, & for reasons that conformed more closely to official definitions, than did Rs in a second standardized interview. This suggests that conversational interviewing improved comprehension, although it also lengthened interviews. It is concluded that Rs in a national sample may misinterpret certain questions frequently enough to compromise data quality, & such misunderstandings cannot easily be eliminated by pretesting & rewording questions alone. More standardized comprehension may require less standardized interviewer behavior. 1 Table, 2 Appendixes, 36 References. Adapted from the source document.
In: The public opinion quarterly: POQ, Volume 61, Issue 4, pp. 576-602
ISSN: 1537-5331
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Volume 61, Issue 4, pp. 576-602
ISSN: 0033-362X
In: The public opinion quarterly: POQ, Volume 85, Issue S1, pp. 253-263
ISSN: 1537-5331
In: The public opinion quarterly: POQ, Volume 77, Issue 4, pp. 888-935
ISSN: 1537-5331
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Volume 77, Issue 4, pp. 888-935
ISSN: 0033-362X
In: The public opinion quarterly: POQ, Volume 80, Issue 1, pp. 180-211
ISSN: 1537-5331
This study investigates how an onscreen virtual agent's dialog capability and facial animation affect survey respondents' comprehension and engagement in "face-to-face" interviews, using questions from US government surveys whose results have far-reaching impact on national policies. In the study, 73 laboratory participants were randomly assigned to respond in one of four interviewing conditions, in which the virtual agent had either high or low dialog capability (implemented through Wizard of Oz) and high or low facial animation, based on motion capture from a human interviewer. Respondents, whose faces were visible to the Wizard (and videorecorded) during the interviews, answered 12 questions about housing, employment, and purchases on the basis of fictional scenarios designed to allow measurement of comprehension accuracy, defined as the fit between responses and US government definitions. Respondents answered more accurately with the high-dialog-capability agents, requesting clarification more often, particularly for ambiguous scenarios; and they generally treated the high-dialog-capability interviewers more socially, looking at the interviewer more and judging high-dialog-capability agents as more personal and less distant. Greater interviewer facial animation did not affect response accuracy, but it led to more displays of engagement (verbal and visual acknowledgments and smiles) and to the virtual interviewer's being rated as less natural. The pattern of results suggests that a virtual agent's dialog capability and facial animation differently affect survey respondents' experience of interviews, behavioral displays, and comprehension, and thus the accuracy of their responses. The pattern of results also suggests design considerations for building survey interviewing agents, which may differ depending on the kinds of survey questions (sensitive or not) that are asked.
In: The public opinion quarterly: POQ, Volume 78, Issue 4, pp. 779-787
ISSN: 1537-5331
In: Journal of survey statistics and methodology: JSSAM, Volume 10, Issue 2, pp. 317-336
ISSN: 2325-0992
Abstract
Live video (LV) communication tools (e.g., Zoom) have the potential to provide survey researchers with many of the benefits of in-person interviewing, while also greatly reducing data collection costs, given that interviewers do not need to travel and make in-person visits to sampled households. The COVID-19 pandemic has exposed the vulnerability of in-person data collection to public health crises, forcing survey researchers to explore remote data collection modes—such as LV interviewing—that seem likely to yield high-quality data without in-person interaction. Given the potential benefits of these technologies, the operational and methodological aspects of video interviewing have started to receive research attention from survey methodologists. Although it is remote, video interviewing still involves respondent–interviewer interaction that introduces the possibility of interviewer effects. No research to date has evaluated this potential threat to the quality of the data collected in video interviews. This research note presents an evaluation of interviewer effects in a recent experimental study of alternative approaches to video interviewing including both LV interviewing and the use of prerecorded videos of the same interviewers asking questions embedded in a web survey ("prerecorded video" interviewing). We find little evidence of significant interviewer effects when using these two approaches, which is a promising result. We also find that when interviewer effects were present, they tended to be slightly larger in the LV approach as would be expected in light of its being an interactive approach. We conclude with a discussion of the implications of these findings for future research using video interviewing.
In: Methods, data, analyses: mda; journal for quantitative methods and survey methodology, Volume 17, Issue 2, pp. 135-170
ISSN: 2190-4936
This study investigates the extent to which video technologies, now ubiquitous, might be useful for survey measurement. We compare respondents' performance and experience (n = 1,067) in live video-mediated interviews, a web survey in which prerecorded interviewers read questions, and a conventional (textual) web survey. Compared to web survey respondents, those interviewed via live video were less likely to select the same response for all statements in a battery (non-differentiation) and reported higher satisfaction with their experience, but provided more rounded numerical (presumably less thoughtful) answers and selected answers that were less sensitive (more socially desirable). This suggests the presence of a live interviewer, even if mediated, can keep respondents motivated and conscientious but may introduce time pressure, a likely reason for increased rounding, and social presence, a likely reason for more socially desirable responding. Respondents "interviewed" by a prerecorded interviewer rounded fewer numerical answers and responded more candidly than did those in the other modes, but engaged in non-differentiation more than did live video respondents, suggesting there are advantages and disadvantages for both video modes. Both live and prerecorded video seem potentially viable for use in production surveys and may be especially valuable when in-person interviews are not feasible.
In: The public opinion quarterly: POQ, Volume 81, Issue S1, pp. 307-337
ISSN: 1537-5331