Interaction and the Standardized Survey Interview: The Living Questionnaire
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Vol. 65, No. 3, pp. 420-422
ISSN: 0033-362X
In: Annual review of sociology, Vol. 46, No. 1, pp. 37-60
ISSN: 1545-2115
In recent decades, research about survey questions has emphasized decision-based approaches. Current research focuses on identifying and systematizing characteristics of questions that are key in researchers' decisions. We describe important classes of decisions and question characteristics: topic, type of question (e.g., event or behavior, evaluation or judgment), response dimension (e.g., occurrence, frequency, intensity), conceptualization and operationalization of the target object (e.g., how to label the object being asked about and the response dimension), question structure (e.g., use of a filter question, placement in a battery), question form or response format (e.g., yes–no, selection from ordered categories, choice from a list, discrete value), response categories, question wording, and question implementation. We use the framework of question characteristics to summarize key results in active research areas and provide practical recommendations. Progress depends on recognizing how question characteristics co-occur, using a variety of methods and appropriate models, and implementing study designs with strong criteria.
In: The public opinion quarterly: POQ, Vol. 75, No. 5, pp. 909-961
ISSN: 1537-5331
We begin with a look back at the field to identify themes of recent research that we expect to continue to occupy researchers in the future. As part of this overview, we characterize the themes and topics examined in research about measurement and survey questions published in Public Opinion Quarterly in the past decade. We then characterize the field more broadly by highlighting topics that we expect to continue or to grow in importance, including the relationship between survey questions and the total survey error perspective, cognitive versus interactional approaches, interviewing practices, mode and technology, visual aspects of question design, and culture. Considering avenues for future research, we advocate for a decision-oriented framework for thinking about survey questions and their characteristics. The approach we propose distinguishes among various aspects of question characteristics, including question topic, question type and response dimension, conceptualization and operationalization of the target object, question structure, question form, response categories, question implementation, and question wording. Thinking about question characteristics more systematically would allow study designs to take into account relationships among these characteristics and identify gaps in current knowledge.
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Vol. 75, No. 5, p. 909
ISSN: 0033-362X
In: Journal of survey statistics and methodology: JSSAM, Vol. 10, No. 2, pp. 276-298
ISSN: 2325-0992
Abstract
Interviewers' ratings of survey respondents' health (IRH) are a promising measure of health to include in surveys as a complementary measure to self-rated health. However, our understanding of the factors contributing to IRH remains incomplete. Using an experiment embedded in the UK Innovation Panel Study, this is the first study to examine whether and how the point in a face-to-face interview at which interviewers evaluate respondents' health matters. We find that interviewers are more likely to rate the respondent's health as "excellent" when IRH is rated at the end of the interview compared to the beginning. Drawing from the continuum model of impression formation, we examined whether associations between IRH and relevant covariates vary depending on placement in the interview. We find that across several characteristics of interviewers and respondents, only the effect of the number of interviews interviewers have completed varies by the location of the IRH assessment. We also find evidence that interviewer variance is lower when IRH is assessed before rather than after the interview. Finally, the location of the IRH assessment does not affect the concurrent or predictive validity of IRH. Overall, the results suggest that in a general population study including some health questions, having interviewers rate respondents' health at the beginning of the interview (rather than at the end, as in prior research) may reduce interviewer variance, particularly in the absence of interviewer training that mitigates the impact of within-study experience on IRH assessments.
In: Social science computer review: SSCORE, Vol. 42, No. 3, pp. 777-793
ISSN: 1552-8286
Many studies rely on traditional web survey methods in which all contacts with sample members are through email and the questionnaire is administered exclusively online. Because it is difficult to effectively administer prepaid incentives via email, researchers frequently employ lotteries or prize draws as incentives even though their influence on survey participation is small. The current study examines whether a prize draw is more effective if it is divided into a few larger amounts versus several smaller amounts and compares prize draws to a small but guaranteed postpaid incentive. Data are from the 2019 Campus Climate Survey on Sexual Assault and Sexual Misconduct. Sample members include 38,434 undergraduate and graduate students at a large Midwestern university who were randomly assigned to receive: a guaranteed $5 Amazon gift card; entry into a high-payout drawing for one of four $500 prizes; or entry into a low-payout drawing for one of twenty $100 prizes. Results indicate the guaranteed incentive increased response rates, with no difference between the prize draws. While results from various data quality outcomes show the guaranteed incentive reduced break-off rates and the high-payout drawing increased item nonresponse, there were no differences across incentive conditions in rates of speeding, reporting of sensitive data, straightlining, or sample representativeness. As expected, the prize draws had much lower overall and per complete costs.
In: APSA 2013 Annual Meeting Paper
SSRN
Working paper
In: The public opinion quarterly: POQ, article nfw045
ISSN: 1537-5331
In: Public opinion quarterly: journal of the American Association for Public Opinion Research
ISSN: 0033-362X
In: Journal of survey statistics and methodology: JSSAM, Vol. 6, No. 1, pp. 122-148
ISSN: 2325-0992
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Vol. 76, No. 2, p. 311
ISSN: 0033-362X
In: The public opinion quarterly: POQ, Vol. 87, No. S1, pp. 480-506
ISSN: 1537-5331
Abstract
Interviewers' postinterview evaluations of respondents' performance (IEPs) are paradata, used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents' and interviewers' sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process—various respondents' behaviors and response quality indicators—are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents' and interviewers' sociodemographic characteristics. Our results show that both respondents' behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers' gender. Further, IEPs were associated with respondents' education and ethnoracial identity, net of respondents' behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.
In: The public opinion quarterly: POQ, Vol. 79, No. 4, pp. 906-931
ISSN: 1537-5331
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Vol. 79, No. 4, p. 906
ISSN: 0033-362X