Book review. In this contribution the author reviews Praktijkgericht onderzoek: ontwerp van organisatie- en beleidsonderzoek by P. Verschuren.
Book review, answer to reaction. The author's answer to the reaction to the book review of Praktijkgericht onderzoek: ontwerp van organisatie- en beleidsonderzoek by P. Verschuren.
Honest by chance: a qualitative interview study of (non)cooperation in a computer-assisted randomized response survey. The randomized response method (RRM) minimizes evasive answers to sensitive questions. This study examines how respondents take part in an RRM survey and how their participation contributes to (in)adequate answers. Eleven respondents, all clients of social services, answered twelve questions in a computer-assisted survey. The forced-response technique was used as the RRM, and on-screen dice (an internal randomizer) were alternated with real dice (an external randomizer). The respondents' behaviour was video-recorded, and this was followed by a qualitative interview that was also recorded. The results show that the RRM itself caused very few problems. All respondents, however, had trouble giving the false answers the dice required, and consequently some of them cheated. Moreover, their direct interest in keeping their benefits determined their cooperation, dominating all other motives, even when respondents understood how the RRM works.
This paper discusses a survey where some respondents were asked sensitive questions directly and others were asked the same questions using randomized response. The use of randomized response was a factor in a 2 × 2 factorial design and dice were used to perform the randomization. First, the paper shows that the perturbation due to the dice can be described using the concept of misclassification and known conditional misclassification probabilities. Second, the paper formulates the likelihood for loglinear models and shows that latent class software can be used to analyse the data. An example including a power analysis is discussed.
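The idea that the dice-induced perturbation can be described as misclassification with known conditional probabilities can be sketched as follows. This is a minimal illustration assuming a common two-dice forced-response design (sums 2-4 force a "yes", sums 11-12 force a "no", otherwise the respondent answers truthfully); the paper's exact design parameters may differ.

```python
import numpy as np

# Assumed two-dice forced-response design (an illustration, not
# necessarily the exact design used in the paper):
#   sum 2-4   -> forced "yes"      (probability 6/36)
#   sum 11-12 -> forced "no"       (probability 3/36)
#   otherwise -> truthful answer   (probability 27/36)
P_FORCED_YES = 6 / 36
P_FORCED_NO = 3 / 36
P_TRUTH = 27 / 36

# Known conditional misclassification matrix:
# rows = true status (no, yes), columns = observed answer (no, yes)
M = np.array([
    [P_FORCED_NO + P_TRUTH, P_FORCED_YES],            # true "no"
    [P_FORCED_NO,           P_FORCED_YES + P_TRUTH],  # true "yes"
])

def estimate_prevalence(p_yes_observed: float) -> float:
    """Moment estimator of the true prevalence under forced response:
    invert P(observed yes) = P_FORCED_YES + P_TRUTH * pi."""
    return (p_yes_observed - P_FORCED_YES) / P_TRUTH

# Sanity check: with a true prevalence of 0.20, the expected observed
# "yes" proportion follows from the misclassification matrix, and the
# estimator recovers 0.20 exactly.
true_pi = 0.20
p_obs = true_pi * M[1, 1] + (1 - true_pi) * M[0, 1]
assert abs(estimate_prevalence(p_obs) - true_pi) < 1e-12
```

Because the misclassification probabilities are fixed by the dice, they can be plugged into latent class software as known parameters, which is the observation the paper exploits for loglinear modelling.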
Many issues of interest to social scientists and policy makers are of a sensitive nature in the sense that they are intrusive, stigmatizing, or incriminating to the respondent. This results in refusals to cooperate or evasive cooperation in studies using self-reports. In a seminal article, Warner (1965) proposed to curb this problem by generating an artificial variability in responses to inoculate the individual meaning of answers to sensitive questions. This procedure was further developed and extended and came to be known as the randomized response (RR) technique. Here, the authors propose a unified treatment for eliciting sensitive binary as well as quantitative information with RR based on a model where the inoculating elements are provided for by the randomization device. The procedure is simple and the authors will argue that its implementation in a computer-assisted setting may have superior practical capabilities.
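Warner's (1965) original design mentioned above can be sketched in a few lines: with design probability p the respondent answers the sensitive statement, and with probability 1 - p its negation, so P(yes) = p·pi + (1 - p)(1 - pi) and pi can be recovered by inversion. The value p = 5/6 below (e.g. a single die) is an assumption for illustration only.

```python
# Minimal sketch of Warner's (1965) randomized response estimator.
# P_DESIGN is an assumed design probability, not taken from the source.
P_DESIGN = 5 / 6  # probability the respondent answers the sensitive statement

def warner_estimate(p_yes_observed: float, p: float = P_DESIGN) -> float:
    """Estimate the prevalence pi of the sensitive trait from the
    observed proportion of "yes" answers, inverting
    P(yes) = p * pi + (1 - p) * (1 - pi).  Requires p != 1/2."""
    return (p_yes_observed - (1 - p)) / (2 * p - 1)

# Sanity check: with true prevalence 0.30, the expected observed "yes"
# proportion is 5/6 * 0.30 + 1/6 * 0.70, and the estimator inverts it.
true_pi = 0.30
p_obs = P_DESIGN * true_pi + (1 - P_DESIGN) * (1 - true_pi)
assert abs(warner_estimate(p_obs) - true_pi) < 1e-12
```

The artificial variability injected by the randomization device is exactly what protects the individual answer while leaving the population-level prevalence estimable.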
Recently, the leading position of telephone surveys as the major mode of data collection has been challenged. Telephone surveys suffer from growing nonresponse, partly due to the general nonresponse trend across all surveys and partly due to changes in society and technology affecting contactability and willingness to answer. One way to counteract the increasing nonresponse is the use of an advance letter. In mail and face-to-face surveys, advance letters have proven effective, and on that basis survey handbooks advise their use in telephone surveys as well. This study reviews the evidence for this advice and presents a quantitative summary of empirical studies on the effectiveness of advance letters in raising the response rate for telephone surveys. The major conclusion is that advance letters are also an effective tool in telephone surveys, with an average increase in response rate (RR1) from 58 percent (no letter) to 66 percent (advance letter), and an average increase in cooperation rate (COOP1) from 64 percent (no letter) to 75 percent (advance letter).
This article discusses two meta-analyses on randomized response technique (RRT) studies, the first on 6 individual validation studies and the second on 32 comparative studies. The meta-analyses focus on the performance of RRTs compared to conventional question-and-answer methods. The authors use the percentage of incorrect answers as effect size for the individual validation studies and the standardized difference score (d-probit) as effect size for the comparative studies. Results indicate that compared to other methods, randomized response designs result in more valid data. For the individual validation studies, the mean percentage of incorrect answers for the RRT condition is .38; for the other conditions, it is .49. The more sensitive the topic under investigation, the higher the validity of RRT results. However, both meta-analyses have unexplained residual variances across studies, which indicates that RRTs are not completely under the control of the researcher.
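The standardized difference score used for the comparative studies can be illustrated with a short sketch. The reading below, the difference of probit-transformed proportions, is one common interpretation of a d-probit effect size and is an assumption here; the meta-analysis may define it with additional weighting.

```python
from statistics import NormalDist

def d_probit(p_rrt: float, p_other: float) -> float:
    """Illustrative d-probit effect size: the difference between the
    probit (inverse standard-normal CDF) transforms of two proportions."""
    z = NormalDist().inv_cdf
    return z(p_rrt) - z(p_other)

# Using the mean proportions of incorrect answers reported in the
# abstract (.38 under RRT vs .49 under other conditions): a negative
# value indicates fewer incorrect answers, i.e. more valid data, for RRT.
effect = d_probit(0.38, 0.49)
assert effect < 0
```

The same transform makes effect sizes comparable across studies that report different baseline proportions, which is why the meta-analysis works on this scale rather than on raw percentage differences.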