A Brief Note on the Standard Error of the Pearson Correlation
In: Collabra: Psychology, Volume 9, Issue 1
The product-moment correlation is a central statistic in psychological research, including meta-analysis. Unfortunately, it has a rather complex sampling distribution, which leads to sample correlations that are biased indicators of the respective population correlations. Moreover, there seems to be some uncertainty about how to properly calculate the standard error of these correlations. Because no simple analytical solution exists, several approximations have been previously introduced. This note briefly summarizes 10 different ways to calculate the standard error of the Pearson correlation. Moreover, a simulation study compared the relative percentage biases of these estimators for different population correlations and sample sizes. The results showed that all estimators were largely unbiased for sample sizes of at least 40. For smaller samples, a simple approximation by Bonett (2008) led to the least biased results. Based on these results, it is recommended to use the expression (1 - r²) / √(N - 3) for the calculation of the standard error of the Pearson correlation.
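The recommended approximation can be sketched in a few lines of Python (the function name is illustrative, not from the article):

```python
import math

def pearson_se(r, n):
    """Approximate standard error of a Pearson correlation r
    computed from a sample of size n (Bonett, 2008):
    (1 - r^2) / sqrt(n - 3)."""
    return (1 - r**2) / math.sqrt(n - 3)

# Example: r = 0.5 observed in a sample of n = 40
se = pearson_se(0.5, 40)  # (1 - 0.25) / sqrt(37) ≈ 0.123
```

Note that the approximation requires n > 3 and, per the simulation results summarized above, is essentially unbiased only for samples of roughly 40 or more.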
Attitudes towards emergent autonomous robots in Austria and Germany
In: e & i Elektrotechnik und Informationstechnik, Volume 136, Issue 7, p. 296-300
Autonomous robotic systems are an indispensable component of work in many industries and are on the brink of entering many other areas of people's lives, such as transportation or healthcare. Because attitudes towards new technologies shape consumers' decisions to adopt these innovations, the present study examines public opinion toward emergent robotic systems in Austria and Germany. The results showed that, in general, attitudes seemed rather positive in both countries. However, for Austria a rather ambivalent picture emerged: although Austrians exhibited the largest adoption rate of robotic technologies in Europe, at the same time they evaluated robots most unfavorably compared to other European countries. Thus, experiences with robots seemed to have intensified awareness of potential downsides of automation, resulting in more concerns regarding the widespread use of robots.
Analyzing Nonresponse in Longitudinal Surveys Using Bayesian Additive Regression Trees: A Nonparametric Event History Analysis
In: Social science computer review: SSCORE, Volume 40, Issue 3, p. 678-699
ISSN: 1552-8286
Increasing nonresponse rates are a pressing issue for many longitudinal panel studies. Respondents frequently either refuse participation in single survey waves (temporary dropout) or discontinue participation altogether (permanent dropout). Contemporary statistical methods that are used to elucidate predictors of survey nonresponse are typically limited to small variable sets and ignore complex interaction patterns. The innovative approach of Bayesian additive regression trees (BART) is an elegant way to overcome these limitations because it does not specify a parametric form for the relationship between the outcome and its predictors. We present a BART event history analysis that allows identifying predictors of different types of nonresponse to anticipate response rates for upcoming survey waves. We apply our novel method to data from the German National Educational Panel Study, including N = 4,559 students in Grade 5, which exhibited nonresponse rates of up to 36% across five waves. A cross-validation and a comparison with logistic regression models with least absolute shrinkage and selection operator penalization underline the advantages of the approach. Our results highlight the potential of Bayesian discrete-time event modeling for the long-term projection of panel stability across multiple survey waves. Finally, potential applications of this approach for operational use in survey management are outlined.
Examining Individual Differences in Interpersonal Influence: On the Psychometric Properties of the Generalized Opinion Leadership Scale (GOLS)
In: The journal of psychology: interdisciplinary and applied, Volume 150, Issue 1, p. 88-101
ISSN: 1940-1019
Personality traits and knowledge sharing in online communities
In: Computers in human behavior, Volume 29, Issue 1, p. 210-216
ISSN: 0747-5632
Studying item-effect variables and their correlation patterns with multi-construct multi-state models
In: PLOS ONE, Volume 18, Issue 8
Method effects on the item level can be modeled as latent difference variables in longitudinal data. These item-effect variables represent interindividual differences associated with responses to a specific item when assessing a common construct with multi-item scales. In latent variable analyses, their inclusion substantially improves model fit in comparison to classical unidimensional measurement models. More importantly, covariations between different item-effect variables and with other constructs can provide valuable insights, for example, into the structure of the studied instrument or the response process. Therefore, we introduce a multi-construct multi-state model with item-effect variables for systematic investigations of these correlation patterns within and between constructs. The implementation of this model is demonstrated using a sample of N = 2,529 Dutch respondents who provided measures of life satisfaction and positive affect at five measurement occasions. Our results confirm non-negligible item effects in two ostensibly unidimensional scales, indicating the importance of modeling interindividual differences on the item level. The correlation pattern between constructs indicated rather specific effects for individual items and no common causes, whereas the correlations within a construct aligned with the item content and supported a substantive meaning. These analyses exemplify how multi-construct multi-state models allow the systematic examination of item effects to improve substantive and psychometric research.
Disentangling Interviewer and Area Effects in Large-Scale Educational Assessments using Cross-Classified Multilevel Item Response Models
In: Journal of survey statistics and methodology: JSSAM, Volume 9, Issue 4, p. 722-744
ISSN: 2325-0992
In large-scale educational assessments, interviewers should ensure standardized settings for all participants. However, in practice many interviewers do not strictly adhere to standardized field protocols. Therefore, systematic interviewer effects for the measurement of mathematical competence were examined in a representative sample of N = 5,139 German adults. To account for interviewers working in specific geographical regions, interviewer and area effects were disentangled using cross-classified multilevel item response models. These analyses showed that interviewer behavior distorted competence measurements, whereas regional effects were negligible. On a more general note, it is demonstrated how to identify conspicuous interviewer behavior with Bayesian multilevel models.
Data from the National Educational Panel Study (NEPS) in Germany: Educational Pathways of Students in Grade 5 and Higher
In: Journal of Open Psychology Data, Volume 11, p. 1-14
The German National Educational Panel Study investigates individual competences and educational trajectories in a longitudinal multi-cohort study design. The third of the six starting cohorts focuses on paths through lower into upper secondary level and beyond. The representative sample includes N = 6,112 students in fifth grade attending regular or special schools. Students were initially sampled in 2010 and subsequently received follow-up interviews and competence tests each year, resulting, to date, in a total of 12 measurement waves. Additionally, important context persons (i.e., parents, teachers, and heads of schools) were included in the assessment design. The longitudinal nature of the data provides rich opportunities for research on the development of competences through secondary school.
Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty
In: Proceedings of the National Academy of Sciences of the United States of America (PNAS), Volume 119, Issue 44, p. 1-8
This study explores how researchers' analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers' expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team's workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers' results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
Many Labs 2: Investigating Variation in Replicability Across Samples and Settings
In: Advances in Methods and Practices in Psychological Science, Volume 1, Issue 4, p. 443-490. https://doi.org/10.1177/2515245918810225
We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen's ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. 
Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.