Predictive validity of My Class Inventory
In: Studies in educational evaluation, Vol. 8, Issue 2, pp. 129-140
ISSN: 0191-491X
In: Journal of marketing theory and practice: JMTP, Vol. 11, Issue 1, pp. 28-35
ISSN: 1944-7175
In: Review of public personnel administration, Vol. 11, Issue 1-2, pp. 139-141
ISSN: 1552-759X
This comment and the note that follows it discuss concerns about the situational interview as a job selection tool. Both address the article "Improving Selection Outcomes with the Use of Situational Interviews" by Mary K. Stohr-Gillmore, Michael Stohr-Gillmore, and Nannette Kistler, published in the Spring 1990 issue of ROPPA.
In: Journal of vocational behavior, Vol. 20, Issue 2, pp. 180-192
ISSN: 1095-9084
In: The economic journal: the journal of the Royal Economic Society, Vol. 112, Issue 482, pp. 966-985
ISSN: 1468-0297
In: Journal of vocational behavior, Vol. 74, Issue 1, pp. 128-133
ISSN: 1095-9084
In: Journal of employment counseling, Vol. 11, Issue 2, pp. 89-95
ISSN: 2161-1920
Predictive ratings of trainees in a skill training setting by their instructors are compared and contrasted with first-line supervisory ratings of the same trainees in an employment setting. An ethnic subgroup analysis suggests possible differences in the range and variability of prediction. Implications are presented for further consideration.
In: Social psychology, Vol. 41, Issue 1, pp. 27-34
ISSN: 2151-2590
This study examines the extent to which implicit measures of aggressiveness predict actual aggressive behavior in response to provocation. Participants (n = 77) completed implicit measures of aggressiveness, were or were not exposed to insult from an experimenter, evaluated the performance of the experimenter (i.e., opportunity for aggressive behavior), and completed explicit measures of aggressiveness. Results showed that the implicit measure of aggressiveness significantly predicted aggressive behavior in response to provocation, whereas it was not predictive when there was no provocation. The discussion deals with the validity of implicit measures as predictors of aggressive behavior and their moderators.
In: Probation journal: the journal of community and criminal justice, Vol. 56, Issue 3, p. 284
ISSN: 1741-3079
In: Leisure sciences: an interdisciplinary journal, Vol. 21, Issue 1, pp. 17-30
ISSN: 1521-0588
In: Sexual abuse: official journal of the Association for the Treatment of Sexual Abusers (ATSA), Vol. 32, Issue 2, pp. 203-219
ISSN: 1573-286X
This study evaluated the validity of the Static-99 and Static-99R in assessing sexual recidivism in Switzerland, based on a sample of 142 male sex offenders. Both tools showed predictive validity, but the Static-99R had better discrimination (OR = 1.82, AUC = .81) and calibration (Brier = .078, P/E = 0.96) than the Static-99. A cut score of four on the Static-99R maximized sensitivity (92.9%) and specificity (60.2%). However, although most offenders (98.7%) with a score < 4 did not commit sexual offenses in the 5-year follow-up period, only one in five (20.3%) offenders with a score ≥ 4 actually recidivated. Furthermore, the predicted number of recidivists in the well above average risk category (Static-99R ≥ 6) was 24% higher than expected in routine samples. The results suggest that the Static-99R may be a useful screening tool to identify low-risk individuals, but offenders with scores ≥ 4 should be subjected to a more thorough assessment.
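The cut-score statistics reported in this abstract all derive from a single 2×2 classification table. The sketch below is an illustrative Python helper, not the study's analysis; the cell counts are hypothetical values chosen because they reproduce the reported rates at n = 142, not the study's published cross-tabulation.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening statistics from a 2x2 classification table."""
    return {
        "sensitivity": tp / (tp + fn),  # recidivists scoring at/above the cut
        "specificity": tn / (tn + fp),  # non-recidivists scoring below it
        "ppv": tp / (tp + fp),          # above-cut offenders who recidivated
        "npv": tn / (tn + fn),          # below-cut offenders who did not
    }

# Hypothetical counts (chosen so the totals sum to 142 and match the
# reported rates; not the study's actual cross-tabulation):
m = screening_metrics(tp=13, fp=51, tn=77, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```

Under these assumed counts the helper returns sensitivity ≈ .929, specificity ≈ .602, PPV ≈ .203, and NPV ≈ .987, matching the figures quoted in the abstract.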
In: The prison journal: the official publication of the Pennsylvania Prison Society, Vol. 87, Issue 2, pp. 211-226
ISSN: 1552-7522
The construct validity of the Fear-of-Change (FOC) scale of the Psychological Inventory of Criminal Thinking Styles was evaluated by correlating it with scales from the Personality Assessment Inventory (PAI). As expected, in a sample of 136 male inmates the FOC correlated positively with the PAI Anxiety (ANX) scale and negatively with the PAI Positive Impression Management and Treatment Rejection scales. The FOC also showed a stronger relationship with the ANX Cognitive subscale than with the Affective or Physiological subscales. In addition, a subgroup of 54 participants retested after 3 months of program participation showed significant FOC gains. An independent sample of 239 medium-security male inmates participated in a second study, in which significant FOC increases were found in offenders who discontinued programming but not in offenders who continued or graduated. A third study, conducted on 46 nonprogram participants, ruled out the alternative hypothesis that FOC scores rise in nonprogram offenders.
In: Child abuse & neglect: the international journal; official journal of the International Society for the Prevention of Child Abuse and Neglect, Vol. 29, Issue 1, pp. 59-80
ISSN: 1873-7757
In: Behaviormetrika
ISSN: 1349-6964
Abstract: In a psychometric analysis of a new psychological test, we often assess the predictive validity of a new target test over and above a baseline test, known as incremental predictive validity. Usually, incremental predictive validity is evaluated using within-sample statistics. Recently, it has been argued that out-of-sample assessment should be used to prevent overfitting and non-replicable findings. In this paper, we elaborate on how to assess incremental predictive validity out of sample. In such an approach, we estimate prediction rules in one sample and evaluate incremental predictive validity in another. Using a simulation study, we investigate whether an out-of-sample assessment yields different findings than a within-sample evaluation, taking into account the reliability of the baseline and target tests and other factors (i.e., sample size). Results show that in-sample and out-of-sample assessments differ, especially in small samples; however, the reliability of the two tests has no influence on this difference. In addition, we explore the effects of three different methods for estimating a prediction rule (ridge estimation, ordinary least squares, and SIMEX) on incremental predictive validity. The results show that SIMEX leads to a poor assessment of incremental predictive validity, whereas ordinary least squares and ridge estimation yield almost the same incremental predictive validity estimates, with a slight advantage for ridge regression. In an empirical application, we show how to assess incremental predictive validity in practice and compare this to the usual assessment.
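The out-of-sample procedure the abstract describes can be sketched in a few lines: estimate a baseline-only and a baseline-plus-target regression rule in one (training) sample, then compare their R² in a held-out (test) sample; the gain is the out-of-sample incremental predictive validity. The sketch below is a minimal illustration using simulated data and ordinary least squares only; the function names and data-generating values are my own assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def r2(y, yhat):
    """Out-of-sample R^2 of predictions yhat for criterion y."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def incremental_r2(xb_tr, xt_tr, y_tr, xb_te, xt_te, y_te):
    """Fit baseline-only and baseline+target OLS rules on the training
    sample; return the R^2 gain evaluated in the test sample."""
    def fit_predict(X_tr, X_te):
        X_tr1 = np.column_stack([np.ones(len(y_tr)), X_tr])  # add intercept
        X_te1 = np.column_stack([np.ones(len(y_te)), X_te])
        beta, *_ = np.linalg.lstsq(X_tr1, y_tr, rcond=None)
        return X_te1 @ beta
    r2_base = r2(y_te, fit_predict(xb_tr, xb_te))
    r2_full = r2(y_te, fit_predict(np.column_stack([xb_tr, xt_tr]),
                                   np.column_stack([xb_te, xt_te])))
    return r2_full - r2_base

# Simulated data: the criterion depends on both baseline and target scores,
# so the target test carries genuine incremental validity.
n = 400
base = rng.normal(size=n)
target = rng.normal(size=n)
y = 0.5 * base + 0.3 * target + rng.normal(scale=0.8, size=n)

half = n // 2  # first half: estimation sample; second half: evaluation sample
gain = incremental_r2(base[:half], target[:half], y[:half],
                      base[half:], target[half:], y[half:])
```

Because the simulated criterion loads on the target test, the out-of-sample gain should be positive; in small samples it will fluctuate around the population value, which is the in-sample versus out-of-sample discrepancy the paper studies.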
In: Journal of vocational behavior, Vol. 91, pp. 170-179
ISSN: 1095-9084