Increasing student access through aid: Differences in difference-in-differences estimates
In: Economics Letters, Vol. 238, p. 111711
ISSN: 0165-1765
SSRN
In: The Economic Journal: The Journal of the Royal Economic Society, Vol. 134, No. 659, pp. 985-1018
ISSN: 1468-0297
Abstract
This paper examines the relationship between p-hacking, publication bias and data-sharing policies. We collect 38,876 test statistics from 1,106 articles published in leading economics journals between 2002 and 2020. We find that, while data-sharing policies increase the provision of data, they do not decrease the extent of p-hacking and publication bias. Similarly, articles that use hard-to-access administrative data or third-party surveys, as compared to those that use easier-to-access (e.g., author-collected) data, do not differ in the extent of p-hacking and publication bias. Voluntary provision of data by authors on their home pages offers no evidence of reduced p-hacking.
In: American Economic Review, Vol. 112, No. 9, pp. 3137-3139
ISSN: 1944-7981
In Brodeur, Cook, and Heyes (2020) we present evidence that instrumental variable (and to a lesser extent difference-in-difference) articles are more p-hacked than randomized controlled trial and regression discontinuity design articles. We also find no evidence that (i) articles published in the top five journals are different; (ii) the "revise and resubmit" process mitigates the problem; (iii) things are improving through time. Kranz and Pütz (2022) apply a novel adjustment to address rounding errors. They successfully replicate our results with the exception of our shakiest finding: after adjusting for rounding errors, bunching of test statistics for difference-in-difference articles is now smaller around the 5 percent level (and coincidentally larger at the 10 percent level). (JEL A14, C12, C52)
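The rounding-error adjustment mentioned above can be illustrated with a small sketch (illustrative only, not the Kranz and Pütz procedure itself; the function name and example values are assumptions): a z-statistic derived from a rounded coefficient and standard error is only known up to an interval, which can span the 1.96 significance cutoff.

```python
def z_interval(coef, se, decimals):
    """Range of z-statistics consistent with a rounded coefficient and
    standard error (shows why rounding blurs tests for bunching)."""
    h = 0.5 * 10 ** (-decimals)  # half of the last reported digit
    lo = (abs(coef) - h) / (se + h)  # smallest |coef| over largest se
    hi = (abs(coef) + h) / (se - h)  # largest |coef| over smallest se
    return lo, hi

# A reported coefficient of 0.12 with standard error 0.06 (2 decimals)
# is consistent with any true z between about 1.77 and 2.27,
# an interval that spans the 1.96 cutoff.
print(z_interval(0.12, 0.06, 2))
```

Counting such an observation as "just above" or "just below" the threshold is therefore ambiguous, which is why the adjustment matters for bunching estimates.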
In: American Economic Review, Vol. 110, No. 11, pp. 3634-3660
ISSN: 1944-7981
The credibility revolution in economics has promoted causal identification using randomized control trials (RCT), difference-in-differences (DID), instrumental variables (IV) and regression discontinuity design (RDD). Applying multiple approaches to over 21,000 hypothesis tests published in 25 leading economics journals, we find that the extent of p-hacking and publication bias varies greatly by method. IV (and to a lesser extent DID) are particularly problematic. We find no evidence that (i) papers published in the Top 5 journals are different to others; (ii) the journal "revise and resubmit" process mitigates the problem; (iii) things are improving through time. (JEL A14, C12, C52)
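The bunching evidence described in this abstract can be sketched with a simple caliper-style test (a minimal illustration in the spirit of this literature, not the authors' exact procedure; the function name, window width, and sample values are assumptions): compare how many reported z-statistics fall just above versus just below the 1.96 threshold.

```python
import numpy as np

def caliper_test(z_stats, threshold=1.96, width=0.10):
    """Count |z|-statistics just below vs. just above a significance
    threshold; an excess share above suggests bunching consistent
    with p-hacking or publication bias."""
    z = np.abs(np.asarray(z_stats, dtype=float))
    below = int(np.sum((z >= threshold - width) & (z < threshold)))
    above = int(np.sum((z >= threshold) & (z < threshold + width)))
    total = below + above
    # With no bunching, a statistic landing in this narrow window is
    # roughly equally likely to fall on either side of the threshold.
    share_above = above / total if total else float("nan")
    return below, above, share_above

# Toy sample bunched just above 1.96
sample = [1.90, 1.97, 1.99, 2.01, 2.03, 1.88, 2.05, 1.98]
print(caliper_test(sample))  # (2, 6, 0.75)
```

In practice one would test the observed split against a binomial null and vary the window width; this sketch only shows the counting step.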
In: IZA Discussion Paper No. 15586
SSRN
In: IZA Discussion Paper No. 13255
SSRN
Working paper
In: IZA Discussion Paper No. 16912
SSRN
In: IZA Discussion Paper No. 11796
SSRN
In: Journal of Political Economy Microeconomics
ISSN: 2832-9368
SSRN