While the evolving nature and proliferation of UN peacekeeping operations in the post-Cold War period is well documented, we know less about how personnel are recruited for these missions. Furthermore, recent developments have rendered existing supply-side explanations for troop contributions less convincing. The increasing demand for personnel, along with stagnant UN reimbursement rates and the rising costs of participation that began during the 1990s, means that it is less attractive than ever for developing countries to offer their own troops to what have become increasingly ambitious operations. Yet, we see a large pool of developing countries continuing to do so. To address this puzzle, we argue that UN member states with strong preferences for establishing peacekeeping missions have begun using foreign aid as an inducement to help potential contributors overcome the collective action problem inherent in multilateral peacekeeping operations. We uncover strong empirical evidence that these 'pivotal states' strategically allocate foreign aid to persuade contributing states to boost their contributions, and to ensure that these missions continue to be staffed and maintained as costs rise, particularly during the post-1999 period. We also find that states are responsive to these financial inducements: foreign aid increases both the likelihood of contributing personnel and the size of a state's contribution. Theoretically, this article advances the scholarly understanding of international organizations and cooperation by illuminating an informal, extra-organizational strategy by which IOs can facilitate cooperation.
Due in large part to the proliferation of digitized text, much of it available for little or no cost from the Internet, political science research has experienced a substantial increase in the number of data sets and large-n research initiatives. As the ability to collect detailed information on events of interest expands, so does the need to efficiently sort through the volumes of available information. Automated document classification presents a particularly attractive methodology for accomplishing this task. It is efficient, widely applicable to a variety of data collection efforts, and flexible enough to be tailored to specific research needs. This article offers a holistic review of the application of automated document classification for data collection in political science research by discussing the process in its entirety. We argue that a two-stage support vector machine (SVM) classification process offers advantages over other well-known alternatives because SVMs are discriminative classifiers that effectively handle two primary attributes of textual data: high dimensionality and extreme sparseness. Evidence for this claim is presented through a discussion of the efficiency gains derived from using automated document classification on the Militarized Interstate Dispute 4 (MID4) data collection project.
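The two-stage pipeline described above can be sketched in a few lines of scikit-learn. This is an illustrative toy, not the authors' MID4 pipeline: the corpus, labels, and category names below are invented for demonstration, and TF-IDF features stand in for whatever representation a real project would choose. The first stage filters relevant documents; the second assigns a category to those that pass the filter.

```python
# Hypothetical two-stage SVM document classification sketch.
# Stage 1 separates relevant from irrelevant documents; stage 2
# labels only the documents stage 1 keeps.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus; a real effort would train on thousands of hand-coded reports.
docs = [
    "troops exchanged fire across the border",
    "navy seized a fishing vessel in disputed waters",
    "parliament passed a new budget bill",
    "stock markets rallied on strong earnings",
    "artillery shelling reported near the frontier",
    "central bank raised interest rates",
]
relevant = [1, 1, 0, 0, 1, 0]             # stage 1 labels: dispute-related?
category = ["clash", "seizure", "clash"]  # stage 2 labels, relevant docs only

# TF-IDF yields exactly the high-dimensional, sparse feature vectors
# that linear SVMs handle well.
stage1 = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(docs, relevant)
rel_docs = [d for d, r in zip(docs, relevant) if r == 1]
stage2 = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(rel_docs, category)

new_doc = ["border guards exchanged fire yesterday"]
if stage1.predict(new_doc)[0] == 1:   # keep only relevant documents,
    label = stage2.predict(new_doc)[0]  # then assign an incident type
    print(label)
```

Splitting the task this way keeps the second-stage classifier from ever seeing the large mass of irrelevant documents, which is where most of the efficiency gain in human coding comes from.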
In their latest research note in our ongoing dialogue about the Militarized Interstate Dispute (MID) dataset, Gibler, Miller, and Little (GML) reiterate their concerns about a high error rate in the MID dataset and challenge our prior replication studies, which showed that GML's recommended changes to the dataset caused few substantive changes in the core findings of two published studies. In this rejoinder, we offer further explanation of our decision-making, put GML's claims about the error rate in context, defend our replication methodology and results, and offer advice for MID dataset users.
In a recent article, Gibler, Miller, and Little (2016, hereafter GML) conduct an extensive review of the Militarized Interstate Dispute (MID) data between the years 1816 and 2001, highlighting possible inaccuracies and recommending a substantial number of changes to the data. They contend that, in several instances, analyses with their revised data lead to substantively different inferences. Here, we review GML's MID drop and merge recommendations and reevaluate the substantive impact of their changes. We agree with about 76 percent of the recommended drops and merges. However, we find that some of the purported overturned findings in GML's replications are due not to their data, but rather to the strategies they employ for replication. We reexamine these findings and conclude that the remaining differences in inference stemming from the variations in the MID data are rare and modest in scope.