Researchers face a tradeoff when applying latent variable models to time-series, cross-sectional data. Static models minimize bias but assume the data are temporally independent, resulting in a loss of efficiency. Dynamic models explicitly model the temporal structure of the data but smooth estimates of the latent trait across time, resulting in bias when the latent trait changes rapidly. We address this tradeoff by investigating a new approach for modeling and evaluating latent variable estimates: a robust dynamic model. The robust model minimizes bias while accommodating volatile changes in the latent trait. Simulations demonstrate that the robust model outperforms other models when the underlying latent trait is subject to rapid change, and performs equivalently to the dynamic model in the absence of volatility. We reproduce latent estimates from studies of judicial ideology and democracy. For judicial ideology, the robust model uncovers shocks in judicial voting patterns that the dynamic model had not previously identified. For democracy, the robust model provides more precise estimates of sudden institutional changes, such as the imposition of martial law in the Philippines (1972–1981) and the short-lived Saur Revolution in Afghanistan (1978). Overall, the robust model is a useful alternative to the standard dynamic model for modeling latent traits that change rapidly over time.
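The smoothing-versus-bias tradeoff described above can be illustrated numerically. The sketch below is not the authors' model; it is a minimal stand-in that assumes one common route to robustness: replacing the Gaussian random-walk prior on latent innovations (a quadratic penalty on first differences, which smooths) with a heavy-tailed, Student-t-style prior (a log penalty, which tolerates abrupt jumps). The series length, break point, noise level, and penalty weights (`lam`, `c`) are all illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Latent trait with an abrupt structural break (a "volatile change").
T = 80
truth = np.where(np.arange(T) < 40, 0.0, 2.0)
y = truth + rng.normal(0.0, 0.3, size=T)  # noisy observations of the trait

def fit(y, penalty, penalty_grad, lam):
    """MAP-style fit: squared data misfit plus a penalty on first differences."""
    def obj(x):
        d = np.diff(x)
        return np.sum((x - y) ** 2) + lam * np.sum(penalty(d))
    def grad(x):
        d = np.diff(x)
        g = 2.0 * (x - y)
        pg = lam * penalty_grad(d)
        g[1:] += pg   # each difference d_i = x[i+1] - x[i]
        g[:-1] -= pg
        return g
    return minimize(obj, y.copy(), jac=grad, method="L-BFGS-B").x

# Gaussian random-walk innovations -> quadratic difference penalty (smooths).
x_gauss = fit(y, lambda d: d ** 2, lambda d: 2.0 * d, lam=10.0)

# Heavy-tailed (Student-t-like) innovations -> log penalty (tolerates jumps):
# nearly quadratic for small moves, but the cost of a large jump saturates.
c = 0.1  # assumed scale of "ordinary" period-to-period change
x_robust = fit(y,
               lambda d: np.log1p((d / c) ** 2),
               lambda d: (2.0 * d / c ** 2) / (1.0 + (d / c) ** 2),
               lam=0.5)

# Error at the first period after the break: the Gaussian fit smears the
# jump across neighboring periods, while the robust fit tracks it.
err_gauss = abs(x_gauss[40] - truth[40])
err_robust = abs(x_robust[40] - truth[40])
print(err_gauss, err_robust)
```

Because the log penalty's pull saturates for large differences, the single large jump is cheap under the robust fit but expensive under the quadratic fit, which instead spreads the change over several periods; this reproduces, in miniature, the bias that the abstract attributes to standard dynamic models when the latent trait changes rapidly.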
In a recent article, Gibler, Miller, and Little (2016) (GML) conduct an extensive review of the Militarized Interstate Dispute (MID) data between the years 1816 and 2001, highlighting possible inaccuracies and recommending a substantial number of changes to the data. They contend that, in several instances, analyses with their revised data lead to substantively different inferences. Here, we review GML's recommended MID drops and merges and reevaluate the substantive impact of their changes. We agree with about 76 percent of the recommended drops and merges. However, we find that some of the purportedly overturned findings in GML's replications are due not to their data but to the strategies they employ for replication. We reexamine these findings and conclude that the remaining differences in inference stemming from the variations in the MID data are rare and modest in scope.