In the literature on electoral politics, full convergence of policy platforms is usually regarded as socially optimal. The reason is that risk-averse voters prefer a sure middle-of-the-road policy to a lottery over two extremes with the same expectation. In this paper we study the normative implications of convergence in a simple model of electoral competition in which parties are uncertain about voters' preferences. We show that under such incomplete information, voters may prefer some degree of policy divergence. The intuition is that policy divergence enables voters to correct policies based on a mistaken perception of their preferences.
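A stylized two-point example, an illustration of the abstract's intuition rather than the paper's actual model, makes the point concrete:

```latex
% Illustrative assumptions: one decisive voter with quadratic loss
% and an ideal point unknown to the parties.
\theta \in \{-1, +1\} \text{ with equal probability}, \qquad
u(x) = -(x - \theta)^2 .
% Full convergence at the expected ideal point 0 yields u = -1 for sure.
% Divergent platforms at -1 and +1 let the voter elect the closer
% party, yielding u = 0: the voter strictly prefers divergence.
```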
Among the many lawsuits filed during the unprecedented California gubernatorial recall election, the one that garnered the most attention was the case challenging the use of punch card voting in six California counties. In that suit, plaintiffs argued that the election should be delayed because the allegedly higher error rates of punch card voting compared to other systems in California denied voters in punch card counties their federal equal protection rights and also violated the Voting Rights Act. Professor Hasen argues that the original Ninth Circuit decision delaying the election until punch card voting could be eliminated was correct, and that the later en banc decision allowing the election to go forward with the selective use of punch cards was unfortunate, though understandable given the imminence of the recall election. Hasen argues that under the Supreme Court's decision in Bush v. Gore, the selective use of punch card voting constitutes an equal protection violation; the en banc court did not reach the issue. Hasen concludes that the en banc decision does not preclude other plaintiffs from bringing similar challenges in the future, and that the window remains open for voting reform lawsuits until the Supreme Court interprets Bush v. Gore otherwise.
Examines poll reporting on the television evening news during the US presidential election, focusing on whether sampling error is reported and whether that information is used correctly to interpret and present poll results.
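The sampling-error calculation at issue is the familiar margin of error for a sample proportion; a minimal sketch (the 95 percent z-value and the illustrative poll numbers are assumptions, not figures from the article):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical evening-news poll: 48% support among n = 1,000 respondents.
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe:.1%}")  # roughly +/- 3.1 points

# Correct use of this information: a 48%-45% "lead" lies within
# sampling error, because the 3-point gap is smaller than the combined
# uncertainty of the two estimates.
```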
Differences between epistemic & procedural approaches toward democracy are studied to determine whether proceduralist notions of democracy pose epistemic difficulties for existing social decision rules. An overview of the different forms of epistemic & procedural perspectives toward democracy is presented. After reviewing the Marquis de Condorcet's (1785) understanding of the jury system, the question of whether certain social decision rules are better at tracking truths than others is examined. Several hypothetical cases of plurality voting that involve different numbers of voting options are subsequently analyzed. The analysis reveals that all of the social decision rules examined are epistemically sound; moreover, each possesses epistemic value, especially when voters are reliable in selecting the correct outcome & when the electorate is large. It is concluded that both epistemic & procedural forms of democracy possess epistemic merit.
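A minimal Monte Carlo sketch of the kind of plurality case analyzed here; the specific error model (a voter picks the correct option with probability p and otherwise errs uniformly over the wrong options) is an assumption for illustration:

```python
import random

def plurality_correct_prob(n_voters: int, n_options: int, p: float,
                           trials: int = 5_000) -> float:
    """Estimate the probability that plurality rule selects the correct
    option when each voter votes for it with probability p and
    otherwise votes uniformly among the wrong options."""
    hits = 0
    for _ in range(trials):
        votes = [0] * n_options        # option 0 is the 'correct' one
        for _ in range(n_voters):
            if random.random() < p:
                votes[0] += 1
            else:
                votes[random.randrange(1, n_options)] += 1
        # count a hit only when the correct option wins outright
        if votes[0] == max(votes) and votes.count(votes[0]) == 1:
            hits += 1
    return hits / trials

# With 3 options, reliability p = 0.40 barely beats chance (1/3),
# yet a large electorate makes the plurality verdict nearly certain:
for n in (11, 101, 1001):
    print(n, round(plurality_correct_prob(n, 3, 0.40), 3))
```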
In: Political Analysis: the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Volume 11, Issue 3, pp. 275-288.
This article builds a nonparametric method for inference from roll-call cohesion scores. Cohesion scores have been a staple of legislative studies since the publication of Rice's 1924 thesis. Unfortunately, little effort has been dedicated to understanding their statistical properties or relating them to existing models of legislative behavior. I show how a common use of cohesion scores, testing for distinct voting blocs, is severely biased toward Type I error, practically guaranteeing significant findings even when the null hypothesis is correct. I offer a nonparametric method—permutation analysis—that solves the bias problem and provides for simple and intuitive inference. I demonstrate the method with an examination of roll-call voting data from the Brazilian National Congress.
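A minimal sketch of a permutation test on Rice cohesion scores; the test statistic below (mean bloc cohesion across roll calls) and the data layout are assumptions for illustration and need not match the article's exact construction:

```python
import random

def rice_cohesion(votes):
    """Rice (1924) cohesion on one roll call: |yes% - no%|, in [0, 1]."""
    yes = sum(votes)
    return abs(2 * yes - len(votes)) / len(votes)

def mean_bloc_cohesion(matrix, bloc):
    """Average Rice score of a bloc across all roll calls.
    matrix[i][j] = 1 if legislator i voted yea on roll call j, else 0."""
    n_calls = len(matrix[0])
    return sum(rice_cohesion([matrix[i][j] for i in bloc])
               for j in range(n_calls)) / n_calls

def permutation_pvalue(matrix, bloc, draws=2_000):
    """Share of randomly drawn blocs of the same size that are at least
    as cohesive as the observed bloc: the permutation analogue of a
    one-sided test for a distinct voting bloc."""
    observed = mean_bloc_cohesion(matrix, bloc)
    ids = list(range(len(matrix)))
    hits = 0
    for _ in range(draws):
        random.shuffle(ids)
        if mean_bloc_cohesion(matrix, ids[:len(bloc)]) >= observed:
            hits += 1
    return hits / draws
```

Because the reference distribution is built from the data themselves, the test is exact under the null that bloc labels are exchangeable, which is what corrects the Type I error bias of the naive approach.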
Under the assumptions of the standard Condorcet Jury Theorem, majority verdicts are virtually certain to be correct if the competence of voters is greater than one-half, and virtually certain to be incorrect if voter competence is less than one-half. But which is the case? Here we turn the Jury Theorem on its head to provide one way of addressing that question. The same logic implies that, if 60 percent of voters supported one proposition and 40 percent the other, then average voter competence must be either 0.60 or 0.40. We still have to decide which, but limiting the choice to those two values is a considerable aid in doing so.
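The inference can be made precise as a maximum-likelihood statement, assuming the theorem's standard binomial setup (independent voters of common competence p), an assumption of this sketch rather than a detail given in the abstract:

```latex
% Each of n voters is independently correct with probability p.
% If k voters back the objectively correct option, the likelihood is
L(p) = \binom{n}{k} \, p^{k} (1 - p)^{n - k},
\qquad \hat{p} = \frac{k}{n}.
% A 60-40 split gives \hat{p} = 0.60 if the majority is right
% (k/n = 0.60) and \hat{p} = 0.40 if it is wrong (k/n = 0.40),
% hence the two candidate values for average competence.
```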
This paper analyzes the extent to which voter behavior in city formation elections supports Tiebout's (1956) hypothesis that residential sorting facilitates efficiency of local service provision. It develops a two-stage model of city formation to distinguish agenda setting from voter outcomes on city formation proposals. Logit analysis is used to analyze voting in 71 city formation elections, incorporating Heckman's two-stage procedure to correct for self-selection into local referenda (see the sketch below). Community fiscal & demographic factors influence agenda setting more than voting behavior: wealthier communities in high-growth counties are more likely to propose formation of a city. In contrast, community characteristics have little influence on electoral outcomes, suggesting that boundedly rational voters rely on information heuristics. Although reduction of diversity did not appear to motivate city formation, sorting on residential income, land use preferences, & other demographic variables may facilitate relative efficiency of service provision.
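A hedged sketch of the two-stage selection correction described above, on synthetic data. All variable names and coefficients are invented for illustration, and appending the inverse Mills ratio to a logit second stage is a common approximation rather than the textbook (linear) Heckman second stage; the paper's exact specification may differ:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
Z = sm.add_constant(rng.normal(size=(n, 2)))  # agenda-setting covariates
X = sm.add_constant(rng.normal(size=(n, 2)))  # voting covariates
# Synthetic outcomes: did a community propose a city, and did it pass?
proposed = (Z @ [0.2, 0.8, -0.5] + rng.normal(size=n)) > 0
passed = (X @ [0.1, 0.6, 0.3] + rng.normal(size=n)) > 0

# Stage 1: probit for selection onto the ballot.
stage1 = sm.Probit(proposed.astype(int), Z).fit(disp=0)
xb = stage1.fittedvalues                 # linear index Z'gamma
mills = norm.pdf(xb) / norm.cdf(xb)      # inverse Mills ratio

# Stage 2: vote model, estimated only where a vote occurred,
# with the Mills ratio added to absorb the selection effect.
obs = proposed
stage2 = sm.Logit(passed[obs].astype(int),
                  np.column_stack([X[obs], mills[obs]])).fit(disp=0)
print(stage2.params)
```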
This paper corrects a long-standing error in elementary geometrical constructions that involve collective choices in multidimensional settings. The seemingly innocuous assumption of separability among arguments in individual utility functions does not imply symmetric indifference contours in shared goods space. Shared goods necessarily become gross substitutes when resource or budgetary constraints are introduced. The corrected construction suggests that issue-by-issue voting is less efficacious than is indicated in the conventional analysis.
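One way to see the substitutability claim, under an assumed equal-tax-share financing scheme (an illustration, not necessarily the paper's own construction):

```latex
% Direct utility is fully separable in shared goods x, y and in
% private consumption; the budget link c = m - (p_x x + p_y y)/N
% is what destroys separability of the induced preferences.
V(x, y) = f(x) + g(y) + h\!\left(m - \frac{p_x x + p_y y}{N}\right),
\qquad
\frac{\partial^2 V}{\partial x \, \partial y}
  = h''(\cdot) \, \frac{p_x p_y}{N^2} < 0 \quad (h \text{ concave}).
% The marginal utility of each shared good falls as the other expands,
% so the goods behave as substitutes, and the induced indifference
% contours in (x, y) need not be symmetric about the ideal point.
```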
In a recent paper, Banks (2000), adopting the framework of our model (Groseclose & Snyder 1996), derives several new & noteworthy results. In addition, he provides a counterexample to our proposition 4. Here we explain the error in our proposition but note that we can correct it easily if we invoke an additional assumption: In equilibrium the winning vote buyer constructs a nonflooded coalition, that is, she does not bribe every member of her coalition. We conclude with a brief discussion of the substantive implications of Banks's proposition 1; we note that it provides additional support for our general claim that minimal winning coalitions should be rare in a vote-buying game.
Lambastes Jeffrey M. Stonecash's "Reconsidering the Trend in Incumbent Vote Percentages in House Elections" (2003). It is contended that his alternative measure of the average incumbent's vote share, designed to correct a distortion generated by omitting uncontested incumbents, creates a far more serious distortion. Rerunning Stonecash's analysis leads to results that contradict his central argument & suggest that there is no need to abandon the prevailing view that incumbent vote margins have increased since 1946. Previous research on the rising incumbency advantage is noted, & a particular inaccuracy in Stonecash's article concerning David Mayhew's (1974) assertions is identified.
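The measurement dispute turns on simple arithmetic: how uncontested incumbents enter the average changes the level, and potentially the trend, of the series. A toy illustration with invented numbers, not real House data:

```python
# Two-party vote shares for five contested incumbents (invented values)
contested = [0.58, 0.62, 0.55, 0.64, 0.60]
# Two incumbents with no major-party opponent
uncontested = [1.00, 1.00]

omit = sum(contested) / len(contested)            # drop uncontested races
count_full = (sum(contested) + sum(uncontested)) / (
    len(contested) + len(uncontested))            # score them as 100%
print(f"omitting uncontested: {omit:.3f}")        # 0.598
print(f"counting them as 1.0: {count_full:.3f}")  # 0.713
```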
I develop a model of decision making in juries when there is uncertainty about jurors' preferences. I provide a characterization of the equilibrium strategy under any voting rule and show that nonunanimous rules are asymptotically efficient: large juries make the correct decision with probability close to one. My analysis also demonstrates that under the unanimity rule, large juries almost never convict the defendant. This last result contrasts markedly with the literature and suggests that the unanimity rule can protect the innocent only at the price of acquitting the guilty.
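A sincere-voting benchmark shows the mechanical side of the unanimity result; the paper's own result concerns equilibrium (strategic) voting under preference uncertainty, so this sketch, with an assumed common signal precision q, conveys only the intuition:

```python
from math import comb

def convict_prob(n: int, q: float, rule: str) -> float:
    """P(convict | defendant guilty) with sincere voting: each juror
    votes guilty iff their private signal says guilty, and signals are
    independently correct with probability q."""
    threshold = n if rule == "unanimity" else n // 2 + 1
    return sum(comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(threshold, n + 1))

# Majority verdicts improve with jury size; unanimous convictions
# vanish, even against guilty defendants.
for n in (3, 12, 51):
    print(n,
          round(convict_prob(n, 0.7, "majority"), 4),
          round(convict_prob(n, 0.7, "unanimity"), 8))
```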