Preface -- 1. Introduction to statistical hypothesis testing -- 2. Z-test for one-sample mean -- 3. t-test for one-sample mean -- 4. Z-test for one-sample proportion -- 5. Chi-square test for one-sample variance -- 6. The observed significance level (P-value) procedure -- 7. Interval estimation for one population -- 8. The interval estimation procedure: hypothesis testing for one population -- 9. Hypothesis testing for the difference between two populations -- 10. Interval estimation for the difference between two populations -- 11. The interval estimation procedure: hypothesis testing for two populations -- Appendix -- Index.
Scholars in comparative politics and international relations routinely evaluate causal hypotheses by referring to counterfactual cases where a hypothesized causal factor is supposed to have been absent. The methodological status and the viability of this very common procedure are unclear and are worth examining. How does the strategy of counterfactual argument relate, if at all, to methods of hypothesis testing based on the comparison of actual cases, such as regression analysis or Mill's Method of Difference? Are counterfactual thought experiments a viable means of assessing hypotheses about national and international outcomes, or are they methodologically invalid in principle? The paper addresses the first question in some detail and begins discussion of the second. Examples from work on the causes of World War I, the nonoccurrence of World War III, social revolutions, the breakdown of democratic regimes in Latin America, and the origins of fascism and corporatism in Europe illustrate the use, problems, and potential of counterfactual argument in small-N-oriented political science research.
Abstract: We introduce an expected-value theory of linguistic modality that makes reference to expected utility and a likelihood-based confirmation measure for deontics and epistemics, respectively. The account is a probabilistic semantics for deontics and epistemics, yet it proposes that deontics and epistemics share a common core modal semantics, as in traditional possible-worlds analyses of modality. We argue that this account is not only theoretically advantageous but also has far-reaching empirical consequences. In particular, we predict modal versions of reasoning fallacies from the heuristics and biases literature. Additionally, we derive the modal semantics in an entirely transparent manner, as it is based on the compositional semantics of Korean modal expressions that are morphosyntactically decomposed into a conditional and an evaluative predicate.
In: Doctoratus Philosophia, DPhil thesis, Trinity College Library, Dublin. Ch. 3, Ch. 4, and Ch. 5 were published in the public domain in 2004, 2005, and 2006, respectively.
Abstract: Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: In a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha _1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha _1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x, but the correct alternative hypothesis is that y is cointegrated with at least one—but not necessarily more than one—of the x's. A significant $\alpha _1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
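The scenario the abstract describes can be illustrated with a minimal numpy sketch (this is not the authors' simulation code; the data-generating process and sample size are illustrative assumptions): y is cointegrated with x1 but not with x2, yet the t-statistic on the error correction coefficient $\alpha _1^\ast$ in a GECM with both regressors is still strongly significant.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x1 = np.cumsum(rng.normal(size=T))   # I(1) random walk, cointegrated with y
x2 = np.cumsum(rng.normal(size=T))   # I(1) random walk, NOT cointegrated with y
y = x1 + rng.normal(size=T)          # y - x1 is stationary

# GECM: dy_t = a0 + alpha1* y_{t-1} + b1 x1_{t-1} + b2 x2_{t-1}
#              + g1 dx1_t + g2 dx2_t + e_t
dy = np.diff(y)
X = np.column_stack([
    np.ones(T - 1),
    y[:-1],                     # lagged level of y; its coefficient is alpha1*
    x1[:-1], x2[:-1],           # lagged levels of the regressors
    np.diff(x1), np.diff(x2),   # contemporaneous first differences
])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)

# Conventional OLS standard errors and the t-statistic on alpha1*
resid = dy - X @ beta
sigma2 = resid @ resid / (len(dy) - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_alpha1 = beta[1] / se[1]
print(t_alpha1)  # strongly negative, even though x2 is not cointegrated with y
```

The point of the abstract is exactly this asymmetry: rejecting the null on $\alpha _1^\ast$ licenses only the conclusion that y is cointegrated with at least one regressor, not with all of them.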
In: State Politics & Policy Quarterly: the official journal of the State Politics and Policy Section of the American Political Science Association, Volume 13, Issue 2, pp. 225-243
Abstract: Scholars of social networks often rely on summary statistics to measure and compare the structures of their networks of interest. However, measuring the uncertainty inherent in these summaries can be challenging, thus making hypothesis testing for network summaries difficult. Computational and nonparametric procedures can overcome these difficulties by allowing researchers to generate reference distributions for comparison directly from their data. In this research, I demonstrate the use of nonparametric hypothesis testing in networks using the popular network summary statistic of network modularity. I provide a method based on permutation testing for assessing whether a particular network modularity score is larger than a researcher might expect due to random chance. I then present a simulation study of network modularity and the permutation-based reference distribution I propose. Finally, I provide an empirical example of this technique using cosponsorship networks from U.S. state legislatures.
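The permutation-test idea can be sketched in a few lines of numpy (a minimal illustration, not the article's code; the toy two-block network, its edge probabilities, and the number of permutations are assumptions): compute Newman modularity for the observed partition, then rebuild it under many random relabelings of the nodes to get a reference distribution.

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q for an undirected adjacency matrix and node labels."""
    k = A.sum(axis=1)                       # degrees
    two_m = k.sum()                         # 2m = twice the edge count
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

rng = np.random.default_rng(1)
n = 30
labels = np.repeat([0, 1], n // 2)          # hypothetical two-community partition
# Toy planted-partition network: dense within blocks, sparse between
p = np.where(labels[:, None] == labels[None, :], 0.5, 0.05)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric, no self-loops

q_obs = modularity(A, labels)

# Permutation test: shuffling the labels keeps group sizes fixed but breaks
# any association between communities and edges, giving the null distribution
q_null = np.array([modularity(A, rng.permutation(labels)) for _ in range(999)])
p_value = (1 + (q_null >= q_obs).sum()) / (1 + len(q_null))
```

The one-sided p-value asks exactly the question in the abstract: is the observed modularity larger than a researcher should expect by chance for a partition of these group sizes on this network?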
1.HYPOTHESIS TESTING AND MODEL SELECTION p. 1 --1.1Introduction, p.1 --1.2Standard Procedure of Hypothesis Testing, p.4 --1.3Model Selection, p.9 --1.4Purpose and Plan of the Book, p.10 --2.HYPOTHESIS TESTING: CRITICISMS AND ALTERNATIVES p.12 --2.1Hypothesis Testing and Its Discontents, p.12 --2.2Uses of Hypothesis Tests, p.13 --2.3Criticisms of Conventional Hypothesis Testing, p.17 --2.4Implications of the Criticisms, p.26 --2.5Alternatives to Conventional Tests, p.27 --2.6Examples, p.29 --2.7Summary and Conclusions, p.41 --Recommended Reading, p.42 --3.THE CLASSICAL APPROACH p.43 --3.1Random Sampling and Classical Tests, p.43 --3.2Two Approaches to Hypothesis Tests, p.46 --3.3Confidence Intervals, p.51 --3.4Choosing a Significance Level, p.53 --3.5Comparison to Conventional Practice, p.61 --3.6Implications of Choosing an α Level, p.63 --3.7Other Kinds of Errors, p.65 --3.8Example of Choosing an α Level, p.67 --3.9Evaluation of Criticisms, p.68 --3.10Summary and Conclusions, p.72 --Recommended Reading, p.74 --4.BAYESIAN HYPOTHESIS TESTS p.75 --4.1Bayes's Theorem, p.75 --4.2Bayesian Estimation, p.77 --4.3Bayes Factors, p.80 --4.4Bayesian Confidence Intervals and Bayes Factors, p.88 --4.5Approaches to Bayesian Hypothesis Testing, p.91 --4.6The Unit Information Prior, p.92 --4.7Limits on Bayes Factors, p.97 --4.8Bayes Factors for Multiple Parameters, p.102 --4.9Summary and Conclusions, p.105 --Recommended Reading, p.106