On Algorithms for Restricted Maximum Likelihood Estimation
In: Mathematics Preprint Archive, Vol. 2002, Issue 11, pp. 124-156
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 2006, pp. 1-13
ISSN: 1532-7612
We discuss alternative approaches to estimation from cross-sectional categorical data in the presence of misclassification. Two parameterisations of the misclassification model are reviewed. The first employs misclassification probabilities and leads to moment-based inference. The second employs calibration probabilities and leads to maximum likelihood inference. We show that maximum likelihood estimation can alternatively be performed by employing misclassification probabilities together with a missing-data specification. As an alternative to maximum likelihood estimation we propose a quasi-likelihood parameterisation of the misclassification model, in which an explicit definition of the likelihood function is avoided and a different way of resolving the missing-data problem is provided.
Variance estimation for the alternative point estimators is considered. The different approaches are illustrated using real data from the UK Labour Force Survey and simulated data.
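The missing-data route to maximum likelihood mentioned above can be sketched with an EM iteration that treats the true categories as the missing data. This is a minimal illustration, not the paper's implementation: the function name is invented, and a known misclassification matrix is assumed.

```python
import numpy as np

def em_misclassified(counts, M, n_iter=500):
    """ML estimate of the true category probabilities p from observed
    category counts, given a known misclassification matrix M with
    M[j, k] = P(observe category j | true category k).
    EM treats the true categories as missing data."""
    counts = np.asarray(counts, dtype=float)
    K = M.shape[1]
    p = np.full(K, 1.0 / K)                     # uniform start
    for _ in range(n_iter):
        # E-step: posterior P(true = k | observed = j)
        joint = M * p                           # joint[j, k] = M[j, k] * p[k]
        post = joint / joint.sum(axis=1, keepdims=True)
        # M-step: expected true-category counts, renormalised
        p = counts @ post / counts.sum()
    return p
```

With an identity misclassification matrix the estimate reduces to the observed proportions, which is a convenient sanity check.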
In: Structural equation modeling: a multidisciplinary journal, Vol. 10, Issue 3, pp. 352-379
ISSN: 1532-8007
In: Risk analysis: an international journal, Vol. 24, Issue 5, pp. 1143-1151
ISSN: 1539-6924
The dose to human and nonhuman individuals inflicted by anthropogenic radiation is an important issue in international and domestic policy. The current paradigm for nonhuman populations asserts that if the dose to the maximally exposed individuals in a population is below a certain criterion (e.g., <10 mGy d⁻¹) then the population is adequately protected. Currently, there is no consensus in the regulatory community as to the best statistical approach. Statistics currently considered include the maximum likelihood estimator for the 95th percentile of the sample mean and the sample maximum. Recently, the investigators have proposed the use of the maximum likelihood estimate of a very high quantile as an estimate of dose to the maximally exposed individual. In this study, we compare all of the above-mentioned statistics to an estimate based on extreme value theory. To determine and compare the bias and variance of these statistics, we use Monte Carlo simulation techniques, in a procedure similar to a parametric bootstrap. Our results show that a statistic based on extreme value theory has the least bias of those considered here, but requires reliable estimates of the population size. We recommend establishing the criterion based on what would be considered acceptable if only a small percentage of the population exceeded the limit, and hence recommend using the maximum likelihood estimator of a high quantile when reliable estimates of the population size are not available.
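The kind of Monte Carlo comparison described can be sketched for lognormally distributed doses as follows. The lognormal assumption, the choice of the n/(n+1) quantile as the "maximally exposed individual" target, and all names are illustrative, not taken from the study.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)

def ml_quantile_lognormal(sample, level):
    """ML plug-in estimate of a lognormal quantile:
    exp(mu_hat + z * sigma_hat), where mu_hat and sigma_hat are the
    MLEs (note: std with ddof=0) of the log-dose mean and sd."""
    logs = np.log(sample)
    z = NormalDist().inv_cdf(level)
    return np.exp(logs.mean() + z * logs.std())

def simulate_bias(mu, sigma, n_pop, n_reps=2000):
    """Parametric-bootstrap-style comparison of the sample maximum and
    the ML high-quantile estimator as estimates of the dose to the
    maximally exposed individual, taken here to be the
    n_pop/(n_pop+1) population quantile."""
    level = n_pop / (n_pop + 1)
    target = np.exp(mu + NormalDist().inv_cdf(level) * sigma)
    max_est = np.empty(n_reps)
    ml_est = np.empty(n_reps)
    for r in range(n_reps):
        doses = rng.lognormal(mu, sigma, n_pop)
        max_est[r] = doses.max()
        ml_est[r] = ml_quantile_lognormal(doses, level)
    return max_est.mean() - target, ml_est.mean() - target
```

Returning the two biases side by side mirrors the bias-and-variance comparison the abstract describes.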
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 58, Issue 4, pp. 440-465
ISSN: 1467-9574
The notion of cointegration has led to a renewed interest in the identification and estimation of structural relations among economic time series. This paper reviews the different approaches that have been put forward in the literature for identifying cointegrating relationships and imposing (possibly over‐identifying) restrictions on them. Next, various algorithms to obtain (approximate) maximum likelihood estimates and likelihood ratio statistics are reviewed, with an emphasis on so‐called switching algorithms. The implementation of these algorithms is discussed and illustrated using an empirical example.
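A minimal sketch of a switching algorithm of the kind reviewed, for the reduced-rank regression dY ≈ Ylag · beta · alpha', assumes spherical errors so that each update is a plain least-squares step; the full Gaussian ML version would weight by the error covariance. All names are illustrative.

```python
import numpy as np

def switching_rrr(dY, Ylag, r, n_iter=100):
    """Switching algorithm for reduced-rank regression
    dY ~ Ylag @ beta @ alpha.T with rank(beta @ alpha.T) = r,
    assuming spherical errors: alternate exact least-squares
    updates of alpha (given beta) and beta (given alpha)."""
    k = dY.shape[1]
    rng = np.random.default_rng(1)
    beta = rng.standard_normal((k, r))          # random start
    # unrestricted OLS coefficient matrix, reused in every beta update
    G = np.linalg.inv(Ylag.T @ Ylag) @ Ylag.T @ dY
    for _ in range(n_iter):
        Z = Ylag @ beta                                  # current regressors
        alpha = np.linalg.lstsq(Z, dY, rcond=None)[0].T  # k x r loadings
        beta = G @ alpha @ np.linalg.inv(alpha.T @ alpha)
    return alpha, beta
```

Because each step is an exact conditional minimiser, the fit criterion is monotonically non-increasing, which is the property that makes switching algorithms attractive for (approximate) ML estimation.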
In: Twin research and human genetics: the official journal of the International Society for Twin Studies (ISTS) and the Human Genetics Society of Australasia, Vol. 9, Issue 3, pp. 377-392
ISSN: 1839-2628
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 56, Issue 1, pp. 2-22
ISSN: 1467-9574
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
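As an illustration of Gibbs sampling with data augmentation, here is a minimal sketch for a binary probit model in the style of Albert and Chib: conditioning on the latent utilities makes both full conditionals standard distributions. A flat prior on the coefficients is assumed for simplicity; the function names are invented.

```python
import numpy as np
from statistics import NormalDist

ND = NormalDist()
rng = np.random.default_rng(0)

def truncnorm_draw(mean, positive):
    """One draw from N(mean, 1) truncated to (0, inf) if positive is
    True, else to (-inf, 0), via inversion of the normal CDF."""
    u = rng.uniform(1e-12, 1.0)
    p0 = ND.cdf(-mean)                  # P(N(mean, 1) <= 0)
    if positive:
        return mean + ND.inv_cdf(p0 + u * (1.0 - p0))
    return mean + ND.inv_cdf(u * p0)

def probit_gibbs(y, X, n_draws=1500, burn=500):
    """Data augmentation for probit: alternate between the latent
    utilities z | beta, y (truncated normals) and beta | z, which
    is Gaussian under a flat prior."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(k)
    draws = []
    for it in range(n_draws):
        m = X @ beta
        z = np.array([truncnorm_draw(m[i], y[i] == 1) for i in range(n)])
        beta_hat = XtX_inv @ (X.T @ z)              # conditional posterior mean
        beta = beta_hat + chol @ rng.standard_normal(k)
        if it >= burn:
            draws.append(beta.copy())
    return np.array(draws)
```

The point of the abstract shows up clearly here: the probit likelihood integrates over z, but conditional on z every step is a textbook draw.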
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 58, Issue 2, pp. 197-219
ISSN: 1467-9574
For a multilevel model with two levels and only a random intercept, the quality of different estimators of the random intercept is examined. Analytical results are given for the marginal model interpretation, where negative estimates of the variance components are allowed for. Except for four or five level-2 units, the Empirical Bayes Estimator (EBE) has a lower average Bayes risk than the Ordinary Least Squares Estimator (OLSE). The EBEs based on restricted maximum likelihood (REML) estimators of the variance components have a lower Bayes risk than the EBEs based on maximum likelihood (ML) estimators. For the hierarchical model interpretation, where estimates of the variance components are restricted to being positive, Monte Carlo simulations were done. In this case the EBE has a lower average Bayes risk than the OLSE, also for four or five level-2 units. For large numbers of level-1 (30) or level-2 units (100), the performances of REML-based and ML-based EBEs are comparable. For small numbers of level-1 (10) and level-2 units (25), the REML-based EBEs have a lower Bayes risk than ML-based EBEs only for high intraclass correlations (0.5).
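The shrinkage structure behind the EBE can be sketched as follows, taking the variance components as given (in practice they would come from ML or REML estimation): the OLSE of a group's intercept is its raw group mean, while the EBE shrinks it toward the grand mean. The function name is invented for illustration.

```python
import numpy as np

def empirical_bayes_intercepts(groups, tau2, sigma2):
    """EBEs of the random intercepts in a two-level random-intercept
    model.  The OLSE of group j's intercept is its raw mean; the EBE
    shrinks it toward the grand mean with reliability weight
    lambda_j = tau2 / (tau2 + sigma2 / n_j)."""
    means = np.array([np.mean(g) for g in groups])
    sizes = np.array([len(g) for g in groups])
    grand = np.concatenate(groups).mean()
    lam = tau2 / (tau2 + sigma2 / sizes)   # shrinkage toward 1 as n_j grows
    return grand + lam * (means - grand)
```

The two limiting cases make the behaviour transparent: as the intercept variance tau2 grows the EBE approaches the OLSE, and as it shrinks to zero every group estimate collapses to the grand mean.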
In: Twin research and human genetics: the official journal of the International Society for Twin Studies (ISTS) and the Human Genetics Society of Australasia, Vol. 8, Issue 3, pp. 232-244
ISSN: 1839-2628
The role of co-twin dependence (twins' closeness or reliance on the co-twin) was examined as a moderator of genetic and environmental influences on alcohol use in adolescence and early adulthood in a large longitudinal population-based study of Finnish twins (FinnTwin16). The associations between co-twin dependence and alcohol use were studied first at an individual level in adolescence (n = 3362) and early adulthood (n = 2912). Then, maximum likelihood models were fit to the two waves of data from same-sex twin pairs to assess the differences and changes in genetic and environmental influences on alcohol use (abstinence, drinking frequency, intoxication frequency); N = 1342 pairs in adolescence, and N = 1078 pairs in early adulthood. Overall, no significant associations were found between co-twin dependence and individual alcohol use. However, co-twin dependence importantly modulated genetic effects on drinking habits, especially in adolescence, but also in early adulthood. Co-twin-dependent twins reported greater similarity in their alcohol-related behavior across all alcohol-use measures at both time points, and the role of genes and environments varied according to co-twin dependence. Shared environmental factors explained most of the variation in drinking among co-twin-dependent twins in adolescence and contributed to drinking to intoxication during early adulthood. In contrast, among co-twin-independent twin pairs, genetic variance contributed significantly to all alcohol-use measures at both time points. An interdependent sibling relationship is an important modifier of drinking habits, and it appears to reduce the impact of inherited liabilities on alcohol-related behavior especially in adolescence.
In: Political analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Vol. 13, Issue 4, pp. 301-326
ISSN: 1476-4989
Nearly all hierarchical linear models presented to political science audiences are estimated using maximum likelihood under a repeated sampling interpretation of the results of hypothesis tests. Maximum likelihood estimators have excellent asymptotic properties but less than ideal small sample properties. Multilevel models common in political science have relatively large samples of units like individuals nested within relatively small samples of units like countries. Often these level-2 samples will be so small as to make inference about level-2 effects uninterpretable in the likelihood framework from which they were estimated. When analysts do not have enough data to make a compelling argument for repeated sampling based probabilistic inference, we show how visualization can be a useful way of allowing scientific progress to continue despite lack of fit between research design and asymptotic properties of maximum likelihood estimators.

Somewhere along the line in the teaching of statistics in the social sciences, the importance of good judgment got lost amid the minutiae of null hypothesis testing. It is all right, indeed essential, to argue flexibly and in detail for a particular case when you use statistics. Data analysis should not be pointlessly formal. It should make an interesting claim; it should tell a story that an informed audience will care about, and it should do so by intelligent interpretation of appropriate evidence from empirical measurements or observations. (Abelson, 1995, p. 2)

With neither prior mathematical theory nor intensive prior investigation of the data, throwing half a dozen or more exogenous variables into a regression, probit, or novel maximum-likelihood estimator is pointless. No one knows how they are interrelated, and the high-dimensional parameter space will generate a shimmering pseudo-fit like a bright coat of paint on a boat's rotting hull. (Achen, 1999, p. 26)
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 54, Issue 1, pp. 47-66
ISSN: 1467-9574
In the general vector autoregressive process AR(p), multivariate least squares estimation (LSE)/maximum likelihood estimation (MLE) of a subset of the parameters is considered when the complementary subset is suspected to be redundant. This may be viewed as a special case of linear constraints on the autoregressive parameters. We incorporate this nonsample information into the estimation process and propose preliminary test and Stein-type estimators for the target subset of parameters. Under local alternatives their asymptotic properties are investigated and compared with those of the unrestricted and restricted LSE. The dominance picture of the estimators is presented.
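The two estimator families can be sketched generically as functions of the unrestricted and restricted estimates and the test statistic for the restriction. The particular Stein shrinkage factor 1 - (k2 - 2)/W is one common form, assumed here for illustration; the function name is invented.

```python
import numpy as np

def pte_and_stein(beta_u, beta_r, wald, crit, k2):
    """Preliminary-test estimator (PTE) and a Stein-type shrinkage
    estimator for a parameter subset suspected to be redundant.
    beta_u / beta_r: unrestricted and restricted estimates;
    wald: Wald statistic (> 0) for the restriction; crit: chi-square
    critical value; k2: number of restrictions (k2 > 2 for Stein)."""
    # PTE: keep the restricted estimate unless the test rejects
    pte = beta_r if wald <= crit else beta_u
    # Stein-type: shrink smoothly instead of the all-or-nothing choice
    shrink = 1.0 - (k2 - 2) / wald
    stein = beta_r + shrink * (beta_u - beta_r)
    return pte, stein
```

The contrast the abstract studies is visible in the code: the PTE jumps discontinuously at the critical value, while the Stein-type estimator shrinks continuously in the evidence against the restriction.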
In: The Manchester School, Vol. 71, Issue 3, pp. 293-307
ISSN: 1467-9957
Despite the abundance of different econometric techniques introduced in the empirical literature on convergence, it is usually assumed that shocks are uncorrelated across countries. This is unlikely for most of the data sets considered, and we investigate a possibility so far ignored, namely the annual panel estimator where shocks are allowed to be correlated. Our analysis is restricted to the case of T > N, which allows us to estimate by maximum likelihood with an unrestricted variance-covariance matrix of cross-country shocks. We examine by Monte Carlo the robustness against certain possible misspecifications, namely measurement error and heterogeneity of the convergence coefficients.
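For T > N, maximum likelihood with an unrestricted shock covariance can be approximated by iterated feasible GLS. This is a sketch for a stripped-down AR(1) panel with a single common coefficient, not the paper's specification (which includes the convergence setup and heterogeneity experiments); all names are illustrative.

```python
import numpy as np

def panel_fgls_ar1(Y, n_iter=20):
    """Iterated feasible GLS for the panel AR(1)
    y[t] = rho * y[t-1] + e[t], with an unrestricted N x N covariance
    of the cross-country shocks e[t].  Requires T > N so the
    estimated covariance is invertible; iterating to convergence
    approximates the Gaussian ML estimate.
    Y has shape (T+1, N): rows are periods, columns countries."""
    y, x = Y[1:], Y[:-1]
    rho = float((x * y).sum() / (x * x).sum())   # pooled OLS start
    for _ in range(n_iter):
        E = y - rho * x                          # T x N residuals
        S_inv = np.linalg.inv(E.T @ E / len(E))  # inverse shock covariance
        rho = np.trace(S_inv @ x.T @ y) / np.trace(S_inv @ x.T @ x)
    return rho
```

Each pass re-estimates the cross-country covariance from the current residuals and then reweights the regression by its inverse, which is exactly where the T > N requirement bites.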