On Algorithms for Restricted Maximum Likelihood Estimation
In: Mathematics Preprint Archive Vol. 2002, Issue 11, pp 124-156
SSRN
Working paper
In: Journal of applied mathematics & decision sciences: JAMDS, Vol. 2006, pp. 1-13
ISSN: 1532-7612
We discuss alternative approaches to estimation from cross-sectional categorical data in the presence of misclassification. Two parameterisations of the misclassification model are reviewed. The first employs misclassification probabilities and leads to moment-based inference. The second employs calibration probabilities and leads to maximum likelihood inference. We show that maximum likelihood estimation can alternatively be performed by employing misclassification probabilities together with a missing data specification. As an alternative to maximum likelihood estimation we propose a quasi-likelihood parameterisation of the misclassification model. In this context an explicit definition of the likelihood function is avoided and a different way of resolving the missing data problem is provided. Variance estimation for the alternative point estimators is considered. The different approaches are illustrated using real data from the UK Labour Force Survey and simulated data.
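The missing data route to maximum likelihood described above corresponds to an EM iteration: treat the true (unobserved) categories as missing data, compute their posterior given the observed categories under the current parameter values, and re-estimate. A minimal sketch, assuming a known 3x3 misclassification matrix and invented counts (none of the numbers come from the paper):

```python
import numpy as np

# EM for the true category distribution under known misclassification
# probabilities; the matrix and counts below are invented for illustration.
M = np.array([[0.9, 0.1, 0.0],   # M[i, j] = P(observe j | true category i)
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
n_obs = np.array([40.0, 35.0, 25.0])   # observed counts per category

pi = np.full(3, 1.0 / 3.0)             # starting value for true probabilities
for _ in range(200):
    joint = pi[:, None] * M                           # P(true i, observed j)
    post = joint / joint.sum(axis=0, keepdims=True)   # E-step: P(true | obs)
    pi = (post @ n_obs) / n_obs.sum()                 # M-step: expected shares

print(pi)   # ML estimate of the true category distribution
```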
Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates to complex studies that require generalised linear or semiparametric models.
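The "simple comparison of two accident rates" is a standard likelihood exercise: model the two counts as Poisson with rates r1 and r2 and compare the unrestricted fit with the fit under r1 = r2. A small worked sketch with invented counts (not an example taken from the book):

```python
from scipy.stats import poisson

# Two accident counts over known exposures (all values invented).
y1, t1 = 12, 100.0
y2, t2 = 5, 120.0

# MLEs of the two rates, and of the common rate under H0: r1 = r2.
r1, r2 = y1 / t1, y2 / t2
r0 = (y1 + y2) / (t1 + t2)

def loglik(l1, l2):
    return poisson.logpmf(y1, l1 * t1) + poisson.logpmf(y2, l2 * t2)

# Likelihood-ratio statistic; compare with a chi-squared(1) quantile.
W = 2 * (loglik(r1, r2) - loglik(r0, r0))
print(r1, r2, W)
```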
SSRN
In: Structural equation modeling: a multidisciplinary journal, Vol. 10, Issue 3, pp. 352-379
ISSN: 1532-8007
In: Risk analysis: an international journal, Vol. 24, Issue 5, pp. 1143-1151
ISSN: 1539-6924
The dose to human and nonhuman individuals inflicted by anthropogenic radiation is an important issue in international and domestic policy. The current paradigm for nonhuman populations asserts that if the dose to the maximally exposed individuals in a population is below a certain criterion (e.g., < 10 mGy d⁻¹), then the population is adequately protected. Currently, there is no consensus in the regulatory community as to the best statistical approach. Statistics currently considered include the maximum likelihood estimator of the 95th percentile, the sample mean, and the sample maximum. Recently, the investigators have proposed the use of the maximum likelihood estimate of a very high quantile as an estimate of dose to the maximally exposed individual. In this study, we compare all of the above-mentioned statistics to an estimate based on extreme value theory. To determine and compare the bias and variance of these statistics, we use Monte Carlo simulation techniques in a procedure similar to a parametric bootstrap. Our results show that a statistic based on extreme value theory has the least bias of those considered here, but requires reliable estimates of the population size. We recommend establishing the criterion based on what would be considered acceptable if only a small percentage of the population exceeded the limit, and hence recommend using the maximum likelihood estimator of a high quantile in the case that reliable estimates of the population size are not available.
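A sketch of this kind of bias comparison, using a lognormal dose model and parameter values that are pure assumptions made for illustration (the study's actual distributions and designs are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo comparison of (a) the ML estimator of a high quantile and
# (b) the sample maximum, as estimators of a high population quantile of
# individual dose. Lognormal model and all values are invented.
rng = np.random.default_rng(0)
N, reps = 200, 5000
mu, sigma = 0.0, 1.0
p = 1.0 - 1.0 / N                        # "maximally exposed" quantile
true_q = np.exp(mu + sigma * norm.ppf(p))

ml_est, max_est = np.empty(reps), np.empty(reps)
for r in range(reps):
    x = rng.lognormal(mu, sigma, size=N)
    logx = np.log(x)
    # ML fit of the lognormal, plugged into the quantile formula
    ml_est[r] = np.exp(logx.mean() + logx.std() * norm.ppf(p))
    max_est[r] = x.max()

print("true quantile:", true_q)
print("ML quantile bias:", ml_est.mean() - true_q)
print("sample max bias:", max_est.mean() - true_q)
```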
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Band 58, Heft 4, S. 440-465
ISSN: 1467-9574
The notion of cointegration has led to a renewed interest in the identification and estimation of structural relations among economic time series. This paper reviews the different approaches that have been put forward in the literature for identifying cointegrating relationships and imposing (possibly over‐identifying) restrictions on them. Next, various algorithms to obtain (approximate) maximum likelihood estimates and likelihood ratio statistics are reviewed, with an emphasis on so‐called switching algorithms. The implementation of these algorithms is discussed and illustrated using an empirical example.
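Switching algorithms of the kind reviewed here alternate closed-form least-squares updates between the two parameter blocks of the reduced-rank error-correction model Δy_t = α β'y_{t-1} + ε_t. A minimal rank-one sketch on simulated data (the normalisation β[0] = 1 and all numbers are illustrative choices, not the paper's empirical example):

```python
import numpy as np

# Switching (alternating least squares) estimation of a rank-one
# error-correction model: dy_t = alpha * (beta' y_{t-1}) + e_t.
rng = np.random.default_rng(1)
T, k = 400, 3
beta_true = np.array([1.0, -1.0, 0.5])
alpha_true = np.array([-0.3, 0.1, 0.1])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t-1] + alpha_true * (y[t-1] @ beta_true) + rng.normal(0, 0.1, k)

dy, ylag = np.diff(y, axis=0), y[:-1]
beta = np.array([1.0, 0.0, 0.0])         # starting value
for _ in range(200):
    z = ylag @ beta                      # current error-correction term
    alpha = (dy.T @ z) / (z @ z)         # OLS update for alpha given beta
    # OLS update for beta given alpha (first-order condition of the
    # least-squares objective in beta)
    rhs = ylag.T @ (dy @ alpha)
    beta = np.linalg.solve((alpha @ alpha) * (ylag.T @ ylag), rhs)
    beta = beta / beta[0]                # identifying normalisation

print(alpha, beta)   # roughly alpha_true and beta_true
```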
In: Twin research and human genetics: the official journal of the International Society for Twin Studies (ISTS) and the Human Genetics Society of Australasia, Vol. 9, Issue 3, pp. 377-392
ISSN: 1839-2628
Risk is at the center of many policy decisions in companies, governments and other institutions. The risk of road fatalities concerns local governments in planning countermeasures, the risk and severity of counterparty default concerns bank risk managers on a daily basis, and the risk of infection has actuarial and epidemiological consequences. However, risk cannot be observed directly and it usually varies over time. Measuring risk is therefore an important exercise. In this paper we introduce a general multivariate framework for the time series analysis of risk that is modelled as a latent process. The latent risk time series model extends existing approaches by the simultaneous modelling of (i) the exposure to an event, (ii) the risk of that event occurring and (iii) the severity of the event. First, we discuss existing time series approaches for the analysis of risk which have been applied to road safety, actuarial and epidemiological problems. Second, we present a general model for the analysis of risk and discuss its statistical treatment based on linear state space methods. Third, we apply the methodology to time series of insurance claims, credit card purchases and road safety. It is shown that the general methodology can be effectively used in the assessment of risk.
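The statistical treatment via linear state space methods amounts to casting the latent risk as an unobserved state and applying the Kalman filter and smoother. A minimal univariate sketch using the local level model in statsmodels (simulated data; a full latent risk model would jointly handle exposure, risk and severity, which this sketch does not):

```python
import numpy as np
import statsmodels.api as sm

# Latent log-risk as a random-walk state, observed with noise.
rng = np.random.default_rng(2)
T = 120
latent = np.cumsum(rng.normal(0, 0.05, T)) + 3.0   # unobserved log-risk
obs = latent + rng.normal(0, 0.2, T)               # noisy observations

# Local level model: ML estimation of the two variances via the Kalman
# filter, then smoothing to recover the latent level.
model = sm.tsa.UnobservedComponents(obs, level='local level')
res = model.fit(disp=False)
smoothed_risk = res.smoothed_state[0]
print(res.params, smoothed_risk[:5])
```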
BASE
Pricing of cap insurance contracts is considered for political mortgage rates. A simple stochastic process for mortgage rates is proposed. The process is based on renewal processes for modelling the lengths of periods with downward and upward trends, respectively. Prices are calculated by simulation of conditional future sample paths. Future conditional quantiles can be obtained to assess the risk of a contract. The method is illustrated by applying it to observed quarterly mortgage rates of the Swiss Union of Raiffeisenbanks for the years 1970 to 2001.
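A sketch of the flavour of this simulation approach, with an invented renewal mechanism (geometric period lengths) and invented step sizes, not the paper's calibrated process:

```python
import numpy as np

# Quarterly mortgage rates alternating between upward and downward trend
# periods whose lengths are drawn from a renewal (here: geometric)
# distribution; cap payoff priced by averaging over simulated paths.
# Discounting is omitted for brevity; all parameters are invented.
rng = np.random.default_rng(3)

def simulate_path(r0, quarters):
    r, up = r0, True
    remaining = rng.geometric(0.2)       # length of the current trend period
    path = []
    for _ in range(quarters):
        if remaining == 0:               # period over: switch trend, redraw
            up, remaining = not up, rng.geometric(0.2)
        r += (0.05 if up else -0.05) * rng.random()
        remaining -= 1
        path.append(r)
    return np.array(path)

cap, horizon, n_paths = 4.5, 20, 10000   # cap level in %, 5-year horizon
paths = np.array([simulate_path(4.0, horizon) for _ in range(n_paths)])
payoff = np.clip(paths - cap, 0.0, None).sum(axis=1)   # quarterly excesses
print("price ~ mean payoff:", payoff.mean())
print("95% quantile of final rate:", np.quantile(paths[:, -1], 0.95))
```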
BASE
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 56, Issue 1, pp. 2-22
ISSN: 1467-9574
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
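A compact illustration of Gibbs sampling with data augmentation is the classic Albert and Chib sampler for the binary probit model, where the latent utilities are the unobserved variables that make every conditional distribution tractable. A minimal sketch on simulated data (a flat prior on the coefficients is assumed for brevity):

```python
import numpy as np
from scipy.stats import truncnorm

# Albert-Chib Gibbs sampler for probit regression with data augmentation.
rng = np.random.default_rng(4)
n, beta_true = 500, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta, draws = np.zeros(2), []
for it in range(2000):
    # Data augmentation: z_i | beta, y_i is N(x_i'beta, 1) truncated to
    # the positive half-line when y_i = 1, negative when y_i = 0.
    mean = X @ beta
    lo = np.where(y == 1, -mean, -np.inf)
    hi = np.where(y == 1, np.inf, -mean)
    z = mean + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Conditional for beta | z is the usual least-squares normal.
    beta = XtX_inv @ (X.T @ z) + chol @ rng.normal(size=2)
    if it >= 500:                       # discard burn-in
        draws.append(beta.copy())

print(np.mean(draws, axis=0))   # posterior mean, close to beta_true
```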
In: Statistica Neerlandica: journal of the Netherlands Society for Statistics and Operations Research, Vol. 58, Issue 2, pp. 197-219
ISSN: 1467-9574
For a multilevel model with two levels and only a random intercept, the quality of different estimators of the random intercept is examined. Analytical results are given for the marginal model interpretation, where negative estimates of the variance components are allowed. Except in the case of four or five level-2 units, the Empirical Bayes Estimator (EBE) has a lower average Bayes risk than the Ordinary Least Squares Estimator (OLSE). The EBEs based on restricted maximum likelihood (REML) estimators of the variance components have a lower Bayes risk than the EBEs based on maximum likelihood (ML) estimators. For the hierarchical model interpretation, where estimates of the variance components are restricted to being positive, Monte Carlo simulations were done. In this case the EBE has a lower average Bayes risk than the OLSE, also in the case of four or five level-2 units. For large numbers of level-1 units (30) or level-2 units (100), the performances of REML-based and ML-based EBEs are comparable. For small numbers of level-1 units (10) and level-2 units (25), the REML-based EBEs have a lower Bayes risk than ML-based EBEs only for high intraclass correlations (0.5).
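The EBE studied here shrinks each group mean toward the overall mean by a reliability factor built from the variance components. A small sketch with plug-in variance components (in practice these would come from REML or ML estimation; all numbers below are invented):

```python
import numpy as np

# Empirical Bayes estimation of random intercepts in a two-level model.
rng = np.random.default_rng(5)
n_groups, n_per = 25, 10
tau2, sigma2 = 0.5, 1.0                      # plug-in variance components
u = rng.normal(0, np.sqrt(tau2), n_groups)   # true random intercepts
y = 2.0 + u[:, None] + rng.normal(0, np.sqrt(sigma2), (n_groups, n_per))

grand = y.mean()
group_means = y.mean(axis=1)
shrink = tau2 / (tau2 + sigma2 / n_per)      # reliability of a group mean
ebe = grand + shrink * (group_means - grand) # EBE: shrunken group means
olse = group_means                           # OLSE: raw group means

target = 2.0 + u                             # true group-level means
print("EBE MSE: ", np.mean((ebe - target) ** 2))
print("OLSE MSE:", np.mean((olse - target) ** 2))
```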
Maximum likelihood (ML) joint detection of multi-carrier code division multiple access (MC-CDMA) systems can be efficiently implemented with a sphere decoding (SD) algorithm. In this paper, we examine the application of complex instead of real SD to detect MC-CDMA, which solves many problems in a more elegant manner and extends SD adaptability to any constellation. We first propose a new complex SD algorithm whose efficiency derives from not requiring an estimate of the initial search radius: the distance to the Babai point is used as the initial sphere radius instead, and efficient strategies for sorting the list of candidate lattice points are applied. Indeed, complex SD allows complex matrix operations, which are faster than their real counterparts in double dimension. Next, a novel lattice representation for the MC-CDMA system is introduced, which allows optimum multiuser detection directly from the received signal. This avoids the noise whitening operation, and despreading and equalization procedures are no longer required at the receiver side. This work has been partly funded by the Spanish government through the national project MACAWI (TEC 2005-07477-c02-02) and the project MAMBO (UC3M-TEC-05-027).
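The Babai point mentioned above is obtained by QR-reducing the channel and back-substituting with per-layer rounding; its distance to the received vector then serves as the initial sphere radius. A minimal complex-valued sketch assuming a QPSK constellation and an invented 4x4 channel (not the paper's MC-CDMA lattice representation):

```python
import numpy as np

# Babai (nearest-plane) point for a complex lattice, and the resulting
# initial sphere radius for a sphere decoder.
rng = np.random.default_rng(6)
n = 4
H = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n)  # sent symbols
y = H @ s + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

def round_qpsk(x):
    # Nearest QPSK symbol (per-layer rounding step of the Babai point)
    return np.sign(x.real) + 1j * np.sign(x.imag)

Q, R = np.linalg.qr(H)
yr = Q.conj().T @ y
babai = np.zeros(n, dtype=complex)
for i in range(n - 1, -1, -1):           # back-substitution with rounding
    resid = (yr[i] - R[i, i + 1:] @ babai[i + 1:]) / R[i, i]
    babai[i] = round_qpsk(resid)

radius = np.linalg.norm(y - H @ babai)   # initial sphere radius
print(babai, radius)
```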
BASE