Approximating maximum likelihood estimators based on bounded random variables
In: Naval Research Logistics: An International Journal, Volume 15, Issue 2, pp. 169-177
ISSN: 1520-6750
In: Advances in Econometrics Ser., v. 26
This collection of methodological developments and applications of simulation-based methods was presented at a workshop at Louisiana State University in November 2009. Topics include: extensions of the GHK simulator; maximum simulated likelihood; composite marginal likelihood; and modelling and forecasting volatility in a Bayesian approach.
The advent of electronic computing permits the empirical analysis of economic models of far greater subtlety and rigour than before, when many interesting ideas were not followed up because the calculations involved made this impracticable. The estimation and testing of these more intricate models is usually based on the method of Maximum Likelihood, a well-established branch of mathematical statistics. Its use in econometrics has led to the development of a number of special techniques; the specific conditions of econometric research moreover demand certain changes in the interpretation of the basic argument. This book is a self-contained introduction to the field. It consists of three parts: the first deals with general features of Maximum Likelihood methods; the second with linear and nonlinear regression; and the third with discrete choice and related micro-economic models. Readers should already be familiar with elementary statistical theory, with applied econometric research papers, or with the literature on the mathematical basis of Maximum Likelihood theory. They can then also try their hand at some advanced econometric research of their own.
In: Advances in econometrics 26
The economics and statistics literature using computer simulation-based methods has grown enormously over the past decades. Maximum Simulated Likelihood is a statistical tool useful for incorporating individual differences (called heterogeneity in the econometrics literature) and variations into a statistical analysis. Problems that can be intractable with traditional methods are solved using computer simulation integrated with classical methods. Instead of assuming that everyone responds to stimuli in the same way, allowances are made for the possibility that different decision makers will respond in different ways. The techniques can be applied to problems of individual choice, such as the choice of a transportation mode or a choice among health care options, as well as to the problem of making financial and macroeconomic predictions. Contributors to the volume discuss alternative simulation methods that permit faster and more accurate inference, as well as applications of established methods.
In: Analytical methods for social research
This volume provides a practical introduction to the method of maximum likelihood as used in social science research. Ward and Ahlquist focus on applied computation in R and use real social science data from actual, published research. Unique among books at this level, it develops simulation-based tools for model evaluation and selection alongside statistical inference. The book covers standard models for categorical data as well as counts, duration data, and strategies for dealing with data missingness. By working through examples, math, and code, the authors build an understanding of the contexts in which maximum likelihood methods are useful and develop skills in translating mathematical statements into executable computer code. Readers will not only be taught to use likelihood-based tools and generate meaningful interpretations, but they will also acquire a solid foundation for continued study of more advanced statistical techniques.
Maximum likelihood principal component analysis (MLPCA) was originally proposed to incorporate measurement error variance information in principal component analysis (PCA) models. MLPCA can be used to fit PCA models in the presence of missing data, simply by assigning very large variances to the non-measured values. An assessment of maximum likelihood missing data imputation is performed in this paper, analysing the algorithm of MLPCA and adapting several methods for PCA model building with missing data to its maximum likelihood version. In this way, known data regression (KDR), KDR with principal component regression (PCR), KDR with partial least squares regression (PLS) and trimmed scores regression (TSR) methods are implemented within the MLPCA method to work as different imputation steps. Six data sets are analysed using several percentages of missing data, comparing the performance of the original algorithm, and its adapted regression-based methods, with other state-of-the-art methods. Research in this study was partially supported by the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R, and the Spanish Ministry of Economy and Competitiveness through grant ECO2013-43353-R. Folch Fortuny, A.; Arteaga Moreno, F.J.; Ferrer, A. (2016). Assessment of maximum likelihood PCA missing data imputation. Journal of Chemometrics, 30(7), 386-393. https://doi.org/10.1002/cem.2804
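The core idea described above, handling missing entries by giving them effectively infinite error variance (i.e. zero weight), can be sketched with a weighted alternating-least-squares PCA fit. This is an illustration of the principle only, not the published MLPCA algorithm; the function name and interface are invented for the example.

```python
import numpy as np

def mlpca_impute(X, n_components, n_iter=200):
    """Sketch: fit a PCA model where missing entries (NaN) get zero
    weight (infinite variance), then impute them from the model."""
    X = np.asarray(X, dtype=float)
    W = (~np.isnan(X)).astype(float)      # weight 0 == infinite variance
    Xf = np.where(np.isnan(X), 0.0, X)
    mean = (W * Xf).sum(0) / W.sum(0)     # column means over observed entries
    Xc = np.where(W > 0, Xf - mean, 0.0)
    n, m = X.shape
    rng = np.random.default_rng(0)
    T = rng.standard_normal((n, n_components))
    for _ in range(n_iter):
        # Loadings, one column at a time (each column has its own observed rows).
        P = np.empty((m, n_components))
        for j in range(m):
            w = W[:, j]
            A = T.T @ (w[:, None] * T) + 1e-9 * np.eye(n_components)
            P[j] = np.linalg.solve(A, T.T @ (w * Xc[:, j]))
        # Scores, one row at a time.
        for i in range(n):
            w = W[i]
            A = P.T @ (w[:, None] * P) + 1e-9 * np.eye(n_components)
            T[i] = np.linalg.solve(A, P.T @ (w * Xc[i]))
    recon = T @ P.T + mean
    return np.where(W > 0, X, recon)      # keep observed values, impute the rest
```

Observed entries pass through untouched; only the NaN cells are replaced by the model reconstruction.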
BASE
We derive an estimator of the cycle period of a univariate cyclostationary process based on an information-theoretic criterion. Transforming the univariate cyclostationary process into a vector-valued wide-sense stationary process allows us to obtain the structure of the covariance matrix, which is block-Toeplitz and whose block size depends on the unknown cycle period. Therefore, we sweep the block size and obtain the ML estimate of the covariance matrix required for the information-theoretic criterion. Since there are no closed-form ML estimates of block-Toeplitz matrices, we asymptotically approximate them as block-circulant. Finally, some numerical examples show the good performance of the proposed estimator. The work of P. Schreier was supported by the Alfried Krupp von Bohlen und Halbach Foundation, under its program "Return of German scientists from abroad". The work of I. Santamaría and J. Vía was supported by the Spanish Government, Ministerio de Ciencia e Innovación (MICINN), under project RACHEL (TEC2013-47141-C4-3-R). The work of L. Scharf was supported by the Air Force Office of Scientific Research under contract FA9550-10-1-0241.
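A minimal sketch of the sweep described above: for each candidate block size P, stack the series into length-P vectors, take the ML (sample) covariance of the resulting vector process, and score it with an information-theoretic criterion. The MDL-style penalty below is an assumption for illustration; the paper's criterion and its block-circulant asymptotic approximation are more refined.

```python
import numpy as np

def estimate_cycle_period(x, max_period):
    """Sketch: sweep the candidate period P, fit a Gaussian model with
    the sample covariance of the stacked length-P blocks, and pick the
    P minimizing negative log-likelihood plus an MDL-style penalty."""
    x = np.asarray(x, dtype=float)
    best_p, best_score = None, np.inf
    for P in range(1, max_period + 1):
        k = len(x) // P                  # number of length-P blocks
        if k < 2 * P:                    # need enough blocks for a stable covariance
            continue
        blocks = x[:k * P].reshape(k, P)
        C = np.cov(blocks, rowvar=False, bias=True) + 1e-9 * np.eye(P)
        sign, logdet = np.linalg.slogdet(C)
        # Gaussian NLL at the ML covariance: trace(C^{-1} C_hat) = P.
        nll = 0.5 * k * (logdet + P)
        # MDL-style penalty: P(P+1)/2 free covariance parameters.
        penalty = 0.25 * P * (P + 1) * np.log(k)
        score = nll + penalty
        if score < best_score:
            best_p, best_score = P, score
    return best_p
```

For an amplitude-modulated white-noise signal with period 2, the criterion prefers P = 2 over both P = 1 (whiter marginal, higher entropy) and multiples of 2 (same fit, more parameters).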
BASE
A latent space model for a family of random graphs assigns real-valued vectors to the nodes of the graph such that edge probabilities are determined by latent positions. Latent space models provide a natural statistical framework for graph visualization and clustering. A latent space model of particular interest is the Random Dot Product Graph (RDPG), which can be fit using an efficient spectral method; however, this method is based on a heuristic that can fail, even in simple cases. Here, we consider a closely related latent space model, the Logistic RDPG, which uses a logistic link function to map from latent positions to edge likelihoods. For this model, we show that asymptotically exact maximum likelihood inference of the latent position vectors can be achieved using an efficient spectral method. Our method involves computing the top eigenvectors of a normalized adjacency matrix and scaling the eigenvectors using a regression step. The novel regression scaling step is an essential part of the proposed method. In simulations, we show that our proposed method is more accurate and more robust than common practice. We also demonstrate the effectiveness of our approach on two standard real-world networks: the karate club network and the political blogs network.
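The spectral recipe in this abstract (top eigenvectors of a normalized adjacency matrix, then a scaling step) can be illustrated as follows. This is a hedged sketch under assumed details: the normalization here subtracts the overall edge density, and the eigenvalue-magnitude scaling is a stand-in for the paper's regression step.

```python
import numpy as np

def spectral_latent_positions(adjacency, dim):
    """Sketch: estimate latent positions from the top eigenvectors of a
    density-centered adjacency matrix, scaled by eigenvalue magnitude
    (a simplification of the regression scaling described above)."""
    n = adjacency.shape[0]
    # Center by the overall edge density so the trivial all-ones
    # direction is suppressed and structure dominates.
    density = adjacency.sum() / (n * (n - 1))
    normalized = adjacency - density * (np.ones((n, n)) - np.eye(n))
    # Top-`dim` eigenpairs by eigenvalue magnitude.
    vals, vecs = np.linalg.eigh(normalized)
    order = np.argsort(-np.abs(vals))[:dim]
    # Scale each eigenvector; the paper replaces this with a regression step.
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))
```

On a two-block stochastic block model, the first latent coordinate separates the two communities by sign.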
BASE
An empirical approach to model estimation and evaluation based on Bayesian Maximum Likelihood is introduced to the post-Keynesian literature. To illustrate the method, it is applied to a neo-Kaleckian model of Euro Area business cycle fluctuations that includes endogenous fiscal and monetary policy as well as endogenous wage formation. To evaluate its empirical performance, the marginal likelihood and the impulse responses conditional on the proposed model are contrasted with those conditional on the corresponding Bayesian vector autoregression models after relaxing the theory-implied cross-coefficient restrictions. The estimated parameter distributions are broadly in line with the empirical literature. Yet a Bayesian vector autoregression with loose theory-implied restrictions on the prior outperforms the neo-Kaleckian model considerably, indicating misspecification. Further, a baseline Dynamic Stochastic General Equilibrium model is superior in terms of the marginal likelihood. Comparative impulse-response analysis indicates a failure of the neo-Kaleckian model to satisfactorily capture the fiscal and monetary policy transmission mechanisms.
BASE
This is the author's version of a work that was accepted for publication in Signal Processing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Signal Processing, volume 175, October 2020, 107661; DOI 10.1016/j.sigpro.2020.107661. Hard-output Maximum Likelihood (ML) detection for Generalized Spatial Modulation (GSM) systems involves obtaining the ML solution of a number of different MIMO subproblems, with as many possible antenna configurations as subproblems. Obtaining the ML solution of all of the subproblems has a large computational complexity, especially for large GSM MIMO systems. In this paper, we present two techniques for reducing the computational complexity of GSM ML detection. The first technique is based on computing a box optimization bound for each subproblem. This, together with sequential processing of the subproblems, allows fast discarding of many of these subproblems. The second technique is to use a Sphere Detector based on box optimization for the solution of the subproblems. This Sphere Detector reduces the number of partial solutions explored in each subproblem. The experiments show that these techniques are very effective in reducing the computational complexity in large MIMO setups. This work has been partially supported by the Spanish Ministry of Science, Innovation and Universities and by the European Union through grant RTI2018-098085-BC41 (MCUI/AEI/FEDER), by GVA through PROMETEO/2019/109, and by Catedra Telefonica-UPV through the SSENCE project. García Mollá, V.M.; Martínez Zaldívar, F.J.; Simarro, M.A.; Gonzalez, A. (2020). Maximum likelihood low-complexity GSM detection for large MIMO systems. Signal Processing, 175, 1-11. https://doi.org/10.1016/j.sigpro.2020.107661
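The first pruning technique described above, bounding each antenna-configuration subproblem and discarding it when the bound cannot beat the incumbent, can be sketched with a simple relaxation bound: the unconstrained least-squares residual of a subproblem is always a lower bound on its ML cost over any discrete symbol set. The paper's box-optimization bounds are tighter, and it solves the surviving subproblems with a Sphere Detector rather than by enumeration; this real-valued toy is an illustration only.

```python
import numpy as np
from itertools import combinations, product

def gsm_ml_detect(y, H, n_active, symbols):
    """Sketch: ML detection over GSM antenna configurations with a
    least-squares lower bound used to discard hopeless subproblems."""
    n_tx = H.shape[1]
    best = (np.inf, None, None)          # (cost, antenna set, symbol vector)
    for S in combinations(range(n_tx), n_active):
        Hs = H[:, list(S)]
        # Lower bound: residual of the relaxed (unconstrained) LS problem.
        s_ls, res, *_ = np.linalg.lstsq(Hs, y, rcond=None)
        bound = res[0] if res.size else np.sum((y - Hs @ s_ls) ** 2)
        if bound >= best[0]:
            continue                     # this subproblem cannot beat the incumbent
        # Exhaustive search over the discrete symbol alphabet
        # (the paper uses a box-optimization Sphere Detector here).
        for s in product(symbols, repeat=n_active):
            cost = np.sum((y - Hs @ np.asarray(s)) ** 2)
            if cost < best[0]:
                best = (cost, S, s)
    return best
```

In a noiseless toy setup the detector recovers the transmitting antenna set and the symbol vector exactly, and once the incumbent cost is zero, every remaining subproblem is discarded by its bound.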
BASE
Based on a course in the theory of statistics this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept ot the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from asimile comparison of two accident rates, to complex studies that require generalised linear or semiparametric mode
We introduce a maximum likelihood ratio test of whether a specific proportion is the greatest in a multinomial setting with a single measurement. The test is based on partitioning the parameter space and utilising log-ratio transformations.
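A hedged sketch of such a test (not necessarily the authors' exact construction): compare the unrestricted multinomial MLE against the MLE restricted to the null subspace where the claimed proportion is the greatest. When the constraint binds, a simple restricted MLE ties the claimed proportion to its strongest rival.

```python
import numpy as np

def lr_statistic_largest(counts, claimed):
    """Sketch: likelihood ratio statistic for H0 'category `claimed` has
    the greatest proportion', comparing it against its strongest rival.
    If the claimed category already has the largest count, the constraint
    is inactive and the statistic is 0."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    # Strongest rival: largest count among the other categories.
    rival = int(np.argmax(np.where(np.arange(len(counts)) == claimed,
                                   -np.inf, counts)))
    if counts[claimed] >= counts[rival]:
        return 0.0                      # data already consistent with H0
    # Restricted MLE: pool the claimed category with its strongest rival.
    pooled = (counts[claimed] + counts[rival]) / (2 * n)
    p_free = counts / n
    p_null = p_free.copy()
    p_null[claimed] = p_null[rival] = pooled
    # 2 * (loglik at free MLE - loglik at restricted MLE);
    # only the two tied categories contribute.
    idx = [claimed, rival]
    return 2 * np.sum(counts[idx] * (np.log(p_free[idx]) - np.log(p_null[idx])))
```

For counts (30, 40, 30), claiming category 1 (the observed maximum) gives a statistic of 0, while claiming category 0 gives 2·(30·ln(30/35) + 40·ln(40/35)) ≈ 1.43.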
BASE
In August 2015 a Swedish newspaper claimed that the Sweden Democrats were the largest political party in Sweden based on the results of a single poll. We ask ourselves whether this is a correct conclusion, considering the fact that the three largest parties in the poll were of roughly the same size. We analyse the parameter space and identify the subspace where the Sweden Democrats are the largest party. Using this, we construct a maximum likelihood ratio test and derive its distribution under the null hypothesis. We finally apply our test to the data and, obtaining a p-value between 0.09 and 0.14, are able to refute the claim in the newspaper. Based on the available data one cannot draw the conclusion that the Sweden Democrats are the largest party in Sweden.
BASE