Maximum likelihood (ML) is a popular and effective estimator for a wide range of applications and currently affords the most accurate estimation for source localisation in wireless sensor networks (WSN). ML, however, has two major shortcomings: it is a biased estimator, and it is highly sensitive to parameter perturbations. An Optimisation to ML (OML) algorithm was introduced that minimises the sum-of-squares bias and exhibits superior performance to ML in statistical estimation, particularly with finite datasets. This paper proposes a new model for acoustic source localisation in WSN based upon the OML estimation process. In addition to a performance analysis using real-world field experimental data from the tracking of moving military vehicles, simulations have been performed on the more complex source localisation and tracking problem to verify the potential of the new OML-based model.
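For context, a plain ML baseline for energy-based acoustic source localisation in a WSN can be sketched as follows, assuming an inverse power-law energy decay with Gaussian sensor noise. This is only a generic ML estimator used as a reference point; it is not the bias-corrected OML algorithm the paper builds on, and all names and parameters are illustrative.

```python
import numpy as np
from scipy import optimize

def ml_locate(sensors, energies, alpha=2.0, area=100.0):
    """Generic ML energy-decay localisation: y_i = S / d_i**alpha + Gaussian noise.
    A plain ML baseline, not the OML bias-corrected estimator from the paper."""
    def cost(xy):
        d = np.linalg.norm(sensors - xy, axis=1)
        h = 1.0 / np.maximum(d, 1e-6) ** alpha
        S = (energies @ h) / (h @ h)        # closed-form ML source power at this location
        return np.sum((energies - S * h) ** 2)
    # coarse grid search over the deployment area, then local refinement
    grid = [np.array([x, y]) for x in np.linspace(0, area, 11)
                             for y in np.linspace(0, area, 11)]
    x0 = min(grid, key=cost)
    return optimize.minimize(cost, x0, method="Nelder-Mead").x

# usage on simulated sensor readings
rng = np.random.default_rng(6)
sensors = rng.uniform(0, 100, size=(20, 2))
source, power = np.array([40.0, 60.0]), 5e4
dist = np.linalg.norm(sensors - source, axis=1)
energies = power / dist**2 + rng.normal(scale=0.5, size=20)
print("estimated source position:", ml_locate(sensors, energies))
```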
Maximum likelihood principal component analysis (MLPCA) was originally proposed to incorporate measurement error variance information in principal component analysis (PCA) models. MLPCA can be used to fit PCA models in the presence of missing data, simply by assigning very large variances to the non-measured values. An assessment of maximum likelihood missing data imputation is performed in this paper, analysing the MLPCA algorithm and adapting several methods for PCA model building with missing data to their maximum likelihood versions. In this way, known data regression (KDR), KDR with principal component regression (PCR), KDR with partial least squares regression (PLS) and trimmed scores regression (TSR) methods are implemented within the MLPCA method to work as different imputation steps. Six data sets are analysed using several percentages of missing data, comparing the performance of the original algorithm, and its adapted regression-based methods, with other state-of-the-art methods. Research in this study was partially supported by the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R, and by the Spanish Ministry of Economy and Competitiveness through grant ECO2013-43353-R. Folch Fortuny, A.; Arteaga Moreno, F. J.; Ferrer, A. (2016). Assessment of maximum likelihood PCA missing data imputation. Journal of Chemometrics, 30(7), 386-393. https://doi.org/10.1002/cem.2804
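A minimal sketch of the mechanism described above, under the assumption of uncorrelated measurement errors so that the MLPCA fit reduces to an elementwise weighted least-squares PCA (alternating weighted regressions); missing cells are given a huge variance, i.e. a near-zero weight, and are then imputed from the low-rank reconstruction. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def wls_pca(X, W, a, n_iter=500, tol=1e-10):
    """Rank-a elementwise weighted least-squares PCA (simplified MLPCA-style fit).
    W holds weights = 1/variance; missing cells get a near-zero weight."""
    n, p = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    T, P = U[:, :a] * s[:a], Vt[:a].T          # initial scores and loadings
    prev = np.inf
    for _ in range(n_iter):
        for i in range(n):                      # weighted LS update of each score row
            Wi = np.diag(W[i])
            T[i] = np.linalg.solve(P.T @ Wi @ P, P.T @ Wi @ X[i])
        for j in range(p):                      # weighted LS update of each loading row
            Wj = np.diag(W[:, j])
            P[j] = np.linalg.solve(T.T @ Wj @ T, T.T @ Wj @ X[:, j])
        obj = np.sum(W * (X - T @ P.T) ** 2)    # weighted sum of squared residuals
        if abs(prev - obj) < tol * max(obj, 1.0):
            break
        prev = obj
    return T, P

# usage: impute 20% missing cells from the rank-2 reconstruction
rng = np.random.default_rng(0)
X_true = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 8))
X = X_true + 0.05 * rng.normal(size=X_true.shape)
missing = rng.random(X.shape) < 0.2
W = np.where(missing, 1e-10, 1.0 / 0.05**2)     # huge variance -> tiny weight
T, P = wls_pca(np.where(missing, 0.0, X), W, a=2)
X_imputed = np.where(missing, T @ P.T, X)
```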
We derive an estimator of the cycle period of a univariate cyclostationary process based on an information-theoretic criterion. Transforming the univariate cyclostationary process into a vector-valued wide-sense stationary process allows us to obtain the structure of the covariance matrix, which is block-Toeplitz with a block size that depends on the unknown cycle period. We therefore sweep the block size and obtain the ML estimate of the covariance matrix required for the information-theoretic criterion. Since there are no closed-form ML estimates of block-Toeplitz matrices, we asymptotically approximate them as block-circulant. Finally, some numerical examples show the good performance of the proposed estimator. The work of P. Schreier was supported by the Alfried Krupp von Bohlen und Halbach Foundation, under its program "Return of German scientists from abroad". The work of I. Santamaría and J. Vía was supported by the Spanish Government, Ministerio de Ciencia e Innovación (MICINN), under project RACHEL (TEC2013-47141-C4-3-R). The work of L. Scharf was supported by the Air Force Office of Scientific Research under contract FA9550-10-1-0241.
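The sweep-and-score idea can be illustrated with a simplified criterion: for each candidate period the signal is cut into non-overlapping blocks, an unstructured Gaussian ML covariance estimate is formed, and the log-likelihood is penalised by the number of covariance parameters. The unstructured covariance and the MDL-type penalty are simplifying assumptions that stand in for the block-circulant ML estimate and the information-theoretic criterion used in the paper.

```python
import numpy as np

def estimate_cycle_period(x, P_max, penalty="mdl"):
    """Sweep candidate cycle periods and score each with a penalised Gaussian
    log-likelihood (a simplified stand-in for the criterion in the paper)."""
    N = len(x)
    scores = {}
    for P in range(1, P_max + 1):
        M = N // P                           # number of length-P blocks
        X = x[:M * P].reshape(M, P)          # each row = one block of the vector process
        S = X.T @ X / M + 1e-10 * np.eye(P)  # zero-mean ML covariance estimate
        _, logdet = np.linalg.slogdet(S)
        loglik = -0.5 * M * (logdet + P)     # Gaussian log-likelihood at the ML estimate
                                             # (the -0.5*M*P*log(2*pi) term is ~constant in P)
        k = P * (P + 1) / 2                  # free covariance parameters
        pen = 0.5 * k * np.log(M) if penalty == "mdl" else k
        scores[P] = loglik - pen
    return max(scores, key=scores.get), scores

# usage: an amplitude-modulated noise signal with cycle period 7
rng = np.random.default_rng(1)
n = np.arange(7 * 300)
x = np.cos(2 * np.pi * n / 7) * rng.normal(size=n.size) + 0.1 * rng.normal(size=n.size)
P_hat, _ = estimate_cycle_period(x, P_max=15)
print("estimated cycle period:", P_hat)
```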
A latent space model for a family of random graphs assigns real-valued vectors to the nodes of a graph such that edge probabilities are determined by the latent positions. Latent space models provide a natural statistical framework for graph visualization and clustering. A latent space model of particular interest is the Random Dot Product Graph (RDPG), which can be fit using an efficient spectral method; however, this method is based on a heuristic that can fail, even in simple cases. Here, we consider a closely related latent space model, the Logistic RDPG, which uses a logistic link function to map from latent positions to edge probabilities. For this model, we show that asymptotically exact maximum likelihood inference of the latent position vectors can be achieved using an efficient spectral method. Our method involves computing the top eigenvectors of a normalized adjacency matrix and scaling the eigenvectors using a regression step; this novel regression scaling step is an essential part of the proposed method. In simulations, we show that our proposed method is more accurate and more robust than common practice. We also demonstrate the effectiveness of our approach on standard real networks, the karate club and political blogs networks.
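The two-stage pipeline described in the abstract, top eigenvectors of a normalised adjacency matrix followed by a regression-based rescaling of those eigenvectors, can be sketched as follows. The simple centring and the least-squares regression on the centred adjacency used here are illustrative assumptions; the paper's method uses a logistic link and its own scaling regression.

```python
import numpy as np

def spectral_embed_with_regression(A, d):
    """Two-stage sketch: top-d eigenvectors of a (crudely) normalised adjacency,
    then per-dimension scalings fitted by regression over all node pairs."""
    n = A.shape[0]
    B = A - A.mean()                               # crude normalisation (assumption)
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(np.abs(vals))[::-1][:d]
    U = vecs[:, top]                               # top-d eigenvectors
    iu = np.triu_indices(n, k=1)                   # all unordered node pairs
    F = np.column_stack([np.outer(U[:, k], U[:, k])[iu] for k in range(d)])
    beta, *_ = np.linalg.lstsq(F, B[iu], rcond=None)   # regression scaling step
    return U * np.sqrt(np.abs(beta))               # scaled latent positions

# usage on a toy two-block graph
rng = np.random.default_rng(2)
n = 100
z = rng.integers(0, 2, n)
P = np.where(z[:, None] == z[None, :], 0.6, 0.2)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops
X = spectral_embed_with_regression(A, d=2)
```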
An empirical approach to model estimation and evaluation based on Bayesian maximum likelihood is introduced to the post-Keynesian literature. To illustrate the method, it is applied to a neo-Kaleckian model of Euro Area business cycle fluctuations that includes endogenous fiscal and monetary policy as well as endogenous wage formation. To evaluate its empirical performance, the marginal likelihood and impulse responses conditional on the proposed model are contrasted with those conditional on the corresponding Bayesian vector autoregression models after relaxing the theory-implied cross-coefficient restrictions. The estimated parameter distributions are broadly in line with the empirical literature. Yet a Bayesian vector autoregression with loose theory-implied restrictions on the prior outperforms the neo-Kaleckian model considerably, indicating misspecification. Further, a baseline Dynamic Stochastic General Equilibrium model is superior in terms of the marginal likelihood. Comparative impulse-response analysis indicates a failure of the neo-Kaleckian model to satisfactorily capture the fiscal and monetary policy transmission mechanisms.
Hard-output maximum likelihood (ML) detection for Generalized Spatial Modulation (GSM) systems involves obtaining the ML solution of a number of different MIMO subproblems, with as many subproblems as possible antenna configurations. Obtaining the ML solution of all of the subproblems has a large computational complexity, especially for large GSM MIMO systems. In this paper, we present two techniques for reducing the computational complexity of GSM ML detection. The first technique is based on computing a box-optimization bound for each subproblem; this, together with sequential processing of the subproblems, allows many of these subproblems to be discarded quickly. The second technique is to use a sphere detector based on box optimization for the solution of the subproblems, which reduces the number of partial solutions explored in each subproblem. Experiments show that these techniques are very effective in reducing the computational complexity in large MIMO setups. This work has been partially supported by the Spanish Ministry of Science, Innovation and Universities and by the European Union through grant RTI2018-098085-BC41 (MCUI/AEI/FEDER), by GVA through PROMETEO/2019/109 and by Catedra Telefonica-UPV through the SSENCE project. García Mollá, V. M.; Martínez Zaldívar, F. J.; Simarro, M. A.; Gonzalez, A. (2020). Maximum likelihood low-complexity GSM detection for large MIMO systems. Signal Processing, 175, 107661. https://doi.org/10.1016/j.sigpro.2020.107661
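The subproblem structure of GSM ML detection can be sketched as below: one cheap lower bound per antenna configuration is used to discard subproblems whose bound already exceeds the best cost found so far, and the surviving subproblems are solved exactly. The unconstrained least-squares bound and the exhaustive inner search are simple stand-ins for the box-optimisation bound and the box-optimisation-based sphere detector proposed in the paper.

```python
import numpy as np
from itertools import combinations, product

def gsm_ml_detect(y, H, n_active, constellation):
    """Brute-force GSM ML detection with per-configuration pruning."""
    n_tx = H.shape[1]
    best_cost, best = np.inf, None
    for cfg in combinations(range(n_tx), n_active):      # one MIMO subproblem per configuration
        Hc = H[:, cfg]
        s_relaxed, *_ = np.linalg.lstsq(Hc, y, rcond=None)
        bound = np.linalg.norm(y - Hc @ s_relaxed) ** 2   # lower bound on this subproblem's ML cost
        if bound >= best_cost:
            continue                                      # discard the whole subproblem
        for sym in product(constellation, repeat=n_active):
            s = np.array(sym)
            cost = np.linalg.norm(y - Hc @ s) ** 2
            if cost < best_cost:
                best_cost, best = cost, (cfg, s)
    return best, best_cost

# usage: 4 TX antennas, 2 active, QPSK symbols, 6 RX antennas
rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))) / np.sqrt(2)
cfg_true, s_true = (0, 2), qpsk[rng.integers(0, 4, 2)]
y = H[:, cfg_true] @ s_true + 0.05 * (rng.normal(size=6) + 1j * rng.normal(size=6))
(best_cfg, best_s), _ = gsm_ml_detect(y, H, n_active=2, constellation=qpsk)
print("detected configuration:", best_cfg)
```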
We introduce a maximum likelihood ratio test of whether a specific proportion is the greatest in a multinomial situation with a single measurement. The test is based on partitioning the parameter space and utilising log-ratio transformations.
In August 2015 a Swedish newspaper claimed that the Sweden Democrats were the largest political party in Sweden, based on the results of a single poll. We ask whether this is a correct conclusion, considering that the three largest parties in the poll were of roughly the same size. We analyse the parameter space and identify the subspace in which the Sweden Democrats are the largest party. Using this, we construct a maximum likelihood ratio test and derive its distribution under the null hypothesis. Finally, we apply our test to the data and, obtaining a p-value between 0.09 and 0.14, are able to refute the claim in the newspaper: based on the available data one cannot draw the conclusion that the Sweden Democrats are the largest party in Sweden.
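A generic version of the construction described in the two abstracts above can be sketched as follows: the restricted MLE under the null "category i does not have the greatest proportion" pools that category with its largest competitor, and the resulting likelihood ratio statistic is calibrated here by a parametric bootstrap at the boundary configuration. Both the restricted MLE and the bootstrap null are illustrative assumptions, not the exact null distribution derived in the paper; the counts below are invented.

```python
import numpy as np

def lrt_largest(counts, i, n_boot=20000, seed=0):
    """LRT-style test of whether category i has the greatest multinomial proportion.
    Assumes all observed counts are positive."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_hat = counts / n
    j = np.argmax(np.delete(p_hat, i)); j = j if j < i else j + 1   # largest competitor
    if p_hat[i] <= p_hat[j]:
        return 0.0, 1.0                      # the null already fits exactly
    p0 = p_hat.copy()
    p0[i] = p0[j] = (counts[i] + counts[j]) / (2 * n)   # restricted MLE: pool the two cells
    stat = 2 * np.sum(counts * (np.log(p_hat) - np.log(p0)))
    rng = np.random.default_rng(seed)
    boot_stats = np.zeros(n_boot)
    for b_idx, b in enumerate(rng.multinomial(int(n), p0, size=n_boot)):
        pb = b / n
        jb = np.argmax(np.delete(pb, i)); jb = jb if jb < i else jb + 1
        if pb[i] <= pb[jb]:
            continue                          # statistic is 0 for this replicate
        q = pb.copy(); q[i] = q[jb] = (b[i] + b[jb]) / (2 * n)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(b > 0, b * (np.log(pb) - np.log(q)), 0.0)
        boot_stats[b_idx] = 2 * np.nansum(terms)
    return stat, np.mean(boot_stats >= stat)

# usage: hypothetical poll counts for four parties, testing whether party 0 is largest
stat, p_value = lrt_largest([265, 250, 240, 245], i=0)
print(f"LR statistic = {stat:.3f}, bootstrap p-value = {p_value:.3f}")
```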
We investigate the role of military expenditure on emissions in the USA during the period 1960-2015. To achieve the objectives of this study, two measures of military expenditure are utilised, while several time-series models are constructed with gross domestic product (GDP) per capita, population, energy consumption per capita, non-renewable energy consumption per capita, renewable energy consumption per capita, urbanisation, trade openness and financial development serving as additional determinants of air pollution. We also use an ecological indicator as an alternative measure of pollution. Moreover, different time-series methods are utilised, including a likelihood-based approach with two structural breaks. The results show that all the variables are cointegrated. Military expenditure is found to have a mixed impact on CO2 emissions. Real GDP per capita, energy consumption per capita, non-renewable energy consumption per capita, population and urbanisation increase CO2 emissions per capita in the long run, while renewable energy consumption, financial development and trade openness reduce them. There is also evidence for a mixed role of military expenditure when the ecological footprint is utilised as the environmental degradation index. Based on these results, a few policy recommendations are offered for the country examined.
(1) Background: The vaccine supply is likely to be limited in 2021 due to constraints in manufacturing. To maximize the benefit of the rollout phase, an optimal vaccine allocation strategy is necessary, based on each country's epidemic status. (2) Methods: We first developed a heterogeneous population model incorporating the transmission matrix, using maximum likelihood estimation based on the epidemiological records of individual COVID-19 cases in the Republic of Korea. Using this model, vaccine priorities for minimizing mortality or incidence were investigated. (3) Results: The simulation results showed that the optimal vaccine allocation strategy to minimize mortality (or incidence) was to prioritize the elderly and healthcare workers (or adults) as long as the reproductive number remained below [Formula: see text] (or above [Formula: see text]). (4) Conclusion: Our simulation results support the current Korean government vaccination priority strategy, which prioritizes healthcare workers and senior groups to minimize mortality, under the condition that the reproductive number remains below [Formula: see text]. This study reveals that, in order to maintain the current vaccine priority policy, it is important to ensure that the reproductive number does not exceed this threshold by concurrently implementing nonpharmaceutical interventions.
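As a toy illustration of how such allocation comparisons work, the sketch below runs a two-group SIR model under two up-front allocations of a fixed vaccine supply and compares expected deaths. Every number in it (contact matrix, fatality ratios, population shares, supply) is an invented assumption for illustration; it is not the fitted heterogeneous model, the estimated transmission matrix, or the Korean case data used in the paper.

```python
import numpy as np

def simulate_deaths(v, beta, ifr, N, days=180, dt=0.25, gamma=1/7):
    """Toy two-group SIR; v is the fraction of each group vaccinated up front."""
    S = N * (1 - v)                        # vaccinated individuals start immune
    I = np.array([1e-4, 1e-4])             # small infection seed in both groups
    deaths = 0.0
    for _ in range(int(days / dt)):
        recov = gamma * I * dt
        new_inf = (beta @ I) * S * dt      # force of infection via the contact matrix
        S -= new_inf
        I += new_inf - recov
        deaths += np.sum(ifr * recov)      # deaths as a fraction of the population
    return deaths

N = np.array([0.7, 0.3])                   # population shares: adults, elderly
beta = np.array([[0.35, 0.10],             # assumed transmission/contact matrix
                 [0.10, 0.15]])
ifr = np.array([0.001, 0.05])              # assumed infection-fatality ratios
supply = 0.2                               # doses cover 20% of the population
for name, v in [("prioritise elderly", np.array([0.0, supply / N[1]])),
                ("prioritise adults", np.array([supply / N[0], 0.0]))]:
    print(f"{name}: expected deaths = {simulate_deaths(v, beta, ifr, N):.4f}")
```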
Dynamic panel data models are widely used by econometricians to study, over time, the economics of, for example, people, firms, regions, or countries by pooling information over the cross-section. Although much of the panel research concerns inference in stationary models, macroeconomic data such as GDP, prices, and interest rates are typically trending over time and require, in one way or another, a nonstationary analysis. In time series analysis it is well established how autoregressive unit roots give rise to stochastic trends, implying that random shocks to a dynamic process are persistent rather than transitory. Because the implications of, say, government policy actions are fundamentally different if shocks to the economy are lasting rather than temporary, a vast number of univariate time series unit root tests are now available. Similarly, panel unit root tests have been designed to test for the presence of stochastic trends within a panel data set and for the degree to which they are shared by the panel individuals.

Growing data sets certainly offer new possibilities for panel data analysis, but they also pose new problems concerning double-indexed limit theory, unobserved heterogeneity, and cross-sectional dependencies. For example, economic shocks, such as technological innovations, are often global and make national aggregates cross-country dependent and related in international business cycles. To allow for strong cross-sectional dependence, panel unit root tests often assume that the unobserved panel errors follow a dynamic factor model. The errors then contain one part that is shared by the panel individuals, a common component, and one part that is individual-specific, an idiosyncratic component. This is appealing from the perspective of economic theory, because unobserved heterogeneity may be driven by global common shocks, which are well captured by dynamic factor models. Yet only a handful of tests have been derived to test for unit roots in the common and in the idiosyncratic components separately. More importantly, likelihood-based methods, which are commonly used in classical factor analysis, have been ruled out for large dynamic factor models because of the considerable number of parameters.

This thesis consists of four papers in which we consider the exact factor model, where the idiosyncratic components are mutually independent, so that any cross-sectional dependence arises through the common factors only. Within this framework we derive likelihood-based tests for common and idiosyncratic unit roots. In doing so we address an important issue for dynamic factor models, because likelihood-based tests, such as the Wald test, the likelihood ratio test, and the Lagrange multiplier test, are well known to be asymptotically most powerful against local alternatives. Our approach is specific-to-general, meaning that we start with restrictions on the parameter space that allow us to use explicit maximum likelihood estimators, and then relax some of the assumptions to consider a more general framework requiring numerical maximum likelihood estimation. By simulation we compare the size and power of our tests with those of some established panel unit root tests. The simulations suggest that the likelihood-based tests are locally powerful and, in some cases, more robust in terms of size. (Thesis: Solving Macroeconomic Problems Using Non-Stationary Panel Data.)
In: Elff, M., Heisig, J. P., Schaeffer, M. & Shikano, S. 2021, 'Multilevel Analysis with Few Clusters: Improving Likelihood-Based Methods to Provide Unbiased Estimates and Accurate Inference', British Journal of Political Science, vol. 51, no. 1, pp. 412-426. https://doi.org/10.1017/S0007123419000097
Quantitative comparative social scientists have long worried about the performance of multilevel models when the number of upper-level units is small. Adding to these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that standard maximum-likelihood (ML) methods yield biased point estimates and severely anti-conservative inference with few upper-level units. In this article, the authors seek to rectify this negative assessment. First, they show that ML estimators of coefficients are unbiased in linear multilevel models. The apparent bias in coefficient estimates found by Stegmueller can be attributed to Monte Carlo Error and a flaw in the design of his simulation study. Secondly, they demonstrate how inferential problems can be overcome by using restricted ML estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible within the framework that most practitioners are familiar with, even if there are only a few upper-level units.
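The recipe the authors recommend, restricted ML for the variance parameters combined with t-based inference using a small number of degrees of freedom, can be sketched with statsmodels as below. The simulated data and the degrees-of-freedom rule used here (number of clusters minus upper-level predictors minus one) are illustrative; the rule is one common approximation and not necessarily the exact one advocated in the article.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# simulate a small multilevel data set: 15 countries, 200 respondents each
rng = np.random.default_rng(4)
n_groups, n_per = 15, 200
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=g.size)                      # individual-level predictor
z = rng.normal(size=n_groups)[g]                 # country-level predictor
u = rng.normal(scale=0.5, size=n_groups)[g]      # random country intercepts
y = 1.0 + 0.5 * x + 0.3 * z + u + rng.normal(size=g.size)
data = pd.DataFrame({"y": y, "x": x, "z": z, "country": g})

# restricted ML for the variance parameters (reml=True is the statsmodels default)
fit = smf.mixedlm("y ~ x + z", data, groups=data["country"]).fit(reml=True)

# t-based inference for the country-level coefficient
dof = n_groups - 1 - 1                           # clusters - upper-level predictors - 1
beta, se = fit.params["z"], fit.bse["z"]
ci = beta + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, dof) * se
print(f"z coefficient: {beta:.3f}, 95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```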
ABSTRACT: In this paper, local empirical likelihood-based inference for non-parametric varying coefficient panel data models with fixed effects is investigated. First, we show that the naive empirical likelihood ratio is asymptotically standard chi-squared when undersmoothing is employed. The ratio is self-scale invariant, and a plug-in estimate of the limiting variance is not needed. Second, mean-corrected and residual-adjusted empirical likelihood ratios are proposed; the main interest of these techniques is that, even without undersmoothing, both have standard chi-squared limit distributions. As a by-product, we also propose two empirical maximum likelihood estimators of the varying coefficient models and their derivatives, and we obtain the asymptotic distribution of these estimators. Furthermore, a non-parametric version of Wilks' theorem is derived. To show the feasibility of the technique and to analyse its small-sample properties, we carry out a Monte Carlo simulation exercise using empirical likelihood-based inference and illustrate the proposed technique in an empirical analysis of the production efficiency of European Union companies.
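The self-scale-invariance property mentioned above (no plug-in variance estimate is needed for the chi-squared calibration) is easiest to see in the scalar building block of empirical likelihood, sketched below for a simple mean; the local, panel varying-coefficient versions in the paper build on the same ratio. This is only the textbook Owen construction, not the paper's estimators.

```python
import numpy as np
from scipy import optimize, stats

def el_ratio_mean(x, mu0):
    """Empirical likelihood ratio statistic for H0: E[X] = mu0 (Owen's EL)."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                              # mu0 outside the convex hull of the data
    # solve sum z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier lam
    lo, hi = (-1 + 1e-10) / z.max(), (-1 + 1e-10) / z.min()
    lam = optimize.brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))           # -2 log ELR, asymptotically chi2(1)

# usage: no variance estimate is plugged in anywhere
rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=200)
stat = el_ratio_mean(x, mu0=2.0)
print(f"ELR statistic = {stat:.3f}, p-value = {1 - stats.chi2.cdf(stat, df=1):.3f}")
```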