Genetics and Heritability Research on Political Decision Making
In: Oxford Research Encyclopedia of Politics
"Genetics and Heritability Research on Political Decision Making", published by Oxford University Press.
In: IZA world of labor: evidence-based policy making
In: This is a pre-print of an article published in the Journal of Economic Dynamics and Control (2017). The final authenticated version is available online at https://doi.org/10.1016/j.jedc.2017.09.006
SSRN
Working paper
In: Analytical methods for social research
This volume provides a practical introduction to the method of maximum likelihood as used in social science research. Ward and Ahlquist focus on applied computation in R and use real social science data from actual, published research. Unique among books at this level, it develops simulation-based tools for model evaluation and selection alongside statistical inference. The book covers standard models for categorical data as well as counts, duration data, and strategies for dealing with data missingness. By working through examples, math, and code, the authors build an understanding of the contexts in which maximum likelihood methods are useful and develop skills in translating mathematical statements into executable computer code. Readers will not only be taught to use likelihood-based tools and generate meaningful interpretations, but will also acquire a solid foundation for continued study of more advanced statistical techniques.
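The workflow the book teaches (in R) can be illustrated with a minimal sketch in Python: numerically maximising a Poisson log-likelihood and checking the optimum against the closed-form MLE, the sample mean. The data and names below are illustrative, not from the book.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def poisson_negloglik(lam, counts):
    # Negative log-likelihood of i.i.d. Poisson counts, dropping the
    # constant log(y!) term, which does not affect the maximiser.
    return lam * len(counts) - np.sum(counts) * np.log(lam)

counts = np.array([2, 3, 1, 4, 2, 5, 3, 2])
res = minimize_scalar(poisson_negloglik, bounds=(1e-6, 50.0),
                      args=(counts,), method="bounded")
# The numerical optimum should match the closed-form MLE, the sample mean.
print(res.x, counts.mean())
```

Translating the likelihood into executable code and then checking it against a known analytic answer is exactly the kind of exercise the book builds on.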
Maximum likelihood principal component analysis (MLPCA) was originally proposed to incorporate measurement error variance information in principal component analysis (PCA) models. MLPCA can be used to fit PCA models in the presence of missing data, simply by assigning very large variances to the non-measured values. An assessment of maximum likelihood missing data imputation is performed in this paper, analysing the algorithm of MLPCA and adapting several methods for PCA model building with missing data to its maximum likelihood version. In this way, known data regression (KDR), KDR with principal component regression (PCR), KDR with partial least squares regression (PLS) and trimmed scores regression (TSR) methods are implemented within the MLPCA method to work as different imputation steps. Six data sets are analysed using several percentages of missing data, comparing the performance of the original algorithm, and its adapted regression-based methods, with other state-of-the-art methods. Research in this study was partially supported by the Spanish Ministry of Science and Innovation and FEDER funds from the European Union through grants DPI2011-28112-C04-02 and DPI2014-55276-C5-1R, and the Spanish Ministry of Economy and Competitiveness through grant ECO2013-43353-R. Folch Fortuny, A.; Arteaga Moreno, F. J.; Ferrer, A. (2016). Assessment of maximum likelihood PCA missing data imputation. Journal of Chemometrics, 30(7), 386-393. https://doi.org/10.1002/cem.2804
BASE
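MLPCA itself weights every matrix entry by its measurement-error variance; as a simplified sketch of the same missing-data idea, the EM-style PCA imputation below alternates between fitting a truncated PCA model and replacing the missing (NaN) entries with their reconstruction. The function, the rank-1 toy data, and the component count are illustrative assumptions, not the algorithm or data sets of the paper.

```python
import numpy as np

def pca_impute(X, n_components=1, n_iter=50):
    """EM-style PCA imputation: missing entries (NaN) are repeatedly
    replaced by their rank-k PCA reconstruction."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # crude initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X[miss] = recon[miss]                          # update only missing cells
    return X

rng = np.random.default_rng(0)
t = rng.normal(size=(40, 1))
X_true = t @ np.array([[1.0, 0.5, -0.3]])   # noiseless rank-1 data
X_obs = X_true.copy()
X_obs[3, 1] = np.nan
X_obs[7, 2] = np.nan
X_hat = pca_impute(X_obs, n_components=1)
```

On exactly low-rank data the iteration converges to the unique rank-1 completion, so the imputed cells recover the true values.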
In: Australian Journal of Agricultural and Resource Economics, Band 58, Heft 1, S. 90-110
SSRN
In: Defence science journal: DSJ, Band 66, Heft 3, S. 278
ISSN: 0011-748X
Estimating the boost-phase trajectory of a ballistic missile from line-of-sight measurements taken by space-borne passive sensors is an important problem in missile defence. A well-known difficulty is the poor observability of the target motion. A profile-based maximum penalised likelihood estimator is presented that is expected to work in poor-observability scenarios. First, a more adaptable boost-phase profile is proposed by introducing unknown parameters. The estimator is then derived within the Bayesian paradigm, and a special penalty for the box constraint is constructed from a mixed distribution. Numerical results for some typical scenarios, together with a sensitivity analysis with respect to the a priori information, show that the proposed estimator is promising.
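The general shape of a maximum penalised likelihood estimator with a box constraint can be sketched as a negative log-likelihood plus a penalty that is zero inside the box and grows outside it. This one-parameter Gaussian toy, with made-up values for the box `[lo, hi]` and penalty weight `w`, is only an illustration; the paper's boost-phase profile model and mixed-distribution penalty are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
theta_true = 2.0
y = theta_true + 0.5 * rng.normal(size=30)    # noisy scalar observations

def neg_penalised_loglik(theta, y, lo=0.0, hi=3.0, w=100.0):
    t = theta[0]
    nll = 0.5 * np.sum((y - t) ** 2) / 0.25   # Gaussian NLL, sigma = 0.5
    # Quadratic penalty outside the box [lo, hi]; exactly zero inside it.
    pen = w * (max(lo - t, 0.0) ** 2 + max(t - hi, 0.0) ** 2)
    return nll + pen

res = minimize(neg_penalised_loglik, x0=np.array([1.0]), args=(y,))
```

When the unpenalised maximiser (here the sample mean) already lies inside the box, the penalty is inactive and the penalised estimate coincides with the plain MLE.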
In: Defence science journal: a journal devoted to science & technology in defence, Band 66, Heft 3, S. 278-286
ISSN: 0011-748X
We derive an estimator of the cycle period of a univariate cyclostationary process based on an information-theoretic criterion. Transforming the univariate cyclostationary process into a vector-valued wide-sense stationary process allows us to obtain the structure of the covariance matrix, which is block-Toeplitz, and whose block size depends on the unknown cycle period. Therefore, we sweep the block size and obtain the ML estimate of the covariance matrix, required for the information-theoretic criterion. Since there are no closed-form ML estimates of block-Toeplitz matrices, we asymptotically approximate them as block-circulant. Finally, some numerical examples show the good performance of the proposed estimator. The work of P. Schreier was supported by the Alfried Krupp von Bohlen und Halbach Foundation, under its programme "Return of German scientists from abroad". The work of I. Santamaría and J. Vía was supported by the Spanish Government, Ministerio de Ciencia e Innovación (MICINN), under project RACHEL (TEC2013-47141-C4-3-R). The work of L. Scharf was supported by the Air Force Office of Scientific Research under contract FA9550-10-1-0241.
BASE
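A crude stand-in for the criterion described above can be sketched by sweeping candidate block sizes, fitting an unrestricted Gaussian covariance to the blocked samples, and adding an MDL-style penalty on the number of covariance parameters. The block-Toeplitz/circulant ML machinery of the paper is not reproduced; the amplitude pattern and penalty form are illustrative assumptions.

```python
import numpy as np

def estimate_cycle_period(x, max_period=10):
    """Pick the block size (candidate cycle period) minimising a Gaussian
    negative log-likelihood of the blocked samples plus an MDL-style
    penalty on the number of free covariance parameters."""
    best_p, best_score = None, np.inf
    for p in range(1, max_period + 1):
        m = len(x) // p
        blocks = x[: m * p].reshape(m, p)          # m vectors of length p
        S = blocks.T @ blocks / m + 1e-9 * np.eye(p)
        _, logdet = np.linalg.slogdet(S)
        nll = 0.5 * m * (logdet + p * (1 + np.log(2 * np.pi)))
        penalty = 0.25 * p * (p + 1) * np.log(m)   # 0.5 * #params * log m
        if nll + penalty < best_score:
            best_p, best_score = p, nll + penalty
    return best_p

rng = np.random.default_rng(2)
amp = np.tile([2.0, 0.3, 0.3, 0.3], 500)   # period-4 amplitude pattern
x = amp * rng.normal(size=amp.size)        # toy cyclostationary noise
print(estimate_cycle_period(x))
```

Multiples of the true period fit the data equally well asymptotically, which is why the parameter-count penalty is needed to select the smallest valid block size.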
An empirical approach to model estimation and evaluation based on Bayesian Maximum Likelihood is introduced to the post-Keynesian literature. To illustrate the method, it is applied to a neo-Kaleckian type of model of Euro Area business cycle fluctuations including endogenous fiscal and monetary policy as well as endogenous wage formation. To evaluate its empirical performance, the marginal likelihood and impulse responses conditional on the proposed model are contrasted with those conditional on the corresponding Bayesian vector auto-regression models after relaxing the theory-implied cross-coefficient restrictions. The estimated parameter distributions are broadly in line with the empirical literature. Yet, a Bayesian vector auto-regression with loose theory-implied restrictions on the prior outperforms the neo-Kaleckian model considerably, indicating misspecification. Further, a baseline Dynamic Stochastic General Equilibrium model is superior in terms of the marginal likelihood. Comparative impulse-response analysis indicates that a large part of the misspecification stems from the neo-Kaleckian model's failure to satisfactorily capture the fiscal and monetary policy transmission mechanisms.
BASE
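Marginal likelihoods of the kind compared above are often approximated by Laplace's method: log p(y) ≈ log L(θ̂) + log π(θ̂) + (d/2) log 2π − (1/2) log|H|, where H is the negative Hessian of the log posterior at the mode. A minimal one-parameter sketch follows, using a conjugate Gaussian model where the approximation is exact; the model and all names are illustrative, not the estimation setup of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
sigma, tau = 1.0, 2.0                      # known noise and prior scales
y = rng.normal(0.5, sigma, size=20)

def neg_log_post(theta):
    t = theta[0]
    loglik = (-0.5 * np.sum((y - t) ** 2) / sigma**2
              - 0.5 * len(y) * np.log(2 * np.pi * sigma**2))
    logprior = -0.5 * t**2 / tau**2 - 0.5 * np.log(2 * np.pi * tau**2)
    return -(loglik + logprior)

res = minimize(neg_log_post, x0=np.array([0.0]))
# Finite-difference second derivative of the negative log posterior at the mode.
h = 1e-4
t0 = res.x
H = (neg_log_post(t0 + h) - 2 * neg_log_post(t0) + neg_log_post(t0 - h)) / h**2
log_marglik = -res.fun + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(H)
```

Because the posterior here is exactly Gaussian, the Laplace value matches the closed-form marginal likelihood, which makes the sketch easy to verify.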
This is the author's version of a work that was accepted for publication in Signal Processing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Signal Processing, volume 175, October 2020, 107661; DOI 10.1016/j.sigpro.2020.107661. Hard-Output Maximum Likelihood (ML) detection for Generalized Spatial Modulation (GSM) systems involves obtaining the ML solution of a number of different MIMO subproblems, with as many possible antenna configurations as subproblems. Obtaining the ML solution of all of the subproblems has a large computational complexity, especially for large GSM MIMO systems. In this paper, we present two techniques for reducing the computational complexity of GSM ML detection. The first technique is based on computing a box optimization bound for each subproblem. This, together with sequential processing of the subproblems, allows fast discarding of many of these subproblems. The second technique is to use a Sphere Detector that is based on box optimization for the solution of the subproblems. This Sphere Detector reduces the number of partial solutions explored in each subproblem. The experiments show that these techniques are very effective in reducing the computational complexity in large MIMO setups. This work has been partially supported by the Spanish Ministry of Science, Innovation and Universities and by the European Union through grant RTI2018-098085-B-C41 (MCIU/AEI/FEDER), by GVA through PROMETEO/2019/109 and by Cátedra Telefónica-UPV through the SSENCE project. García Mollá, V. M.; Martínez Zaldívar, F. J.; Simarro, M. A.; Gonzalez, A. (2020). Maximum likelihood low-complexity GSM detection for large MIMO systems. Signal Processing, 175:1-11. https://doi.org/10.1016/j.sigpro.2020.107661
BASE
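The subproblem-discarding idea described above can be sketched under simplifying assumptions (real-valued channel, BPSK symbols, exhaustive search in place of a sphere detector): an antenna configuration is pruned whenever its unconstrained least-squares residual, a valid lower bound on its constellation-constrained ML cost, already exceeds the best cost found so far. This is not the authors' algorithm; all names and sizes are illustrative.

```python
import numpy as np
from itertools import combinations, product

def gsm_ml_detect(y, H, n_active, symbols):
    """Exhaustive GSM ML detection: try every set of active transmit
    antennas and every symbol vector, pruning a subproblem when its
    LS-residual lower bound exceeds the best cost found so far."""
    n_tx = H.shape[1]
    best = (np.inf, None, None)
    for s in combinations(range(n_tx), n_active):
        Hs = H[:, list(s)]
        # Unconstrained LS residual lower-bounds the constellation-constrained cost.
        res = np.linalg.lstsq(Hs, y, rcond=None)[1]
        lb = res[0] if res.size else 0.0
        if lb >= best[0]:
            continue                       # whole subproblem discarded
        for x in product(symbols, repeat=n_active):
            cost = np.linalg.norm(y - Hs @ np.array(x)) ** 2
            if cost < best[0]:
                best = (cost, s, np.array(x))
    return best

rng = np.random.default_rng(4)
H = rng.normal(size=(6, 5))
symbols = np.array([-1.0, 1.0])            # BPSK for simplicity
x_true = np.array([1.0, -1.0])             # sent on antennas 1 and 3
y = H[:, [1, 3]] @ x_true + 0.05 * rng.normal(size=6)
cost, s_hat, x_hat = gsm_ml_detect(y, H, 2, symbols)
```

With low noise the detector recovers both the active-antenna set and the transmitted symbols; the pruning only skips work and never changes the ML answer, since the bound is a true lower bound.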
We introduce a maximum likelihood ratio test of whether a specific proportion is the greatest in a multinomial setting with a single measurement. The test is based on partitioning the parameter space and utilising logratio transformations.
BASE
In August 2015 a Swedish newspaper claimed that the Sweden Democrats were the largest political party in Sweden based on the results of a single poll. We ask whether this is a correct conclusion, given that the three largest parties in the poll were of roughly the same size. We analyse the parameter space and identify the subspace where the Sweden Democrats are the largest party. Using this, we construct a maximum likelihood ratio test and derive its distribution under the null hypothesis. Finally, applying our test to the data and obtaining a p-value between 0.09 and 0.14, we are able to refute the newspaper's claim: based on the available data, one cannot conclude that the Sweden Democrats are the largest party in Sweden.
BASE
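A hedged sketch of a likelihood ratio test for "is category k the strictly largest proportion?": under the null, the restricted MLE ties p_k with its strongest rival by pooling their counts, and the p-value is approximated by parametric bootstrap at that boundary point. The papers above derive the null distribution analytically; the pooling step and the bootstrap here are simplifying assumptions, and the counts are illustrative, not the Swedish poll data.

```python
import numpy as np

def lrt_largest(counts, k, n_boot=2000, seed=0):
    """LR test of H0: category k is not strictly the largest proportion."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()

    def stat(c):
        p_hat = c / n
        j = int(np.argmax(np.delete(c, k)))   # strongest rival's index
        j = j if j < k else j + 1             # undo the deletion offset
        if c[k] <= c[j]:
            return 0.0                        # already inside the null set
        p0 = c / n
        p0[k] = p0[j] = (c[k] + c[j]) / (2 * n)   # tied (pooled) null MLE
        with np.errstate(divide="ignore", invalid="ignore"):
            ll = np.where(c > 0, c * np.log(p_hat / p0), 0.0).sum()
        return 2.0 * ll                       # -2 log likelihood ratio

    t_obs = stat(counts)
    if t_obs == 0.0:
        return 0.0, 1.0
    # Parametric bootstrap at the tied null MLE (the boundary point).
    p0 = counts / n
    jr = int(np.argmax(np.delete(counts, k)))
    jr = jr if jr < k else jr + 1
    p0[k] = p0[jr] = (counts[k] + counts[jr]) / (2 * n)
    rng = np.random.default_rng(seed)
    boot = np.array([stat(rng.multinomial(int(n), p0).astype(float))
                     for _ in range(n_boot)])
    return t_obs, float(np.mean(boot >= t_obs))

t, p = lrt_largest([500, 200, 200, 100], k=0, n_boot=500)
```

A large statistic with a tiny p-value supports "category k is largest"; when the observed count is not even the sample maximum, the statistic is zero and the claim has no support at all.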
WOS: 000447220700021; PubMed: 30182312. We investigate the role of military expenditure on emissions in the USA during the period 1960-2015. To achieve the objectives of this study, two measures of military expenditure are utilised, while several time-series models are constructed with gross domestic product (GDP) per capita, population, energy consumption per capita, non-renewable energy consumption per capita, renewable energy consumption per capita, urbanisation, trade openness and financial development serving as additional determinants of air pollution. We also use an ecological indicator as an alternative measure of pollution. Moreover, different time-series methods are utilised, including a likelihood-based approach with two structural breaks. We find that all the variables are cointegrated and that military expenditure has a mixed impact on CO2 emissions. Real GDP per capita, energy consumption per capita, non-renewable energy consumption per capita, population and urbanisation increase CO2 emissions per capita in the long run, while renewable energy consumption, financial development and trade openness reduce them. There is also evidence for a mixed role of military expenditure when the ecological footprint is utilised as the environmental degradation index. Based on these results, a few policy recommendations are offered for the examined country.
BASE