ABSTRACT: In this paper, local empirical likelihood-based inference for nonparametric varying coefficient panel data models with fixed effects is investigated. First, we show that the naive empirical likelihood ratio is asymptotically standard chi-squared when undersmoothing is employed. The ratio is self-scale invariant, so a plug-in estimate of the limiting variance is not needed. Second, mean-corrected and residual-adjusted empirical likelihood ratios are proposed. The main appeal of these techniques is that, without undersmoothing, both also have standard chi-squared limit distributions. As a by-product, we also propose two empirical maximum likelihood estimators of the varying coefficient models and their derivatives, and we obtain the asymptotic distribution of these estimators. Furthermore, a nonparametric version of Wilks's theorem is derived. To show the feasibility of the technique and to analyse its small-sample properties, we carry out a Monte Carlo simulation exercise using empirical likelihood-based inference, and we illustrate the proposed technique in an empirical analysis of the production efficiency of European Union companies.
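The self-normalizing chi-squared calibration described above can be illustrated in the simplest empirical likelihood setting, inference for a scalar mean. The sketch below is a hypothetical minimal stand-in, not the paper's panel-data procedure; the function name and data are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.

    The optimal weights are w_i = 1 / (n * (1 + lam * (x_i - mu))),
    where lam solves sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0.
    """
    z = x - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf                     # mu outside the convex hull of the data
    lo = -1.0 / z.max() + 1e-10           # interval keeping all weights positive
    hi = -1.0 / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)
print(el_log_ratio(x, x.mean()))          # ~0 at the sample mean
print(el_log_ratio(x, 0.0))               # compare directly to the chi2(1) cutoff 3.84
```

Note the self-scale invariance: the statistic is compared directly to a chi-squared critical value with no plug-in variance estimate anywhere in the computation.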
Quantitative comparative social scientists have long worried about the performance of multilevel models when the number of upper-level units is small. Adding to these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that standard maximum-likelihood (ML) methods yield biased point estimates and severely anti-conservative inference with few upper-level units. In this article, the authors seek to rectify this negative assessment. First, they show that ML estimators of coefficients are unbiased in linear multilevel models. The apparent bias in coefficient estimates found by Stegmueller can be attributed to Monte Carlo error and a flaw in the design of his simulation study. Second, they demonstrate how inferential problems can be overcome by using restricted ML estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible within the framework that most practitioners are familiar with, even if there are only a few upper-level units. Accepted for publication: February 2019.
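The inferential point, that normal critical values undercover when the number of upper-level units J is small while t critical values with small-sample degrees of freedom restore coverage, can be sketched in miniature. This is a hypothetical group-level regression standing in for the upper level, not the article's full multilevel setup; the values of J, beta and the df choice J - 2 are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
J, reps, beta = 10, 2000, 0.5            # few upper-level units
x = np.linspace(-1.0, 1.0, J)            # upper-level covariate
z_crit = stats.norm.ppf(0.975)           # normal critical value
t_crit = stats.t.ppf(0.975, df=J - 2)    # small-sample t critical value
cover_z = cover_t = 0
for _ in range(reps):
    y = beta * x + rng.normal(0.0, 1.0, J)          # group-level outcomes
    X = np.column_stack([np.ones(J), x])
    b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    se = np.sqrt(res[0] / (J - 2) * np.linalg.inv(X.T @ X)[1, 1])
    cover_z += abs(b[1] - beta) <= z_crit * se      # normal-based interval covers?
    cover_t += abs(b[1] - beta) <= t_crit * se      # t-based interval covers?
print(cover_z / reps, cover_t / reps)
```

With J = 10 the normal-based intervals cover noticeably below the nominal 95%, while the t-based intervals are on target; the article's point is that the analogous df correction fixes inference in the multilevel case.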
(1) Background: The vaccine supply is likely to be limited in 2021 due to constraints in manufacturing. To maximize the benefit of the rollout phase, an optimal vaccine allocation strategy based on each country's epidemic status is necessary. (2) Methods: We first developed a heterogeneous population model with a transmission matrix fitted by maximum likelihood estimation to the epidemiological records of individual COVID-19 cases in the Republic of Korea. Using this model, we investigated vaccine priorities for minimizing mortality or incidence. (3) Results: The simulation results showed that the optimal vaccine allocation strategy to minimize mortality (or incidence) was to prioritize the elderly and healthcare workers (or adults) as long as the reproduction number remained below [Formula: see text] (or above [Formula: see text]). (4) Conclusion: Our simulation results support the current Korean government vaccination priority strategy, which prioritizes healthcare workers and senior groups to minimize mortality, under the condition that the reproduction number remains below [Formula: see text]. This study revealed that, in order to maintain the current vaccine priority policy, it is important to ensure that the reproduction number does not exceed this threshold by concurrently implementing nonpharmaceutical interventions.
We derive an estimator of the cycle period of a univariate cyclostationary process based on an information-theoretic criterion. Transforming the univariate cyclostationary process into a vector-valued wide-sense stationary process reveals the structure of the covariance matrix: it is block-Toeplitz, with a block size that depends on the unknown cycle period. We therefore sweep the block size and obtain the ML estimate of the covariance matrix required by the information-theoretic criterion. Since there are no closed-form ML estimates of block-Toeplitz matrices, we asymptotically approximate them as block-circulant. Finally, some numerical examples show the good performance of the proposed estimator. ; The work of P. Schreier was supported by the Alfried Krupp von Bohlen und Halbach Foundation, under its program "Return of German scientists from abroad". The work of I. Santamaría and J. Vía was supported by the Spanish Government, Ministerio de Ciencia e Innovación (MICINN), under project RACHEL (TEC2013-47141-C4-3-R). The work of L. Scharf was supported by the Air Force Office of Scientific Research under contract FA9550-10-1-0241.
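A toy version of the block-size sweep can make the idea concrete. Here white amplitude-modulated noise stands in for a general cyclostationary process, and an MDL-style penalty stands in for the paper's information-theoretic criterion; both are illustrative simplifications, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
P0 = 5                                     # true cycle period
amp = np.array([0.5, 1.0, 2.0, 1.0, 3.0])  # periodic amplitude pattern
N = 1000
x = amp[np.arange(N) % P0] * rng.normal(size=N)

def mdl_period(x, P_max=10):
    """Sweep candidate block sizes; score each folding by a Gaussian
    log-likelihood term plus an MDL-style penalty on the number of
    free covariance parameters, and return the best block size."""
    N = len(x)
    scores = {}
    for P in range(1, P_max + 1):
        M = N // P                          # number of complete blocks
        B = x[:M * P].reshape(M, P)         # fold into a vector-valued series
        S = B.T @ B / M                     # sample covariance (zero-mean data)
        _, logdet = np.linalg.slogdet(S)
        k = P * (P + 1) / 2                 # parameters in a P x P covariance
        scores[P] = M * logdet + k * np.log(M)
    return min(scores, key=scores.get)

print(mdl_period(x))                        # recovers P0 = 5
```

The penalty term is what rules out multiples of the true period: folding at 2*P0 fits the data equally well but spends more covariance parameters, so the criterion selects the shortest valid block size.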
Dynamic panel data models are widely used by econometricians to study over time the economics of, for example, people, firms, regions, or countries, by pooling information over the cross-section. Though much of the panel research concerns inference in stationary models, macroeconomic data such as GDP, prices, and interest rates are typically trending over time and require a nonstationary analysis in one way or another. In time series analysis it is well established how autoregressive unit roots give rise to stochastic trends, implying that random shocks to a dynamic process are persistent rather than transitory. Because the implications of, say, government policy actions are fundamentally different if shocks to the economy are lasting than if they are temporary, there is now a vast number of univariate time series unit root tests available. Similarly, panel unit root tests have been designed to test for the presence of stochastic trends within a panel data set and to what degree they are shared by the panel individuals. Today, growing data certainly offer new possibilities for panel data analysis, but also pose new problems concerning double-indexed limit theory, unobserved heterogeneity, and cross-sectional dependencies. For example, economic shocks, such as technological innovations, are often global and make national aggregates cross-country dependent and related in international business cycles. To allow for strong cross-sectional dependence, panel unit root tests often assume that the unobserved panel errors follow a dynamic factor model. The errors will then contain one part which is shared by the panel individuals, a common component, and one part which is individual-specific, an idiosyncratic component. This is appealing from the perspective of economic theory, because unobserved heterogeneity may be driven by global common shocks, which are well captured by dynamic factor models.
Yet, only a handful of tests have been derived to test for unit roots in the common and in the idiosyncratic components separately. More importantly, likelihood-based methods, which are commonly used in classical factor analysis, have been ruled out for large dynamic factor models due to the considerable number of parameters. This thesis consists of four papers where we consider the exact factor model, in which the idiosyncratic components are mutually independent, and so any cross-sectional dependence is through the common factors only. Within this framework we derive some likelihood-based tests for common and idiosyncratic unit roots. In doing so we address an important issue for dynamic factor models, because likelihood-based tests, such as the Wald test, the likelihood ratio test, and the Lagrange multiplier test, are well-known to be asymptotically most powerful against local alternatives. Our approach is specific-to-general, meaning that we start with restrictions on the parameter space that allow us to use explicit maximum likelihood estimators. We then proceed with relaxing some of the assumptions, and consider a more general framework requiring numerical maximum likelihood estimation. By simulation we compare size and power of our tests with some established panel unit root tests. The simulations suggest that the likelihood-based tests are locally powerful and in some cases more robust in terms of size. ; Solving Macroeconomic Problems Using Non-Stationary Panel Data
We introduce a maximum likelihood ratio test to test if a specific proportion is the greatest in a multinomial situation with a single measurement. The test is based on partitioning the parameter space and utilising logratio transformations.
In August 2015, a Swedish newspaper claimed that the Sweden Democrats were the largest political party in Sweden based on the results of a single poll. We ask ourselves whether this is a correct conclusion, considering that the three largest parties in the poll were of roughly the same size. We analyse the parameter space and identify the subspace where the Sweden Democrats are the largest party. Using this, we construct a maximum likelihood ratio test and derive its distribution under the null hypothesis. Finally, we apply our test to the data and, obtaining a p-value between 0.09 and 0.14, are able to refute the claim in the newspaper. Based on the available data, one cannot draw the conclusion that the Sweden Democrats are the largest party in Sweden.
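A numerical sketch of such a test statistic, for H0 "category j is not strictly the greatest proportion", is given below. The papers derive the null distribution analytically by partitioning the parameter space; here a generic constrained optimizer finds the restricted maximum over each piece of the null region, and the counts and function name are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def lr_stat_greatest(counts, j):
    """-2 log likelihood ratio statistic for H0: category j is NOT
    strictly the greatest proportion (i.e., some other p_k >= p_j)."""
    counts = np.asarray(counts, dtype=float)
    K = len(counts)
    p_hat = counts / counts.sum()                  # unrestricted MLE
    loglik = lambda p: np.sum(counts * np.log(np.clip(p, 1e-12, 1.0)))
    best = -np.inf
    for k in range(K):                             # H0 is a union of K-1 pieces
        if k == j:
            continue
        res = minimize(
            lambda p: -loglik(p), p_hat, method="SLSQP",
            bounds=[(1e-9, 1.0)] * K,
            constraints=[
                {"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "ineq", "fun": lambda p, k=k: p[k] - p[j]},
            ])
        best = max(best, -res.fun)
    return 2.0 * (loglik(p_hat) - best)

# Category 0 is the sample maximum, so the data pull against H0:
print(lr_stat_greatest([300, 250, 250, 200], 0))
```

When category j is not even the sample maximum, the restricted maximum coincides with the unrestricted one and the statistic is zero; otherwise the restricted MLE pools category j with its nearest competitor, which is exactly the pattern the analytic derivation exploits.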
The dose to human and nonhuman individuals inflicted by anthropogenic radiation is an important issue in international and domestic policy. The current paradigm for nonhuman populations asserts that if the dose to the maximally exposed individuals in a population is below a certain criterion (e.g., <10 mGy d⁻¹), then the population is adequately protected. Currently, there is no consensus in the regulatory community as to the best statistical approach. Statistics currently considered include the maximum likelihood estimator for the 95th percentile of the sample mean, and the sample maximum. Recently, the investigators have proposed the use of the maximum likelihood estimate of a very high quantile as an estimate of the dose to the maximally exposed individual. In this study, we compare all of the above-mentioned statistics to an estimate based on extreme value theory. To determine and compare the bias and variance of these statistics, we use Monte Carlo simulation techniques in a procedure similar to a parametric bootstrap. Our results show that the statistic based on extreme value theory has the least bias of those considered here, but requires reliable estimates of the population size. We recommend establishing the criterion based on what would be considered acceptable if only a small percentage of the population exceeded the limit, and hence recommend using the maximum likelihood estimator of a high quantile when reliable estimates of the population size are not available.
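The parametric-bootstrap comparison of bias and variance can be imitated in miniature with hypothetical lognormal doses. This sketch contrasts only the sample maximum with a plug-in ML estimate of a high quantile; the extreme-value statistic the study favours is omitted, and the distribution, population size and seed are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, N, reps = 0.0, 1.0, 500, 2000
z = stats.norm.ppf(1.0 - 1.0 / N)          # high quantile tied to population size
target = np.exp(mu + sigma * z)            # proxy for the maximally exposed dose

max_est, mle_est = [], []
for _ in range(reps):
    d = rng.lognormal(mu, sigma, N)        # one simulated dose population
    max_est.append(d.max())                # sample maximum
    logd = np.log(d)
    # Plug-in ML estimate of the (1 - 1/N) quantile (std with ddof=0 is the MLE):
    mle_est.append(np.exp(logd.mean() + logd.std() * z))
max_est, mle_est = np.array(max_est), np.array(mle_est)
print("bias:", max_est.mean() - target, mle_est.mean() - target)
print("sd:  ", max_est.std(), mle_est.std())
```

In this toy setting the high-quantile ML estimator is both less biased and far less variable than the sample maximum, which mirrors the motivation for preferring quantile-based statistics over the raw maximum.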
The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe, as an alternative in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
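For the special case of a simple nondecreasing-order restriction on the cell probabilities (with no odds-ratio constraints), the restricted multinomial MLE has a classical closed-form solution against which a GP solver's output can be checked: isotonic regression of the observed proportions via the pool-adjacent-violators algorithm. A minimal sketch, with a hypothetical function name:

```python
import numpy as np

def isotonic_multinomial_mle(counts):
    """MLE of cell probabilities under p_1 <= p_2 <= ... <= p_K,
    via pool-adjacent-violators on the raw proportions.

    Each pooled block takes the average of the observed proportions it
    contains, which is the multinomial MLE restricted to that ordering."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    blocks = []                       # list of [block mean, block width]
    for c in counts / n:
        blocks.append([c, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([[m] * w for m, w in blocks])

print(isotonic_multinomial_mle([30, 10, 20, 40]))  # → [0.2 0.2 0.2 0.4]
```

The GP formulation in the paper is what allows the harder variants (odds-ratio restrictions, mixed constraints) that PAVA cannot handle.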
The notion of cointegration has led to a renewed interest in the identification and estimation of structural relations among economic time series. This paper reviews the different approaches that have been put forward in the literature for identifying cointegrating relationships and imposing (possibly over‐identifying) restrictions on them. Next, various algorithms to obtain (approximate) maximum likelihood estimates and likelihood ratio statistics are reviewed, with an emphasis on so‐called switching algorithms. The implementation of these algorithms is discussed and illustrated using an empirical example.
This paper examines the impact of externalities on employment growth in sub-regions of Great Britain by estimating OLS and maximum likelihood spatial models at the 2-digit level for 23 sectors. Issues arising from relatedness, sector differences, competition, cross-boundary spillovers and spatial autocorrelation are explicitly addressed. Results indicate that specialisation has a generally negative impact on growth whilst the impact of diversity is heterogeneous across sectors and strong local competition has a typically positive impact. The results question the merits of policies primarily aimed at promoting regional specialisation and suggest that diversity, local competition and sector heterogeneity are important policy issues.
WOS: 000447220700021 ; PubMed: 30182312 ; We investigate the role of military expenditure on emissions in the USA during the period 1960-2015. To achieve the objectives of this study, two measures of military expenditure are utilised, while several time-series models are constructed with gross domestic product (GDP) per capita, population, energy consumption per capita, non-renewable energy consumption per capita, renewable energy consumption per capita, urbanisation, trade openness and financial development serving as additional determinants of air pollution. We also use an ecological indicator as an alternative measure of pollution. Moreover, different time-series methods are utilised, including a likelihood-based approach with two structural breaks. The results show that all the variables are cointegrated. Military expenditure is found to have a mixed impact on CO2 emissions. Real GDP per capita, energy consumption per capita, non-renewable energy consumption per capita, population and urbanisation increase CO2 emissions per capita in the long run, while renewable energy consumption, financial development and trade openness reduce them. There is also evidence for a mixed role of military expenditure when the ecological footprint is utilised as the environmental degradation index. Based on these results, a few policy recommendations are offered for the examined country.
In: Twin Research and Human Genetics: the official journal of the International Society for Twin Studies (ISTS) and the Human Genetics Society of Australasia, Vol. 19, No. 6, pp. 638-646
Behavioral traits generally show moderate to strong genetic influence, with heritability estimates of around 50%. Some recent research has suggested that trust may be an exception because it is more strongly influenced by social interactions. In a sample of over 7,000 adolescent twins from the United Kingdom's Twins Early Development Study, we found broad-sense heritability estimates of 57% for generalized trust and 51% for trust in friends. Genomic-relatedness-matrix restricted maximum likelihood (GREML) estimates in the same sample indicate that 21% of the narrow-sense genetic variance can be explained by common single nucleotide polymorphisms for generalized trust, and 43% for trust in friends. As expected, this implies a large amount of unexplained heritability, although power is low for estimating DNA-based heritability. The missing heritability may be accounted for by interactions between DNA and the social environment during development, or via gene–environment correlations with rare variants. How these genes and environments correlate seems especially important for the development of trust.