The problem of detecting a major change point in a stochastic process is often of interest in applications, in particular when the effects of modifications of some external variables on the process itself must be identified. We propose a modification of the classical Pearson chi-square test to detect the presence of such a major change point in the transition probabilities of an inhomogeneous discrete-time Markov chain taking values in a finite space. The test can also be applied to large, identically distributed samples of the Markov chain under study, which need not be independent. The test is based on the maximum likelihood estimate of the size of the 'right' experimental unit, i.e. the number of units that must be aggregated to filter out the small-scale variability of the transition probabilities. We apply our test both to simulated data and to a real dataset, to study the impact on farmland uses of the new Common Agricultural Policy, which entered into force in the EU in 2015.
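A minimal sketch of the classical Pearson chi-square ingredient that the proposed test builds on: count transitions on either side of a candidate change point and compare each row of the two transition-count matrices with a chi-square homogeneity statistic. The paper's modification (including the estimated aggregation size) is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def transition_counts(seq, n_states):
    # Count transitions i -> j in a state sequence.
    C = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C

def chi2_change_stat(seq, tau, n_states):
    # Pearson chi-square homogeneity statistic comparing the transition
    # counts before and after a candidate change point tau (row by row).
    C1 = transition_counts(seq[:tau + 1], n_states)
    C2 = transition_counts(seq[tau:], n_states)
    stat = 0.0
    for i in range(n_states):
        table = np.array([C1[i], C2[i]])          # 2 x n_states table
        n = table.sum()
        if n == 0:
            continue
        exp = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
        mask = exp > 0
        stat += ((table[mask] - exp[mask]) ** 2 / exp[mask]).sum()
    return stat
```

A large statistic at some τ signals a change in the transition probabilities; the paper's contribution is in calibrating this idea for dependent, aggregated samples.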
ABSTRACT This research examines the relationship between the inaction of the FUNDEB Monitoring and Social Control Councils and indicators of active waste (corruption) and passive waste (mismanagement). The main data sources were CGU audit reports and the websites of FNDE, IBGE and UNDP. The sample comprises 173 municipalities supervised by CGU auditors, selected in the 38th, 39th and 40th public draws. The methodological strategies were content analysis and count-data regression based on the negative binomial model estimated by maximum likelihood. The results indicated a positive and significant relationship between the inaction of the FUNDEB Monitoring and Social Control Council and the waste of public resources allocated to basic education through poor management. This result has a general implication for evaluating the effectiveness of the FUNDEB Control Council's audit function, in accordance with its attributions established by government policies and norms.
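As an illustration of the estimation strategy (not the study's exact specification or data), the NB2 log-likelihood that such a maximum-likelihood count-data regression maximizes can be written down directly; `nb2_loglik` is a hypothetical helper name.

```python
import numpy as np
from math import lgamma, log

def nb2_loglik(beta, alpha, X, y):
    # NB2 log-likelihood with log link; alpha > 0 is the overdispersion
    # parameter (alpha -> 0 recovers the Poisson model).
    mu = np.exp(X @ beta)
    r = 1.0 / alpha
    ll = 0.0
    for yi, mi in zip(y, mu):
        ll += (lgamma(yi + r) - lgamma(r) - lgamma(yi + 1)
               + r * log(r / (r + mi)) + yi * log(mi / (r + mi)))
    return ll
```

Maximizing this function over `beta` (and `alpha`) with a numerical optimizer yields the ML estimates the abstract refers to.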
This paper studies the effects of the European Union (EU) regional policy transfers on internal migration across regions in Poland for the period 2004-2014. Based on a gravity model of migration, it tests empirically, using the Poisson Pseudo-Maximum Likelihood (PPML) estimator, whether EU transfers affect the level and the pattern of bilateral migration flows. For the first eleven years of EU membership, the study finds no evidence of EU funding discouraging residents' mobility. On the contrary, residents of regions that attracted higher EU transfers are relatively more likely to leave. This effect is especially significant in poorer regions. In addition, EU transfers help regions attract more migrants. Both the "push" and the "pull" effect of the EU transfers on migration intensified over time. Hence, EU regional funding did not hamper, but rather fostered internal migration in Poland, leading to a higher regional concentration of population and prosperity. This, however, works against the objective of the EU regional policy, which is to promote economic and social convergence across regions.
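PPML fits a Poisson regression by maximizing the pseudo-likelihood, commonly computed via iteratively reweighted least squares. A minimal numpy sketch under that standard algorithm (not the study's actual gravity specification, which would include origin/destination covariates and fixed effects):

```python
import numpy as np

def ppml(X, y, n_iter=50):
    # Poisson pseudo-maximum likelihood via iteratively reweighted least
    # squares (Fisher scoring with a log link).
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        w = mu                            # Poisson working weights
        z = X @ beta + (y - mu) / mu      # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta
```

PPML is popular for gravity models because it handles zero flows and is consistent under heteroskedasticity, unlike log-linear OLS.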
This paper focuses on a linear model with noisy inputs in which the performance of the conventional Total Least Squares (TLS) approach is (perhaps surprisingly) far from satisfactory. Under the typical Gaussian assumption, we obtain the maximum likelihood (ML) estimator of the system response. This estimator promotes a reasonable balance between the empirical and theoretical variances of the residual errors, which suggests the name Balanced Least Squares (BLS). The solution of the associated optimization problem is based on its reformulation as a rank-constrained semidefinite program (SDP), for which we show that the relaxation is tight with probability one. Both TLS and BLS can be seen as regularized LS estimators, but the (possibly negative) regularization in BLS is softer than its TLS counterpart, which avoids the inconsistency of TLS in our particular model. ; This work has been supported by the Spanish Government, Ministerio de Ciencia e Innovación, under project RACHEL (TEC2013-47141-C4-3-R).
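For contrast with the paper's BLS estimator (whose SDP reformulation is not reproduced here), the classical TLS baseline it is compared against has a well-known closed form via the SVD of the augmented data matrix:

```python
import numpy as np

def tls(X, y):
    # Classical total least squares for y ≈ X h with noise in both X and y:
    # the solution comes from the right singular vector associated with the
    # smallest singular value of the augmented matrix [X y].
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                       # last row: smallest singular value
    return -v[:-1] / v[-1]
```

In the noiseless case this recovers the exact system response; the paper's point is that in its particular noisy-input model the implicit (negative) regularization of TLS is too aggressive.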
Technical report TR08-02. We consider the problem of vehicle classification using acoustic signals captured within a sensor network. The sensors perform collaborative decision and/or data fusion in a distributed and energy-efficient manner. We present a distributed cluster-based algorithm in which sensors form clusters on demand for the sake of running the classification task. We aim at minimizing the energy costs incurred by the transmission of feature vectors among collaborating sensors within a cluster. To this end, we present schemes to generate effective feature vectors of low dimension. An experimental study has been conducted using real acoustic signals of military vehicles recorded during DARPA's SensIT/IXO project. The features generated through our proposed schemes are evaluated using k-Nearest Neighbor (k-NN) and Maximum Likelihood (ML) classifiers. Performance results indicate that the proposed schemes are effective in terms of classification accuracy, can even outperform previously proposed approaches, and are also efficient in terms of communication overhead.
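A k-NN classifier of the kind used to evaluate the feature vectors can be sketched in a few lines (illustrative only; the report's actual acoustic features and distance choices are not reproduced):

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    # Classify x by majority vote among its k nearest training feature
    # vectors under Euclidean distance.
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

Because k-NN requires shipping feature vectors to the node doing the vote, keeping the feature dimension low directly reduces the communication energy the report targets.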
We model panel data of crime careers of juveniles from a Dutch Judicial Juvenile Institution. The data are decomposed into a systematic and an individual‐specific component, of which the systematic component reflects the general time‐varying conditions including the criminological climate. Within a model‐based analysis, we treat (1) shared effects of each group with the same systematic conditions, (2) strongly non‐Gaussian features of the individual time series, (3) unobserved common systematic conditions, (4) changing recidivism probabilities in continuous time and (5) missing observations. We adopt a non‐Gaussian multivariate state‐space model that deals with all these issues simultaneously. The parameters of the model are estimated by Monte Carlo maximum likelihood methods. This paper illustrates the methods empirically. We compare continuous time trends and standard discrete‐time stochastic trend specifications. We find interesting common time variation in the recidivism behaviour of the juveniles during a period of 13 years, while taking account of significant heterogeneity determined by personality characteristics and initial crime records.
The aim of this research is to explore the assignment of fiscal responsibility among levels of government and to evaluate the relative importance of socio-economic development and political-administrative conditions as determinants that explain the variations in the fiscal decentralization policy of government in Korea. The empirical work was based on theoretical work in the area. Time-series data are used to include political and administrative variables which had not been included in previous empirical studies of this area. Regression equations were estimated by Ordinary Least Squares, Cochrane-Orcutt, and Maximum Likelihood techniques, whichever allowed for autocorrelation in the disturbances. The empirical results of the determinants study support much of the theoretical work in the area of fiscal decentralization. Socio-economic development, political democracy and instability, and administrative factors primarily explain the variations in fiscal decentralization. Findings in this study suggest that if the goal of intergovernmental fiscal policy is to encourage fiscal decentralization or composite decentralization, then it should emphasize the political and administrative factors as well as the socio-economic development factor.
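The Cochrane-Orcutt procedure mentioned alongside OLS and ML corrects for AR(1) autocorrelation by iterated quasi-differencing; a minimal numpy version (an illustrative sketch, not the study's actual specification):

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cochrane_orcutt(X, y, n_iter=20):
    # Iteratively estimate the AR(1) correlation rho of the residuals and
    # re-run OLS on the quasi-differenced data y_t - rho*y_{t-1}.
    beta = ols(X, y)
    rho = 0.0
    for _ in range(n_iter):
        e = y - X @ beta
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
        Xs = X[1:] - rho * X[:-1]
        ys = y[1:] - rho * y[:-1]
        beta = ols(Xs, ys)
    return beta, rho
```

With serially correlated errors, plain OLS remains unbiased but its standard errors are wrong; quasi-differencing restores approximately uncorrelated disturbances.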
Part I of this research examined the transformation of British politics in the 1885-1910 period: the expansion of suffrage in 1885, the incorporation of the newly enfranchised into political life, and the resulting changes in party alignments. Latent variable partial least-squares (LVPLS) algorithms generated dynamic across-time models of political change. These models were then tested with maximum likelihood (ML) methods. The research concluded that the seeds of realignment emerged in the period as the Labour movement successfully mobilized class voting while the Liberal party became progressively isolated from middle-class Nonconformism. Part II of the research examines the 1918-1950 period for further evidence of partisan realignment following the expansion of the electorate in 1918 and 1928 as Britain approached the "one adult, one value, one vote" ideal. LVPLS and ML models support the previous conclusions by highlighting the stability of party alignments during the period, which are increasingly supported by class-based voting but not to the exclusion of religion for the Conservatives.
Statistical procedures are developed to estimate accident occurrence rates from historical event records, to predict future rates and trends, and to estimate the accuracy of the rate estimates and predictions. Maximum likelihood estimation is applied to several learning models and results are compared to earlier graphical and analytical estimates. The models are based on (1) the cumulative number of operating years, (2) the cumulative number of plants built, and (3) accidents (explicitly), with the accident rate distinctly different before and after an accident. The statistical accuracies of the parameters estimated are obtained in analytical form using the Fisher information matrix. Using data on core damage accidents in electricity producing plants, it is estimated that the probability for a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the equivalent frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year, partly due to the increasing population of plants.
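For the simplest constant-rate special case (not the paper's learning models), the ML rate estimate and its Fisher-information standard error have closed forms; the 0.0004-per-reactor-year figure above is the kind of quantity such an estimate produces. Function and argument names are illustrative.

```python
import numpy as np

def poisson_rate_mle(n_events, exposure_years):
    # Events ~ Poisson(lam * T): the MLE is lam_hat = n / T, and the Fisher
    # information I(lam) = T / lam gives the asymptotic standard error
    # 1 / sqrt(I(lam_hat)) = sqrt(lam_hat / T).
    lam = n_events / exposure_years
    se = np.sqrt(lam / exposure_years)
    return lam, se
```

The paper's models generalize this by letting the rate depend on cumulative operating years, plants built, and past accidents, with accuracies again read off the Fisher information matrix.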
Since the abolition of the quota system in 1994, the influence of membership in an International Coffee Agreement (ICA) on coffee trade has been unclear. This thesis investigates whether members of the 2001 and 2007 International Coffee Agreements show higher coffee trade values than non-members. Instead of regulating prices, the ICA nowadays has only a supportive function, promoting a sustainable coffee economy. The empirical analysis covers 2001 to 2017. The panel data set used, compiled from data from UN Comtrade, BACI, CEPII, the World Bank, the International Coffee Organisation and Mario Larch, contains 33,332 observations of bilateral coffee trade flows for 189 countries. The estimations, based on the gravity model, use both the Ordinary Least Squares and the Poisson Pseudo-Maximum Likelihood estimator. The results show that exporting members have higher trade values and importing members lower trade flows than non-members. The effect of membership is ambiguous when both trading partners are members. ; Since the abolishment of the quota system in 1994, the remaining impact of a membership in an International Coffee Agreement (ICA) is unclear. This thesis investigates whether members of the ICAs 2001 and 2007 experienced higher coffee trade values than non-member countries. Instead of regulating the prices, and therefore giving its members a clear advantage, nowadays the ICA only has a supportive function, namely promoting a sustainable coffee economy. As part of this endeavour, the International Coffee Organisation, the institution behind the ICA, funds projects to increase food security, provides developing countries with information about financial aid and in general serves as an information and convention centre for coffee matters.
The empirical analysis was conducted from 2001 until 2017 using a panel data set with bilateral coffee trade flows for 189 countries. The final data set contains 33,332 observations and was merged from databases of UN Comtrade, BACI, CEPII, the World Bank, the International Coffee Organisation and Mario Larch. For the estimations, which are based on the gravity model of trade, both the Ordinary Least Squares (OLS) and the Poisson Pseudo-Maximum Likelihood estimators are used. The results showed that exporting members experienced higher coffee trade values than non-members, although the coefficients are only partly significant, while the trade value of importing members was significantly lower compared to non-member importers. However, the impact is unclear when both trading partners are members of the ICA. ; by Anna Claudia Scartezzini ; University of Innsbruck, Diplomarbeit, 2019
Health policy measures are changing the hospital landscape, which in turn influences pre-clinical emergency medical care. The main focus of this study was to gather information about emergency medical care after traffic accidents on the basis of data from the Bavarian emergency medical services. In Bavaria in 2006, an emergency physician had to be called for 14,261 traffic accidents. Patients were predominantly attended by ground-based rescue services; air rescue services were used in only 19.1 % of the cases. 47.6 % of patients involved in a traffic accident were transported to a primary care hospital. A prehospital interval of more than 60 minutes was calculated for 20 % of emergency responses. 96.2 % of the patients transported by air rescue services were taken to hospitals of tertiary or maximum care. However, the readiness of air rescue services is restricted to daylight hours. A further limitation concerns routine office hours in hospitals: only 36.7 % of accidents occurred within this time frame. From 2002 until 2006, hospitalizations in maximum-care clinics increased while the prehospital period simultaneously lengthened. To continue ensuring sufficient medical care for seriously injured persons, a full-time and area-wide provision of efficient facilities is necessary. For this purpose it is necessary to establish regional trauma networks as well as emergency medical services at night. Beyond that, cost-efficient compensation of the structural, personnel and logistic expenses has to be assured.
Pre-alignment filters are useful for reducing the computational requirements of genomic sequence mappers. Most of them estimate or compute the edit distance between sequences and their candidate locations in a reference genome using a subset of the dynamic programming table used to compute the Levenshtein distance. Some of their FPGA implementations use classic HDL toolchains, thus limiting their portability. Currently, most FPGA accelerators offered by heterogeneous cloud providers support C/C++ HLS. In this work, we implement and optimize several state-of-the-art pre-alignment filters using C/C++-based HLS to expand their portability to the wide range of systems supporting the OpenCL runtime. Moreover, we perform a complete analysis of the performance and accuracy of the filters and analyze the implications of the results. The maximum throughput obtained by an exact filter is 95.1 MPairs/s including memory transfers using 100 bp sequences, which is the highest ever reported for a comparable system and more than two times faster than previous HDL-based results. The best energy efficiency obtained from the accelerator (not considering the host CPU) is 2.1 MPairs/J, more than one order of magnitude higher than other comparable accelerator-based approaches from the state of the art. ; 10.13039/501100008530-European Union Regional Development Fund (ERDF) within the framework of the ERDF Operational Program of Catalonia 2014-2020 with a grant of 50% of the total cost eligible under the Designing RISC-V based Accelerators for next generation computers project (DRAC) (Grant Number: [001-P-001723]) 10.13039/501100002809-Catalan Government (Grant Number: 2017-SGR-313 and 2017-SGR-1624) 10.13039/501100004837-Spanish Ministry of Science, Innovation and Universities (Grant Number: PID2020-113614RB-C21 and RTI2018-095209-B-C22)
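The dynamic-programming core that such filters restrict can be sketched as a banded Levenshtein computation with an early exit as soon as the distance must exceed the filtering threshold k (a plain-software sketch under those standard ideas; the paper's HLS kernels are not reproduced):

```python
def banded_edit_distance(a, b, k):
    # Levenshtein distance restricted to a band of half-width k around the
    # main diagonal; returns k + 1 as soon as the distance must exceed k,
    # which is the usual pre-alignment filtering criterion.
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return k + 1
    INF = k + 1
    prev = [j if j <= k else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        if i <= k:
            cur[0] = i
        lo, hi = max(1, i - k), min(m, i + k)
        for j in range(lo, hi + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        if min(cur[lo - 1:hi + 1]) > k:
            return k + 1          # early exit: candidate pair filtered out
        prev = cur
    return min(prev[m], INF)
```

Candidate locations whose banded distance exceeds k are discarded before the expensive full alignment, which is where the throughput figures above come from.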
This paper examines trends in mental health among adults in England during the period of economic recovery and austerity following the 2008 'great recession'. We report analysis of data on 17,212 individuals living in England, from the longitudinal Understanding Society Survey (USS). We examined how individuals' self-reported mental health over time (2011–2017) related to their changing socio-geographical status. Self-reported mental health is reported in the USS using version 2 of the SF12 Mental Component Summary. Trends in this score (across 5 observations per subject) were categorised into Mental Health Trajectory Groups (MHTGs) using Group Based Trajectory Modelling. We used maximum-likelihood multinomial logit models to estimate for individuals the relative likelihood of belonging to different Mental Health Trajectory categories as compared with a 'base' category, for whom mental health was good and stable throughout the period. We focus on the likelihood of belonging to a group showing 'declining' mental health. Predictor variables included individuals' attributes and area conditions in their places of residence (including Office for National Statistics indicators of local employment deprivation and data on average income loss within districts due to welfare benefit reforms, published by the Centre for Regional Economic and Social Research at Sheffield Hallam University, UK). Our results emphasise the multiple socio-geographical 'determinants' likely to be operating on individual mental health. Declining mental health was associated both with conditions at the start of the study period and with social and socio-geographical mobility by the end of the study period.
Risks of declining mental health were significantly greater for more deprived individuals and also (controlling for individual attributes) among those living in English neighbourhoods that were already economically disadvantaged at the beginning of the 'great recession' and located in districts where average incomes were most severely impacted by the effects of governmental austerity programmes on welfare benefits.
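The multinomial logit model used for trajectory-group membership maps covariates to category probabilities through a softmax with one category fixed as the base; a minimal sketch (function name and shapes are illustrative, not the study's code):

```python
import numpy as np

def mnl_probs(X, B):
    # Multinomial logit: probability of each trajectory group given
    # covariates X; B holds one coefficient column per non-base category,
    # with the base category's coefficients fixed at zero.
    eta = np.column_stack([np.zeros(X.shape[0]), X @ B])
    eta -= eta.max(axis=1, keepdims=True)   # stabilise the softmax
    e = np.exp(eta)
    return e / e.sum(axis=1, keepdims=True)
```

Maximum-likelihood fitting chooses `B` to maximize the log of these probabilities at the observed group memberships; exponentiated coefficients are then relative-risk ratios against the stable 'base' group.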
Normal-distribution-based maximum likelihood (ML) and multiple imputation (MI) are the two major procedures for missing data analysis. This article compares the two procedures with respect to bias and efficiency of parameter estimates. It also compares formula-based standard errors (SEs) for each procedure against the corresponding empirical SEs. The results indicate that parameter estimates by MI tend to be less efficient than those by ML, and the estimates of variance-covariance parameters by MI are also more biased. In particular, when the population for the observed variables possesses heavy tails, estimates of variance-covariance parameters by MI may contain severe bias even at relatively large sample sizes. Although performing much better, ML parameter estimates may also contain substantial bias at smaller sample sizes. The results also indicate that, when the underlying population is close to normally distributed, SEs based on the sandwich-type covariance matrix and those based on the observed information matrix are very comparable to empirical SEs with either ML or MI. When the underlying distribution has heavier tails, SEs based on the sandwich-type covariance matrix for ML estimates are more reliable than those based on the observed information matrix. Both empirical results and analysis show that neither SEs based on the observed information matrix nor those based on the sandwich-type covariance matrix can provide consistent SEs in MI. Thus, ML is preferable to MI in practice, although parameter estimates by MI might still be consistent.
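As a toy illustration of normal-theory ML with missing data (not the article's simulations), the EM algorithm for a bivariate normal with values missing on one variable iterates between conditional expectations and moment updates; names are illustrative and missingness is assumed MCAR.

```python
import numpy as np

def em_bivariate_normal(data, n_iter=200):
    # ML estimates of the mean and covariance of a bivariate normal when
    # the second variable has missing entries (np.nan), via EM under MCAR.
    X = data
    obs = ~np.isnan(X[:, 1])
    mu = X[obs].mean(axis=0)                 # start from complete cases
    S = np.cov(X[obs].T, bias=True)
    for _ in range(n_iter):
        # E-step: expected x2 and x2^2 given x1 for the missing rows
        b = S[0, 1] / S[0, 0]
        cv = S[1, 1] - b * S[0, 1]           # conditional variance
        x2 = np.where(obs, X[:, 1], mu[1] + b * (X[:, 0] - mu[0]))
        x2sq = x2 ** 2 + np.where(obs, 0.0, cv)
        # M-step: update moments from the expected sufficient statistics
        mu = np.array([X[:, 0].mean(), x2.mean()])
        s12 = (X[:, 0] * x2).mean() - mu[0] * mu[1]
        S = np.array([[X[:, 0].var(), s12],
                      [s12, x2sq.mean() - mu[1] ** 2]])
    return mu, S
```

MI would instead draw several completed data sets from the imputation model and pool the analyses; the article's point is that ML's formula-based SEs (especially sandwich-type ones) behave more reliably than MI's.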