IMproving PRedictions and management of hydrological EXtremes (IMPREX) was a European Union Horizon 2020 project that ran from September 2015 to September 2019. IMPREX aimed to improve society's ability to anticipate and respond to future extreme hydrological events in Europe across a variety of uses in the water-related sectors (flood forecasting, drought risk assessment, agriculture, navigation, hydropower, and water supply utilities). Through engagement with stakeholders and continuous feedback between model outputs and water applications, progress was achieved in better understanding how hydrological predictions can be useful to (and operationally incorporated into) problem solving in the water sector. The work and discussions carried out during the project nurtured further reflections towards a common vision for hydrological prediction. In this article, we summarize the main findings of the IMPREX project within a broader overview of hydrological prediction, providing a vision for improving such predictions. In so doing, we first present a synopsis of hydrological and weather forecasting, with a focus on medium-range to seasonal scales of prediction for increased preparedness. Second, the lessons learned from IMPREX are discussed. The key findings are the gaps highlighted in the global observing system of the hydrological cycle, the degree of accuracy of hydrological models and the post-processing techniques used to correct biases, the origin of seasonal hydrological skill in Europe, and the user requirements of hydrometeorological forecasts that ensure their appropriate use in decision-making models and practices. Last, a vision for how to improve these forecast systems and products in the future is expounded, including advancing numerical weather and hydrological models, improved Earth monitoring, and more frequent interaction between forecasters and users to tailor forecasts to applications.
We conclude that if these improvements can be implemented in the coming years, earth system and hydrological modelling will become more skillful, thus leading to socioeconomic benefits for the citizens of Europe and beyond.
Social media platforms such as Twitter and Facebook are now key players in public discourse. Twitter is a microblogging site on which users post brief opinions and views, and content generated by one user can be seen by everyone, so analysing Twitter sentiment is a valuable task. For this task we use several approaches, including a novel approach, machine learning, and rules such as context awareness, to detect public opinion and predict results. We study user tweets during elections; meaningful tweets are collected over a defined period. Our proposed work assesses the feasibility of the developed classification model by identifying the political orientation of tweets together with other user-based features. The technique used to collect tweets in time plays an important role: the outcome of the applied technique is compared with the results that survey agencies published before the election results.
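The abstract does not detail the classification step, but the simplest baseline for tweet sentiment is lexicon scoring: count positive and negative words and take the sign of the difference. The sketch below is purely illustrative and not the study's method; the lexicon, tokenizer, and labels are hypothetical.

```python
# Minimal lexicon-based sentiment scorer for tweets (illustrative sketch).
# The word lists below are hypothetical, not taken from the study.
POSITIVE = {"good", "great", "win", "support", "hope"}
NEGATIVE = {"bad", "corrupt", "lose", "fail", "scandal"}

def tweet_sentiment(text: str) -> str:
    """Classify a tweet as 'positive', 'negative', or 'neutral'
    by the balance of lexicon hits among its tokens."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(tweet_sentiment("Great rally, we will win"))  # positive
print(tweet_sentiment("Another corrupt scandal"))   # negative
```

In practice such a baseline would be combined with the machine-learning and context-awareness rules the abstract mentions; the lexicon score would become one feature among many.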
The continuing development of prediction markets is important because of their success in foretelling the future in politics, economics, and science. In this article, we identify the expressive elements inherent in prediction markets and explore how legislation such as the Unlawful Internet Gambling Enforcement Act of 2006 might harm such predictive speech. This article is the first to explore First Amendment protections for prediction markets in such depth, and in so doing, we distinguish prediction markets from other regulated areas such as gambling and securities trading. The article's examination of prediction markets also illustrates the limitations of current commercial speech doctrine. We conclude by discussing how the executive, legislative, and judicial branches might resolve the First Amendment challenges of regulating prediction markets, and we propose a new legal test, modeled on existing free speech jurisprudence, which may assist courts in adjudicating any constitutional challenges.
29 pages, 4 figures, 3 tables. Work presented at the "Eighth International Conference Symmetry in Nonlinear Mathematical Physics", held in Ukraine in 2009. All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) which not only realise an old field-theoretic dream but also have remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensionless couplings that guarantee the vanishing of all beta functions in certain N=1 GUTs even to all orders. Furthermore, developments in the soft supersymmetry-breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory too. Based on the above theoretical framework, phenomenologically consistent FUTs have been constructed. Here we present FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Higgs mass prediction of one of the models, which is expected to be tested at the LHC. This work is partially supported by the NTUA's basic research support programme 2008 and 2009, and the European Union's RTN programme under contract MRTN-CT-2006-035505. Supported also by a Mexican PAPIIT grant IN112709, and by CONACYT grants 82291 and 51554-F.
Interest in prediction markets has increased in the last decade, driven in part by the hope that these markets will prove to be valuable tools in forecasting, decision-making and risk management - in both the public and private sectors. This paper outlines five open questions in the literature, and we argue that resolving these questions is crucial to determining whether current optimism about prediction markets will be realized.
The dissertation is entitled "Essays on information and prediction" and consists of four chapters. The central issue is the study of the role that certain information-aggregation mechanisms, such as prediction markets or elections, can play in improving the accuracy of predictive algorithms. The first chapter, entitled "Economic Determinants of Political Alternation: A Panel Data Analysis of OECD Countries", studies the relationship between economic performance and political alternation. The initial hypothesis is that the stochastic process determining political alternation is not independent of the economy, so that the recent evolution of macroeconomic variables would be highly relevant in explaining electoral results. We implicitly adopt the "responsibility hypothesis", by which voters attend to economic information because they consider the government responsible, through its actions, for the good or bad economic situation of the country. In this way, economic variables can predict the probability of political alternation. The chapter presents a critical review of the literature, with special attention to Brender & Drazen (2008), and then proposes the estimation of a re-election probability model using macroeconomic indicators. The contributions of this chapter to the literature are diverse. On the one hand, the results obtained contradict those found in the "economic vote" literature. On the other hand, there are also methodological contributions: the use of a discount rate that lets voters give more weight to the most recent data, an alternative way of measuring political alternation, and the use of structural deficit data to solve the problem of multicollinearity among the independent variables.
Prediction markets, in which contract prices are used to forecast future events, are increasingly applied to various domains ranging from political contests to scientific breakthroughs. However, the dynamics of such markets are not well understood. Here, we study the return dynamics of the oldest, most data-rich prediction markets, the Iowa Electronic Presidential Election "winner-takes-all" markets. As with other financial markets, we find uncorrelated returns, power-law decaying volatility correlations, and, usually, power-law decaying distributions of returns. However, unlike other financial markets, we find conditional diverging volatilities as the contract settlement date approaches. We propose a dynamic binary option model that captures all features of the empirical data and can potentially provide a tool with which one may extract true information events from a price time series.
In light of developments in scholarly standards, research methods, and their implications for the theoretical literature, this study aims to answer the focal question: how far is prediction manifested in international relations? The study also throws light on the ongoing controversy over prediction in the context of international relations. This debate is unveiled through descriptive and analytical methods, which place adequate emphasis on examining the controversial theoretical issues raised by the concern with prediction. Relevant examples are given, such as the predictions suggested by New Realism and Structuralism, and how far such theories capture international policies. The pragmatic implications dwell on the concept of prediction in international relations by identifying prediction's standards and indications, the appropriate framework for building theory, the fields of application, the prioritization of assumptions, and the relations between theoretical concepts needed to arrive at a theory valid for prediction of the future. The study concludes that the perpetual change of the international system's frameworks makes urgent the use of methodological signs of prediction to strike a balance and maintain stability in the international environment. Such a state of balance gives room for hegemonic international terms and world peace, since the phenomenon of prediction has been harnessed by scholarly research and methodological tactics.
The availability of real-time information on sea ice conditions in ice-covered seas has always been important, not only to strategic military operations but also to the economies of those countries that border the Arctic and its marginal seas. Knowledge of the thickness and movement of sea ice, as well as the locations of open water, is required for traversing the Arctic, whether in a drill ship, a cargo vessel, or an ice-strengthened ship such as a Coast Guard icebreaker. This work has been funded through the Office of Naval Research's Navy Ocean Modeling and Prediction Program (program element 602435), the Office of Naval Research's High Latitude Dynamics Program (program element 61153) and the Naval Space and Warfare Systems Command (program element 603207N).
Work presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), held in Seattle (United States), 13-19 June 2020. The problem of predicting human motion given a sequence of past observations is at the core of many applications in robotics and computer vision. Current state-of-the-art methods formulate this problem as a sequence-to-sequence task, in which a history of 3D skeletons feeds a Recurrent Neural Network (RNN) that predicts future movements, typically on the order of 1 to 2 seconds. However, one aspect that has been overlooked so far is the fact that human motion is inherently driven by interactions with objects and/or other humans in the environment. In this paper, we explore this scenario using a novel context-aware motion prediction architecture. We use a semantic-graph model where the nodes parameterize the human and objects in the scene and the edges their mutual interactions. These interactions are iteratively learned through a graph attention layer, fed with the past observations, which now include both object and human body motions. Once this semantic graph is learned, we inject it into a standard RNN to predict future movements of the human(s) and object(s). We consider two variants of our architecture, either freezing the contextual interactions in the future or updating them. A thorough evaluation in the "Whole-Body Human Motion Database" [29] shows that in both cases our context-aware networks clearly outperform baselines in which the context information is not considered. This work has been partially funded by the Spanish government under projects HuMoUR TIN2017-90086-R, ERA-Net Chistera project IPALM PCI2019-103386 and the María de Maeztu Seal of Excellence MDM-2016-0656. We also thank Nvidia for hardware donation.
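The graph attention layer mentioned above can be sketched generically: project node features, score each edge with a shared attention vector, normalize the scores over each node's neighbors with a softmax, and aggregate. The code below is a single-head GAT-style layer in plain NumPy, an illustration of the mechanism rather than the authors' implementation; all names and shapes are assumptions.

```python
import numpy as np

def graph_attention(h, adj, W, a):
    """Single-head graph attention aggregation (GAT-style sketch).

    h:   (N, F) node features (e.g. human joints and objects in the scene)
    adj: (N, N) 0/1 adjacency matrix encoding mutual interactions
    W:   (F, F2) shared linear projection
    a:   (2*F2,) attention vector
    Returns (N, F2) context-aware node embeddings.
    """
    z = h @ W                                  # project node features
    N = z.shape[0]
    e = np.empty((N, N))
    for i in range(N):                         # attention logits
        for j in range(N):                     # e_ij = LeakyReLU(a^T [z_i || z_j])
            x = np.concatenate([z[i], z[j]]) @ a
            e[i, j] = x if x > 0 else 0.2 * x  # LeakyReLU with slope 0.2
    e = np.where(adj > 0, e, -1e9)             # mask non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # softmax over neighbors
    return alpha @ z                           # attention-weighted aggregation
```

In the paper's setting the resulting embeddings would then be fed to the RNN; here they can be inspected directly, e.g. `graph_attention(h, adj, W, a)` with a fully connected `adj` reduces to a learned weighted average of the projected features.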
Descriptive epidemiology has traditionally only been concerned with the definition of a research problem's scope. However, the greater availability and improved quality of epidemiological data over the years have led to the development of new statistical techniques that characterize modern epidemiology. These methods are not only explanatory, but also predictive. In public health, predictions of future morbidity and mortality trends are essential to evaluate strategies for disease prevention and management, and to plan the allocation of resources. During my PhD at the school of "Epidemiology, Environment and Public Health" I worked on the analysis of cancer mortality trends, using data from the World Health Organization (WHO) database, available on electronic support (WHOSIS), and from other databases, including the Pan American Health Organization, Eurostat, the United Nations Population Division, the United States Census Bureau, and the Japanese National Institute of Population databases. Considering several cancer sites and several countries worldwide, I computed age-specific rates for each 5-year age group (from 0-4 to 80+ or 85+ years) and for each calendar year or quinquennium.
I then computed age-standardized mortality rates per 100,000 person-years using the direct method on the basis of the world standard population. I fitted joinpoint models in order to identify the years when significant changes in trends occurred, and I calculated the corresponding annual percent changes. Moreover, I focused on projections. I fitted joinpoint models to the numbers of certified deaths in each 5-year age group in order to identify the most recent trend slope. Then, I applied Generalized Linear Model (GLM) Poisson regressions, considering different link functions, to the data over the time period identified by the joinpoint model. In particular, I considered the identity, logarithmic, power five, and square root links. I also implemented an algorithm that generates a "hybrid" regression; this algorithm automatically selects the best-fitting GLM Poisson model, among the identity, logarithmic, power five, and square root link functions, to apply to each age group according to Akaike Information Criterion (AIC) values. The resulting regression is a combination of the considered models. Thus, I computed the predicted age-specific numbers of deaths and rates, and the corresponding 95% prediction intervals (PIs), using the regression coefficients obtained from the four GLM Poisson regressions and from the hybrid GLM Poisson regression. Lastly, as a further comparison model, I implemented an average model, which simply computes a mean of the estimates produced by the different GLM Poisson models considered. In order to compare the six different prediction methods, I used data from 21 countries worldwide and for the European Union as a whole, and I considered 25 major causes of death. I selected countries with over 5 million inhabitants and with good-quality data (i.e. with at least 90% coverage).
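The selection step of the hybrid algorithm reduces to comparing AIC values, AIC = 2k - 2 log L, across the four fitted links and keeping the minimum for each age group. The sketch below shows only that selection step; the log-likelihoods and parameter counts are hypothetical stand-ins for values that would come from fitting the Poisson GLMs.

```python
def aic(loglik: float, k: int) -> float:
    """Akaike Information Criterion: 2k - 2 * log-likelihood."""
    return 2 * k - 2 * loglik

def select_link(fits):
    """Pick the link function whose fitted GLM has the lowest AIC.

    `fits` maps link name -> (log-likelihood, number of parameters);
    in the hybrid algorithm this choice is made per 5-year age group.
    """
    return min(fits, key=lambda name: aic(*fits[name]))

# Hypothetical fit summaries for one age group:
fits = {"identity": (-120.3, 2), "log": (-121.0, 2),
        "power5": (-119.9, 2), "sqrt": (-125.4, 2)}
print(select_link(fits))  # "power5" has the lowest AIC in this example
```

Running the selection independently for every age group, and stitching the chosen models together, yields the combined "hybrid" regression described above.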
I analysed data for the period between 1980 and 2011: in particular, I used data from 1980 to 2001 as a training set and data from 2002 to 2011 as a validation set. To measure the predictive accuracy of the different models, I computed average absolute relative deviations (AARDs), which indicate the average percent deviation of the estimate from the true value. I calculated AARDs over a 5-year prediction period (i.e. 2002-2006), as well as over a 10-year period (i.e. 2002-2011). The results showed that the hybrid model did not always give the best predictions, and even when it was the best, the corresponding AARD estimates were not very far from those of the other methods. However, for any combination of cancer site and sex, the hybrid model projections were never the worst; the model acts as a compromise between the four considered link functions. The average model also ranked in an intermediate position: it was never the best predictive method, but its AARDs were competitive with the other methods considered. Overall, the method showing the best predictive performance is the Poisson GLM with an identity link function. Furthermore, this method showed extremely low AARDs compared with the other methods, particularly over a 10-year projection period. Finally, we must take into account that predicted trends and the corresponding AARDs derived from 5-year projections are much more accurate than those over a 10-year period; projections beyond five years with these methods lack reliability and are of limited use in public health. During the implementation of the algorithm and the analyses, several questions emerged: are there other relevant models that could be added to the algorithm? How much does the joinpoint regression influence the projections? How can we find an "a priori" rule that helps in choosing which predictive method to apply according to the available covariates? All these questions are set aside for future developments of the project.
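The AARD used above is simply the mean, over the validation years, of |predicted - observed| / observed, expressed as a percentage. A minimal sketch (the death counts below are hypothetical, not data from the study):

```python
def aard(observed, predicted):
    """Average absolute relative deviation, in percent:
    the mean over years of |predicted - observed| / observed * 100."""
    assert len(observed) == len(predicted) and all(o > 0 for o in observed)
    devs = [abs(p - o) / o for o, p in zip(observed, predicted)]
    return 100 * sum(devs) / len(devs)

# Hypothetical observed vs predicted death counts over a 5-year horizon:
obs = [1000, 1020, 1050, 1080, 1100]
pred = [990, 1035, 1070, 1060, 1130]
print(round(aard(obs, pred), 2))  # 1.79, i.e. on average 1.79% off the true value
```

Computing this quantity once over 2002-2006 and once over 2002-2011 gives the 5-year and 10-year accuracy figures compared in the text.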
Prediction of future trends is a complex procedure; the resulting estimates should be taken with caution and considered only as general indications for epidemiology and health planning.
Dr. Chris Gilbert, M.D., Ph.D., President and COO of Discovery Democracy LLC; Dr. Eric Haseltine, Ph.D., Chairman of the Board of the US Technology Leadership Council, former CTO of the US Intelligence Community and former Director of Research at NSA. Lecture sponsor: Louisiana Tech University.
This article presents a comparative study of several methods of phrase break prediction, using the same labeled corpus. Some methods previously proposed in the literature are tested, combining earlier techniques to take advantage of their main benefits. A new method is proposed, based on explicitly modeling the probability density function of the distance between breaks. The results show that data-driven techniques provide very good results. This work has been partially sponsored by the Spanish Government under grant TIC2002-04447-C02.
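The distribution of distances between breaks can be estimated from a labeled corpus by counting the gaps between consecutive break positions and normalizing. The discrete sketch below is an illustration of that idea, not the article's model (which works with a probability density function); the break positions are hypothetical.

```python
from collections import Counter

def break_distance_pmf(break_positions):
    """Estimate the probability mass function of the distance (in words)
    between consecutive phrase breaks, from labeled training data.
    `break_positions` is a sorted list of word indices carrying a break."""
    distances = [b - a for a, b in zip(break_positions, break_positions[1:])]
    counts = Counter(distances)
    total = sum(counts.values())
    return {d: c / total for d, c in sorted(counts.items())}

# Hypothetical break positions from one labeled utterance:
print(break_distance_pmf([0, 4, 8, 11, 15, 19]))  # {3: 0.2, 4: 0.8}
```

A break predictor can then use this distribution as a prior, scoring candidate break placements by how likely the resulting inter-break distances are.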