This study concerns condition-based maintenance applied to a plant for the production of plastic caps, carried out in collaboration with the SACMI S.C. group in Imola, within the Closures & Containers division. After introducing the topic of production-plant maintenance and its evolution over recent decades, attention turns to the company's activities and the context in which the study was focused. A theoretical treatment of Big Data analysis follows, covering how it can be exploited for predictive maintenance and for estimating the RUL of a particular component, with a focus on the most recent techniques applied in this field. The operation of the production line under examination is then explained in detail, together with the management of the database of raw data collected by the sensors installed on the plant. After presenting the data, plots covering a nine-month production period at a customer site were generated for the different functional groups of the line, in order to assess operating conditions, possible anomalies, and trends indicating wear or malfunction of one or more components. Where possible, the results were compared with actual maintenance reports, in order to confirm the robustness of the information generated by the sensors, with the long-term goal of laying the groundwork for a future condition-based maintenance system.
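A minimal illustration of the kind of trend assessment described above (this is a generic sketch with invented data and an invented threshold, not the thesis's actual pipeline): fitting a straight line to a raw sensor series and flagging a persistent drift that may indicate wear.

```python
import numpy as np

def drift_slope(readings, period_hours=1.0):
    """Fit a straight line to a sensor series and return its slope
    in units per hour; a persistent positive slope on, e.g., a motor
    temperature channel may indicate progressive wear."""
    t = np.arange(len(readings)) * period_hours
    slope, _intercept = np.polyfit(t, readings, 1)
    return slope

# Synthetic example: a temperature channel slowly drifting upward.
rng = np.random.default_rng(0)
temps = 60.0 + 0.02 * np.arange(200) + rng.normal(0, 0.5, 200)
slope = drift_slope(temps)
flagged = slope > 0.01  # flag the channel if drift exceeds a chosen threshold
```

In practice the slope would be computed on rolling windows and compared against component-specific limits derived from the maintenance reports.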
Starting from the pioneering papers by Charnes, Cooper and Rhodes (CCR model) and Banker, Charnes and Cooper (BCC model), a large number of papers concerning Data Envelopment Analysis (DEA) with output uncertainty have appeared in the literature. In particular, chance-constrained programming is the technique most widely used to include noise variations in the data and to solve DEA problems with uncertain data. Chance-constrained programming admits random data variations and permits constraint violations up to specified probability limits, allowing linear deterministic equivalent formulations when a normal distribution of the data uncertainty is assumed. Standard DEA models rely on the assumption that inputs are minimized and outputs are maximized. However, both desirable and undesirable (e.g., pollutants or wastes) output factors may be present. Undesirable and desirable outputs should be treated differently when evaluating production performance: if inefficiency exists in the production, the undesirable pollutants should be reduced to improve efficiency. According to the literature, two different approaches can be used to include undesirable factors in DEA models: one group of models treats them as inputs, whereas a second group considers them as undesirable outputs. DEA models with undesirable factors are particularly suitable when several production inputs and both desirable and undesirable outputs are taken into account, in order to provide an eco-efficiency measure. In this Ph.D. thesis, alternative DEA models that consider both uncertain and undesirable outputs are proposed and studied. In particular, in the first part of the thesis, two different models with uncertain outputs and deterministic inputs are proposed, with the aim of moving away from the classical chance-constrained method and of obtaining a more accurate DMU ranking in any situation.
More specifically, the proposed models remove the hypothesis of normally distributed data and use a scenario generation approach to include data perturbations. For the sake of completeness, these models are compared with two further ones based on an expected value approach, where uncertainty is managed by means of the expected values of the random factors both in the objective function and in the constraints. The main difference between the two proposed models and the expected value approaches lies in their mathematical formulation. In the new models, based on the scenario generation approach, the constraints concerning the efficiency level are expressed for each scenario, whereas in the expected value models the constraints are satisfied in expected value. As a consequence, the models proposed in the thesis turn out to be more selective in ranking efficiency, thus becoming useful strategic management tools aimed at determining a restrictive efficiency score ranking. In the second part of this study, we focus on environmental policy and eco-efficiency. One of the most intensively discussed concepts in the current international political debate is, in fact, sustainability and the need for eco-efficient solutions that enable the production of goods and services with less energy and fewer resources, and with less waste and fewer emissions (eco-efficiency). In particular, we consider the environmental impact of CO2 in cement and clinker production processes. The cement industry is in fact responsible for approximately 5% of current worldwide CO2 emissions. DEA models can provide an appropriate methodological approach for developing eco-efficiency indicators. A cross-country comparison of the eco-efficiency level of the worldwide cement industry is presented by applying both a data envelopment analysis and a directional distance function approach.
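As a rough illustration of the modeling choice being discussed (the notation here is mine, not the thesis's), a CCR-type envelopment model with random outputs $\tilde{y}_{rj}$ and the chance constraint that the standard output restriction holds with probability at least $1-\alpha$ can be written as:

```latex
% Input-oriented CCR envelopment form with random outputs \tilde{y}_{rj};
% the output constraint must hold with probability at least 1-\alpha.
\begin{align}
\min_{\theta,\lambda}\quad & \theta \\
\text{s.t.}\quad
  & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, && i = 1,\dots,m, \\
  & \Pr\!\Big( \sum_{j=1}^{n} \lambda_j \tilde{y}_{rj} \ge \tilde{y}_{ro} \Big) \ge 1 - \alpha, && r = 1,\dots,s, \\
  & \lambda_j \ge 0, && j = 1,\dots,n.
\end{align}
```

Under a normality assumption the probabilistic constraint admits the linear deterministic equivalent mentioned above; a scenario generation approach instead replaces it with one deterministic constraint per generated scenario.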
These tools turn out to be particularly suitable when several production inputs and both desirable and undesirable outputs are taken into account. Strong and weak disposability assumptions are analyzed in order to evaluate the impact of environmental regulations, interpreted as the cost of regulation. The few papers that have appeared in the literature on eco-efficiency in cement production analyze emission performance trends only from an interstate point of view. In this thesis a worldwide study has been carried out, covering 90% of the world's cement production through 21 countries, both European (EU) and non-European (non-EU). The results show that the efficiency level mainly depends on decisions to invest in alternative raw materials and alternative fuels, both in the case of regulated countries and in the case of voluntary emission-trading schemes. The study highlights, at both national and international levels, the possibility of reducing CO2 emissions while expanding cement production. The use of alternative raw materials and alternative fuels, and the possibility of producing blended cements, which require less energy and reduce pollutant emissions, appear to be appropriate means. Environmental regulations can provide incentives in terms of tax exemption benefits or more restrictive pollutant limits. Finally, we try to answer the following questions: do undesirable factors modify the efficiency levels of the cement industry? Is it reasonable to omit CO2 emissions when evaluating the performance of the cement sector in different countries? To answer these questions, alternative formulations of the standard data envelopment analysis model and of the directional distance function are compared both in the presence and in the absence of undesirable factors. The analysis shows that the presence of undesirable factors greatly affects efficiency levels.
Efficiency levels are influenced by investments in the best available technologies and by the utilization of alternative fuels and raw materials in cement and clinker production processes. The original results of this Ph.D. thesis have been collected in the following research papers:
• Riccardi R. and R. Toninelli. Data Envelopment Analysis with outputs uncertainty. Journal of Information & Optimization Sciences, to appear.
• Riccardi R., Oggioni G. and R. Toninelli. The cement industry: eco-efficiency country comparison using Data Envelopment Analysis. Journal of Statistics & Management Systems, accepted for publication.
• Riccardi R., Oggioni G. and R. Toninelli. Eco-efficiency of the world cement industry: A Data Envelopment Analysis. Energy Policy, Vol. 39, Issue 5, pp. 2842-2854, 2011. Available online at: http://dx.doi.org/10.1016/j.enpol.2011.02.057
• Riccardi R., Oggioni G. and R. Toninelli. Evaluating the efficiency of the cement sector in presence of undesirable output: a world based Data Envelopment Analysis. Technical Report n. 344, Department of Statistics and Applied Mathematics, University of Pisa, 2011; submitted to Resource and Energy Economics.
The research topic considered in this thesis offers many lines for future development. In particular, from a theoretical point of view, starting from the models proposed in Riccardi and Toninelli (2011), we are studying a bi-objective DEA-like formulation in which uncertainty and both desirable and undesirable factors are taken into account. As regards the applicative aspects, we are also studying and applying bootstrap techniques to manage uncertainty and to generate empirical distributions of efficiency scores, in order to capture and analyze the sensitivity of the samples with respect to changes in the estimated frontier.
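The bootstrap idea mentioned for future work can be sketched as follows. This is a generic percentile bootstrap on invented efficiency scores, not the thesis's actual procedure (DEA bootstraps typically resample and re-estimate the frontier itself):

```python
import numpy as np

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of a
    sample: resample with replacement, recompute the statistic,
    and take the empirical quantiles of the replications."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Invented DEA efficiency scores for eight DMUs.
scores = [1.00, 0.92, 0.85, 0.78, 0.95, 0.70, 0.88, 0.99]
lo, hi = bootstrap_ci(scores)
```

The resulting empirical distribution of the replicated statistic is what allows sensitivity of the sample with respect to the estimated frontier to be analyzed.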
In this introduction, which is not a survey but rather a selective and personal view of some approaches to the subject of the volume (international, or transnational, terrorism), the author first of all emphasizes the fact that in the foreign (mainly American) literature the topic has been thoroughly examined for many years. The recent literature is briefly reviewed in the following respects: (1) the variety of the analytical perspectives, (2) the relevance of the Islamic context, (3) the use of analytical tools peculiar to particular disciplines. Adapted from the source document.
Following the psychologists and economists, political scientists have focused their attention on studies of the decision-making process. Scientific methods for analyzing the phenomenon and making it possible to predict the outcome have been set forth by different investigators, but the results are far from satisfactory. In effect, it is only possible to make predictions when a minimum of knowledge, freedom and coherence presides over human decisions. If we pass from the individual sphere to the collective one, which is that of political decisions, we see that the decisions of several individuals are not only 'concurrent' but 'cooperative', for they tend to arrive at a single decision for the entire group. It is interesting to compare the individual choice of a consumer and the individual choice of a voter (the vote of the majority being the normal means by which a group arrives at a decision). Certain authors, Black in particular, see no difference in principle between them. Others, like Buchanan, see fundamental differences: the consumer knows the immediate result of his choice, while the voter does not; the former wishes to satisfy his tastes, the latter to realize his values; above all, the latter can be forced to accept a result contrary to his choice. Political decisions, made under uncertainty, are thus exposed to coercion. Tr. by Z. Dana from IPSA.
This research presents a data set of the terrorist attacks that took place in Western countries between 11 September 2001 and 31 December 2016. The study begins by addressing both the conceptual problem of the definition of terrorism and its political significance, and the historical origins of Islamic or jihadist terrorism. The data set used here is based on the University of Maryland's Global Terrorism Database, which has been integrated and extended using journalistic sources, scientific journals, and governmental and institutional websites. The general and specific traits of the terrorist phenomenon are investigated through multidimensional data analysis (AMD) and principal component analysis (ACP). The analysis throws into relief both the variety of the terrorist attacks and of the terrorist organizations involved and, in particular, how the modus operandi of the cells linked to Al-Qaida differs from that of cells attached to other subjects or groups.
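As a generic illustration of the principal component step (a numpy-only sketch on invented data; the study's actual analysis used its own data set and software), principal components can be obtained from the SVD of the centered data matrix:

```python
import numpy as np

def pca(X, k):
    """Return the first k principal component scores of X.
    Rows are observations (e.g., attacks), columns are variables."""
    Xc = X - X.mean(axis=0)            # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T               # project onto the top-k components

# Invented data: 6 observations described by 3 variables, two of
# which are strongly correlated.
rng = np.random.default_rng(1)
base = rng.normal(size=(6, 1))
X = np.hstack([base,
               2 * base + 0.1 * rng.normal(size=(6, 1)),
               rng.normal(size=(6, 1))])
pc_scores = pca(X, 2)
```

The first component absorbs the correlated pair, which is the mechanism by which such analyses expose structure in the variety of attacks and organizations.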
Information currently constitutes one of the main economic resources of the production process and, consequently, the activities of collecting, selecting and monitoring personal data assume fundamental importance, as they are aimed at pervasive consumer targeting and retention. In this direction, privacy moves from the individualistic protection of the right to be left alone to the meaning of a right to full control over one's personal information. Big Data analysis goes as far as the search for, and examination of, every possible correlation between the data and the algorithms employed in decision-making processes. While the cooperative and participatory use in the public sphere of some types of Big Data can be of definite social interest, in other cases the use of such knowledge raises significant critical issues concerning the protection of personal data, their economic exploitation, and the presence of genuine awareness and freedom in the expression of consent to their processing. Further unknowns arising from the use of Big Data are represented by the concentration of information in the hands of a few digital intermediaries, which reveals the full asymmetry between those who offer the information service and its users, aggravated by the opacity and selectivity of the criteria underlying the functioning of the algorithms; these problems affect the full exercise of the rights of freedom and the future of democracy. This shows the urgency of an effective regulation of Big Data and, more generally, of the personal information circulating online, inspired by constitutionally guaranteed values and directed at protecting the individual from the improper use of information technology, in the prospect of an innovative delineation of models of active digital citizenship as the foundation of an effective freedom of self-construction.
The article examines research data practices in the Humanities within the European area. Research data are defined as a primary resource and the foundation of the results of scientific research, and the specific features they assume in the human sciences are identified. The great attention paid to research data in European research policies confirms their strategic role in the development and optimization of the research system. The analysis of some research infrastructures and projects focused on research data in the Humanities shows the importance of the role of the Member States in defining policies to strengthen and enhance such systems, in order to make individual initiatives efficient, to reduce the lack of homogeneity, and to support the presence of easily identifiable, usable and mutually integrated systems. In this context the position of Italy, until now lagging behind with regard to systems for collecting, managing, preserving and sharing research data, seems to be moving toward development.
Data form our digital self and constitute the foundation of every human activity. The government of the data society passes through data governance, and the law is called upon to regulate the forms assumed by data, information and knowledge in contemporary society and to protect the rights involved. In a reality characterized by open administrations and digital citizenship, this work examines the tools of knowledge related to the different configurations of data, identified as closed data with the related forms of disclosure (proactive and reactive), open data, and big data. The analysis of these tools makes it possible to understand the issues that arise for the law: the intricate connections among data reveal intricate connections among rights, which must be balanced in order to protect the individual and democratic society. The work examines the discipline and the problems of the rights most involved in data governance, specifically the right to know, identity, the right to be forgotten, copyright and personal data protection, in search of a balance among them in the different configurations assumed by data. In conclusion, the work suggests a balance among rights in data governance based on the centrality of the person, in particular on dignity and on personal development. The protection of rights can rest on a preventive and technological approach, by default and by design, and on the accountability of the actors involved, imagining solutions capable of innovating traditional paradigms and minimizing the risks of asymmetry, control and surveillance, such as open big data and forms of collective protection; the renewal of the law passes through a global, multistakeholder construction guided by an ethical approach. The role of the law and the force of rights are necessary for governing the data society and protecting the person in the digital age: ubi data society, ibi ius.
This thesis aims to demonstrate in a tangible way how mobile phone data, private vehicle tracks, and scanner data are useful for measuring complex systems. The three main areas of application concern the use of Big Data: i) for measuring presence within a territory through data mining techniques, ii) for now-casting the socio-economic development of a country, and iii) for measuring the dynamics of cities. First, a tool for real-time demography has been developed, demonstrating how mobile phone data can be used over a wide area to obtain new official statistics indicators. The study showed how Big Data, whether mobile phone data or scanner data, are useful and effective for carrying out a continuous census of the population. Second, an analytical framework has been proposed that is able to evaluate relations between relevant aspects of human behavior and the well-being of a territory. We found that the diversity of human mobility mirrors some aspects of socio-economic development and well-being. We then showed how mobility features help to improve the performance of state-of-the-art methodologies such as small area estimation. Finally, it has been analyzed how mobility interacts with the territory through the movement of people. We proposed using mobile phone data and GPS tracks to help city governments measure the attractiveness of cities. Furthermore, a data analysis approach aimed at identifying mobility functional areas in a completely data-driven way has been proposed. The main findings of the thesis concern the statistical and ethical evaluation of the results against official sources, and they showed that the methodologies can be applied in other contexts and with different data sources as well. We showed how the geographic information contained in these data sources is remarkably useful for observing our society with a new "microscope".
Thanks to the opportunity provided by the varied scientific context of SoBigData, the European Research Infrastructure for Big Data and Social Mining, the Ph.D. also contributed to developing and promoting responsible data science, since the ethical framework is considered part of the CRISP model, not a problem to be treated separately.
In this work we discuss a project started by the Emilia-Romagna Regional Government concerning the management of public transport. In particular, we perform a data mining analysis on the data set of this project. After introducing the Weka software used for our analysis, we present the most useful data mining techniques and algorithms, and we show how their results can be used to violate the privacy of the public transport operators themselves. Finally, although it is beyond the scope of this work, we also spend a few words on how this kind of attack can be prevented.
The thesis provides an empirical analysis of the effects of fiscal policy on GDP. The research is conducted through the study of the causal relationships that emerge from the implementation of some Structural Vector-Autoregressive (SVAR) models, estimated on U.S. time series data. Fiscal multipliers are important tools, as they capture the effect on GDP of one dollar spent by the State in increasing government expenditure or in reducing taxation; nevertheless, economists have not yet reached a consensus about their size (or even their sign). For this reason, one of the main topics on which the thesis concentrates is the computation of fiscal multipliers. The work has three main innovative aspects with respect to the literature of empirical fiscal studies. The first is the data-driven approach that characterizes all aspects of the work. In particular, unlike the majority of studies in the literature, the identification of fiscal shocks relies completely on a data-driven procedure (Independent Component Analysis). This innovation makes it possible to detect the causal effects of fiscal policy relying exclusively on the properties of the data, thereby avoiding the imposition of restrictions driven by ex-ante economic knowledge. The second point of differentiation from a standard fiscal SVAR analysis is the completely new interpretation of what is called a "fiscal shock". The fiscal shock is considered in this work as a unique mixture of exogenous and interrelated measures on taxes and government expenditure at the same time. In fact, given the evidence of some recent studies on how fiscal policy is actually designed and implemented, the common assumption of two separate and independent fiscal shocks, one on taxes and one on government expenditure, is rejected in this work.
Finally, the government expenditure and tax multipliers computed in the data-driven environment of the work turn out to be positive (the government expenditure multiplier even above unity) and persistent up to two years after the shock, something that is not very common in the literature.
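As a schematic reminder of the identification problem that the ICA procedure addresses (the notation here is mine, not the thesis's):

```latex
% Reduced-form VAR and its structural counterpart,
% where y_t collects the macro variables (e.g., taxes, spending, GDP):
\begin{align}
y_t &= A_1 y_{t-1} + \dots + A_p y_{t-p} + u_t, \\
u_t &= B\,\varepsilon_t, \qquad \varepsilon_t \ \text{mutually independent structural shocks.}
\end{align}
% Under Gaussian assumptions B is identified only up to rotation;
% if the components of \varepsilon_t are independent and non-Gaussian,
% ICA applied to the residuals u_t recovers B up to permutation and
% scaling of its columns, with no ex-ante economic restrictions.
```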
This dissertation focuses on the refinement phase of structured documents in the Akoma Ntoso format, an XML dialect for legal documents capable of adapting to the different needs of each legislative system. This process allows legal drafters to refine documents created by automatic converters, which are not free from errors. The creation of Akomando-Create is then presented, a JavaScript library that provides, through an API, the functionality needed to build software tools specialized in the correction of Akoma Ntoso documents. Using this library, a web application called ANANAS (Akoma Ntoso Analysis of Names And Semantics) was developed, which provides access to a collection of documents in order to illustrate their semantic quality through charts and to perform simple metadata-correction operations.