This work proposes measurement methodologies and solutions for tackling energy poverty and affordability in developed countries, focusing on the European Union and, in particular, on Spain and Catalonia. The research is intended as a support tool for policy makers and public authorities, providing an objective and scientific evaluation of a problem currently at the centre of both the political and the economic debate. The project has two aims. The first is to analyse and test, on a real database, all the indicators used throughout Europe so far. This will lead to the choice of a suitable indicator that can be applied to Spain for assessing and estimating the extent of energy poverty and its impact on Spanish society. The second, building on the first, is to model the phenomenon in an innovative manner using machine learning instruments. This will make it possible to understand which variables increase the risk that a single household faces an energy vulnerability situation. As a core added value, the analysis will not take into account information that is commonly owned by private utility companies. In the final part of the project, the results obtained from the trained model are applied and tuned to a specific case study: the city of Barcelona. An energy vulnerability ranking will order all city neighbourhoods according to their probability of hosting families in energy deprivation conditions. Moreover, it will be possible to evaluate the drivers of the problem case by case. These outcomes can form the basis for the implementation of effective policies following a specific and demonstrated framework and order of action, optimizing and controlling the use of public financial resources.
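A minimal sketch of the final ranking step described above, assuming a labelled household survey and standard scikit-learn tooling; the model choice, column names, and aggregation rule are illustrative assumptions, not the thesis's actual pipeline:

```python
# Sketch: train a classifier on labelled household data, then rank city
# neighbourhoods by the mean predicted probability of energy poverty.
# 'energy_poor', 'neighbourhood', and feature_cols are hypothetical names.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def rank_neighbourhoods(train_df, city_df, feature_cols):
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(train_df[feature_cols], train_df["energy_poor"])  # 0/1 label

    # Probability of the positive class for each household in the city sample
    city_df = city_df.copy()
    city_df["p_vulnerable"] = model.predict_proba(city_df[feature_cols])[:, 1]

    # Aggregate to neighbourhood level and sort: the ranking described above
    return (city_df.groupby("neighbourhood")["p_vulnerable"]
                   .mean()
                   .sort_values(ascending=False))
```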
Biodiversity is an ecological concept, which essentially involves a complex sum of several indicators. One widely accepted set of such indicators is prescribed for habitat conservation status assessment within Natura 2000, a continental-scale conservation programme of the European Union. Essential Biodiversity Variables are a set of indicators designed to be relevant for biodiversity and suitable for global-scale operational monitoring. Here we revisit a study of Natura 2000 conservation status mapping via airborne LIDAR that develops individual remote sensing-derived proxies for every parameter required by the Natura 2000 manual, from the perspective of developing regional-scale Essential Biodiversity Variables. Based on leaf-on and leaf-off point clouds (10 pt/m²) collected in an alkali grassland area, a set of data products was calculated at 0.5 × 0.5 m resolution. These represent various aspects of radiometric and geometric texture. A Random Forest machine learning classifier was developed to create fuzzy vegetation maps of classes of interest based on these data products. In the next step, either classification results or LIDAR data products were selected as proxies for individual Natura 2000 conservation status variables and fine-tuned based on field references. These proxies showed adequate performance and were summarized to deliver Natura 2000 conservation status with 80% overall accuracy compared to field references. This study draws attention to the potential of LIDAR for regional-scale Essential Biodiversity Variables, and also holds implications for global-scale mapping. These are (i) the use of sensor data products together with habitat-level classification, (ii) the utility of seasonal data, including for non-seasonal variables such as grassland canopy structure, and (iii) the potential of fuzzy mapping-derived class probabilities as proxies for species presence and absence.
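The fuzzy-mapping step lends itself to a short sketch: a Random Forest is trained on per-pixel LIDAR data products, and the per-class probabilities (rather than hard labels) are kept as the fuzzy output. Array names, shapes, and hyperparameters below are assumptions for illustration, not the study's exact configuration:

```python
# Sketch: fuzzy vegetation mapping with per-class probabilities.
# X arrays are assumed row-major stacks of per-pixel texture products.
from sklearn.ensemble import RandomForestClassifier

def fuzzy_map(X_train, y_train, X_all, height, width):
    # X_train: (n_train, n_products) at field-reference pixels
    # y_train: (n_train,) vegetation class labels
    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X_train, y_train)

    # Class membership probabilities in [0, 1] for every pixel: the "fuzzy" map,
    # which the abstract suggests can proxy species presence/absence.
    proba = rf.predict_proba(X_all)
    return proba.reshape(height, width, -1), rf.oob_score_
```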
Fake followers are Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may distort concepts like popularity and influence in the Twittersphere, with consequences for economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for detecting anomalous Twitter accounts. Second, we create a baseline dataset of verified human and fake follower accounts; this baseline dataset is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and of the cost of gathering the data needed to compute the features. The final result is a novel "Class A" classifier, general enough to thwart overfitting, lightweight thanks to its use of the less costly features, and still able to correctly classify more than 95% of the accounts of the original training set. We ultimately perform an information fusion-based sensitivity analysis to assess the global sensitivity of each of the features employed by the classifier. The findings reported in this paper, other than being supported by a thorough experimental methodology and interesting in their own right, also pave the way for further investigation of the novel issue of fake Twitter followers.
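As a rough illustration of the training-and-evaluation loop described above (not the paper's exact feature set or classifier suite), one could cross-validate a few classifiers over account features along these lines:

```python
# Sketch: compare classifiers on a human-vs-fake account dataset.
# feature_cols and 'is_fake' are placeholder names for illustration.
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(df, feature_cols, label_col="is_fake"):
    X, y = df[feature_cols], df[label_col]
    results = {}
    for name, clf in {
        "decision_tree": DecisionTreeClassifier(max_depth=5),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }.items():
        # 10-fold cross-validation guards against the overfitting the paper targets
        results[name] = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    return results
```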
This dissertation contributes to the empirical analysis of economic development. The continuing poverty in many Sub-Saharan African countries, as well as the declining trend in growth in the advanced economies that began around the turn of the millennium, raises a number of new questions which have received little attention in recent empirical studies. Is culture a decisive factor for economic development? Do larger financial markets trigger positive stimuli with regard to incomes, or is the recent increase in their size in advanced economies detrimental to economic growth? What causes secular stagnation, i.e. the reduction in growth rates of the advanced economies observable over the past 20 years? What is the role of inequality in the growth process, and how do governmental attempts to equalize the income distribution affect economic development? And finally: is the process of democratization accompanied by an increase in living standards? These are the central questions of this doctoral thesis. To facilitate the empirical analysis of the determinants of economic growth, this dissertation introduces a new method to compute classifications in the field of social sciences. The approach is based on mathematical algorithms of machine learning and pattern recognition. Whereas the construction of indices typically relies on arbitrary assumptions regarding the aggregation strategy of the underlying attributes, the utilization of Support Vector Machines transforms the question of how to aggregate the individual components into a non-linear optimization problem. Following a brief overview of the theoretical models of economic growth provided in the first chapter, the second chapter illustrates the importance of culture in explaining the differences in incomes across the globe. In particular, if inhabitants have a lower average degree of risk aversion, the implementation of new technology proceeds much faster than in more risk-averse countries. However, this effect depends on the legal and political framework of the countries, their average level of education, and their stage of development. The initial wealth of individuals is often not sufficient to cover the cost of investments in both education and new technologies. By providing loans, a developed financial sector may help to overcome this shortage. However, the investigations in the third chapter show that this mechanism depends on the development levels of the economies. In poor countries, growth of the financial sector leads to better education and higher investment levels. This effect diminishes along the development process, as intermediary activity is increasingly replaced by speculative transactions. Particularly in times of low technological innovation, a growing financial sector has a negative impact on economic development. In fact, the world economy is currently in a phase of this kind. Since the turn of the millennium, growth rates in the advanced economies have experienced a multi-national decline, leading to an intense debate about "secular stagnation" initiated at the beginning of 2015. The fourth chapter deals with this phenomenon and shows that the growth potentials of new technologies have been gradually declining since the beginning of the 2000s. If incomes are unequally distributed, some individuals can invest less in education and technological innovations, which is why the fifth chapter identifies an overall negative effect of inequality on growth.
This influence, however, depends on the development level of countries. While the negative effect is strongly pronounced in poor economies with a low degree of equality of opportunity, this influence disappears during the development process. Accordingly, redistributive policies of governments exert a growth-promoting effect in developing countries, while in advanced economies, the fostering of equal opportunities is much more decisive. The sixth chapter analyzes the growth effect of the political environment and shows that the ambiguity of earlier studies is mainly due to unsophisticated measurement of the degree of democratization. To solve this problem, the chapter introduces a new method based on mathematical algorithms of machine learning and pattern recognition. While the approach can be used for various classification problems in the field of social sciences, in this dissertation it is applied to the problem of democracy measurement. Based on different country examples, the chapter shows that the resulting SVMDI is superior to other indices in modeling the level of democracy. The subsequent empirical analysis emphasizes a significantly positive growth effect of democracy measured via SVMDI. ; This dissertation deals with the statistical and empirical analysis of the determinants of long-run growth processes. The persistent poverty of many Sub-Saharan African states, as well as the declining growth rates of the rich economies since the beginning of the new millennium, raise a number of new questions that have so far received little empirical study. Is a country's culture decisive for the development of its welfare? Can a developed financial sector trigger increases in prosperity, or is the strong growth of financial markets observable in the industrialized nations harmful to development? Why do rich economies grow so much more slowly today than they did 20 years ago? What influence does rising income inequality have on economic development, and what effect do governmental redistribution measures have? And finally: do stronger political rights and more democracy lead to an increase in living standards? These are the central questions addressed by the chapters of this doctoral thesis. To facilitate the analysis of the empirical causes of economic development, the dissertation moreover derives a new procedure for solving classification problems in the social sciences. The procedure is based on mathematical algorithms for pattern recognition and machine learning. This offers new possibilities for constructing indices, which usually resort to arbitrary assumptions about the underlying aggregation function. The proposed procedure solves this problem by transferring the aggregation into a non-linear optimization problem via Support Vector Machines. Following a brief overview of the theoretical explanations of economic development in the first chapter, the second chapter shows that cultural differences exert an influence on economic development. The results imply that countries with a lower aversion to risk exhibit higher growth rates, since the implementation of new technologies proceeds faster in these countries. The effect, however, depends on the countries' legal and political framework, their average level of education, and their stage of development.
In many cases, the wealth of individuals is not sufficient to shoulder investments in education, capital, and new technologies. By providing loans, a developed financial sector can overcome this bottleneck. The investigations of the third chapter show, however, that this mechanism depends on the development levels of the economies. In poor countries, growth of the financial sector leads to better education and higher investment. This effect diminishes over the course of development, however, as intermediation activity is increasingly replaced by speculative transactions. Particularly in times of little technological innovation, a growing financial sector has a negative effect on economic development. The world economy is currently in such a phase, which, combined with a multi-national decline in growth rates, has been discussed since the beginning of 2015 as "secular stagnation". The fourth chapter engages intensively with this phenomenon and shows that the current growth potentials of new technologies have been progressively running out since the beginning of the 2000s. If incomes are unequally distributed, some individuals can invest less in education and technological innovation. Indeed, the fifth chapter points to an overall negative effect of inequality on growth. This influence, however, depends on the countries' stage of development. While the negative effect is strongly pronounced in poor economies with a low degree of equality of opportunity, this influence disappears with an increasing level of development. Accordingly, direct governmental redistribution promotes growth above all in underdeveloped countries, whereas in developed nations the creation of equal opportunities is decisive. The sixth chapter analyzes the growth effect of the political environment and shows that the ambivalence of earlier studies can be traced back to inadequate measurement of the degree of democracy. To solve this problem, a new procedure based on mathematical algorithms of machine learning is proposed. The procedure can be used in various areas of the social sciences and is applied in this thesis to the classification of the degree of democracy. On the basis of various country examples, it becomes clear that the resulting SVMDI is superior to other indices in modeling the degree of democracy. The subsequent empirical analysis emphasizes a significantly positive effect of SVMDI on economic growth. ; Why are some nations rich and others poor? What are the sources of long-run economic development and growth? How can living standards be increased? In this book, Klaus Gründler empirically analyses these central economic questions and puts a particular emphasis on the role of technology, inequality, and political institutions. To substantiate his empirical studies, he introduces a new method to compute composite measures and indices that is based on mathematical algorithms from the field of machine learning.
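The SVM-based index construction described in this abstract can be sketched compactly: fit a non-linear SVM on unambiguous example cases and read a continuous index off the model's probability output. This is a hedged illustration of the general idea, not a reproduction of the dissertation's exact SVMDI procedure:

```python
# Sketch: a continuous index from an SVM trained on clear-cut cases.
# The aggregation of attributes becomes a non-linear optimization problem
# solved by the SVM, instead of a hand-picked weighting scheme.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def svm_index(X_labeled, y_labeled, X_all):
    # X_labeled: attributes of unambiguous cases (e.g. clearly democratic = 1,
    # clearly autocratic = 0); X_all: attributes of all country-years.
    model = make_pipeline(
        StandardScaler(),
        SVC(kernel="rbf", probability=True)   # non-linear aggregation
    )
    model.fit(X_labeled, y_labeled)
    # The class-1 probability serves as a continuous index in [0, 1]
    return model.predict_proba(X_all)[:, 1]
```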
Contemporary theories and studies of economics apply a behavioral approach. Behavioral economics has revolutionized mainstream neo-classical economics in recent years, a success reflected in two Nobel Prizes in Economics. A wide range of psychological, economic, and sociological laboratory and field experiments has shown that human beings deviate from rational choices, and that standard neo-classical profit-maximization axioms often fail to explain actual human behavior. Human beings instead use heuristics in their day-to-day decision making. These mental shortcuts enable people to cope with a complex world, yet they also often leave individuals biased and prone to decision-making failures. Artificial intelligence (AI) driven robots and machines are forecast to grow dramatically in the coming years. AI comprises many algorithms, models and techniques, machine learning, databases, and visualizations. One of the main advantages of AI-driven machines is that they consistently follow rational algorithmic rules without being biased. Ethical considerations intend to make AI-driven robots more human and to introduce morality into machines. The Uber-Waymo trial made transparent how much artificial intelligence development is affected by human irrationality and irrational exuberance. It revealed a culture of agile software development that prioritizes releasing the latest software over testing and verification, and that encourages shortcuts and irrationality. This also demonstrates that applying artificial intelligence alone cannot ensure that irrational exuberance is prevented. The reason for this irrational exuberance may have its roots in the exponential growth of computing and storage technologies predicted by Gordon Moore five decades ago. This paper develops a concept for how irrational exuberance can be prevented. One general approach to solving the issue is to increase transparency. The paper recommends applying technology to make data on the application of artificial intelligence more accessible and more readable. For this purpose, the "transparency technology" XBRL (eXtensible Business Reporting Language) is incorporated. XBRL is part of the choice architecture of government regulation (Sunstein 2013), which applies nudging to steer mass consumers towards a preferred option. XBRL is connected to a taxonomy. The paper develops a taxonomy to make the application of artificial intelligence more transparent to the public, incorporating ethical considerations. As a business case, the strongly growing robo-advice market in Germany is taken. The taxonomy is both inductively derived from robo-advice market offerings and deductively informed by the existing standards on ethical codes for robot usage and the application of artificial intelligence. The paper focuses on ways to develop AI that aligns with human values, and asks how incentives can be provided so that AI systems themselves do not become potential objects of moral concern. The main outcome of the paper is that digitalization with AI raises moral concerns, but transparency technologies at the same time offer ways to mitigate such risks.
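To make the XBRL idea concrete, here is a hypothetical sketch of tagging facts about a robo-advice system as machine-readable XML in the spirit of XBRL; every element name below is invented for illustration, and a real XBRL instance would reference an official taxonomy schema:

```python
# Hypothetical sketch: expose AI-system facts as tagged, machine-readable data.
# Tag names are invented placeholders, not elements of any real XBRL taxonomy.
import xml.etree.ElementTree as ET

facts = {
    "AlgorithmClass": "rule-based portfolio rebalancing",
    "TrainingDataDisclosed": "true",
    "EthicsCodeReference": "internal code v1.2",
    "HumanOverrideAvailable": "true",
}

root = ET.Element("aiDisclosure")   # hypothetical root element; namespaces omitted
for tag, value in facts.items():
    ET.SubElement(root, tag).text = value

print(ET.tostring(root, encoding="unicode"))
```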
Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term urban slums. Two major knowledge gaps undermine the efforts to monitor progress towards the corresponding sustainable development goal (i.e., SDG 11 - Sustainable Cities and Communities). First, the data available for cities worldwide are patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed and, individually, often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived-area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived-area maps. In this review, we first assess the strengths and limitations of existing deprived-area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy to support SDG 11 monitoring and the needs of different interest groups.
eLearn Conference, World Conference on eLearning in Corporate, Government, Healthcare & Higher Education, Vancouver, Canada, 27 October 2009, Keynote address by Sir John Daniel, Commonwealth of Learning // What is technology and what are its strengths? We shall define technology in a people-centred way and recall Adam Smith's analysis of technology in terms of division of labour, specialisation, economies of scale, and machines. In other areas of life technology has used these principles to increase access, improve quality and lower costs. Is this true of eLearning? If not, how can we make it true? Will Open Educational Resources save the day? We shall address these questions in the light of the experience of the Commonwealth of Learning in 45 developing countries, three-quarters of them small states.
While methods of artificial intelligence (AI) were, until a few years ago, exclusively a topic of scientific discussion, today they are increasingly finding their way into products of everyday life. At the same time, the amount of data produced and available is growing due to increasing digitization, the integration of digital measurement and control systems, and automatic exchange between devices (Internet of Things). In the future, the use of business intelligence (BI) and a look into the past will no longer be sufficient for most companies. Instead, business analytics, i.e., predictive and prescriptive analyses and automated decisions, will be needed to stay competitive. The use of growing amounts of data is a significant challenge, and artificial intelligence methods represent one of the most important areas of data analysis. This book provides a concise introduction to the essential aspects of using artificial intelligence methods for business analytics, presents machine learning and the most important algorithms in a comprehensible form based on the business analytics technology framework, and shows application scenarios from various industries. In addition, it provides the Business Analytics Model for Artificial Intelligence, a reference procedure model for structuring BA and AI projects in the company. The Content: Business analytics. Artificial intelligence. AI and BA platforms. Technology framework and procedure model as reference. Case studies on the use of AI-based business analytics. The Author: Felix Weber is a researcher at the University of Duisburg-Essen with a focus on digitalization, artificial intelligence, price, promotion, assortment management, and transformation management. At the Chair of Business Informatics and Integrated Information Systems, he founded the Retail Artificial Intelligence Lab (retAIL). At the same time, he has worked in various roles as a consultant for SAP systems in retail, as Head of Data Science, and as Head of ERP. He thus combines current practice with scientific research in this subfield. This book is a translation of an original German edition. The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content, so that the book will read stylistically differently from a conventional translation.
The recent growth of data science has expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people with different skills and backgrounds alike, leading to the democratization of data systems. Furthermore, a proper understanding of data and data-driven systems is necessary for users to trust the function of systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, the users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: these systems are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, to aid all sorts of users, including expert ones, in data and system understanding, and to provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has the following three goals: (1) enhancing the usability of data systems for nonexperts; (2) enabling data understanding that can assist users in a variety of tasks, such as achieving trust in data-driven machine learning and supporting data cleaning; and (3) explaining the causes of unexpected outcomes involving data and data systems. For enhancing usability, we focus on example-driven user intent discovery. We develop systems based on example-driven interactions in two different settings: querying relational databases and personalized document summarization. Towards data understanding, we develop a new data-profiling primitive that can characterize tuples for which a machine-learned model is likely to produce untrustworthy predictions. We also develop an explanation framework to explain the causes of such untrustworthy predictions. Additionally, this new data-profiling primitive enables interactive data cleaning. Finally, we develop two explanation frameworks tailored to provide explanations for debugging data system components, including the data itself. The explanation frameworks focus on explaining the root cause of a concurrent application's intermittent failure and exposing issues in the data that cause a data-driven system to malfunction.
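A much-simplified sketch of the intuition behind such a data-profiling primitive (not the thesis's actual algorithm): flag tuples whose attribute values fall outside the ranges observed in training data, since a learned model's predictions are more likely to be untrustworthy there:

```python
# Sketch: profile the training data's numeric value ranges, then flag
# incoming tuples that violate the profile as potentially untrustworthy.
import pandas as pd

def build_profile(train_df):
    # One (min, max) range per numeric column
    return {col: (train_df[col].min(), train_df[col].max())
            for col in train_df.select_dtypes("number").columns}

def flag_untrusted(profile, new_df):
    """Boolean mask: True where a tuple falls outside the training profile."""
    mask = pd.Series(False, index=new_df.index)
    for col, (lo, hi) in profile.items():
        mask |= (new_df[col] < lo) | (new_df[col] > hi)
    return mask
```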
Highly tensile manganese steel is in great demand owing to its high tensile strength under shock loads. All workpieces are produced through casting, because the material is highly difficult to machine. The probabilistic aspects of its casting, its variable composition, and the different casting techniques must all be considered for the optimisation of its mechanical properties. A hybrid strategy is therefore proposed which combines decision trees and artificial neural networks (ANNs) to obtain accurate and reliable prediction models for ore crushing plate lifetimes. The strategic blend of these two high-accuracy prediction models is used to generate simple decision trees which can reveal the main dataset features, thereby facilitating decision-making. Following a complexity analysis of a dataset with 450 different plates, the best model consisted of 9 different multilayer perceptrons, the inputs of which were only the Fe and Mn plate compositions. The model recorded a low root mean square error (RMSE) of only 0.0614 h for the lifetime of the plate: a very accurate result considering the varied lifetimes, between 746 and 6902 h, in the dataset. Finally, the use of these models under real industrial conditions is presented in a heat map, namely a 2D representation of the main manufacturing process inputs with a colour scale which shows the predicted output, i.e. the expected lifetime of the manufactured plates. Thus, the hybrid strategy extracts core training dataset information into high-accuracy prediction models. This novel strategy merges the different capabilities of two families of machine-learning algorithms. It provides a high-accuracy industrial tool for predicting the full lifetime of highly tensile manganese steel plates. The results yielded a precise prediction (RMSE of 0.061 h) of the full lifetime of light, medium, and heavy crusher plates manufactured with the three casting methods (experimental, classic, and the highly efficient new method). ; Funding: Government of the Russian Federation, Russia (contract No. 02.A03.21.0011); the project TIN2015-67534-P of the Ministerio de Economía y Competitividad of the Spanish Government, Spain; the project BU085P17 of the Junta de Castilla y León (both projects co-financed through European Union FEDER funds); and the Consejería de Educación of the Junta de Castilla y León and the European Social Fund through the EDU/1100/2017 pre-doctoral fellowships.
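The shape of the reported model can be sketched as follows: an ensemble of nine multilayer perceptrons predicting plate lifetime from Fe and Mn composition alone, scored by RMSE. Hyperparameters here are assumptions for illustration, not the paper's tuned values:

```python
# Sketch: ensemble of MLP regressors over [Fe, Mn] composition, scored by RMSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def fit_mlp_ensemble(X_train, y_train, n_models=9):
    # X_train: columns [Fe, Mn]; y_train: plate lifetime in hours
    return [MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=seed).fit(X_train, y_train)
            for seed in range(n_models)]

def ensemble_rmse(models, X_test, y_test):
    # Average the member predictions, then report the root mean square error
    pred = np.mean([m.predict(X_test) for m in models], axis=0)
    return np.sqrt(mean_squared_error(y_test, pred))
```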
At the request of the U.S. Department of Energy's (DOE) Office of Science (SC) Advanced Scientific Computing Research (ASCR) program office, a workshop was held June 2-3, 2015, in Gaithersburg, MD, to identify potential long-term (10- to 20+-year) cybersecurity fundamental basic research and development challenges, strategies, and roadmaps facing future high performance computing (HPC), networks, data centers, and extreme-scale scientific user facilities. This workshop was a follow-on to the workshop held January 7-9, 2015, in Rockville, MD, that examined higher-level ideas about scientific computing integrity specific to the mission of the DOE Office of Science. Issues included research computation and simulation that take place on ASCR computing facilities and networks, as well as network-connected scientific instruments, such as those run by various DOE Office of Science programs. Workshop participants included researchers and operational staff from DOE national laboratories, as well as academic researchers and industry experts. Participants were selected based on the submission of abstracts relating to the topics discussed in the previous workshop report [1] and also from other ASCR reports, including "Abstract Machine Models and Proxy Architectures for Exascale Computing" [27], the DOE "Preliminary Conceptual Design for an Exascale Computing Initiative" [28], and the January 2015 machine learning workshop [29]. The workshop was also attended by several observers from DOE and other government agencies. The workshop was divided into three topic areas: (1) Trustworthy Supercomputing, (2) Extreme-Scale Data, Knowledge, and Analytics for Understanding and Improving Cybersecurity, and (3) Trust within High-end Networking and Data Centers. Participants were divided into three corresponding teams based on the category of their abstracts. The workshop began with a series of talks from the program manager and workshop chair, followed by the leaders for each of the three topics and a representative of each of the four major DOE Office of Science Advanced Scientific Computing Research facilities: the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), the National Energy Research Scientific Computing Center (NERSC), and the Oak Ridge Leadership Computing Facility (OLCF). The rest of the workshop consisted of topical breakout discussions and focused writing periods that produced much of this report.
Blurb: The definitive guide to the game-theoretic and probabilistic underpinnings of Bitcoin's security model. The book begins with an overview of probability and game theory. Nakamoto Consensus is discussed in both practical and theoretical terms. This volume: describes attacks and exploits with mathematical justifications, including selfish mining; identifies common assumptions such as the Market Fragility Hypothesis, establishing a framework for analyzing incentives to attack; outlines the block reward schedule and the economics of ASIC mining; discusses how adoption by institutions would fundamentally change the security model; analyzes incentives for double-spend and sabotage attacks via stock-flow models; and overviews coalitional game theory with applications to majority takeover attacks, presenting Nash bargaining with application to unregulated environments. This book is intended for students or researchers wanting to engage in a serious conversation about the future viability of Bitcoin as a decentralized, censorship-resistant, peer-to-peer electronic cash system.
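One standard starting point for the double-spend analyses such a book covers is Nakamoto's catch-up probability from the original whitepaper; the sketch below implements that classic calculation (the book's stock-flow models go beyond it):

```python
# The attacker catch-up probability from the Bitcoin whitepaper (section 11):
# the chance an attacker with hash-power share q ever overtakes the honest
# chain from z blocks behind.
import math

def catch_up_probability(q, z):
    p = 1.0 - q                 # honest hash-power share
    if q >= p:
        return 1.0              # a majority attacker always catches up
    lam = z * (q / p)           # expected attacker progress while honest miners find z blocks
    # P = 1 - sum_{k=0..z} Poisson(k; lam) * (1 - (q/p)^(z-k))
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

# Example: a 10% attacker against 6 confirmations
print(round(catch_up_probability(0.10, 6), 6))  # ~0.000243, matching the whitepaper's table
```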