The pace of development in Big Data and associated phenomena, such as social media, has outstripped the average consumer's capacity to understand his or her actions and their knock-on effects. We are moving toward a shift in how ethics must be conceived: away from individual decisions with specific and knowable outcomes, toward actions taken by many who are unaware that those actions may have unintended consequences for others. Responding will require a rethinking of ethical choices, and of their absence, to guide scientists, governments, and corporate agencies in handling Big Data. This essay elaborates on the ways Big Data impacts ethical conceptions.
In: Tække, J & Paulsen, M 2020, 'Acting with and against Big Data in School and Society: The Big Democratic Questions of Big Data', The Journal of Communication and Media Studies, vol. 5, no. 3. https://doi.org/10.18848/2470-9247/CGP/v05i03/15-31
In this article, we discuss, on a general and mainly theoretical-conceptual level, how schools and societies can and should respond to Big Data. First, we identify what Big Data is. Based on Levi Bryant's onto-cartography, we suggest that Big Data can be characterized ontologically as different socio-technical machinic assemblages. These assemblages comprise different combinations of social systems, big states and big companies, and Big Data technologies on the one hand, and citizens, not least teachers and students, on the other. Second, we present three assembling scenarios: 1) a state model, in which a Big State, in alliance with new technology (and companies), uses Big Data to control citizens and students; 2) a market model, in which Big Companies, in alliance with new technology (and politicians), are free to use Big Data to nudge citizens and students; and 3) a democratic model, in which citizens and students are protected by the state from being surveilled, controlled and nudged by new big Cyborgs, and are also educated to be critical of, and to act with and against, Big Data.
Any peace process is an exercise in the negotiation of big data. From centuries-old communal hagiography to reams of official texts, media coverage, and social media updates, peace negotiations generate data. Peacebuilding and peacekeeping today are informed by, and often respond and contribute to, big data. This is no easy task. As recently as a few years ago, before the term big data came to denote the virtual realm of the web, what informed peace process design and implementation lay in the physical domain – from contested borders and resources to background information in the form of text. The move from analogue, face-to-face negotiations to online, asynchronous, web-mediated negotiations – which can still include real-world meetings – has profound implications for how peace is strengthened in fragile democracies.
The adoption of European norms has led to a high degree of legal and policy convergence. However, national laws remain divergent in many ways, and the assessment of discrepancies is becoming increasingly problematic because of the growing volumes of legislation. So far, comparative law has been the go-to solution for investigating similarities and differences between Member States as a result of harmonization policies. However, the vast complexity of current multilevel governance structures and their policies challenges the usefulness of classical comparative law as a method of investigation for such meta-issues. This article holds that comparative law can be considered Big Data and investigated as such. In other words, more complex and voluminous law and legal interpretations can be either coded into more quantitative observations on the basis of the premises of numerical comparative law, or, alternatively, they can be analysed with the help of new research technologies.
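The idea of "numerical comparative law" mentioned above can be illustrated with a minimal sketch: national transpositions of a directive are coded as quantitative observations (here, binary presence/absence of provisions) and then compared pairwise. The Member State codes, provision vectors, and similarity measure below are all hypothetical illustrations, not taken from the article.

```python
# Illustrative sketch of coding law into quantitative observations and
# comparing Member States. All data here is hypothetical.
from math import sqrt

# 1 = provision present in the national transposition, 0 = absent
coded_law = {
    "DE": [1, 1, 0, 1, 0],
    "FR": [1, 0, 0, 1, 1],
    "PL": [0, 1, 1, 1, 0],
}

def cosine_similarity(a, b):
    """Cosine similarity between two coded-provision vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Pairwise comparison across the sample of Member States
for m1, m2 in [("DE", "FR"), ("DE", "PL"), ("FR", "PL")]:
    print(f"{m1}-{m2}: {cosine_similarity(coded_law[m1], coded_law[m2]):.2f}")
```

In practice the coding step is the hard part — deciding which legal features become observations — while the quantitative comparison itself is straightforward, which is precisely why the article frames comparative law as a Big Data problem.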
Master's thesis in Industrial Economics. The European Commission has adopted the General Data Protection Regulation (GDPR), which will replace the current, but obsolete, Data Protection Directive 95/46/EC. When it becomes legally effective on May 25th, 2018, it will impose a much stricter regime, with sanctions whose magnitude may force bankruptcy. It dramatically increases the scope of what is considered personal data while restricting its processing, thus curtailing businesses' opportunity to drive value through big data analytics. In an increasingly data-driven economy, where data is mentioned in the same breath as competitive advantage, it may seem that the candle is being burned at both ends. This raises the question of whether the value of data will diminish. Consequently, this work researches how the GDPR will impact the value of data, with an emphasis on value driven through the big data value chain. The research is carried out in three phases: a preliminary analysis that identifies a set of value drivers; a primary analysis that identifies influences of the GDPR on those value drivers; and a case study on smart meter data. The results are presented as five assertions, which form the foundation of a discussion. The research finds that the short-term impact raises concerns about limitations on: realizing value in the public interest; harnessing the power of algorithms in automated decision-making; and discovering new knowledge through data mining. However, the positive long-term impacts are expected to overshadow the negatives and to ensure a sustainable data economy in the future. The research concerns legislation that is yet to be enforced; the results are therefore predictions rather than hard facts, but they serve as insight into possible future challenges.
One of the biggest questions in the digital age concerns the relevance of democracy in the era of big data. Unquestionably, the digital revolution is advancing at a rapid pace, and many are caught unaware by its impact in various arenas. The amount of data available has been doubling year on year, and the conditions of its usage have been evolving faster than the policies meant to ensure proper usage. Many have started looking for ways of turning big data into big money, often at the expense of the democratic values countries have upheld for generations. The problem is compounded because the history of humankind is more thoroughly documented now, and messages are far easier to send than a century ago. Understanding the impact of big data on democracy can help identify the best way of improving democratic institutions and their ability to withstand the pressure coming from evolving technology.
In: Hansen, D R, Bøje, J D & Balslev, G M 2020, 'Digitalization, Big Data and Fantasies', Rethinking the futures of education in the Nordic countries, 04/03/2020 - 06/03/2020.
Digitalization, Big Data and fantasies in education. One may be surprised by what digitalization and Big Data are being used for in education. Through digital technologies, Big Data is gathered to give politicians and the public access to school matters in general, and specifically to whether schools and teachers reach predetermined goals – for example, attainment of socioeconomic reference indicators, measured by students' grade point average. Many fantasies are attached to digitalization and Big Data: fantasies about increased transparency, safety, and prediction (Zuboff, 2019). However, digitalization and Big Data may also produce a culture of 'shaming and blaming', displaying those schools that do not live up to fixed goals. This in turn may lead to goal-fixation, nearsightedness, and 'prophylactic' reasoning in schools. In this paper we analyze the fantasies and powerful beliefs that make it difficult to problematize and critically reflect on the emergence of digitalization and Big Data. Inspired by psychoanalytical theory (Zizek, 2008), political and organizational theory (Ball, 2008; Agamben, 2013; Weick et al., 2005; Gioia & Chittipeddi, 1991), and anthropological studies of the performance of magic in 'primitive' and 'modern' organizations (Malinowski, 1948; Clark & Salaman, 1996), we discuss examples of fantasies from previous and ongoing research (Rüsselbæk Hansen & Phelan, 2019; Bøje et al., 2018; Balslev & Raae, 2019). We argue that delicate balances between monitoring and democracy are tipping. Furthermore, we argue that digitalization and Big Data support, and are supported by, a neoliberal fantasy in which measurement, clarity, comparison, and competition set the educational scene (Brown, 2015). In this way a certain regime is produced which regulates schools, teachers, and students in ways that risk dismantling democratic engagement and conversation.
This scenario will be illustrated and discussed: how can it be avoided, so that democracy does not turn into another lost ideal in education?
One way to study the politics of big data is to inspect their assemblages. By opening up the "black boxes" of data assemblages, it is possible to reconstruct the choices, compromises, conflicts and agreements that contributed to the construction of a given datum. Drawing on in-depth interviews and focus groups with experts and specialists who work within three European data centers, we unveil the interdependence between social and technical aspects and between a series of internal and external actors, all of which contribute to the data assemblage. Results show that communities of experts, technologies, stakeholders and end-users are entwined components that interact in a contingent and complex web of negotiations and constraints, framing what is possible, desirable and expected of data.
The purpose of the article is to examine the rationale behind the argument that Big Data, associated with Artificial Intelligence (AI), is geopolitical in nature. To this end, it classifies the different dimensions that extend its understanding – its methodological, cognitive, ideological, geopolitical and practical functions – and studies the factors that determine its geopolitical nature. From this follow the qualitative transformations that characterize the content and impact of geopolitical competition in the information society. The central conclusion is that the technological advances that express the geopolitical nature of these tools are modifying power relations and the way in which States relate to each other, and to their territories, thus revolutionizing traditional notions and approaches to understanding geopolitics in the context of the information society. In this sense, national interests and their strategic lines are being rethought, as is the projection of national power in the political geography of nations on an international scale. The research applied assessments from the qualitative paradigm, using the method of author triangulation, which allowed identification of the elements that distinguish the debate about the geopolitical nature of Big Data combined with Artificial Intelligence.
There is no doubt that one of the most obvious and far-reaching consequences of the Internet and global interconnection through the network is the enormous volume of information to which we have access. It is in this context that so-called "Big Data" appears, exposing us to great changes in different areas of our lives and proposing scenarios that point to open government, transparency and greater closeness to citizens. However, this new reality poses many challenges for Public Administration, and there appear to be no unique strategies or models for its implementation. The aim of this work is to review some of the most important concepts involved in this era of Big Data in the public sector.
Big Data for Qualitative Research covers everything small data researchers need to know about big data, from the potentials of big data analytics to its methodological and ethical challenges. The data that we generate in everyday life is now digitally mediated, stored, and analyzed by web sites, companies, institutions, and governments. Big data is large-volume, rapidly generated, digitally encoded information that is often related to other networked data and can provide valuable evidence for the study of phenomena. This book explores the potentials of qualitative methods and analysis for big data, including text mining, sentiment analysis, information and data visualization, netnography, follow-the-thing methods, mobile research methods, multimodal analysis, and rhythmanalysis. It debates new concerns about ethics, privacy, and dataveillance for big data qualitative researchers. This book is essential reading for those who do qualitative and mixed methods research and are curious, excited, or even skeptical about big data and what it means for future research. Now is the time for researchers to understand, debate, and envisage the new possibilities and challenges of the rapidly developing and dynamic field of big data from the vantage point of the qualitative researcher.
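To make the methods named above concrete, here is a minimal lexicon-based sentiment-analysis sketch of the kind a qualitative researcher might apply to large volumes of short texts. The word lists, example posts, and scoring rule are hypothetical illustrations only; the book surveys such methods, and real studies would use a validated lexicon rather than these toy lists.

```python
# Minimal lexicon-based sentiment scoring (illustrative; word lists are hypothetical).
POSITIVE = {"good", "great", "helpful", "excited"}
NEGATIVE = {"bad", "poor", "skeptical", "worried"}

def sentiment_score(text):
    """Return (positive - negative) word count, normalized by token count."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

# Hypothetical social-media posts scored in bulk
posts = [
    "Great dataset, very helpful for our study!",
    "Worried about dataveillance, this is bad.",
]
for p in posts:
    print(f"{sentiment_score(p):+.2f}  {p}")
```

Even this crude score can triage millions of posts for closer qualitative reading, which is the division of labor between big data analytics and small data interpretation that the book explores.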
The interaction between information, innovation and market outcomes is shaping the modern digital industries of the 21st century: the business models of search engines, social networks, e-commerce websites and marketplaces are highly reliant on the ability to gather and process large amounts of data. At the same time, it is increasingly recognized that the use of Big Data by online platforms and intermediaries has far-reaching consequences not only for economic activity, but also for social and political mechanisms: technological developments affecting information flows affect the organization of markets as well as the nature of individual interactions and the functioning of the political process. A set of complementary policy tools is needed to define a comprehensive governance of online markets that effectively protects competition, consumers, individuals' privacy and media pluralism. Not all sensitive issues raised by Big Data are also competition issues. However, because of the high degree of concentration that characterizes online markets, antitrust policy finds itself at the crossroads between Big Data and the transformative effects that the Internet is having on the economy and on society. Antitrust enforcement is well equipped and sufficiently flexible to adapt its analytical tools to deal effectively with potential data-driven competition problems and thus contribute to the economic governance of digital markets. This article focuses on the implications that Big Data has for antitrust enforcement, i.e. on the potential application of competition law to those (pathological) situations in which Big Data might be used by a dominant company to foreclose competitors or to exploit consumers, might be a relevant factor in the assessment of mergers' anticompetitive effects, or might facilitate collusive behavior.
Governments are responsible for supporting every citizen's employability and minimum standard of living, and almost all countries have expressed their concern for this. There are several ways of helping the poor and weaker sections of society, as well as all residents of a community or area or the whole population, and one of them is the subsidy. All subsidy schemes have two objectives: to help the common man and to upgrade his economic and living status. These schemes have several success and failure stories and need continual upgrading to prevent fraud, minimize the financial burden on government, and maximize their impact. The main objective of the present study is to assess the issues associated with government subsidies and to propose solutions based on Big Data, the most accepted source of information in the current information era. The study reviews the sources of information that may be usable in the subsidy mechanism and evaluates them through data-driven techniques that are more transparent and democratic, and recommends a more responsible governance system based on stepwise monitoring and verification from the data.
Abstract The delivery of psychiatric care is changing, with a new emphasis on integrated care, preventative measures, population health, and the biological basis of disease. Fundamental to this transformation are big data and advances in the ability to analyze these data. The impact of big data on the routine treatment of bipolar disorder today and in the near future is discussed, with examples relating to health policy, the discovery of new associations, and the study of rare events. The primary sources of big data today are electronic medical records (EMR), claims, and registry data from providers and payers. In the near future, data created by patients from active monitoring, passive monitoring of Internet and smartphone activities, and sensors may be integrated with the EMR. Diverse data sources from outside of medicine, such as government financial data, will be linked for research. Over the long term, genetic and imaging data will be integrated with the EMR, and there will be more emphasis on predictive models. Many technical challenges remain when analyzing big data, relating to its size, heterogeneity, complexity, and the unstructured text data in the EMR. Human judgement and subject matter expertise are critical parts of big data analysis, and the active participation of psychiatrists is needed throughout the analytical process.
Big data applications in the transport sector have attracted national and EU-level interest as a driver for future economic growth, and at the same time as a source of concern in terms of negative socio-economic impacts. This report reviews current policies implemented in the EU, its Member States and internationally which support or restrict the (re-)use, linking and sharing of data, in the context of big data techniques and in the transport sector. The report also illustrates, through selected examples of transport-related private companies, the types of private-sector policies that have been adopted or promoted. While there are no distinctly big data policies, each political entity has implemented some policies aimed at protecting the privacy of its citizens, encouraging data sharing among private- and public-sector entities, and supporting the digitalization of the transport sector. Some of the key areas of policy in the transport sector are, for instance, the implementation of Intelligent Transport Systems, expanded Open Data policies, Automated Driving, and Smart Mobility. Preceding, and in light of, these developments, the private sector has also moved to incorporate big data techniques into its own business models as process or product innovations. The potential applications in the transport sector are diverse, as digitalization is a major trend in the sector. The report covers six distinct transport sub-sectors where big data is or could potentially be applied. The aim is to highlight the challenges and enablers of data sharing in the different cases. The cases are: Railway Operators, Open Data in the Airport Operator Context, Real-time Road Traffic Management, Big Data in Supply Chain Management, Managing Port Operations, and Connected and Automated Vehicles.
The development of a policy roadmap to foster the growth of big data in transport will require an understanding of how existing policies affect the economic, political, social and ...