Algorithms and Artificial Intelligence (AI) have been adopted in the public sector in different contexts. Conceptually, the implications of AI and algorithms for public administration remain unclear. Nevertheless, various international organizations have defined a series of ethical principles and legal issues to take into account. The European Union institutions have become a key player in guiding the adoption of AI, setting the first guidelines for public policies at the national level. The Spanish Government has developed a strategy along these lines, albeit one whose implementation rests with technology-policy managers who have their own perceptions and priorities. Hence, this paper begins with the following descriptive and exploratory questions: How are the main international organizations framing AI ethically and legally? What is the framework of priorities of the AI policy endorsed by the Government of Spain in connection with the European Union? How do the IT managers (CIOs) of the ministries perceive AI? To answer these questions, this study draws on a documentary analysis and on part of the results of a survey of the public managers responsible for IT implementation in Spanish ministries. Our analysis provides an exploratory answer to the research questions guiding the study. In addition, the results point to some key issues in designing policies and strategies for using AI in different politico-administrative contexts.
Artificial reproduction now offers subfertile couples a number of options that raise scientific and ethical questions. This article discusses the Canadian and British experiences in formulating regulations and legislation in this important field. Current work on mammalian embryo research foretells the direction that human research will take. The article stresses the need for family physicians to participate in the ethical decisions that accompany these new developments.
The book presents a peer-reviewed collection of papers from the 10th edition of the Artificial Economics conference, addressing a variety of issues related to macroeconomics, industrial organization, networks, management, and finance, as well as purely methodological questions. The field of artificial economics covers a broad range of methodologies that rely on computer simulation to model and study the complexity of economic and social phenomena. The grounding principle of artificial economics is the analysis of the aggregate properties of simulated systems populated by interacting adaptive agents equipped with heterogeneous individual behavioral rules. These macroscopic properties are neither foreseen nor intended by the artificial agents but are generated collectively by them; they are emergent characteristics of such artificially simulated systems.
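The grounding principle described above can be illustrated with a minimal agent-based sketch (our own toy example, not drawn from the book): agents follow simple heterogeneous individual rules, and the skewed aggregate wealth distribution that results is coded nowhere in the rules themselves.

```python
import random

def simulate_exchange(n_agents=100, steps=10_000, seed=42):
    """Toy exchange economy: each agent starts with equal wealth and
    follows a simple heterogeneous rule (its own fixed 'generosity'
    probability of giving away one unit when paired).  The unequal
    aggregate wealth distribution is an emergent property of the
    interactions, not something any agent intends."""
    rng = random.Random(seed)
    generosity = [rng.random() for _ in range(n_agents)]  # heterogeneous rules
    wealth = [100] * n_agents
    for _ in range(steps):
        giver = rng.randrange(n_agents)
        taker = rng.randrange(n_agents)
        if giver != taker and wealth[giver] > 0 and rng.random() < generosity[giver]:
            wealth[giver] -= 1
            wealth[taker] += 1
    return wealth

wealth = simulate_exchange()
print("total wealth:", sum(wealth))          # conserved by construction
print("spread:", max(wealth) - min(wealth))  # inequality has emerged
```

The point of the sketch is only the methodological one made in the abstract: the macroscopic property (the wealth spread) is generated collectively by the agents rather than specified anywhere in their individual behavioral rules.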
Research into the issues and problems raised by the introduction of artificial intelligence into various spheres of life of the individual, society, and the state is at the forefront of scientific inquiry at the present stage of information technology development. Artificial intelligence (AI) is being used now more than ever before, especially in the healthcare sector, where it provides a more efficient way to automate routine work and other daily tasks, as well as to manage patients and medical resources. Such systems can perform many of the tasks previously performed by humans, faster and more cheaply. This significant advantage has facilitated the activities of the interacting parties in healthcare: doctors and patients. Artificial intelligence continues to gain momentum. There are now machine learning solutions that can act, learn, understand, and predict, a step forward compared to the robotic assistants in surgical interventions and the processing of genetic codes previously controlled by artificial intelligence. The article demonstrates that the development of artificial intelligence in healthcare involves certain risks and problems: AI systems expose patients to the risk of injury, and the use of patient data in AI systems carries the risk of privacy invasion. The article weighs the advantages and disadvantages of introducing artificial intelligence in healthcare and draws the necessary conclusions.
Artificial intelligence systems are currently deployed in many areas of human activity. Such systems are increasingly assigned tasks that involve taking decisions about people or predicting future behaviour. These decisions are commonly regarded as fairer and more objective than those taken by humans, as AI systems are thought to be immune to influences such as emotions or subjective beliefs. In reality, using such a system guarantees neither objectivity nor fairness. This article describes the phenomenon of bias in AI systems and the role of humans in creating it. The analysis shows that AI systems, even when operating correctly from a technical standpoint, are not guaranteed to take decisions that are more objective than a human's, yet such systems can still be used to reduce social inequalities.
Rafał Rejmaniak is Assistant Professor in the Department of Historical and Legal Sciences, Theory and Philosophy of Law, and Comparative Law at the Faculty of Law, University of Białystok, Poland.
Artificial intelligence (AI) is increasingly reshaping service by performing various tasks, constituting a major source of innovation, yet threatening human jobs. We develop a theory of AI job replacement to address this double-edged impact. The theory specifies four intelligences required for service tasks—mechanical, analytical, intuitive, and empathetic—and lays out the way firms should decide between humans and machines for accomplishing those tasks. AI is developing in a predictable order, with mechanical mostly preceding analytical, analytical mostly preceding intuitive, and intuitive mostly preceding empathetic intelligence. The theory asserts that AI job replacement occurs fundamentally at the task level, rather than the job level, and for "lower" (easier for AI) intelligence tasks first. AI first replaces some of a service job's tasks, a transition stage seen as augmentation, and then progresses to replace human labor entirely when it has the ability to take over all of a job's tasks. The progression of AI task replacement from lower to higher intelligences results in predictable shifts over time in the relative importance of the intelligences for service employees. An important implication from our theory is that analytical skills will become less important, as AI takes over more analytical tasks, giving the "softer" intuitive and empathetic skills even more importance for service employees. Eventually, AI will be capable of performing even the intuitive and empathetic tasks, which enables innovative ways of human–machine integration for providing service but also results in a fundamental threat for human employment.
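The staged, task-level replacement logic the theory describes can be sketched as follows (a hypothetical illustration of the abstract's argument only; the function name, the encoding of the intelligence ordering, and the status labels are our own assumptions, not the authors'):

```python
# The theory's predicted order of AI development, from "lower" to "higher"
# intelligences: mechanical -> analytical -> intuitive -> empathetic.
ORDER = ["mechanical", "analytical", "intuitive", "empathetic"]

def job_status(task_intelligences, ai_level):
    """Classify a job given the intelligences its tasks require.

    task_intelligences: list of intelligence types, one per task.
    ai_level: index into ORDER of the highest intelligence AI has mastered.
    Replacement happens at the task level, easier intelligences first;
    a job is 'augmented' while only some tasks are replaced, and
    'replaced' once AI can take over all of them.
    """
    replaced = [t for t in task_intelligences if ORDER.index(t) <= ai_level]
    if len(replaced) == len(task_intelligences):
        return "job replaced"
    if replaced:
        return "augmentation"
    return "human only"

# A service job mixing a mechanical and an empathetic task, facing an AI
# that has mastered analytical (level 1) intelligence:
print(job_status(["mechanical", "empathetic"], ai_level=1))  # augmentation
```

The sketch also makes the abstract's implication concrete: as `ai_level` rises, jobs whose remaining human tasks require intuitive and empathetic intelligence stay in the "augmentation" stage longest.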