The following links lead from the respective local libraries to the full text:
Alternatively, you can try to access the desired document yourself via your local library catalog.
If you experience access problems, please feel free to contact us.
14,783 results
Sort by:
In: Social'naja politika i social'noe partnerstvo (Social Policy and Social Partnership), Issue 1, pp. 34-43
Research on the issues and problems of introducing artificial intelligence into various spheres of life of the modern individual, society, and the state is a major trend at the present stage of information technology development. Artificial intelligence (AI) is now being used more than ever before, especially in the healthcare sector. AI has provided a more efficient way to automate routine work and other daily tasks, as well as to manage patients and medical resources. Such systems can perform most of the tasks previously carried out by humans, faster and at lower cost. This significant advantage has eased the work of the interacting parties in healthcare: doctors and patients. Artificial intelligence continues to gain momentum. Modern machine learning solutions can now act, learn, understand, and predict, a step forward compared with the robotic assistants in surgical interventions and the genetic-code analysis previously controlled by artificial intelligence. The article demonstrates that the development of artificial intelligence in healthcare involves certain risks and problems: AI systems expose patients to the risk of injury, and the use of patient data in AI systems carries the risk of privacy invasion. The article weighs the advantages and disadvantages of introducing artificial intelligence in healthcare and draws the necessary conclusions.
In: Journal of service research, Vol. 21, Issue 2, pp. 155-172
ISSN: 1552-7379
Artificial intelligence (AI) is increasingly reshaping service by performing various tasks, constituting a major source of innovation, yet threatening human jobs. We develop a theory of AI job replacement to address this double-edged impact. The theory specifies four intelligences required for service tasks—mechanical, analytical, intuitive, and empathetic—and lays out the way firms should decide between humans and machines for accomplishing those tasks. AI is developing in a predictable order, with mechanical mostly preceding analytical, analytical mostly preceding intuitive, and intuitive mostly preceding empathetic intelligence. The theory asserts that AI job replacement occurs fundamentally at the task level, rather than the job level, and for "lower" (easier for AI) intelligence tasks first. AI first replaces some of a service job's tasks, a transition stage seen as augmentation, and then progresses to replace human labor entirely when it has the ability to take over all of a job's tasks. The progression of AI task replacement from lower to higher intelligences results in predictable shifts over time in the relative importance of the intelligences for service employees. An important implication from our theory is that analytical skills will become less important, as AI takes over more analytical tasks, giving the "softer" intuitive and empathetic skills even more importance for service employees. Eventually, AI will be capable of performing even the intuitive and empathetic tasks, which enables innovative ways of human–machine integration for providing service but also results in a fundamental threat for human employment.
Artificial intelligence systems are currently deployed in many areas of human activity. Such systems are increasingly assigned tasks that involve taking decisions about people or predicting future behaviours. These decisions are commonly regarded as fairer and more objective than those taken by humans, as AI systems are thought to be resistant to such influences as emotions or subjective beliefs. In reality, using such a system guarantees neither objectivity nor fairness. This article describes the phenomenon of bias in AI systems and the role of humans in creating it. The analysis shows that AI systems, even if operating correctly from a technical standpoint, are not guaranteed to take decisions that are more objective than a human's, yet such systems can still be used to reduce social inequalities.
Rafał Rejmaniak is Assistant Professor in the Department of Historical and Legal Sciences, Theory and Philosophy of Law, and Comparative Law at the Faculty of Law, University of Białystok, Poland (r.rejmaniak@uwb.edu.pl).
BASE
SSRN
Working paper
In: SpringerBriefs in Ethics
1. Introduction -- 2. What is Artificial Intelligence? -- 2.1. Definitions -- 2.2. History -- 2.3. State of play and future prospects -- 3. Bioethical inquiries about artificial intelligence -- 3.1. Bioethical issues common to weak and strong artificial intelligence -- 3.2. Bioethical issues resulting from strong artificial intelligence -- 3.2.1. Ontological discussions -- 3.2.2. Consequential discussions -- 4. Medicine and artificial intelligence -- 4.1. Use of artificial intelligence in health services -- 4.2. Main challenges in medical ethics -- 4.2.1. Confidentiality and privacy -- 4.2.2. Compassion, veracity and fidelity -- 4.2.3. Communication skills and case based approach -- 5. Conclusion.
Artificial Intelligence in Highway Safety provides cutting-edge advances in highway safety using AI. The author, a highway safety expert, pursues highway safety within its contexts, while drawing attention to the predictive powers of AI techniques in solving complex problems for safety improvement. The book provides both theoretical and practical aspects of highway safety. Each chapter presents theory and its contexts in plain language with several real-life examples. It is suitable for anyone interested in highway safety and AI, providing an illuminating and accessible introduction to this fast-growing research trend. Material supplementing the book can be found at https://github.com/subasish/AI_in_HighwaySafety, including data sets and R code.
In: CLSR-D-22-00301
SSRN
"'Artificial Intelligence in Highway Safety' is a cutting-edge introduction to the fast-moving world of AI in the field of highway safety. The author, Subasish Das, a highway safety expert, pursues highway safety within its contexts, while drawing attention to the predictive powers of AI techniques in solving complex problems for safety improvement. This book provides both theoretical and practical aspects of highway safety. Each chapter contains theory and its contexts in plain language with several real-life examples. This book is suitable for anyone interested in highway safety and AI and provides an illuminating and accessible introduction to this fast-growing research trend. The book's website at http://subasish.github.io/AIiHS offers a variety of supplemental materials, including data sets and R codes"--
In: International Journal of Forensic Research, Artificial Intelligence in Forensic Science, In J Fore Res, 4(1), 172-173
SSRN
In: Moral philosophy and politics, Vol. 9, Issue 2, pp. 219-237
ISSN: 2194-5624
Abstract
For their deployment in human societies to be safe, AI agents need to be aligned with value-laden cooperative human life. One way of solving this "problem of value alignment" is to build moral machines. I argue that the goal of building moral machines aims at the wrong kind of ideal, and that instead, we need an approach to value alignment that takes seriously the categorically different cognitive and moral capabilities between human and AI agents, a condition I call deep agential diversity. Domestication is the answer to a similarly structured problem: namely, how to integrate nonhuman animals that lack moral agency safely into human society and align their behavior with human values. Just like nonhuman animals, AI agents lack a genuinely moral agency; and just like nonhuman animals, we might find ways to train them to nevertheless assist us, and live and work among us – to "domesticate" them, in other words. I claim that the domestication approach does well in explaining many of our intuitions and worries about deploying AI agents in our social practices.
In: The Journal of the Legal Writing Institute (2022)
SSRN
In: Computers and Electronics in Agriculture, Vol. 40, Issue 1-3, pp. 1-3