Malware and Machine Learning
In: Intelligent Methods for Cyber Warfare; Studies in Computational Intelligence, pp. 1-42
33 results
In: Proceedings of the Weizenbaum Conference 2019 "Challenges of Digital Inequality - Digital Education, Digital Work, Digital Life"
Recently, numerous proposals for guidelines on the ethical development and use of artificial intelligence (AI) have been published. This body of work contains timely contributions toward sensitizing developers to the ethical and societal implications of their work. However, sustainably embedding ethics in largely algorithm-based technology development, research, and education requires precisely framing the origins of the new vulnerabilities being created. Scholars have recently been discussing the ethics of technology that is in some way "opaque" to at least some of its stakeholders. This "opacity" can take several forms, which are discussed in this paper. Such opacity can create vulnerabilities in various ways and, hence, relevant ethical, societal, epistemic, and regulatory challenges. The paper provides a non-exhaustive list of examples from healthcare that call for educational resources and for development processes that seek to reveal and counter these opacities.
In: Controlling Uncertainty, pp. 80-114
In: Computational Sustainability; Studies in Computational Intelligence, pp. 13-29
In: Advances in Multimedia Information Processing — PCM 2002; Lecture Notes in Computer Science, pp. 928-935
In: Proceedings of the Weizenbaum Conference 2019 "Challenges of Digital Inequality - Digital Education, Digital Work, Digital Life"
This paper centrally asks how the ubiquitous, ever-new digital technologies of 'our' everyday lives transform learning at the digital human-machine interface, from the perspective of feminist science and technology studies. How can we account for emerging forms of interwoven human and machine learning? Approaching this question through the term learning cultures, the paper emphasizes an understanding of learning not as a proficiency of an entity embodying either natural or artificial intelligence, but rather as a culturally situated and materially enacted process. In so doing, the paper brings together recent impulses that suggest a re-conceptualization of learning, e.g. through the notion of "machine learners" (Mackenzie 2017) or that of "posthuman learning" (Hasse 2018). Reading these insights together, I finally suggest an account of becoming responsible for the learning cultures of digital technologies through a reconsidered notion of interwoven human/machine learning.
In: Advances in Group Decision and Negotiation; Programming for Peace, pp. 227-250
In: Challenges and perspectives of hate speech research, pp. 355-369
The advent of social media has increased digital content - and, with it, hate speech. Advances in machine learning help detect online hate speech at scale, but scale is only one part of the moderation problem. Machines do not decide what constitutes hate speech; that is a matter of societal norms. Power relations establish such norms and thus determine who gets to say what constitutes hate speech. Without considering this data-generation process, a fair automated hate speech detection system cannot be built. This chapter first examines the relationship between power, hate speech, and machine learning. It then examines how an intersectional lens - one focused on power dynamics between and within social groups - helps identify bias in the data sets used to build automated hate speech detection systems.
In: Modeling Decisions for Artificial Intelligence; Lecture Notes in Computer Science, pp. 70-80
In: Fuzzy Sets and Their Extensions: Representation, Aggregation and Models; Studies in Fuzziness and Soft Computing, pp. 431-452
In: Advances in Group Decision and Negotiation; Programming for Peace, pp. 251-318
In: Intelligence and Security Informatics; Lecture Notes in Computer Science, pp. 362-367
In: Modeling Decisions for Artificial Intelligence; Lecture Notes in Computer Science, pp. 92-103
In: Proceedings of the Weizenbaum Conference 2022: Practicing Sovereignty - Interventions for Open Digital Futures, pp. 53-65
Taking the notion of health as a leitmotif, this paper discusses some conceptual boundaries for using machine learning - a data-driven, statistical, and computational technique in the field of artificial intelligence - for epistemic purposes and for generating knowledge about the world based solely on the statistical correlations found in data (i.e., the "End of Theory" view). The thrust of the argument is that prior theoretical conceptions, subjectivity, and values would - because of their normative power - inevitably blight any effort at knowledge-making that seeks to be driven exclusively by data and nothing else. The conclusion suggests that machine learning will neither resolve nor mitigate the serious internal contradictions in the "biostatistical theory" of health - the most widely discussed data-driven theory of health. Defining such notions is an ongoing and fraught societal dialogue concerned not only with what is, but also with what should be. This dialogical engagement is a question of ethics and politics, not one of mathematics.
In: Fuzzy Sets and Their Extensions: Representation, Aggregation and Models; Studies in Fuzziness and Soft Computing, pp. 357-375