This Handbook offers a lively, authoritative, up-to-date exploration of pressing ethical issues in our digital world. An international team of philosophers gives critical appraisals of research in this fast-growing field, exploring novel approaches and arguments that will shape the agenda on digital ethics for years to come.
Privacy matters because it shields us from possible abuses of power. In the age of AI and the internet, it is more important than ever. Carissa Véliz offers a much-needed philosophical account of privacy by exploring five basic questions: What is privacy? Where does it come from? Why does it matter? What should we do about it? Where are we now?
Anonymity promotes free speech by protecting the identity of people who might otherwise face negative consequences for expressing their ideas. Wrongdoers, however, often abuse this invisibility cloak. Defenders of anonymity online emphasise its value in advancing public debate and safeguarding political dissent. Critics emphasise the need for identifiability in order to achieve accountability for wrongdoers such as trolls. The problematic tension between anonymity and identifiability online lies in the desirability of having low costs (no repercussions) for desirable speech and high costs (appropriate repercussions) for undesirable speech. If we practise either full anonymity or full identifiability, we end up having either low or high costs in all online contexts and for all kinds of speech. I argue that free speech is compatible with instituting costs in the form of repercussions and penalties for controversial and unacceptable speech. Costs can minimise the risks of anonymity by providing a reasonable degree of accountability. Pseudonymity is a tool that can help us regulate those costs while furthering free speech. This article argues that, in order to redesign the Internet to better serve free speech, we should shape much of it to resemble an online masquerade.
"Perhaps the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is. Consider, for example, the familiar proposal that the right to privacy is the right "to be let alone." On the one hand, this doesn't seem to take in enough. The police might say, "We grant we used a special X-ray device on Smith, so as to be able to watch him through the walls of his house; we grant we trained an amplifying device on him so as to be able to hear everything he said; but we let him strictly alone: we didn't touch him, we didn't even go near him; our devices operate at a distance." Anyone who believes there is a right to privacy would presumably believe that it has been violated in Smith's case; yet he would be hard put to explain precisely how, if the right to privacy is the right to be let alone. And on the other hand, this account of the right to privacy lets in far too much. If I hit Jones on the head with a brick I have not let him alone. Yet, while hitting Jones on the head with a brick is surely violating some right of Jones', doing it should surely not turn out to violate his right to privacy. Else, where is this to end? Is every violation of a right a violation of the right to privacy? It seems best to be less ambitious, to begin with at least. I suggest, then, that we look at some specific, imaginary cases in which people would say, "There, in that case, the right to privacy has been violated," and ask ourselves precisely why this would be said, and what, if anything, would justify saying it."
Many are calling for concrete mechanisms of oversight for health research involving artificial intelligence (AI). In response, institutional review boards (IRBs) are being turned to as a familiar model of governance. Here, we examine the IRB model as a form of ethics oversight for health research that uses AI. We consider the model's origins, analyze the challenges IRBs are facing in the contexts of both industry and academia, and offer concrete recommendations for how these committees might be adapted in order to provide an effective mechanism of oversight for health-related AI research.