Editor's Note
In: Bulletin of science, technology & society, Volume 28, Issue 1, pp. 3-3
ISSN: 1552-4183
16 results
In: Studies in social justice, Volume 15, Issue 3, pp. 397-413
ISSN: 1911-4788
Information posted by youth in online social media contexts is regularly accessed, downloaded, integrated, and analyzed by academic researchers. The practice raises significant social justice considerations for researchers, including issues of representation and the equitable distribution of risks and benefits. Use of this type of data for research purposes helps to ensure representation in research of the voices of (sometimes marginalized) youth who participate in these online contexts, at times discussing issues that are also under-represented. At the same time, youth whose data are harvested are subject (often without notice or consent) to the risks associated with this research, while receiving little, if any, direct benefit from the work. These risks include the potential loss of online social community as well as threats to participant rights and wellbeing. This paper explores the tension between the social justice benefit of representation and considerations that would suggest caution, the latter including the inequitable distribution of research-related costs and benefits, and the traditional ethics concerns of participant autonomy and privacy in the context of youth participation in online discussions. In the final section, we propose guidelines and considerations for the conduct of online social media research to assist researchers in balancing and respecting representational considerations and participant rights and wellbeing, especially with youth.
In: (2021) 5(3) Journal of Gender Based Violence 531
SSRN
In: Ottawa Law Review, Volume 48, Issue 1
SSRN
In: New media & society: an international and interdisciplinary forum for the examination of the social dynamics of media and information change, Volume 20, Issue 3, pp. 845-861
ISSN: 1461-7315
Earlier research using qualitative techniques suggests that the default conception of online social networks is as public spaces with little or no expectation of control over content or distribution of profile information. Some research, however, suggests that users within these spaces have different perspectives on information control and distribution. This study uses Q methodology to investigate subjective perspectives with respect to privacy of, and control over, Facebook profiles. The results suggest three different types of social media users: those who view profiles as spaces for controlled social display, exerting control over content or audience; those who treat their profiles as spaces for open social display, exercising little control over either content or audience; and those who view profiles as places to post personal information to a controlled audience. We argue that these different perspectives lead to different privacy needs and expectations.
In: Ottawa Faculty of Law Working Paper No. 2016-31
SSRN
Working paper
Despite the many technological advances that could benefit the court system, the use of computers and network technology to facilitate court procedures is still in its infancy, and court procedures largely remain attached to paper documents and to the physical presence of the parties at all stages. More and more research is focusing on the use of technology to make the legal system more efficient and to reduce excessive legal costs and delays. The goal of this exploratory research project is to examine the experience of justice sector technology implementation from the perspective of individuals involved first-hand in the implementation process. This study will provide insight into the political and cultural factors that support and hinder the implementation of technologies in the justice sector. Unstructured interviews were conducted with individuals involved in the planning and implementation of technological change in Canadian courts in order to gather their perspectives on the change process. These key informants were asked to discuss the process of technological change in their courts, the barriers that they experienced to such technological change, and the factors that promote or support the implementation of technology by courts. A grounded theory approach was used to identify emergent themes related to these questions. The results provide insight into the factors that promote and impede the implementation of technologies by Canadian courts.
BASE
Researchers in psychology have long known that preferences are constructed in the decision-making process, influenced by choice environments that trigger unconscious biases and heuristics. As a result, choices, including those of voters, can be manipulated by political information. Personalised political messages, designed to influence based on detailed personal profiles, can undermine voter autonomy. We suggest that these practices should therefore be regulated, and discuss policy options and approaches, specifically the appropriate balance between freedom of political speech and privacy rights and interests, the implications of voter analytics for the electoral process, and how and by whom sophisticated voter analytics practices should be regulated.
BASE
In the last year and a half, deepfakes have garnered a lot of attention as the newest form of digital manipulation. While not problematic in and of itself, deepfake technology exists in a social environment rife with cybermisogyny, toxic technocultures, and attitudes that devalue, objectify, and use women's bodies against them. The basic technology, which in fact embodies none of these characteristics, is deployed within this harmful environment to produce problematic outcomes, such as the creation of fake and non-consensual pornography. The sophisticated technology and metaphysical nature of deepfakes as both real and not real (the body of one person, the face of another) makes them impervious to many technical, legal, and regulatory solutions. For these same reasons, defining the harm deepfakes cause to those targeted is similarly difficult, and very often targets of deepfakes are not afforded the protection they require. We argue that it is important to put an emphasis on the social and cultural attitudes that underscore the nefarious use of deepfakes, and thus to adopt a more material-based approach, as opposed to a technological one, to understanding the harm presented by deepfakes.
BASE
In: Canadian journal of law and society: Revue canadienne de droit et société, Volume 15, Issue 1, pp. 81-110
ISSN: 1911-0227
This article examines the effect that cultural and technological changes have had on interpersonal communication and aims to provide an interdisciplinary explanation for the recent proliferation of defamation in electronic media. The authors argue that the absence of certain extra-linguistic cues and established cultural conventions in the electronic environment often results in miscommunication which — if not itself defamatory — gives rise to emotional exchanges between interlocutors in a manner that provokes defamation. The authors begin their analysis with a discussion of defamation law as a recipient-oriented tort, demonstrating the importance of the context of communication in the determination of whether a particular remark carries a defamatory sense. In order to better understand how an online communication is received and understood by its recipients, the authors then investigate three differences between electronic and other media of communication: i) that the technology-mediated and text-based character of electronic communication makes the process of communication more difficult and the incidence of miscommunication more likely; ii) that the nature of social interaction in the online setting has a tendency to increase hostile communications that might be considered defamatory; iii) that the cultural context and standards of communication that develop in online communities will reduce the significance of these hostile communications. Applying these considerations to the law of defamation, the authors conclude by rejecting the naive point of view that a libel published through the Internet ought to be dealt with in exactly the same way that a libel published in a newspaper is dealt with. The authors end by calling for further empirical research about the content that is produced as a consequence of contextual challenges in electronic communication.
In: Bulletin of science, technology & society, Volume 30, Issue 2, pp. 130-143
ISSN: 1552-4183
The goal of this project is to identify guidelines for privacy policies that children and teens can accurately interpret with relative ease. A three-pronged strategy was used to achieve this goal. First, an analysis of the relevant literature on reading was undertaken to identify the document features that affect comprehension. Second, focus groups were conducted with children and teens to examine their experience and practices in the interpretation of privacy policies found on sites that have been identified as favorite kids' sites. Based on the results of the literature review and focus groups, a set of potential guidelines was identified. Finally, the efficacy of these guidelines was tested in the final phase of the research project. The result of this work is a set of 14 guidelines for the drafting of privacy policies that make a difference by improving the comprehensibility of privacy policies encountered by Canadian children and teens as they surf the Net.
In: Windsor Yearbook of Access to Justice, Volume 31, Issue 2
SSRN
Working paper
In: Karim Benyekhlef, Jane Bailey, Jacquelyn Burkell & Fabien Gélinas (eds), eAccess to Justice, Ottawa, University of Ottawa Press, 2016.
SSRN
Working paper
In: Windsor Yearbook of Access to Justice, Volume 33, Issue 2
SSRN
In: Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada (Toronto: LexisNexis Canada, 2021)
SSRN
Working paper