The public health crisis created by the COVID-19 pandemic has placed pressure on many of those affected. The privacy of COVID-19 patient data therefore needs to be protected by the government. The purpose of this study was to examine privacy regulations for the protection of COVID-19 patient data. This research is a literature review, conducted by analyzing six articles published between 2020 and 2021 on the protection of COVID-19 patients' data privacy in several national and international journals. The analysis shows that misuse of COVID-19 patient data can lead to discrimination against and exclusion of those involved, as well as heightened public fear. Specific legal frameworks should therefore be adopted to reduce the risk of misuse of COVID-19 patients' personal data.
China's Social Credit System (SCS) is a reputation system adopted by the Government of the People's Republic of China that establishes a mechanism for rewarding and punishing its citizens based on their behavior and their compliance with laws and regulations. This article reviews the concept and scope of implementation of China's SCS and analyzes whether the Indonesian Government could adopt it into Indonesia's legal system and legal culture. The article reflects doctrinal legal research that collects primary and secondary sources and uses statutory, comparative, and analytical approaches. It finds that the SCS fundamentally aims to create a comprehensive data-based system for improving citizens' behavior through a scoring system that entails rewards and sanctions. The system's wide acceptance among PRC citizens, however, does not resolve a remaining legal issue: the lack of protection of privacy rights, particularly regarding the naming and shaming of blacklisted persons. The article suggests that Indonesia may adopt some aspects of China's SCS, including the basic idea of raising the standard of citizen behavior, the establishment of a comprehensive system integrating all data, and partial adoption of the data analysis. However, the naming and shaming of blacklisted persons is not suitable for adoption by the Indonesian government in the near future, in light of Indonesia's legal culture and the communal life of the Indonesian people.
This article presents a perspective that the interplay between high-level ethical principles, ethical praxis, plans, situated actions, and procedural norms influences ethical AI practices. This is grounded in six case studies, drawn from fifty interviews with stakeholders involved in AI governance in Russia. Each case study focuses on a different ethical principle—privacy, fairness, transparency, human oversight, social impact, and accuracy. The paper proposes a feedback loop that emerges from human-AI interactions. This loop begins with the operationalization of high-level ethical principles at the company level into ethical praxis, and plans derived from it. However, real-world implementation introduces situated actions—unforeseen events that challenge the original plans. These turn into procedural norms via routinization and feed back into the understanding of operationalized ethical principles. This feedback loop serves as an informal regulatory mechanism, refining ethical praxis based on contextual experiences. The study underscores the importance of bottom-up experiences in shaping AI's ethical boundaries and calls for policies that acknowledge both high-level principles and emerging micro-level norms. This approach can foster responsive AI governance, rooted in both ethical principles and real-world experiences.
The demands of social democratic government, the growth of electronic commerce, and the advance of technology have fueled the debate over internet privacy. Technology offers unprecedented opportunities but can also become a tool of abuse. Debate in the United States centers on the conflicting interests of industry self-regulation versus government regulation. Technological and market-based solutions are ineffective because they lead to inadequate and inconsistent protection. Many user-driven privacy choices can impede the growth of consumer trust. Voluntarily adopted privacy policies are either extremely limited or easily circumvented by tracking technology that allows consumers no control over the collection of their personal data. Incompatible national standards can disrupt data flow. The United States could address these concerns by shifting away from its industry- and state-based regulatory model to one based on fair information practices, with oversight assigned to a single agency led by a U.S. Privacy Commissioner who could work with international officials to resolve complex privacy issues.
Parents and legislators have become increasingly concerned about children's access to Internet websites. The Communications Decency Act was the first attempt to address the issue, but it failed to survive a constitutional challenge. Logically, the next step is to rate websites on the Internet. However, mandatory rating legislation would force providers to rate their sites using rating vocabularies such as RSACi, with which they might not agree. A voluntary rating system or old-fashioned parental authority seems like a good solution, but both rest on subjective determinations that not everyone will accept. Society has a legitimate interest in protecting children from danger, and parents must be given effective tools to regulate the content their children see on the Internet. Stand-alone blocking software seems to be the best option available to parents today. However, blocking software companies maintain "block lists" that are not disclosed to purchasers of the software. Disclosure of these block lists, along with allowing users to block and unblock websites at their own discretion, is currently the most effective alternative available to parents. Ultimately, parents who use stand-alone blocking software will be able to block material they deem offensive and unsuitable for children. Providers will not be required to rate their speech and will be free to publish whatever they like, while children will be protected from the indecent speech and materials readily available on the Internet.
In the last few years, there has been a dramatic increase in the use of remote-controlled copters or "drones" by recreational users to capture aerial photographs and videos on an unprecedented scale. The convergence of cutting-edge technological developments in gyroscopic gimbals, long-range wireless transmissions, GPS-enabled stabilisation and flightpath-preprogramming, first-person-views, and compact digital imaging has led to the proliferation of these camera-carrying devices that even hobbyists can pilot with reasonable safety. However, there has been a consistent stream of public concern relating to issues of safety, privacy, and disruption of commercial interests. Lost in the paranoid cacophony is a question that warrants proper legislative reflection: how can these drones be regulated in a way that is proportionate and sensible? With Singapore's recently enacted Unmanned Aircraft Act as the focal point, this article will compare and contrast the various regulations around the world to determine where the best balance has been struck between the freedom to create art and the purported competing demands of safety, privacy, and commercial interests.
In recent years both the United States and China have introduced data protection and cybersecurity frameworks to address the social, economic, and political challenges posed by the platform economy. While much of the discourse has focused on the implementation of these frameworks through top-down, centrally administered institutions, little discussion has turned to the role of platform regulation from the bottom up—that is, the ability of individuals to affect corporate behavior through the courts as litigants. This paper attempts to address that gap by offering a comparative analysis of the role of data protection litigation in both the United States and China as one institutional mechanism through which governments may pursue their larger regulatory goals. Privacy and data protection laws in both countries grant individuals enforceable rights against other private and public actors. While litigants in both the United States and China face barriers to affecting corporate behavior, they do so for notably dissimilar reasons stemming from the distinct historical development of the two legal systems. Despite these differences, courts in both the United States and China will play an increasingly important role in platform regulation.
I noticed during the last years of my research that when it comes to data law, people are interested only in data protection, human rights, and the like. As a result, data law is often reduced to data protection law or, even worse, to "privacy law". Such a point of view is not wrong, because data law does protect—or at least genuinely tries to protect—privacy and human rights. But in adopting it, one is likely to refer only to a small part of data law and, what is worse, not to the essential part of it. Before going further, let me be clear about something: my point is not to minimize the interest of data protection. My point is to cast some light upon the dark side of data law: the freedom of data processing. More precisely, my point is that although data protection is important, it has not led legislators to adopt instruments promoting personal data secrecy. In fact, it is quite the opposite. Everyone can notice in his or her everyday life that data law instruments do not prevent personal data processing. And therein lies, in my opinion, the real purpose of data law instruments: promoting personal data processing by giving it legal security—in other words, paving the legal way for it. More precisely, data law instruments aim at setting a legal frame for data processing systems and, thus, for computer science, so that its full development can be compatible with human rights. In other words, data law instruments try to humanize the uses of computer science, not to annihilate them. [Translated from French:] The purpose of this work is to demonstrate that personal data law cannot be reduced to the right to the protection of personal data.
[Translated from French:] Indeed, there is a dominant thesis according to which personal data law has as its exclusive object the protection of the data subject and constitutes, as such, an element of the protection of private life. Yet such a reading proceeds from a clinical isolation of certain essential provisions within the instruments relating to personal data. Positive law invalidates this thesis and reveals that these norms pursue two distinct aims. The first indeed concerns the protection of the data subject and has, as such, been isolated within a right to the protection of personal data in European Union law. The second concerns the unhindered use of information technology and implies the freedom to process personal data. Within the legal instruments relating to personal data, these two interests do not constitute two objectives of equal rank and weight juxtaposed side by side. They are articulated and hierarchized. The primary purpose of these instruments is to guarantee the freedom to process personal data, insofar as it conditions the freedom to use computing processes. The protection of the fundamental rights and freedoms of data subjects constitutes only its limit, strictly subordinated and circumscribed to this principal purpose. From this observation, it can be deduced that the legal instruments relating to personal data constitute a single special administrative police regime. The purpose of this regime is to enshrine and organize a public freedom that remains, to this day, unnamed: the freedom to process personal data. Its effect, rather than its object, is to protect the data subject—not only because the latter holds a fundamental right, but also and above all by virtue of the protection of a special public order.
Video surveillance—the monitoring of a specific area, event, activity, or person through an electronic device or a system for visual monitoring—is already established as a central tool of public security policy. Video surveillance also serves as a starting point for advanced technologies such as automatic number plate recognition (ANPR) and automatic facial recognition (AFR), which are becoming standard in many urban areas. The increased use of video surveillance technologies vastly expands the capabilities of governments and private actors to monitor the population and potentially violate fundamental human rights. The article provides a comparative analysis of the national regulatory frameworks governing video surveillance in public spaces in the former Yugoslav states and their compliance with the standards set by the new data protection regulatory framework, particularly the General Data Protection Regulation (GDPR). The article also gives an overview of major violations of the right to privacy by video surveillance, along with insight into new projects and technologies currently under deployment in the observed countries and their potential impact.
Defence date: 09 February 2018 ; Examining Board: Professor Deirdre Curtin, European University Institute; Professor Emeritus Marise Cremona, European University Institute; Professor Julia Hörnle, Queen Mary University of London; Professor Claudia Diaz, KU Leuven ; With citizens' movements mediated by the many technologies that aid our navigation, the potential for omnipresent surveillance may institute fundamental changes to the human condition. Locational privacy is pivotal in developing interpersonal associations and relational ties with others, and its function is therefore complex, rather than solely affording a degree of independence from the observations made by others. In this respect, a more nuanced understanding of the utility of location data is required; the current hierarchy that delineates personal data from special categories of personal data does not adequately appreciate the capacity for location data to act as a proxy for other sensitive personal data. Furthermore, the binary distinction that reflects the conceptualisation of the right to privacy as a negative right, with related concepts such as identity and personality formation viewed as positive constructs, is an increasingly difficult notion to preserve. The classification and terminology of technologies can illustrate how terms and legal metaphors are developed and applied so as to bridge gaps in applying existing context and precedent. Though the designation 'location data' once constituted a reasonable accommodation in nomenclature—an intelligible and easily comprehensible term, even while significantly oversimplifying the data it represented—technological advances have rendered the term increasingly problematic. This study asks whether the existing legal framework at the regional level in Europe is apt to provide sufficiently cogent and coherent regulation given recent developments in technologies.
The review analyses the risks associated with this predilection in data processing activities that allow for the identification of ever more intimate and nuanced details of a citizen's life, behaviours, and convictions through the analysis of their location data; in turn, it discerns the necessity of considering the resulting impacts on citizens' fundamental rights to privacy and personal data protection. ; Research conducted within the scope of this doctoral thesis was completed in the SURVEILLE project, co-funded by the European Commission within the Seventh Framework Programme. This project received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 284725.