Bill introduced by the Texas House of Representatives relating to the privacy of personal identifying information and the creation of the Texas Privacy Protection Advisory Council.
Recent scandals involving the abuse of personal information from social media platforms, and numerous user identity data breaches, raise concerns about the technical, commercial, and ethical aspects of the privacy and security of user data. The European Union's new General Data Protection Regulation (GDPR) is one of the largest changes in data privacy regulation and entails several key regulatory measures for both data controllers and data processors to empower EU citizens and protect their privacy. In this research work, we propose a conceptual design and high-level architecture for a Blockchain-based Personal Data and Identity Management System (BPDIMS), a human-centric and GDPR-compliant personal data and identity management system based on blockchain technology. We describe how BPDIMS's architecture utilizes blockchain technology to provide a high level of security, trust, and transparency. We discuss how BPDIMS's human-centric approach, combined with GDPR compliance, shifts control over personal data to end users and better empowers them.
Introduction: The first regulatory rulings by the U.S. Food and Drug Administration on direct-to-consumer (DTC) genetic testing services are expected soon. As the process of regulating these and other genetic tests moves ahead, it is important to understand the preferences of DTC genetic testing customers about the regulation of these products. Methods: An online survey of customers of three DTC genetic testing companies was conducted 2–8 months after they had received their results. Participants were asked about the importance of regulating the companies selling DTC genetic tests. Results: Most of the 1,046 respondents indicated that it would be important to have a nongovernmental (84%) or governmental (73%) agency monitor DTC companies' claims to ensure their consistency with scientific evidence. However, 66% also felt that it was important that DTC tests be available without governmental oversight. Nearly all customers favored a policy to ensure that insurers and law enforcement officials could not access their information. Discussion: Although many DTC customers want access to genetic testing services without restrictions imposed by government regulation, most also favor an organization operating alongside DTC companies to ensure that the claims made by the companies are consistent with sound scientific evidence. This seeming contradiction may indicate that DTC customers want to ensure that they have unfettered access to high-quality information. Additionally, policies to help ensure the privacy of data would be welcomed by customers, despite relatively high confidence in the companies.
I noticed during the last years of my research that when it comes to data law, people are only interested in data protection, human rights, and the like. As a result, data law is often reduced to data protection law or, even worse, to "privacy law". Such a point of view is not wrong, because data law does protect – or at least genuinely tries to protect – privacy, human rights, and so on. But by doing so, one is likely to refer only to a small part of data law and, what is worse, not to the essential part of it. Before going further, let me be clear about something: my point is not to minimize the interest of data protection. My point is to cast some light upon the dark side of data law: the freedom of data processing. More precisely, my point is that although data protection is important, it has not led legislators to adopt instruments promoting personal data secrecy. As a matter of fact, it is quite the opposite. Indeed, everyone can notice in his or her everyday life that data law instruments do not prevent personal data processing. And therein lies, in my opinion, the real purpose of data law instruments: promoting personal data processing by giving it legal security. In other words, paving the legal way for it. More precisely, data law instruments aim at setting a legal framework for data processing systems and – thus – for computer science, so that its full development can be compatible with human rights. In other words, data law instruments try to humanize the uses of computer science, not to annihilate them. ; The purpose of this work is to demonstrate that the law of personal data cannot be reduced to the right to the protection of personal data.
Indeed, there is a dominant thesis according to which the law of personal data has as its exclusive object the protection of the data subject and constitutes, as such, an element of the protection of private life. Yet such a reading proceeds from a clinical isolation of certain essential provisions within the instruments relating to personal data. The law ...
Nowadays, millions of companies and billions of users worldwide rely on networks, whether wireless or wired, for their daily work and entertainment. Due to the lack of privacy-by-design and the absence of strong security mechanisms, there are multiple ways for malicious users to penetrate networks and systems. Ubiquitous networking and the global Internet, which have become more portable and accessible than ever before through private and publicly available IT infrastructures, make unauthorized access more feasible. This also generates serious security and privacy concerns due to a number of ensuing cyber threats, especially in the case of Internet access via public Wi-Fi networks. In this context, Internet security should and can play an important role in protecting our everyday lives and online interactions. Yet most users are unaware of these threats and of the extent to which their privacy might be compromised. Regulations, such as the General Data Protection Regulation (GDPR), have been established to safeguard and improve the privacy and security of users and IT infrastructures, enforcing the installation of adequate cybersecurity measures. The application of regulations such as the GDPR is considered an issue of vital importance for protecting the privacy and ensuring the security of IT infrastructures and websites, and of data controllers and processors, both inside and outside the European Union. Such regulations may act as a useful tool set which, among other requirements, mandates the adoption of privacy (and security)-by-design. While the GDPR implies a minimum set of technical Internet security measures to be taken into consideration by companies and organizations to achieve compliance, it is highly important to emphasize the adoption of strong security mechanisms that will not only make companies compliant with the GDPR but also keep them strong and resilient against multiple cyber threats.
In the present thesis, a broad set of privacy and security issues is analyzed, offering solutions to the ...
Abstract: We introduce a model of (platform‐mediated) many‐to‐many matching in which agents' preferences are both vertically and horizontally differentiated. We first show how the model can be used to derive the profit‐maximizing matching plans under customized pricing. We then investigate the implications for targeting and welfare of uniform pricing (be it explicitly mandated or induced by privacy regulation), preventing the platform from conditioning prices on agents' profiles. The model can be applied to study ad exchanges, online retailing, and media markets.
Abstract: Building on the concept of participatory regulation, this study emphasizes recognizing the multidimensional character of citizens' risk regulation preferences. Using the case of autonomous vehicles, we specify six technology‐related risks: product safety, regulatory oversight, legal liability, ethical prioritization, data protection, and human supervision. We argue that differences in these multidimensional risk regulation preferences are shaped by citizens' political beliefs, technology attitudes, and national innovation cultures. To test these hypotheses, a conjoint experiment was conducted in the United States (1,188 participants), Japan (1,135 participants), and Germany (1,174 participants) in which respondents compared hypothetical regulation regimes for self‐driving cars, varying along the six regulatory risk dimensions. The findings show a universal preference for increased legal responsibility of manufacturers and more stringent safety regulations for autonomous vehicles. Political beliefs and technological attitudes had minimal impact on these preferences. Although there were some cultural differences in privacy and ethical prioritization, no systematic differences were noted across countries, suggesting the possibility of finding common ground in standardizing risk regulations for self‐driving cars.
Nowadays, artificial intelligence is constantly and rapidly evolving, developing, and influencing people's lives in ways that cannot be foreseen. It can be challenging to satisfy the transparency principle in the development and use of artificial intelligence. The impact of artificial intelligence on privacy is immense, which is why it is important to raise awareness of these issues. Many of the products people use today that involve AI systems carry numerous risks regarding data privacy and security. Consumer devices and products tend to have features that make them vulnerable to data exploitation by AI. There are various means of gathering information, such as identification and tracking, voice and facial recognition, prediction, and profiling. The concern regarding security and privacy is immense because AI is a part of everyday life: from social media newsfeeds to mediating traffic flow in cities, from autonomous cars to connected consumer devices such as smart assistants, spam filters, voice recognition systems, and search engines. AI can track and predict individuals' shopping preferences, political preferences, tastes, and locations. The data accumulated and shared among these technologies have already created many controversies within the legal regulations governing data and privacy. Artificial intelligence and the protection of personal data are intertwined. The General Data Protection Regulation does not explicitly address AI or how the concept of the right to be forgotten applies in this regard.
Employees can cause harm to their employers through Information and Computer Technology (ICT) in employment relationships; for example, through surfing for adult material on the Internet or leaking company secrets via a mobile phone. Employers have responded to this development by introducing various surveillance systems. Besides well-known forms of Internet and e-mail surveillance, positioning systems are becoming a new trend. The influence these systems have on the employment relationship can be far-reaching, as they offer the employer an insight into the employee's whereabouts, outside the company premises as well as outside company hours. As a consequence, the boundaries between the private and the employment spheres become blurred. This raises the question of whether the introduction of ICT in employment relationships causes a shift in the balance of power between employers and their employees. If so, the question is whether this shift is properly counterbalanced by the existing legal framework. This article assesses these questions from a comparative perspective, analyzing the United States and the Netherlands. These countries not only differ greatly in their way of thinking about privacy and privacy regulation, but their basic principles and regulations of labor law are also very different. Comparing these two countries will therefore provide an interesting insight into the impact of ICT on the balance of power between employers and employees. No specific legal rules are in place regarding the tracking and tracing of employees. Therefore, this article first analyzes the legal framework regarding e-mail and Internet monitoring, for which case law provides a more or less clear framework. It then uses elements from this framework to judge whether sufficient guarantees are in place with regard to systems used to localize employees.
For both the United States and the Netherlands, concern is expressed with regard to the way in which ICT seems to empower employers. Recommendations are provided to counterbalance the shift in power caused by the possibilities ICT offers employers to keep track of their employees and the lack of effective legislation in this respect.
Despite the differences in the understanding of the relationship between religion and law and their place in the social-normative system of the state, most experts agree that religion, along with morality and law, is a normative-regulatory system that ensures the ordering of social processes through their conscious obedience to established rules. The religiously normative attitudes of Islam are, in their essence, social imperatives, expressed in the corresponding norms of the Muslim's behavior. Although, within the framework of this article, the task is not to determine the importance of religion for the social regulation of human behavior in modern society, it should be noted that in some countries, public and political life is focused on religious traditions and is based on Islamic (Muslim) law, which is one of the main world systems of law (along with continental and Anglo-Saxon law). The all-encompassing nature of Islam is clearly manifested in the Sharia, represented by a system of rules governing the religious and secular behavior of a person, including Islamic dogma and ethics, as well as the rules of human behavior in all spheres of life. The study of such norms within the framework of Sharia is a special science, fiqh. This term designates not only the specified area of Islamic knowledge, but also the rules of external behavior of people developed by it. This science is often called Islamic jurisprudence, and the norms formulated by it are called Islamic (Muslim) law, although not all of them are legal. This article is devoted to an analysis of the approach of modern Islamic legal thought to the right to privacy, and to an attempt to consider the institution of personal data protection through the prism of the concept of protection of the home in Islamic law.
This report presents the work of a European Commission Expert Group established to advise on specific ethical issues raised by driverless mobility for road transport. The report aims to promote a safe and responsible transition to connected and automated vehicles (CAVs) by supporting stakeholders in the systematic inclusion of ethical considerations in the development and regulation of CAVs. In the past few years, ethical questions associated with CAVs have been the subject of academic and public scrutiny. A common narrative presents the development of CAVs as something that will inevitably benefit society by reducing the number of road fatalities and harmful emissions from transport and by improving the accessibility of mobility services. In contrast, this report applies a Responsible Research and Innovation (RRI) approach to CAVs. This approach recognises the potential of CAV technology to deliver the aforementioned benefits, but also recognises that technological progress alone is not sufficient to realise this potential. To deliver the desired results, the future vision for CAVs ought to incorporate a broader set of ethical, legal and societal considerations into the development, deployment and use of CAVs. To this end, this report presents a set of 20 ethical recommendations concerning the future development and use of CAVs. These recommendations are grounded in the fundamental ethical and legal principles laid down in the EU Treaties and in the EU Charter of Fundamental Rights. ; Please cite as: Horizon 2020 Commission Expert Group to advise on specific ethical issues raised by driverless mobility (E03659). Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility. 2020. Publications Office of the European Union: Luxembourg.