Cybersecurity risk assessment approaches have served us well over the last decade. They have provided a platform through which organisations and governments could better protect themselves against pertinent risks. As the complexity, pervasiveness and automation of technology systems increase, however, particularly with the Internet of Things (IoT), there is a strong argument that new approaches to assessing risk and building trust are needed. The challenge with simply extending existing assessment methodologies to these systems is that we could be blind to new risks arising in such ecosystems. These risks could be related to the high degrees of connectivity present, or to the coupling of digital, cyber-physical and social systems. This article makes the case for new methodologies to assess risk in this context that consider the dynamics and uniqueness of the IoT while retaining the rigour of best practice in risk assessment.
Organisations are coming under increasing pressure to respect and protect personal data privacy, especially with the European Union's General Data Protection Regulation (GDPR) now in effect. As legislation and regulation evolve to incentivise such data-handling protection, so too does the business case for demonstrating compliance both in spirit and to the letter. Compliance will require ongoing checks, as modern systems are constantly changing in terms of digital infrastructure, services and business offerings, and the interaction between human and machine. Therefore, monitoring for compliance during run-time is likely to be required. There has been limited research into how to monitor a system's adherence to consents given, and withheld, pertaining to the handling and onward sharing of personal data. This paper proposes a finite-state-machine method for detecting violations of preferences (consents and revocations) expressed by Data Subjects regarding use of their personal data, and also violations of any related obligations that might be placed upon data handlers (data controllers and processors). Our approach seeks to enable detection of both accidental and malicious compromises of privacy properties. We also present a concept demonstrator to show the feasibility of our approach and discuss its design and technical implementation.
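The core of a finite-state-machine approach of this kind can be sketched in a few lines: each (data subject, purpose) pair carries a consent state, consent and revocation events drive transitions, and any data-handling action observed while consent is not in the GIVEN state is flagged as a violation. The class name, state names and event vocabulary below are illustrative assumptions, not the paper's actual implementation.

```python
from enum import Enum, auto

class Consent(Enum):
    GIVEN = auto()
    WITHHELD = auto()
    REVOKED = auto()

class ConsentMonitor:
    """Minimal finite-state machine: tracks the consent state for each
    (subject, purpose) pair and flags handler actions that violate it."""

    def __init__(self):
        self.state = {}       # (subject, purpose) -> Consent
        self.violations = []  # recorded violation events

    def update(self, subject, purpose, consent):
        # Transition triggered by a consent/revocation event from the Data Subject.
        self.state[(subject, purpose)] = consent

    def observe(self, subject, purpose, action):
        # A data handler performs `action` (e.g. 'process', 'share').
        # Absent any recorded consent, the default state is WITHHELD.
        current = self.state.get((subject, purpose), Consent.WITHHELD)
        if current is not Consent.GIVEN:
            self.violations.append((subject, purpose, action, current.name))
            return False  # violation detected
        return True

monitor = ConsentMonitor()
monitor.update("alice", "marketing", Consent.GIVEN)
assert monitor.observe("alice", "marketing", "process")    # permitted
monitor.update("alice", "marketing", Consent.REVOKED)
assert not monitor.observe("alice", "marketing", "share")  # flagged as violation
```

A run-time monitor along these lines would consume two event streams (subject preference changes and handler actions) and emit the `violations` log for audit; obligations on controllers and processors could be modelled as additional states and timed transitions.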
The growing centrality of cybersecurity has led many governments and international organisations to focus on building the capacity of nations to withstand threats to the public and its digital resources. These initiatives entail a range of actions that vary from education and training to technology and related standards, as well as new legal and policy frameworks. While efforts to proactively address security problems seem intuitively valuable, they are new, meaning there is relatively little research on whether they achieve their intended objectives. This paper takes a cross-national comparative approach to determine whether there is empirical support for investing in capacity-building. Marshalling field research from 73 nations, the comparative data analysis: (1) describes the status of capacity-building across the nations; (2) determines the impact of capacity-building when controlling for other key contextual variables that might provide alternative explanations for key outcomes; and (3) explores the factors that are shaping national advances in capacity-building. The analysis finds a low, formative status of cybersecurity capacity in most of the nations studied and also shows that relatively higher levels of maturity translate into positive outcomes for nations. The study provides empirical support to international efforts aimed at building cybersecurity capacity.
There exists unequivocal evidence of the dire consequences that organisations and governmental institutions face from insider threats. While the in-depth knowledge of the modus operandi that insiders possess provides ground for more sophisticated attacks, organisations are ill-equipped to detect and prevent these from happening. The research community has provided various models and detection systems to address the problem, but the lack of real data due to privacy and ethical issues remains a significant obstacle for validating and designing effective and scalable systems. In this paper, we present the results and our experiences from deploying our detection system within a multinational organisation, the approach we followed to abide by ethical and privacy considerations, and the lessons learnt on how the validation process refined the system in terms of effectiveness and scalability.
The threat that insiders pose to businesses, institutions and governmental organisations continues to be of serious concern. Recent industry surveys and academic literature provide unequivocal evidence to support the significance of this threat and its prevalence. Despite this, however, there is still no unifying framework to fully characterise insider attacks and to facilitate an understanding of the problem, its many components and how they all fit together. In this paper, we focus on this challenge and put forward a grounded framework for understanding and reflecting on the threat that insiders pose. Specifically, we propose a novel conceptualisation that is heavily grounded in insider-threat case studies, existing literature and relevant psychological theory. The framework identifies several key elements within the problem space, concentrating not only on noteworthy events and indicators – technical and behavioural – of potential attacks, but also on attackers (e.g., the motivation behind malicious threats and the human factors related to unintentional ones), and on the range of attacks being witnessed. The real value of our framework is in its emphasis on bringing together and defining clearly the various aspects of insider threat, all based on real-world cases and pertinent literature. This can therefore act as a platform for general understanding of the threat, and also for reflection, modelling past attacks and looking for useful patterns.
The insider threat faced by corporations and governments today is a real and significant problem, and one that has become increasingly difficult to combat as the years have progressed. From a technology standpoint, traditional protective measures such as intrusion detection systems are largely inadequate given the nature of the 'insider' and their legitimate access to prized organisational data and assets. As a result, it is necessary to research and develop more sophisticated approaches for the accurate recognition, detection and response to insider threats. One way in which this may be achieved is by understanding the complete picture of why an insider may initiate an attack, and the indicative elements along the attack chain. This includes the use of behavioural and psychological observations about a potential malicious insider in addition to technological monitoring and profiling techniques. In this paper, we propose a framework for modelling the insider-threat problem that goes beyond traditional technological observations and incorporates a more complete view of insider threats, common precursors, and human actions and behaviours. We present a conceptual model for insider threat and a reasoning structure that allows an analyst to draw hypotheses regarding a potential insider threat based on measurable states from real-world observations.
The I-Voting system designed and implemented in Estonia is one of the first nationwide Internet voting systems. Since its creation, it has been met with praise but also with close scrutiny. Concerns regarding security breaches have been raised on the basis of in-person election observations, code reviews and adversarial testing of system components. These concerns have led many to conclude that there are various ways in which insider threats and sophisticated external attacks may compromise the integrity of the system and thus the voting process. In this paper, we examine the procedural components of the I-Voting system, with an emphasis on the controls related to procedural security mechanisms, and on system-transparency measures. Through an approach grounded in primary and secondary data sources, including interviews with key Estonian election personnel, we conduct an initial investigation into the extent to which the present controls mitigate the real security risks faced by the system. The experience and insight we present in this paper will be useful both in the context of the I-Voting system, and potentially more broadly in other voting systems.
The I-Voting system that was designed and implemented in Estonia in 2005 is the first Internet voting system to have been adopted anywhere in the world. Since its inception, it has been met with both praise and scrutiny. Concerns have been raised on the basis of in-person election observations, code reviews, and adversarial testing of system components. As a result of these concerns, some parties have concluded that there are various ways in which insider threats and sophisticated external attacks could compromise the system's integrity and thus the voting process. This paper examines the procedural components of the I-Voting system, with an emphasis on the controls related to procedural security mechanisms, high-level operational security aspects, and system transparency measures. The methodological approach is based on both primary and secondary data sources, including interviews with key Estonian election personnel, in order to determine the extent to which the present controls mitigate the security risks faced by the system. This study makes three main arguments. First, we found procedural controls to be fundamentally important to the design of the I-Voting system. While these mechanisms go a long way toward preventing cyberattacks, problems in the system still exist. For instance, some security situations appear to be addressed in informal ways which rely heavily on the knowledge, experience, and professional relationships between officials. Second, in terms of operational controls, we were generally impressed by the state of the controls adopted, particularly the incident handling processes during elections, as well as checks and investigations during and after elections. Our main concern regarding resilience is the increasing potential for more highly sophisticated attacks. As time progresses, attackers will naturally become stronger, and the system will have to adapt in order to accommodate this evolution.
Third, the system's transparency measures have had a noteworthy impact on building confidence and trust in the I-Voting ...