Guest editorial
In: Journal of Intellectual Capital, Volume 21, Issue 2, pp. 141-143
Non-compliance is a well-known issue in the field of cyber security. Non-compliance usually manifests in an individual's sins of omission or commission, and it is easy to conclude that the problem is attributable to their personal flawed decision making. However, the individual's decision not to comply is likely also to be influenced by a range of environmental and contextual factors. Bourdieu, for example, suggests that personal habitus influences decisions. We identified a wide range of possible explanations for non-compliance from the research literature and classified these, finding that a number of the identified factors were indeed habitus-related. We then used Q-methodology to determine which of these non-compliance explanations aligned with public attributions of the causes of non-compliance. We discovered an "attribution gulf", with popular opinion attributing non-compliance primarily to individual failings or ignorance. The existence of this attribution gulf means that those designing cyber security interventions are likely to neglect the influence of habitus on choices and decisions. We need to broaden our focus if non-compliance is to be reduced.
In: Zimmermann, V. & Renaud, K. 2019, 'Moving from a "human-as-problem" to a "human-as-solution" cybersecurity mindset', International Journal of Human-Computer Studies, vol. 131, pp. 169-187. https://doi.org/10.1016/j.ijhcs.2019.05.005
Cybersecurity has gained prominence, with a number of widely publicised security incidents, hacking attacks and data breaches reaching the news over the last few years. The escalation in the numbers of cyber incidents shows no sign of abating, and it seems appropriate to take a look at the way cybersecurity is conceptualised and to consider whether there is a need for a mindset change. To consider this question, we applied a "problematization" approach to assess current conceptualisations of the cybersecurity problem by government, industry and hackers. Our analysis revealed that individual human actors, in a variety of roles, are generally considered to be "a problem". We also discovered that deployed solutions primarily focus on preventing adverse events by building resistance: i.e. implementing new security layers and policies that control humans and constrain their problematic behaviours. In essence, this treats all humans in the system as if they might well be malicious actors, and the solutions are designed to prevent their ill-advised behaviours. Given the continuing incidences of data breaches and successful hacks, it seems wise to rethink the status quo approach, which we refer to as "Cybersecurity, Currently". In particular, we suggest that there is a need to reconsider the core assumptions and characterisations of the well-intentioned human's role in the cybersecurity socio-technical system. Treating everyone as a problem does not seem to work, given the current cybersecurity landscape. Benefiting from research in other fields, we propose a new mindset, i.e. "Cybersecurity, Differently". This approach rests on recognition of the fact that the problem is actually the high complexity, interconnectedness and emergent qualities of socio-technical systems.
The "differently" mindset acknowledges the well-intentioned human's ability to be an important contributor to organisational cybersecurity, as well as their potential to be "part of the solution" rather than "the problem". In essence, this new approach initially treats all humans in the system as if they are well-intentioned. The focus is on enhancing factors that contribute to positive outcomes and resilience. We conclude by proposing a set of key principles and, with the help of a prototypical fictional organisation, consider how this mindset could enhance and improve cybersecurity across the socio-technical system.
In: Behavioural public policy: BPP, Volume 3, Issue 1, p. 127
ISSN: 2398-0648
In: Behavioural public policy: BPP, Volume 3, Issue 2, pp. 228-258
ISSN: 2398-0648
Persuading people to choose strong passwords is challenging. One way to influence password strength, as and when people are making the choice, is to tweak the choice architecture to encourage stronger choice. A variety of choice architecture manipulations (i.e. 'nudges') have been trialled by researchers with a view to strengthening the overall password profile. None has made much of a difference so far. Here, we report on our design of an influential behavioural intervention tailored to the password choice context: a hybrid nudge that significantly prompted stronger passwords. We carried out three longitudinal studies to analyse the efficacy of a range of 'nudges' by manipulating the password choice architecture of an actual university web application. The first and second studies tested the efficacy of several simple visual framing 'nudges'. Password strength did not budge. The third study tested expiration dates directly linked to password strength. This manipulation delivered a positive result: significantly longer and stronger passwords. Our main conclusion was that the final successful nudge provided participants with absolute certainty as to the benefit of a stronger password and that it was this certainty that made the difference.
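The successful nudge in the third study ties each password's expiry date to its strength. A minimal sketch of that mechanism follows; the entropy heuristic and the strength-to-lifetime tiers are illustrative assumptions, not the intervention as actually deployed in the study.

```python
# Hypothetical sketch of a strength-linked password-expiration nudge:
# stronger passwords earn a longer lifetime before forced renewal.
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Rough entropy estimate: length * log2(character-pool size)."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

def expiry_days(password: str) -> int:
    """Map estimated strength to a password lifetime (illustrative tiers)."""
    bits = estimate_entropy_bits(password)
    if bits < 40:
        return 30    # weak: expires in a month
    if bits < 60:
        return 90
    if bits < 80:
        return 180
    return 365       # strong: a full year before renewal

print(expiry_days("abc123"))           # 30
print(expiry_days("Tr0ub4dor&3xLq!"))  # 365
```

Showing the user the concrete lifetime each candidate password would earn is what gives the "absolute certainty as to the benefit" that the abstract identifies as the active ingredient.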
In: Innovations in teaching and learning in information and computer sciences: ITALICS, Volume 12, Issue 1, pp. 3-13
ISSN: 1473-7507
In: International journal of cyber warfare and terrorism: IJCWT; an official publication of the Information Resources Management Association, Volume 3, Issue 4, pp. 40-51
ISSN: 1947-3443
Mobile devices have diffused through the global population with unprecedented rapidity. This diffusion has delivered great benefits to the populace at large. In the developing world, people living in rural areas are now able, for the first time, to contact family members who live in other parts of the country. For the city-dweller, the mobile device revolution has brought the ability to communicate and work on the move, while they travel to and from work, or between meetings, thus making erstwhile "dead" time more productive. It is trivial, nowadays, to utilise workplace functionality, and access confidential information, outside the four walls of the organisation's traditional boundaries. Data now moves across organisational boundaries and is stored on mobile devices, on USB sticks, in emails and in the cloud. Organisations have somehow lost control over their data. This mobility and lack of control undeniably creates the potential for information leakage that could hurt the organisation. The almost ubiquitous camera-equipped mobile phones exacerbate the problem. These feature-rich phones change the threat from mere Shoulder Surfing into Visual Information Capture. Information is now no longer merely observed or overheard but potentially captured and retained without the knowledge of the person working on said documents in public. The first step in deciding how to manage any risk is to be able to estimate the extent and nature of the risk. This paper seeks to help organisations to understand the risk related to mobile working. We model the mobile information leakage risk, depicting the factors that play a role in exacerbating and encouraging the threat. We then report on two experiments that investigated the vulnerability of data on laptops and tablet devices to visual information capture. We address both capability and likelihood (probability) of such leakage. The results deliver insight into the size of the Mobile Information Leakage risk.
The following stage in this research will be to find feasible ways of mitigating the risk.
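The abstract's framing of risk in terms of capability and likelihood can be sketched as a simple expected-harm estimate. The scenario fields, factor names and numeric values below are hypothetical assumptions for illustration; they are not the model published in the paper.

```python
# Illustrative capability-and-likelihood estimate for the visual
# information capture threat described in the abstract.
from dataclasses import dataclass

@dataclass
class MobileLeakageScenario:
    p_observable: float  # probability the screen is visible to bystanders
    p_capture: float     # probability a bystander captures (photographs) it
    sensitivity: float   # impact if the displayed data leaks, 0.0-1.0

    def risk(self) -> float:
        """Risk = likelihood of capture * impact of leakage."""
        return self.p_observable * self.p_capture * self.sensitivity

# e.g. editing confidential figures on a crowded train vs. in the office
train = MobileLeakageScenario(p_observable=0.6, p_capture=0.1, sensitivity=0.9)
office = MobileLeakageScenario(p_observable=0.2, p_capture=0.01, sensitivity=0.9)
print(f"{train.risk():.3f}")   # 0.054
print(f"{office.risk():.3f}")  # 0.002
```

Separating the exposure factors in this way makes clear which lever a mitigation targets: privacy filters lower `p_observable`, while policies about where sensitive documents may be opened lower `p_capture`.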
In: Journal of information technology & politics: JITP, Volume 6, Issue 1, pp. 60-80
ISSN: 1933-169X
In: IEEE technology and society magazine: publication of the IEEE Society on Social Implications of Technology, Volume 26, Issue 2, pp. 22-31
ISSN: 0278-0097
In: The Howard journal of crime and justice, Volume 62, Issue 4, pp. 441-461
ISSN: 2059-1101
One of the more striking recent miscarriages of justice was perpetrated by the UK's Post Office when subpostmasters and subpostmistresses were prosecuted for fraud that actually arose from malfunctioning software. Over 700 were victimised, losing homes and livelihoods. We first use a zemiological lens to examine the harms caused by these events at both a first- and second-order range, referred to as 'ripples'. Yet the zemiological analysis, while useful in identifying the personal harms suffered by postmasters, is less successful in associating with some of the wider costs, especially to the justice system itself. Additional tools are required for identifying how technology might be culpable in the damage that unfolded. We use a technological injustice lens to augment the zemiological analysis, to reveal how and why technology can harm, especially when appropriate checks and balances are missing, and naïve belief in the infallibility of technological solutions prevails.
It has been argued that human-centred security design needs to accommodate the considerations of three dimensions: (1) security, (2) usability and (3) accessibility. The latter has not yet received much attention. Now that governments and health services are increasingly requiring their citizens/patients to use online services, the need for accessible security and privacy has become far more pressing. The reality is that, for many, security measures are often exasperatingly inaccessible. Regardless of the outcome of the debate about the social acceptability of compelling people to access public services online, we still need to design accessibility into these systems, or risk excluding and marginalising swathes of the population who cannot use these systems in the same way as abled users. These users are particularly vulnerable to attack and online deception not only because security and privacy controls are inaccessible but also because they often struggle with depleted resources and capabilities together with less social, economic and political resilience. This conceptual paper contemplates the accessible dimension of human-centred security and its impact on the inclusivity of security technologies. We scope the range of vulnerabilities that can result from a lack of accessibility in security solutions and contemplate the nuances and complex challenges inherent in making security accessible. We conclude by suggesting a number of avenues for future work in this space.
In: Journal of Intellectual Capital, Volume 21, Issue 3, pp. 481-505
Purpose: To investigate the links between IC and the protection of data, information and knowledge in universities, as organizations with unique knowledge-related foci and challenges.
Design/methodology/approach: The authors gathered insights from existing IC-related research publications to delineate key foundational aspects of IC, and to identify and propose links to traditional information security that impact the protection of IC. They conducted interviews with key stakeholders in Australian universities in order to validate these links.
Findings: The authors' investigation revealed two kinds of embeddedness characterizing the organizational fabric of universities: (1) vertical and (2) horizontal, with an emphasis on the connection between these and IC-related knowledge protection within these institutions.
Research limitations/implications: There is a need to acknowledge the different roles played by actors within the university and the relevance of information security to IC-related preservation.
Practical implications: Framing information security as an IC-related issue can help IT security managers communicate the need for knowledge security to executives in higher education, and secure funding to preserve and secure such IC-related knowledge, once its value is recognized.
Originality/value: This is one of the first studies to explore the connections between data and information security and the three core components of IC's knowledge security in the university context.
SSRN
Working paper