When the Lights Went Out (Nye, D.; 2010) [Book Review]
In: IEEE technology and society magazine: publication of the IEEE Society on Social Implications of Technology, Vol. 32, No. 2, pp. 7-8
ISSN: 0278-0097
In: IEEE technology and society magazine: publication of the IEEE Society on Social Implications of Technology, Vol. 30, No. 3, pp. 7-8
ISSN: 0278-0097
In: Risk analysis: an international journal, Vol. 38, No. 4, pp. 826-838
ISSN: 1539-6924
Abstract: Phishing risk is a growing area of concern for corporations, governments, and individuals. Given the evidence that users vary widely in their vulnerability to phishing attacks, we demonstrate an approach for assessing the benefits and costs of interventions that target the most vulnerable users. Our approach uses Monte Carlo simulation to (1) identify which users were most vulnerable, in signal detection theory terms; (2) assess the proportion of system‐level risk attributable to the most vulnerable users; (3) estimate the monetary benefit and cost of behavioral interventions targeting different vulnerability levels; and (4) evaluate the sensitivity of these results to whether the attacks involve random or spear phishing. Using parameter estimates from previous research, we find that the most vulnerable users were less cautious and less able to distinguish between phishing and legitimate emails (positive response bias and low sensitivity, in signal detection theory terms). They also accounted for a large share of phishing risk for both random and spear phishing attacks. Under these conditions, our analysis estimates much greater net benefit for behavioral interventions that target these vulnerable users. Within the range of the model's assumptions, there was generally net benefit even for the least vulnerable users. However, the differences in the return on investment for interventions with users with different degrees of vulnerability indicate the importance of measuring that performance, and letting it guide interventions. This study suggests that interventions to reduce response bias, rather than to increase sensitivity, have greater net benefit.
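The simulation approach this abstract describes can be sketched in a few lines. The sketch below is a hypothetical illustration, not the authors' model: the population parameters and the 10% cutoff are assumptions, and each simulated user is reduced to two signal detection quantities, sensitivity d' and response bias c.

```python
import random
from statistics import NormalDist

# Hedged sketch of the Monte Carlo idea: simulate users with varying
# signal detection parameters and ask what share of system-level risk
# the most vulnerable users carry. All numbers here are assumptions.
random.seed(0)
PHI = NormalDist().cdf  # standard normal CDF

def miss_rate(d_prime, c):
    """P(treating a phishing email as legitimate) under equal-variance SDT.

    Hit rate for a 'phishing' response is PHI(d'/2 - c); a miss is 1 - hit.
    """
    return 1 - PHI(d_prime / 2 - c)

# Assumed population: mean d' of 1.0, slightly liberal mean bias.
users = [(random.gauss(1.0, 0.5), random.gauss(-0.3, 0.4)) for _ in range(10_000)]
risks = sorted(miss_rate(d, c) for d, c in users)

# Share of total expected misses carried by the most vulnerable 10% of users.
top_decile = risks[int(0.9 * len(risks)):]
share = sum(top_decile) / sum(risks)
print(f"top 10% of users account for {share:.0%} of expected misses")
```

With per-user miss probabilities in hand, the same loop could attach monetary loss per miss and intervention cost per user to compare net benefit across vulnerability tiers, which is the comparison the paper's analysis makes.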
In: JEPO-D-22-00565
SSRN
In: Government information quarterly: an international journal of policies, resources, services and practices, Vol. 39, No. 4, p. 101752
ISSN: 0740-624X
In: Human factors: the journal of the Human Factors Society, Vol. 58, No. 8, pp. 1158-1172
ISSN: 1547-8181
Objective: We use signal detection theory to measure vulnerability to phishing attacks, including variation in performance across task conditions. Background: Phishing attacks are difficult to prevent with technology alone, as long as technology is operated by people. Those responsible for managing security risks must understand user decision making in order to create and evaluate potential solutions. Method: Using a scenario-based online task, we performed two experiments comparing performance on two tasks: detection, deciding whether an e-mail is phishing, and behavior, deciding what to do with an e-mail. In Experiment 1, we manipulated the order of the tasks and notification of the phishing base rate. In Experiment 2, we varied which task participants performed. Results: In both experiments, despite exhibiting cautious behavior, participants' limited detection ability left them vulnerable to phishing attacks. Greater sensitivity was positively correlated with confidence. Greater willingness to treat e-mails as legitimate was negatively correlated with perceived consequences from their actions and positively correlated with confidence. These patterns were robust across experimental conditions. Conclusion: Phishing-related decisions are sensitive to individuals' detection ability, response bias, confidence, and perception of consequences. Performance differs when people evaluate messages or respond to them but not when their task varies in other ways. Application: Based on these results, potential interventions include providing users with feedback on their abilities and information about the consequences of phishing, perhaps targeting those with the worst performance. Signal detection methods offer system operators quantitative assessments of the impacts of interventions and their residual vulnerability.
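The sensitivity and response bias measures this abstract reports are standard equal-variance signal detection quantities, computable from a participant's response counts. The sketch below uses made-up counts for illustration; they are not data from the study.

```python
from statistics import NormalDist

# Standard equal-variance SDT measures from response counts.
# The example counts below are illustrative assumptions only.
Z = NormalDist().inv_cdf  # inverse standard normal CDF (z-score)

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity d' and response bias c.

    'Hit' = correctly judging a phishing e-mail as phishing;
    'false alarm' = judging a legitimate e-mail as phishing.
    d' = z(H) - z(F);  c = -(z(H) + z(F)) / 2.
    """
    h = hits / (hits + misses)
    f = false_alarms / (false_alarms + correct_rejections)
    return Z(h) - Z(f), -0.5 * (Z(h) + Z(f))

# Example: a participant who labels e-mails 'phishing' fairly often
# (cautious behavior) but separates the two classes poorly (low d').
d_prime, c = sdt_measures(hits=30, misses=20,
                          false_alarms=22, correct_rejections=28)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")
```

In practice, hit or false-alarm rates of exactly 0 or 1 make the z-transform infinite, so applied work usually applies a small-sample correction before computing these measures.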
In: Human factors: the journal of the Human Factors Society, Vol. 66, No. 10, pp. 2323-2344
ISSN: 1547-8181
Objective: We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning. Background: Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids. Method: Lay participants judged whether each of 60 Twitter personas was a human or social bot in a simulated online environment, using agreement between three machine learning algorithms to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided a bot indicator score for each tweet, the other a warning about social bots. Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video focused on heuristics for identifying social bots. Results: The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content for a persona that they labeled as a bot, even when they agreed with it. Conclusions: Informative interventions improved social bot detection; warning alone did not. Application: We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.
In: Environmental management: an international journal for decision makers, scientists, and environmental auditors, Vol. 72, No. 4, pp. 771-784
ISSN: 1432-1009
In: Journal of risk research: the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan, Vol. 20, No. 9, pp. 1132-1153
ISSN: 1466-4461
In: Journal of risk research: the official journal of the Society for Risk Analysis Europe and the Society for Risk Analysis Japan, Vol. 26, No. 10, pp. 1137-1152
ISSN: 1466-4461
In: Human factors: the journal of the Human Factors Society, Vol. 66, No. 2, pp. 348-362
ISSN: 1547-8181
Objective: This study manipulates the presence and reliability of AI recommendations for risky decisions to measure the effect on task performance, behavioral consequences of trust, and deviation from a probability matching collaborative decision-making model. Background: Although AI decision support improves performance, people tend to underutilize AI recommendations, particularly when outcomes are uncertain. As AI reliability increases, task performance improves, largely due to higher rates of compliance (following action recommendations) and reliance (following no-action recommendations). Methods: In a between-subject design, participants were assigned to a high reliability AI, low reliability AI, or a control condition. Participants decided whether to bet that their team would win in a series of basketball games, with compensation tied to performance. We evaluated task performance (in accuracy and signal detection terms) and the behavioral consequences of trust (via compliance and reliance). Results: AI recommendations improved task performance, had limited impact on risk-taking behavior, and were under-valued by participants. Accuracy, sensitivity (d'), and reliance increased in the high reliability AI condition, but there was no effect on response bias (c) or compliance. Participant behavior was only consistent with a probability matching model for compliance in the low reliability condition. Conclusion: In a pay-off structure that incentivized risk-taking, the primary value of the AI recommendations was in determining when to perform no action (i.e., pass on bets). Application: In risky contexts, designers need to consider whether action or no-action recommendations will be more influential to design appropriate interventions.
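The probability matching benchmark used in this abstract has a simple quantitative form: a decision maker who follows the AI with probability equal to its reliability, rather than always following the more reliable source. The sketch below illustrates why matching is suboptimal in a binary task; the reliability value and trial count are assumptions for illustration, not the study's design.

```python
import random

# Illustrative sketch of the probability matching benchmark (numbers
# are assumptions, not the study's data). Under matching, compliance
# rate equals AI reliability instead of the maximizing rate of 1.0.
random.seed(1)
reliability = 0.8  # assumed accuracy of the AI's recommendations

def probability_matching_accuracy(reliability, trials=100_000):
    """Expected accuracy when compliance probability matches reliability."""
    correct = 0
    for _ in range(trials):
        ai_correct = random.random() < reliability
        follow = random.random() < reliability  # match, don't maximize
        # In a binary choice, the decision is correct when we follow a
        # correct recommendation or defy an incorrect one.
        correct += follow == ai_correct
    return correct / trials

acc = probability_matching_accuracy(reliability)
# Matching yields r^2 + (1 - r)^2, below the always-follow accuracy r.
print(f"matching accuracy {acc:.2f} vs always-follow {reliability:.2f}")
```

For r = 0.8, matching gives about 0.68 accuracy versus 0.80 for always complying, which is why deviations from matching (here, higher-than-matching reliance) can be read as evidence of calibrated trust.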
In: Human factors: the journal of the Human Factors Society, Vol. 66, No. 1, pp. 88-102
ISSN: 1547-8181
Objective: We examine individuals' ability to detect social bots among Twitter personas, along with participant and persona features associated with that ability. Background: Social media users need to distinguish bots from human users. We develop and demonstrate a methodology for assessing those abilities, with a simulated social media task. Method: We analyze performance from a signal detection theory perspective, using a task that asked lay participants whether each of 50 Twitter personas was a human or social bot. We used the agreement of two machine learning models to estimate the probability of each persona being a bot. We estimated the probability of participants indicating that a persona was a bot with a generalized linear mixed-effects model using participant characteristics (social media experience, analytical reasoning, and political views) and stimulus characteristics (bot indicator score and political tone) as regressors. Results: On average, participants had modest sensitivity (d') and a criterion that favored responding "human." Exploratory analyses found greater sensitivity for participants (a) with less self-reported social media experience, (b) greater analytical reasoning ability, and (c) who were evaluating personas with opposing political views. Some patterns varied with participants' political identity. Conclusions: Individuals have limited ability to detect social bots, with greater aversion to mistaking bots for humans than vice versa. Greater social media experience and myside bias appeared to reduce performance, as did less analytical reasoning ability. Application: These patterns suggest the need for interventions, especially when users feel most familiar with social media.
The University of Texas at Austin's Policy Research Project (PRP), a nine-month (two-semester) capstone, is a keystone of the core curriculum at the LBJ School of Public Affairs. In PRPs, small groups of students, under the mentorship of a faculty director, take on real-world problems that require special knowledge and skill sets. PRPs expose students to challenges in formulating and executing research, and in communicating academic research and related complex data to broader stakeholder communities and decision makers. The PRP structure is an innovative and effective approach for integrating research within the teaching and training of graduate students who are preparing themselves to address important real-world problems at the intersection of society, economics, technology, and policy. The project summaries below describe seven papers developed during September 2017 – May 2018 as part of a PRP on "Diffusion of Innovations: Interplay of Social, Economic, Technological, and Policy Drivers in the Solar Industry." Twenty graduate students, drawn from the LBJ School's Master of Public Affairs and Master of Global Policy Studies programs and the Jackson School of Geosciences' Energy and Earth Resources program, participated in this PRP. Dr. Varun Rai, Associate Professor and Associate Dean for Research at the LBJ School, directed the PRP, with support from his research team: Dr. Ariane Beck, Dr. Ashok Sekar, D. Cale Reeves, and Erik Funkhouser. Clients for the project included the U.S. Department of Energy (Casey Canfield), Lawrence Berkeley National Laboratory (Ben Hoen, Galen Barbose, Joachim Seel, Naïm Darghouth, Ryan Wiser), and the National Renewable Energy Laboratory (Benjamin Sigrin, Eric O'Shaughnessy).
Each of the seven projects addressed one of the following topics: (1) low- and middle-income PV adoption, (2) modeling economic and information intervention design, (3) evaluation of DOE's Solar in Your Community Challenge, (4) property value impacts near large-scale solar facilities, (5) solar market maturity and evolution of business models, (6) social media data for predicting PV adoption, and (7) individual-level variation in adoption of innovations. Many of the papers relied on data collected and curated by Lawrence Berkeley National Laboratory, including data embedded within the annual Tracking the Sun and Utility-Scale Solar reports. Each of the seven teams prepared a research paper, and the PRP culminated with a full-day conference at UT Austin in May 2018, where findings from the seven projects were presented to a broad audience of about 75 experts from academia, national labs, industry, and government from across the country.