Under and over the surface: a comparison of the use of leaked account credentials in the Dark and Surface Web
In: Crime Science, Volume 7, Issue 1
ISSN: 2193-7680
Web archiving services play an increasingly important role in today's information ecosystem, by ensuring the continuing availability of information, or by deliberately caching content that might get deleted or removed. Among these, the Wayback Machine has been proactively archiving, since 2001, versions of a large number of Web pages, while newer services like archive.is allow users to create on-demand snapshots of specific Web pages, which serve as time capsules that can be shared across the Web. In this paper, we present a large-scale analysis of Web archiving services and their use on social media, shedding light on the actors involved in this ecosystem, the content that gets archived, and how it is shared. We crawl and study: 1) 21M URLs from archive.is, spanning almost two years; and 2) 356K archive.is plus 391K Wayback Machine URLs that were shared on four social networks: Reddit, Twitter, Gab, and 4chan's Politically Incorrect board (/pol/) over 14 months. We observe that news and social media posts are the most common types of content archived, likely due to their perceived ephemeral and/or controversial nature. Moreover, URLs of archiving services are extensively shared on "fringe" communities within Reddit and 4chan to preserve possibly contentious content. Lastly, we find evidence of moderators nudging or even forcing users to use archives, instead of direct links, for news sources with opposing ideologies, potentially depriving them of ad revenue.
BASE
To appear at the 16th International Conference on Web and Social Media (ICWSM 2022). This project has received funding from the European Union's Horizon 2020 Research and Innovation program under the CONCORDIA project (Grant Agreement No. 830927), and from the Innovation and Networks Executive Agency (INEA) under the CYberSafety II project (Grant Agreement No. 1614254).

The role played by YouTube's recommendation algorithm in unwittingly promoting misinformation and conspiracy theories is not entirely understood. Yet, this can have dire real-world consequences, especially when pseudoscientific content is promoted to users at critical times, such as the COVID-19 pandemic. In this paper, we set out to characterize and detect pseudoscientific misinformation on YouTube. We collect 6.6K videos related to COVID-19, the Flat Earth theory, as well as the anti-vaccination and anti-mask movements. Using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant and train a deep learning classifier to detect pseudoscientific videos with an accuracy of 0.79. We quantify user exposure to this content on various parts of the platform and how this exposure changes based on the user's watch history. We find that YouTube suggests more pseudoscientific content regarding traditional pseudoscientific topics (e.g., flat earth, anti-vaccination) than for emerging ones (like COVID-19). At the same time, these recommendations are more common on the search results page than on a user's homepage or in the recommendation section when actively watching videos. Finally, we shed light on how a user's watch history substantially affects the type of recommended videos.
BASE
To appear at the 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2021). This project has received funding from the European Union's Horizon 2020 Research and Innovation program under the Marie Skłodowska-Curie ENCASE project (GA No. 691025) and the CONCORDIA project (GA No. 830927), the US National Science Foundation (grants: 1942610, 2114407, 2114411, and 2046590), and the UK's National Research Centre on Privacy, Harm Reduction, and Adversarial Influence Online (UKRI grant: EP/V011189/1).

YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform has also come under fire for hosting inappropriate, toxic, and hateful content. One community that has often been linked to sharing and publishing hateful and misogynistic content is the Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues. In this paper, we set out to analyze the Incel community on YouTube by focusing on this community's evolution over the last decade and understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube. Among other things, we find that the Incel community on YouTube is gaining traction and that, during the last decade, the number of Incel-related videos and comments rose substantially. We also find that users have a 6.3% chance of being suggested an Incel-related video by YouTube's recommendation algorithm within five hops when starting from a non-Incel-related video. Overall, our findings paint an alarming picture of online radicalization: not only is Incel activity increasing over time, but platforms may also play an active role in steering users towards such extreme content.
BASE
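The 6.3%-within-five-hops figure in the abstract above is the kind of quantity typically estimated by Monte Carlo random walks over a recommendation graph. The sketch below is purely illustrative and not the authors' code: the toy graph, the `hit_probability` helper, and its parameters are all hypothetical.

```python
import random

def hit_probability(graph, labeled, start, hops=5, walks=10_000, seed=0):
    """Estimate the probability that a random walk over recommendation
    edges reaches a node in `labeled` within `hops` steps from `start`.
    `graph` maps each video to the list of videos it recommends."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = start
        for _ in range(hops):
            out_edges = graph.get(node)
            if not out_edges:          # dead end: walk terminates early
                break
            node = rng.choice(out_edges)
            if node in labeled:        # reached a labeled video
                hits += 1
                break
    return hits / walks

# Toy graph: from "a", one of the two first-hop recommendations ("c")
# is in the labeled set, and "b" only links back to "a".
toy = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
print(hit_probability(toy, {"c"}, "a"))
```

At paper scale the same estimator would run over a crawled recommendation graph rather than a hand-written dictionary; the simulation cost grows only linearly in the number of walks and hops.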
Over the past couple of years, anecdotal evidence has emerged linking coordinated campaigns by state-sponsored actors with efforts to manipulate public opinion on the Web, often around major political events, through dedicated accounts, or "trolls." Although they are often involved in spreading disinformation on social media, there is little understanding of how these trolls operate, what type of content they disseminate, and most importantly their influence on the information ecosystem. In this paper, we shed light on these questions by analyzing 27K tweets posted by 1K Twitter users identified as having ties with Russia's Internet Research Agency and thus likely state-sponsored trolls. We compare their behavior to a random set of Twitter users, finding interesting differences in terms of the content they disseminate, the evolution of their accounts, as well as their general behavior and use of Twitter. Then, using Hawkes Processes, we quantify the influence that trolls had on the dissemination of news on social platforms like Twitter, Reddit, and 4chan. Overall, our findings indicate that Russian trolls managed to stay active for long periods of time and to reach a substantial number of Twitter users with their tweets. When looking at their ability to spread news content and make it viral, however, we find that their effect on social platforms was minor, with the significant exception of news published by the Russian state-sponsored news outlet RT (Russia Today).
BASE
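The Hawkes-process modeling mentioned in the abstract above rests on a conditional intensity that jumps after each past event and decays back toward a background rate, which is what lets it attribute one platform's posting activity to another's. A minimal sketch with an exponential kernel follows; the function name and parameter values are illustrative and not taken from the paper.

```python
import math

def hawkes_intensity(t, event_times, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel:
        lambda(t) = mu + sum over past events t_i < t of
                    alpha * exp(-beta * (t - t_i))
    mu is the background rate, alpha the jump size per event, and
    beta the decay rate of each event's excitation."""
    excitation = sum(
        alpha * math.exp(-beta * (t - ti))
        for ti in event_times
        if ti < t
    )
    return mu + excitation

# With no past events the intensity is just the background rate mu.
print(hawkes_intensity(1.0, [], mu=0.2, alpha=0.5, beta=1.0))  # 0.2
```

In the multivariate setting used for cross-platform influence, each platform gets its own dimension and the alpha terms become a matrix whose entry (i, j) measures how much events on platform j excite events on platform i.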
Over the past few years, a number of new "fringe" communities, like 4chan or certain subreddits, have gained traction on the Web at a rapid pace. However, more often than not, little is known about how they evolve or what kind of activities they attract, even though recent research has shown that they influence how false information reaches mainstream communities. This motivates the need to monitor these communities and analyze their impact on the Web's information ecosystem. In August 2016, a new social network called Gab was created as an alternative to Twitter. It positions itself as putting "people and free speech first", welcoming users banned or suspended from other social networks. In this paper, we provide, to the best of our knowledge, the first characterization of Gab. We collect and analyze 22M posts produced by 336K users between August 2016 and January 2018, finding that Gab is predominantly used for the dissemination and discussion of news and world events, and that it attracts alt-right users, conspiracy theorists, and other trolls. We also measure the prevalence of hate speech on the platform, finding it to be much higher than on Twitter, but lower than on 4chan's Politically Incorrect board.
BASE
In: ACM Conference on Human Factors in Computing Systems (CHI 2021) / Artificially Intelligent Technology for the Margins: A Multidisciplinary Design Agenda Workshop
SSRN
We set out to investigate how customers comprehend bank terms and conditions (T&Cs). If T&Cs are incomprehensible, then it is unreasonable to expect customers to comply with them. An expert analysis of 30 bank contracts across 25 countries found that in most cases the contract terms were too vague to be understood; in some cases they differ by product type, and advice can even be contradictory. While many banks allow customers to write PINs down as long as they are disguised and not kept with the card, 20% of banks do not allow PINs to be written down at all, and a handful do not allow PINs to be shared between accounts. We test our findings on 151 participants in Germany, the US, and the UK. They mostly agree: only 35% fully understand the T&Cs, and 28% find that sections are unclear. There are strong regional variations: Germans find their T&Cs particularly hard to understand, but Americans assume their T&Cs are harsher than they actually are, and tend to be reassured when they actually read them.

This work was supported by The Royal Society [grant number UF110392] to SJM; the Engineering and Physical Sciences Research Council [grant number EP/G037264/1] to IB; the Department of Homeland Security (DHS) Science and Technology Directorate, Cyber Security Division (DHS S&T/CSD) Broad Agency Announcement 11.02, the Government of Australia and SPAWAR Systems Center Pacific [contract number N66001-13-C-0131] to AH.
BASE