What breach? Measuring online awareness of security incidents by
studying real-world browsing behavior
- URL: http://arxiv.org/abs/2010.09843v4
- Date: Thu, 27 May 2021 18:36:25 GMT
- Title: What breach? Measuring online awareness of security incidents by
studying real-world browsing behavior
- Authors: Sruti Bhagavatula, Lujo Bauer, Apu Kapadia
- Abstract summary: This paper examines how often people read about security incidents online.
We find that only 16% of participants visited any web pages related to six widely publicized large-scale security incidents.
More severe incidents as well as articles that constructively spoke about the incident inspired more action.
- Score: 9.750563575752956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Awareness about security and privacy risks is important for developing good
security habits. Learning about real-world security incidents and data breaches
can alert people to the ways in which their information is vulnerable online,
thus playing a significant role in encouraging safe security behavior. This
paper examines 1) how often people read about security incidents online, 2) of
those people, whether and to what extent they follow up with an action, e.g.,
by trying to read more about the incident, and 3) what influences the
likelihood that they will read about an incident and take some action. We study
this by quantitatively examining real-world internet-browsing data from 303
participants.
Our findings present a bleak view of awareness of security incidents. Only
16% of participants visited any web pages related to six widely publicized
large-scale security incidents; few read about one even when an incident was
likely to have affected them (e.g., the Equifax breach almost universally
affected people with Equifax credit reports). We further found that more severe
incidents as well as articles that constructively spoke about the incident
inspired more action. We conclude with recommendations for specific future
research and for enabling useful security incident information to reach more
people.
Related papers
- On the Role of Attention Heads in Large Language Model Safety [64.51534137177491]
Large language models (LLMs) achieve state-of-the-art performance on multiple language tasks, yet their safety guardrails can be circumvented.
We propose a novel metric tailored to multi-head attention, the Safety Head ImPortant Score (Ships), to assess individual heads' contributions to model safety.
arXiv Detail & Related papers (2024-10-17T16:08:06Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- It's more than just money: The real-world harms from ransomware attacks [1.5391321019692432]
This article conducts a novel exploration into the multitude of real-world harms that can arise from cyber-attacks.
We draw on publicly-available case data on high-profile ransomware incidents to examine the types of harm that emerge at various stages after a ransomware attack.
arXiv Detail & Related papers (2023-07-06T08:46:16Z)
- Global Pandemics Influence on Cyber Security and Cyber Crimes [5.8010446129208155]
COVID-19 has caused widespread damage across many areas of life and has made humans more dependent on the internet and technology.
This paper examines the different types of security threats and cyber crimes that people faced during the pandemic and the need for a safe and secure cyber infrastructure.
arXiv Detail & Related papers (2023-02-24T05:26:42Z)
- Don't be a Victim During a Pandemic! Analysing Security and Privacy Threats in Twitter During COVID-19 [2.43420394129881]
This paper performs a large-scale study to investigate the impact of a pandemic and the lockdown periods on the security and privacy of social media users.
We analyse 10.6 Million COVID-related tweets from 533 days of data crawling.
arXiv Detail & Related papers (2022-02-21T21:52:37Z)
- Cybersecurity Misinformation Detection on Social Media: Case Studies on Phishing Reports and Zoom's Threats [1.2387676601792899]
We propose novel approaches for detecting misinformation about cybersecurity and privacy threats on social media.
We developed a framework for detecting inaccurate phishing claims on Twitter.
We also proposed another framework for detecting misinformation related to Zoom's security and privacy threats on multiple platforms.
arXiv Detail & Related papers (2021-10-23T20:45:24Z)
- Protect Against Unintentional Insider Threats: The risk of an employee's cyber misconduct on a Social Media Site [3.2548794659022393]
This research project aims to collect and analyse open-source data from LinkedIn.
The final aim of the study is to understand whether there are behavioral factors that can predict one's attitude toward disclosing sensitive data.
arXiv Detail & Related papers (2021-03-08T13:30:01Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.