A Human Study of Cognitive Biases in Web Application Security
- URL: http://arxiv.org/abs/2505.12018v2
- Date: Fri, 30 May 2025 16:55:45 GMT
- Title: A Human Study of Cognitive Biases in Web Application Security
- Authors: Yuwei Yang, Skyler Grandel, Daniel Balasubramanian, Yu Huang, Kevin Leach
- Abstract summary: This paper investigates how cognitive biases could be used to improve Capture the Flag education and security. We present an approach to control cognitive biases, specifically Satisfaction of Search and Loss Aversion. Our study reveals that many participants exhibit the Satisfaction of Search bias and that this bias has a significant effect on their success.
- Score: 5.535195078929509
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cybersecurity training has become a crucial part of computer science education and industrial onboarding. Capture the Flag (CTF) competitions have emerged as a valuable, gamified approach for developing and refining the skills of cybersecurity and software engineering professionals. However, while CTFs provide a controlled environment for tackling real-world challenges, the participants' decision-making and problem-solving processes remain underexplored. Recognizing that psychology may play a role in a cyber attacker's behavior, we investigate how cognitive biases could be used to improve CTF education and security. In this paper, we present an approach to control cognitive biases, specifically Satisfaction of Search and Loss Aversion, to influence and potentially hinder attackers' effectiveness against web application vulnerabilities in a CTF-style challenge. We employ rigorous quantitative and qualitative analysis through a controlled human study of CTF tasks. CTF exercises are widely used in cybersecurity education and research to simulate real-world attack scenarios and help participants develop critical skills by solving security challenges in controlled environments. In our study, participants interact with a web application containing deliberately embedded vulnerabilities while completing tasks designed to trigger cognitive biases. Our study reveals that many participants exhibit the Satisfaction of Search bias and that this bias has a significant effect on their success: on average, participants who exhibited the bias found 25% fewer flags than those who did not. Our findings provide valuable insights into how cognitive biases can be strategically employed to enhance cybersecurity outcomes, education, and measurement through the lens of CTF challenges.
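The Satisfaction of Search mechanism the abstract describes is easiest to see in miniature. Below is a hypothetical sketch (Flask; not the study's actual application, and every route, flag, and check is invented for illustration) of how a web challenge might pair an obvious flaw that yields a decoy flag with a subtler flaw guarding a second flag, so that a participant who stops searching after the first find misses the rest.

```python
# Hypothetical sketch (not the study's actual application): a CTF-style
# web app pairing an obvious flaw that releases a decoy flag with a
# subtler flaw guarding a second flag. A participant who stops after
# the first find (Satisfaction of Search) never reaches the hidden one.
# Routes, flags, and checks are all invented for illustration.
from flask import Flask, request

app = Flask(__name__)

DECOY_FLAG = "CTF{decoy_reflected_xss}"    # released by the obvious flaw
HIDDEN_FLAG = "CTF{hidden_sqli_chain}"     # guarded by the subtler flaw

@app.route("/search")
def search():
    # Obvious flaw: the query string is reflected unsanitized.
    # Demonstrating it (reflecting a script tag) yields only the decoy.
    q = request.args.get("q", "")
    if "<script>" in q.lower():
        return f"Results for {q} -- {DECOY_FLAG}"
    return f"Results for {q}: nothing found"

@app.route("/admin/export")
def export():
    # Subtler flaw: a SQL-injection-style bypass of a token check.
    # A participant satisfied by the decoy may never probe here.
    token = request.args.get("token", "")
    if "' or '1'='1" in token.lower():
        return HIDDEN_FLAG
    return "Forbidden", 403

if __name__ == "__main__":
    app.run(port=5000)
```

Under this design, simply recording whether each participant submits only the decoy flag or both flags gives a direct behavioral measure of the bias.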
Related papers
- Towards Improving IDS Using CTF Events [1.812535004393714]
This paper introduces a novel approach to evaluating Intrusion Detection Systems (IDS) through Capture the Flag (CTF) events. Our research investigates the effectiveness of using tailored CTF challenges to identify weaknesses in IDS by integrating them into live CTF competitions. We present a methodology that supports the development of IDS-specific challenges, a scoring system that fosters learning and engagement, and insights from running such a challenge in a real Jeopardy-style CTF event.
arXiv Detail & Related papers (2025-01-20T19:11:30Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures [0.0]
Adversarial attacks, particularly those targeting vulnerabilities in deep learning models, present a nuanced and substantial threat to cybersecurity. Our study delves into adversarial learning threats such as Data Poisoning, Test Time Evasion, and Reverse Engineering. Our research lays the groundwork for strengthening defense mechanisms to address the potential breaches in network security and privacy posed by adversarial attacks.
arXiv Detail & Related papers (2024-12-18T14:21:46Z) - Comprehensive Survey on Adversarial Examples in Cybersecurity: Impacts, Challenges, and Mitigation Strategies [4.606106768645647]
Adversarial examples (AE) pose a critical challenge to the robustness and reliability of deep learning-based systems. This paper provides a comprehensive review of the impact of AE attacks on key cybersecurity applications. We explore recent advancements in defense mechanisms, including gradient masking, adversarial training, and detection techniques.
arXiv Detail & Related papers (2024-12-16T01:54:07Z) - Ontology-Aware RAG for Improved Question-Answering in Cybersecurity Education [13.838970688067725]
AI-driven question-answering (QA) systems can actively manage uncertainty in cybersecurity problem-solving. Large language models (LLMs) have gained prominence in AI-driven QA systems, offering advanced language understanding and user engagement. We propose CyberRAG, an ontology-aware retrieval-augmented generation (RAG) approach for developing a reliable and safe QA system in cybersecurity education.
arXiv Detail & Related papers (2024-12-10T21:52:35Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed that FRS are vulnerable to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Navigating Cybersecurity Training: A Comprehensive Review [7.731471533663403]
This survey examines a spectrum of cybersecurity awareness training methods, analyzing traditional, technology-based, and innovative strategies.
It evaluates the principles, efficacy, and constraints of each method, presenting a comparative analysis that highlights their pros and cons.
arXiv Detail & Related papers (2024-01-20T21:14:24Z) - Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z) - Adversarial Training is Not Ready for Robot Learning [55.493354071227174]
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations.
We show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects.
Our results suggest that adversarial training is not yet ready for robot learning.
arXiv Detail & Related papers (2021-03-15T07:51:31Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Modeling Penetration Testing with Reinforcement Learning Using Capture-the-Flag Challenges: Trade-offs between Model-free Learning and A Priori Knowledge [0.0]
Penetration testing is a security exercise aimed at assessing the security of a system by simulating attacks against it.
This paper focuses on simplified penetration testing problems expressed in the form of capture the flag hacking challenges.
We show how this challenge can be eased by providing the agent with different forms of prior knowledge.
arXiv Detail & Related papers (2020-05-26T11:23:10Z)
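The trade-off named in this last entry, model-free learning versus a priori knowledge, can be illustrated in miniature. Below is a deliberately toy, hypothetical sketch (not the paper's environment or agent): a three-stage CTF modeled as tabular Q-learning, where prior knowledge is encoded as a per-stage whitelist of plausible actions. The states, actions, and rewards are all invented for illustration.

```python
# Hypothetical sketch: a toy capture-the-flag task as tabular
# Q-learning, where "a priori knowledge" is modeled as pruning
# clearly irrelevant actions per attack stage. Not the paper's setup.
import random

STATES = ["recon", "foothold", "flag"]              # simplified attack stages
ACTIONS = ["scan", "exploit", "read_flag", "noop"]

# Prior knowledge: which actions are plausible in each stage.
PRIOR = {"recon": ["scan"], "foothold": ["exploit"], "flag": ["read_flag"]}

def step(state, action):
    """Deterministic toy transition: the right action advances a stage."""
    if state == "recon" and action == "scan":
        return "foothold", 0.0, False
    if state == "foothold" and action == "exploit":
        return "flag", 0.0, False
    if state == "flag" and action == "read_flag":
        return "flag", 1.0, True                     # flag captured
    return state, -0.01, False                       # wasted step

def train(use_prior, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s, done = "recon", False
        for _ in range(20):
            acts = PRIOR[s] if use_prior else ACTIONS
            if random.random() < eps:
                a = random.choice(acts)                  # explore
            else:
                a = max(acts, key=lambda x: q[(s, x)])   # exploit
            s2, r, done = step(s, a)
            best = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])
            s = s2
            if done:
                break
    return q

# Prior knowledge shrinks the action set per state, so exploration
# wastes fewer steps and the value estimates converge faster.
q_free = train(use_prior=False)
q_prior = train(use_prior=True)
print(q_free[("recon", "scan")], q_prior[("recon", "scan")])
```

The whitelist stands in, very loosely, for the kinds of prior knowledge the paper discusses: anything that rules out implausible actions shrinks the search space the model-free learner must explore.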