BACFuzz: Exposing the Silence on Broken Access Control Vulnerabilities in Web Applications
- URL: http://arxiv.org/abs/2507.15984v1
- Date: Mon, 21 Jul 2025 18:25:11 GMT
- Title: BACFuzz: Exposing the Silence on Broken Access Control Vulnerabilities in Web Applications
- Authors: I Putu Arya Dharmaadi, Mohannad Alhanahnah, Van-Thuan Pham, Fadi Mohsen, Fatih Turkmen
- Abstract summary: Broken Access Control (BAC) remains one of the most critical and widespread vulnerabilities in web applications. Despite its severity, BAC is underexplored in automated testing due to key challenges. We introduce BACFuzz, the first gray-box fuzzing framework specifically designed to uncover BAC vulnerabilities.
- Score: 5.424289788171823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Broken Access Control (BAC) remains one of the most critical and widespread vulnerabilities in web applications, allowing attackers to access unauthorized resources or perform privileged actions. Despite its severity, BAC is underexplored in automated testing due to key challenges: the lack of reliable oracles and the difficulty of generating semantically valid attack requests. We introduce BACFuzz, the first gray-box fuzzing framework specifically designed to uncover BAC vulnerabilities, including Broken Object-Level Authorization (BOLA) and Broken Function-Level Authorization (BFLA) in PHP-based web applications. BACFuzz combines LLM-guided parameter selection with runtime feedback and SQL-based oracle checking to detect silent authorization flaws. It employs lightweight instrumentation to capture runtime information that guides test generation, and analyzes backend SQL queries to verify whether unauthorized inputs flow into protected operations. Evaluated on 20 real-world web applications, including 15 CVE cases and 2 known benchmarks, BACFuzz detects 16 of 17 known issues and uncovers 26 previously unknown BAC vulnerabilities with low false positive rates. All identified issues have been responsibly disclosed, and artifacts will be publicly released.
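The oracle described in the abstract, checking whether unauthorized inputs flow into protected backend SQL operations, can be pictured with a small sketch. The Python code below is not BACFuzz's implementation; it only illustrates the general shape of such a check under assumed conditions (a per-request log of backend SQL queries and known victim/attacker user ids), and every identifier in it (ATTACKER_TRACE, owner_id, bac_suspected) is hypothetical.

```python
# Minimal sketch, not BACFuzz's code: a SQL-based oracle for broken access
# control. Assumed setup: the fuzzer replays a request recorded from a
# privileged "victim" session under a low-privilege "attacker" session, and
# lightweight instrumentation logs every SQL query the backend executes.
import re

WRITE_VERBS = {"INSERT", "UPDATE", "DELETE"}

# Hypothetical trace: query logged while the attacker (user id 13) replays a
# request that should only succeed for the victim (user id 7).
ATTACKER_TRACE = [
    "UPDATE invoices SET status='paid' WHERE id=42 AND owner_id=7",
]


def touches_victim_row(query: str, victim_id: int, attacker_id: int) -> bool:
    """Heuristic check: a write query whose ownership predicate references the
    victim's id but not the attacker's suggests the unauthorized request still
    reached a protected operation (a BOLA/BFLA candidate)."""
    verb = query.strip().split()[0].upper()
    if verb not in WRITE_VERBS:
        return False
    owner_ids = {int(m) for m in re.findall(r"owner_id\s*=\s*(\d+)", query, re.IGNORECASE)}
    return victim_id in owner_ids and attacker_id not in owner_ids


def bac_suspected(trace, victim_id: int, attacker_id: int) -> bool:
    """Flag a candidate broken-access-control finding for manual triage."""
    return any(touches_victim_row(q, victim_id, attacker_id) for q in trace)


if __name__ == "__main__":
    # Prints "BAC suspected: True" for the hypothetical trace above.
    print("BAC suspected:", bac_suspected(ATTACKER_TRACE, victim_id=7, attacker_id=13))
```

A real oracle would parse the queries properly and correlate them with the runtime feedback gathered by instrumentation rather than relying on a regular expression; the sketch only conveys why observing backend SQL makes silent authorization failures visible.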
Related papers
- Rethinking Broken Object Level Authorization Attacks Under Zero Trust Principle [24.549812554065475]
Broken Object Level Authorization (BOLA) is the top vulnerability in the API Security Top 10. We propose BOLAZ, a defense framework grounded in zero trust principles. We validate BOLAZ through empirical research on 10 GitHub projects.
arXiv Detail & Related papers (2025-07-03T04:40:14Z)
- Detecting and Mitigating SQL Injection Vulnerabilities in Web Applications [0.0]
The study contributes practical insights into effective detection and prevention strategies and demonstrates a systematic approach to vulnerability assessment and remediation.
arXiv Detail & Related papers (2025-06-07T01:06:31Z)
- CyberGym: Evaluating AI Agents' Cybersecurity Capabilities with Real-World Vulnerabilities at Scale [46.76144797837242]
Large language model (LLM) agents are becoming increasingly skilled at handling cybersecurity tasks autonomously. Existing benchmarks fall short, often failing to capture real-world scenarios or being limited in scope. We introduce CyberGym, a large-scale and high-quality cybersecurity evaluation framework featuring 1,507 real-world vulnerabilities.
arXiv Detail & Related papers (2025-06-03T07:35:14Z)
- VPI-Bench: Visual Prompt Injection Attacks for Computer-Use Agents [74.6761188527948]
Computer-Use Agents (CUAs) with full system access pose significant security and privacy risks. We investigate Visual Prompt Injection (VPI) attacks, where malicious instructions are visually embedded within rendered user interfaces. Our empirical study shows that current CUAs and BUAs can be deceived at rates of up to 51% and 100%, respectively, on certain platforms.
arXiv Detail & Related papers (2025-06-03T05:21:50Z)
- CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks on the CAN bus, while also detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, achieving 100% detection accuracy for both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z)
- LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries [11.331334831883058]
Open-source AI libraries are foundational to modern AI systems, yet they present significant, underexamined risks spanning security, licensing, maintenance, supply chain integrity, and regulatory compliance. We introduce LibVulnWatch, a system that leverages recent advances in large language models and agentic workflows to perform deep, evidence-based evaluations of these libraries.
arXiv Detail & Related papers (2025-05-13T12:58:11Z)
- Automated Static Vulnerability Detection via a Holistic Neuro-symbolic Approach [17.872674648772616]
We present MoCQ, a novel neuro-symbolic framework that combines the complementary strengths of Large Language Models (LLMs) and classic vulnerability checkers. MoCQ achieves precision and recall comparable to expert-developed queries, with significantly less expert time needed. MoCQ also uncovered 46 new vulnerability patterns that experts missed, each representing an overlooked vulnerability class.
arXiv Detail & Related papers (2025-04-22T17:33:53Z)
- Improving the Context Length and Efficiency of Code Retrieval for Tracing Security Vulnerability Fixes [7.512949497610182]
Existing approaches to trace/retrieve the patching commit that fixes a CVE suffer from two major challenges. We propose SITPatchTracer, a scalable and effective retrieval system for tracing known vulnerability patches. Using SITPatchTracer, we have successfully traced and merged the patch links for 35 new CVEs in the GitHub Advisory database.
arXiv Detail & Related papers (2025-03-29T01:53:07Z)
- RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage [13.03711739119631]
Existing defenses (OpenAI GPTs) require user confirmation before every tool call. We introduce Robust TBAS (RTBAS), which automatically detects and executes tool calls that preserve integrity and confidentiality. We present two novel dependency screeners, using LM-as-a-judge and attention-based saliency, to overcome these challenges.
arXiv Detail & Related papers (2025-02-13T05:06:22Z)
- Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Reasoning [58.57194301645823]
Large language models (LLMs) are increasingly integrated into real-world personalized applications. The valuable and often proprietary nature of the knowledge bases used in retrieval-augmented generation (RAG) introduces the risk of unauthorized usage by adversaries. Existing methods that can be generalized as watermarking techniques to protect these knowledge bases typically involve poisoning or backdoor attacks. We propose a method for 'harmless' copyright protection of knowledge bases.
arXiv Detail & Related papers (2025-02-10T09:15:56Z)
- FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks [45.65210717380502]
Large language models (LLMs) have been widely deployed as the backbone of real-world applications, augmented with additional tools and external text information.
Prompt injection attacks are particularly threatening: malicious instructions injected into the external text can exploit LLMs into generating the answers the attackers desire.
This paper introduces a novel test-time defense strategy named AuThentication with Hash-based tags (FATH).
arXiv Detail & Related papers (2024-10-28T20:02:47Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities. Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their lack of awareness in avoiding the execution of instructions within external content. We propose two novel defense mechanisms, boundary awareness and explicit reminder, to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z)
- Demystifying RCE Vulnerabilities in LLM-Integrated Apps [20.01949990700702]
Frameworks like LangChain aid LLM-integrated app development, offering code-execution utilities/APIs for custom actions. These capabilities theoretically introduce Remote Code Execution (RCE) vulnerabilities, enabling remote code execution through prompt injection. No prior research systematically investigates these frameworks' RCE vulnerabilities, their impact on applications, or the consequences of exploitation.
arXiv Detail & Related papers (2023-09-06T11:39:37Z)
- On the Security Vulnerabilities of Text-to-SQL Models [34.749129843281196]
We show that modules within six commercial applications can be manipulated to produce malicious code.
This is the first demonstration that NLP models can be exploited as attack vectors in the wild.
The aim of this work is to draw the community's attention to potential software security issues associated with NLP algorithms.
arXiv Detail & Related papers (2022-11-28T14:38:45Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.