Rethinking Broken Object Level Authorization Attacks Under Zero Trust Principle
- URL: http://arxiv.org/abs/2507.02309v2
- Date: Tue, 15 Jul 2025 01:18:05 GMT
- Title: Rethinking Broken Object Level Authorization Attacks Under Zero Trust Principle
- Authors: Anbin Wu, Zhiyong Feng, Ruitao Feng, Zhenchang Xing, Yang Liu,
- Abstract summary: Broken Object Level Authorization (BOLA) is the top vulnerability in the API Security Top 10.<n>We propose BOLAZ, a defense framework grounded in zero trust principles.<n>We validate BOLAZ through empirical research on 10 GitHub projects.
- Score: 24.549812554065475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: RESTful APIs facilitate data exchange between applications, but they also expose sensitive resources to potential exploitation. Broken Object Level Authorization (BOLA) is the top vulnerability in the OWASP API Security Top 10, exemplifies a critical access control flaw where attackers manipulate API parameters to gain unauthorized access. To address this, we propose BOLAZ, a defense framework grounded in zero trust principles. BOLAZ analyzes the data flow of resource IDs, pinpointing BOLA attack injection points and determining the associated authorization intervals to prevent horizontal privilege escalation. Our approach leverages static taint tracking to categorize APIs into producers and consumers based on how they handle resource IDs. By mapping the propagation paths of resource IDs, BOLAZ captures the context in which these IDs are produced and consumed, allowing for precise identification of authorization boundaries. Unlike defense methods based on common authorization models, BOLAZ is the first authorization-guided method that adapts defense rules based on the system's best-practice authorization logic. We validate BOLAZ through empirical research on 10 GitHub projects. The results demonstrate BOLAZ's effectiveness in defending against vulnerabilities collected from CVE and discovering 35 new BOLA vulnerabilities in the wild, demonstrating its practicality in real-world deployments.
Related papers
- BACFuzz: Exposing the Silence on Broken Access Control Vulnerabilities in Web Applications [5.424289788171823]
Broken Access Control (BAC) remains one of the most critical and widespread vulnerabilities in web applications.<n>Despite its severity, BAC is underexplored in automated testing due to key challenges.<n>We introduce BACFuzz, the first gray-box fuzzing framework specifically designed to uncover BAC vulnerabilities.
arXiv Detail & Related papers (2025-07-21T18:25:11Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations.<n>It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature.<n>We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - Defending against Indirect Prompt Injection by Instruction Detection [81.98614607987793]
We propose a novel approach that takes external data as input and leverages the behavioral state of LLMs during both forward and backward propagation to detect potential IPI attacks.<n>Our approach achieves a detection accuracy of 99.60% in the in-domain setting and 96.90% in the out-of-domain setting, while reducing the attack success rate to just 0.12% on the BIPIA benchmark.
arXiv Detail & Related papers (2025-05-08T13:04:45Z) - Fundamental Limitations in Defending LLM Finetuning APIs [61.29028411001255]
We show that defences of fine-tuning APIs are fundamentally limited in their ability to prevent fine-tuning attacks.<n>We construct 'pointwise-undetectable' attacks that repurpose entropy in benign model outputs to covertly transmit dangerous knowledge.<n>We test our attacks against the OpenAI fine-tuning API, finding they succeed in eliciting answers to harmful multiple-choice questions.
arXiv Detail & Related papers (2025-02-20T18:45:01Z) - Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Reasoning [58.57194301645823]
Large language models (LLMs) are increasingly integrated into real-world personalized applications.<n>The valuable and often proprietary nature of the knowledge bases used in RAG introduces the risk of unauthorized usage by adversaries.<n>Existing methods that can be generalized as watermarking techniques to protect these knowledge bases typically involve poisoning or backdoor attacks.<n>We propose name for harmless' copyright protection of knowledge bases.
arXiv Detail & Related papers (2025-02-10T09:15:56Z) - FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks [45.65210717380502]
Large language models (LLMs) have been widely deployed as the backbone with additional tools and text information for real-world applications.
prompt injection attacks are particularly threatening, where malicious instructions injected in the external text information can exploit LLMs to generate answers as the attackers desire.
This paper introduces a novel test-time defense strategy, named AuThentication with Hash-based tags (FATH)
arXiv Detail & Related papers (2024-10-28T20:02:47Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning)
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Mining REST APIs for Potential Mass Assignment Vulnerabilities [1.0377683220196872]
We propose a lightweight approach to mine the REST API specifications and identify operations and attributes that are prone to mass assignment.
We conducted a preliminary study on 100 APIs and found 25 prone to this vulnerability.
We confirmed nine real vulnerable operations in six APIs.
arXiv Detail & Related papers (2024-05-02T09:19:32Z) - Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution [8.92716309877259]
Federated Learning (FL) and Local Differential Privacy (LDP) have attracted much attention over the past few years.<n>They share the common limitation of being vulnerable to poisoning attacks.<n>We propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution.
arXiv Detail & Related papers (2024-04-10T04:18:26Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - OpenAPI Specification Extended Security Scheme: A method to reduce the prevalence of Broken Object Level Authorization [0.0]
API Security is a topic for concern given the absence of standardized authorization in the OpenAPI standard.
This paper examines the number one vulnerability in API Security: Broken Object Level Authorization(BOLA)
arXiv Detail & Related papers (2022-12-13T14:28:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.