WAFFLED: Exploiting Parsing Discrepancies to Bypass Web Application Firewalls
- URL: http://arxiv.org/abs/2503.10846v2
- Date: Tue, 29 Apr 2025 15:17:46 GMT
- Title: WAFFLED: Exploiting Parsing Discrepancies to Bypass Web Application Firewalls
- Authors: Seyed Ali Akhavani, Bahruz Jabiyev, Ben Kallus, Cem Topcuoglu, Sergey Bratus, Engin Kirda
- Abstract summary: Evading Web Application Firewalls (WAFs) can compromise defenses. We present an innovative approach to bypassing WAFs by uncovering parsing discrepancies. We identified and confirmed 1207 bypasses across 5 well-known WAFs.
- Score: 4.051306574166042
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Web Application Firewalls (WAFs) have been introduced as essential and popular security gates that inspect incoming HTTP traffic to filter out malicious requests and provide defenses against a diverse array of web-based threats. Evading WAFs can compromise these defenses, potentially harming Internet users. In recent years, parsing discrepancies have plagued many entities in the communication path; however, their potential impact on WAF evasion and request smuggling remains largely unexplored. In this work, we present an innovative approach to bypassing WAFs by uncovering and exploiting parsing discrepancies through advanced fuzzing techniques. By targeting non-malicious components such as headers and segments of the body, and using widely used content-types such as application/json, multipart/form-data, and application/xml, we identified and confirmed 1207 bypasses across 5 well-known WAFs: AWS, Azure, Cloud Armor, Cloudflare, and ModSecurity. To validate our findings, we conducted a study in the wild, revealing that more than 90% of websites accepted both application/x-www-form-urlencoded and multipart/form-data interchangeably, highlighting a significant vulnerability and the broad applicability of our bypass techniques. We have reported these vulnerabilities to the affected parties and received acknowledgments from all, as well as bug bounty rewards from some vendors. Further, to mitigate these vulnerabilities, we introduce HTTP-Normalizer, a robust proxy tool designed to rigorously validate HTTP requests against current RFC standards. Our results demonstrate its effectiveness in normalizing or blocking all bypass attempts presented in this work.
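To make the class of discrepancy concrete, here is a minimal sketch (hypothetical Python, not the paper's tooling): a strict inspector that trusts the declared Content-Type finds no form fields to match rules against, while a lenient backend, like the more than 90% of sites the study found to accept application/x-www-form-urlencoded and multipart/form-data interchangeably, still extracts the payload from the raw body.

```python
from urllib.parse import parse_qs

def waf_view(content_type: str, body: bytes) -> dict:
    """Strict inspector: parses only the syntax the Content-Type declares."""
    if content_type.startswith("multipart/form-data"):
        boundary = content_type.split("boundary=", 1)[1].strip('"')
        if b"--" + boundary.encode() not in body:
            return {}  # no multipart framing found -> nothing to run rules against
        # (a real WAF would parse each part here; elided in this sketch)
    return {}

def lenient_backend_view(body: bytes) -> dict:
    """Lenient framework: falls back to urlencoded parsing of the raw body."""
    return parse_qs(body.decode(errors="replace"))

ct = "multipart/form-data; boundary=xyz"
payload = b"q=' OR '1'='1"            # urlencoded SQLi probe, no multipart framing
print(waf_view(ct, payload))          # {} -- invisible to the strict inspector
print(lenient_backend_view(payload))  # {'q': ["' OR '1'='1"]} -- reaches the app
```

A normalizing proxy in the spirit of HTTP-Normalizer would close this gap by rejecting or rewriting requests whose body does not conform to the declared Content-Type, so that the inspector and the backend parse the same thing.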
Related papers
- WAFBOOSTER: Automatic Boosting of WAF Security Against Mutated Malicious Payloads [11.845356035416383]
A web application firewall (WAF) examines malicious traffic to and from a web application via a set of security rules. As web attacks grow in sophistication, it is becoming increasingly difficult for WAFs to block the mutated malicious payloads designed to bypass their defenses. We have developed a novel learning-based framework called WAFBOOSTER, designed to unveil potential bypasses in WAF detections and suggest rules to fortify their security.
arXiv Detail & Related papers (2025-01-23T16:44:43Z)
- Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense [55.77152277982117]
We introduce Layer-AdvPatcher, a methodology designed to defend against jailbreak attacks.
We use an unlearning strategy to patch specific layers within large language models through self-augmented datasets.
Our framework reduces the harmfulness and attack success rate of jailbreak attacks.
arXiv Detail & Related papers (2025-01-05T19:06:03Z)
- Capturing the security expert knowledge in feature selection for web application attack detection [0.0]
The goal is to enhance the effectiveness of web application firewalls (WAFs).
The problem is addressed with an approach that combines supervised learning for feature selection with a semi-supervised learning scenario for training a One-Class SVM model.
The experimental findings show that the model trained with features selected by the proposed algorithm outperformed the expert-based selection approach.
arXiv Detail & Related papers (2024-07-26T00:56:11Z)
- Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed that FRS are vulnerable to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- Exposing and Addressing Security Vulnerabilities in Browser Text Input Fields [22.717150034358948]
We perform a comprehensive analysis of the security of text input fields in web browsers.
We find that browsers' coarse-grained permission model violates two security design principles.
We uncover two vulnerabilities in input fields, including the alarming discovery of passwords in plaintext.
arXiv Detail & Related papers (2023-08-30T21:02:48Z)
- ModSec-AdvLearn: Countering Adversarial SQL Injections with Robust Machine Learning [14.392409275321528]
The Core Rule Set (CRS) is a set of rules designed to detect well-known web attack patterns. Manual CRS configurations yield a suboptimal trade-off between detection and false alarm rates. We propose using machine learning to automate the selection of the set of rules to be combined, along with their weights. Our approach, named ModSec-AdvLearn, can increase the detection rate by up to 30% while retaining negligible false alarm rates and discarding up to 50% of the CRS rules.
arXiv Detail & Related papers (2023-08-09T13:58:03Z)
- A Novel Approach To User Agent String Parsing For Vulnerability Analysis Using Multi-Headed Attention [3.3029515721630855]
A novel methodology for parsing UASs using multi-headed attention-based transformers is proposed.
The proposed methodology exhibits strong performance in parsing a variety of UASs with differing formats.
A framework to utilize parsed UASs to estimate the vulnerability scores for large sections of publicly visible IT networks or regions is also discussed.
arXiv Detail & Related papers (2023-06-06T14:49:25Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- EDEFuzz: A Web API Fuzzer for Excessive Data Exposures [3.5061201620029885]
Excessive Data Exposure (EDE) was the third most significant API vulnerability of 2019.
There are few automated tools -- either in research or industry -- to effectively find and remediate such issues.
We build the first fuzzing tool -- that we call EDEFuzz -- to systematically detect EDEs.
arXiv Detail & Related papers (2023-01-23T04:05:08Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)