WAFBOOSTER: Automatic Boosting of WAF Security Against Mutated Malicious Payloads
- URL: http://arxiv.org/abs/2501.14008v1
- Date: Thu, 23 Jan 2025 16:44:43 GMT
- Title: WAFBOOSTER: Automatic Boosting of WAF Security Against Mutated Malicious Payloads
- Authors: Cong Wu, Jing Chen, Simeng Zhu, Wenqi Feng, Ruiying Du, Yang Xiang
- Abstract summary: Web application firewall (WAF) examines malicious traffic to and from a web application via a set of security rules. As web attacks grow in sophistication, it is becoming increasingly difficult for WAFs to block the mutated malicious payloads designed to bypass their defenses. We have developed a novel learning-based framework called WAFBOOSTER, designed to unveil potential bypasses in WAF detections and suggest rules to fortify their security.
- Score: 11.845356035416383
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A web application firewall (WAF) examines malicious traffic to and from a web application against a set of security rules and plays a significant role in securing web applications against web attacks. However, as web attacks grow in sophistication, it is becoming increasingly difficult for WAFs to block the mutated malicious payloads designed to bypass their defenses. In response to this critical security issue, we have developed a novel learning-based framework called WAFBOOSTER, designed to unveil potential bypasses in WAF detection and suggest rules to fortify their security. Using a combination of shadow models and payload generation techniques, we identify malicious payloads and remove or modify them as needed. WAFBOOSTER generates signatures for these malicious payloads using advanced clustering and regular-expression matching techniques to repair any security gaps we uncover. In our comprehensive evaluation of eight real-world WAFs, WAFBOOSTER improved the true rejection rate of mutated malicious payloads from 21% to 96%, with no false rejections, and achieved a false acceptance rate 3x lower than state-of-the-art methods for generating malicious payloads. With WAFBOOSTER, we have taken a step forward in securing web applications against ever-evolving threats.
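The signature-generation step described in the abstract (cluster the bypassing payloads, then derive a regular-expression rule per cluster) can be illustrated with a minimal sketch. This is not the authors' implementation: the greedy similarity clustering, the `0.6` threshold, and the function names are all illustrative assumptions.

```python
# Hypothetical sketch of WAFBOOSTER-style signature generation:
# group similar bypassing payloads, then regex-escape the longest
# substring each group shares as a candidate blocking rule.
import re
from difflib import SequenceMatcher

def cluster_payloads(payloads, threshold=0.6):
    """Greedily group payloads whose similarity ratio exceeds the threshold."""
    clusters = []
    for p in payloads:
        for cluster in clusters:
            if SequenceMatcher(None, cluster[0], p).ratio() >= threshold:
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def signature_for(cluster):
    """Derive a regex from the longest substring common to every payload."""
    common = cluster[0]
    for p in cluster[1:]:
        m = SequenceMatcher(None, common, p).find_longest_match(
            0, len(common), 0, len(p))
        common = common[m.a:m.a + m.size]
    return re.escape(common) if common else None

# Two mutated SQL injections collapse to one signature; the XSS payload
# forms its own cluster with its own signature.
mutated = ["' OR 1=1--", "' OR 2=2--", "<script>alert(1)</script>"]
signatures = [signature_for(c) for c in cluster_payloads(mutated)]
```

A real system would additionally validate each candidate signature against benign traffic to keep the false rejection rate at zero, as the evaluation above reports.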
Related papers
- GenXSS: an AI-Driven Framework for Automated Detection of XSS Attacks in WAFs [0.0]
Cross-Site Scripting (XSS) attacks target client-side layers of web applications by injecting malicious scripts.
Traditional Web Application Firewalls (WAFs) struggle to detect highly obfuscated and complex attacks.
This paper presents a novel generative AI framework that leverages Large Language Models (LLMs) to enhance XSS mitigation.
arXiv Detail & Related papers (2025-04-11T00:13:59Z) - Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs.
We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - WAFFLED: Exploiting Parsing Discrepancies to Bypass Web Application Firewalls [4.051306574166042]
Evading Web Application Firewalls (WAFs) can compromise a web application's defenses.
We present an innovative approach to bypassing WAFs by uncovering parsing discrepancies.
We identified and confirmed 1207 bypasses across 5 well-known WAFs.
arXiv Detail & Related papers (2025-03-13T19:56:29Z) - AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents [22.682464365220916]
AdvWeb is a novel black-box attack framework designed against web agents.
We train and optimize the adversarial prompter model using DPO.
Unlike prior approaches, our adversarial string injection maintains stealth and control.
arXiv Detail & Related papers (2024-10-22T20:18:26Z) - Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
arXiv Detail & Related papers (2024-05-27T03:10:57Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning)
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees [16.16333915007336]
Malicious URLs provide adversarial opportunities across various industries, including transportation, healthcare, energy, and banking.
Backdoor attacks involve manipulating a small percentage of training data labels, such as Label Flipping (LF), which changes benign labels to malicious ones and vice versa.
We propose an innovative alarm system that detects the presence of poisoned labels and a defense mechanism designed to uncover the original class labels.
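A simplified version of such a label-flip alarm can be sketched with an ensemble of trees: flag any training sample whose cross-validated ensemble prediction confidently contradicts its recorded label. This is an assumption-laden illustration of the general idea, not the paper's actual mechanism; the `0.9` confidence threshold and the toy dataset are invented for the example.

```python
# Hypothetical label-flip alarm: a sample whose cross-validated
# random-forest prediction confidently disagrees with its recorded
# label is flagged as possibly poisoned.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def suspected_flips(X, y, threshold=0.9):
    """Return indices whose label contradicts a confident ensemble vote."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    predicted = proba.argmax(axis=1)
    confident = proba.max(axis=1) >= threshold
    return np.where(confident & (predicted != y))[0]

# Toy check: flip one label in a well-separated dataset and see it flagged.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)
y[0] ^= 1  # simulate a Label Flipping attack on sample 0
flagged = suspected_flips(X, y)
```

Recovering the original class label, as the defense above proposes, would then amount to replacing the flagged label with the ensemble's prediction.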
arXiv Detail & Related papers (2024-03-05T14:21:57Z) - RAT: Reinforcement-Learning-Driven and Adaptive Testing for Vulnerability Discovery in Web Application Firewalls [1.6903270584134351]
RAT clusters similar attack samples together to discover almost all bypassing attack patterns efficiently.
RAT performs 33.53% and 63.16% on average better than its counterparts in discovering the most possible bypassing payloads.
arXiv Detail & Related papers (2023-12-13T04:07:29Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - ModSec-AdvLearn: Countering Adversarial SQL Injections with Robust Machine Learning [14.392409275321528]
Core Rule Set (CRS) is a set of rules designed to detect well-known web attack patterns. Manual CRS configuration yields a suboptimal trade-off between detection and false alarm rates. We propose using machine learning to automate the selection of the set of rules to be combined, along with their weights. Our approach, named ModSec-AdvLearn, can (i) increase the detection rate up to 30%, while retaining negligible false alarm rates and discarding up to 50% of the CRS rules.
arXiv Detail & Related papers (2023-08-09T13:58:03Z)
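The rule-weighting idea behind ModSec-AdvLearn can be sketched as a linear model over rule activations: each request becomes a binary vector recording which CRS rules fired, and the learned coefficients replace hand-tuned anomaly scores. The toy rule matrix below is invented for illustration and is not from the paper.

```python
# Hypothetical sketch of learned CRS rule weights: a logistic model
# over which-rules-fired vectors replaces manually assigned scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_rule_weights(rule_hits, labels):
    """rule_hits: (n_requests, n_rules) 0/1 matrix; labels: 1 = attack."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(rule_hits, labels)
    return clf

# Toy data: rule 0 fires only on attacks, rule 2 only on benign traffic,
# rule 1 fires on both and should receive a near-zero weight.
rule_hits = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]])
labels = np.array([1, 1, 0, 0])
clf = train_rule_weights(rule_hits, labels)
weights = clf.coef_[0]  # per-rule weights learned from the data
```

Rules whose learned weight stays near zero are candidates for the discarding step that the summary above describes.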
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.