AdvSQLi: Generating Adversarial SQL Injections against Real-world WAF-as-a-service
- URL: http://arxiv.org/abs/2401.02615v3
- Date: Tue, 9 Jan 2024 08:10:10 GMT
- Title: AdvSQLi: Generating Adversarial SQL Injections against Real-world WAF-as-a-service
- Authors: Zhenqing Qu, Xiang Ling, Ting Wang, Xiang Chen, Shouling Ji, Chunming Wu,
- Abstract summary: With the development of cloud computing, WAF-as-a-service has been proposed to facilitate the deployment, configuration, and update of WAFs in the cloud.
Despite its tremendous popularity, the security vulnerabilities of WAF-as-a-service are still largely unknown.
With Advi, we make it feasible to inspect and understand the security vulnerabilities of WAFs automatically, helping vendors make products more secure.
- Score: 41.557003808027204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the first defensive layer that attacks would hit, the web application firewall (WAF) plays an indispensable role in defending against malicious web attacks like SQL injection (SQLi). With the development of cloud computing, WAF-as-a-service, as one kind of Security-as-a-service, has been proposed to facilitate the deployment, configuration, and update of WAFs in the cloud. Despite its tremendous popularity, the security vulnerabilities of WAF-as-a-service are still largely unknown, which is highly concerning given its massive usage. In this paper, we propose a general and extendable attack framework, namely AdvSQLi, in which a minimal series of transformations are performed on the hierarchical tree representation of the original SQLi payload, such that the generated SQLi payloads can not only bypass WAF-as-a-service under black-box settings but also keep the same functionality and maliciousness as the original payload. With AdvSQLi, we make it feasible to inspect and understand the security vulnerabilities of WAFs automatically, helping vendors make products more secure. To evaluate the attack effectiveness and efficiency of AdvSQLi, we first employ two public datasets to generate adversarial SQLi payloads, leading to a maximum attack success rate of 100% against state-of-the-art ML-based SQLi detectors. Furthermore, to demonstrate the immediate security threats caused by AdvSQLi, we evaluate the attack effectiveness against 7 WAF-as-a-service solutions from mainstream vendors and find all of them are vulnerable to AdvSQLi. For instance, AdvSQLi achieves an attack success rate of over 79% against the F5 WAF. Through in-depth analysis of the evaluation results, we further condense out several general yet severe flaws of these vendors that cannot be easily patched.
Related papers
- Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs.
We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - ToxicSQL: Migrating SQL Injection Threats into Text-to-SQL Models via Backdoor Attack [23.403724263002008]
Security concerns remain largely unexplored, particularly the threat of backdoor attacks.
We present Toxic, a novel backdoor attack framework.
We demonstrate that injecting only 0.44% of poisoned data can result in an attack success rate of 79.41%, posing a significant risk to database security.
arXiv Detail & Related papers (2025-03-07T14:16:48Z) - WAFBOOSTER: Automatic Boosting of WAF Security Against Mutated Malicious Payloads [11.845356035416383]
Web application firewall (WAF) examines malicious traffic to and from a web application via a set of security rules.
As web attacks grow in sophistication, it is becoming increasingly difficult for WAFs to block the mutated malicious payloads designed to bypass their defenses.
We have developed a novel learning-based framework called WAFBOOSTER, designed to unveil potential bypasses in WAF detections and suggest rules to fortify their security.
arXiv Detail & Related papers (2025-01-23T16:44:43Z) - AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents [22.682464365220916]
AdvWeb is a novel black-box attack framework designed against web agents.
We train and optimize the adversarial prompter model using DPO.
Unlike prior approaches, our adversarial string injection maintains stealth and control.
arXiv Detail & Related papers (2024-10-22T20:18:26Z) - SecAlign: Defending Against Prompt Injection with Preference Optimization [52.48001255555192]
Adrial prompts can be injected into external data sources to override the system's intended instruction and execute a malicious instruction.
We propose a new defense called SecAlign based on the technique of preference optimization.
Our method reduces the success rates of various prompt injections to around 0%, even against attacks much more sophisticated than ones seen during training.
arXiv Detail & Related papers (2024-10-07T19:34:35Z) - Microarchitectural Security of AWS Firecracker VMM for Serverless Cloud Platforms [9.345368209757495]
Firecracker is a virtual machine manager built by Amazon Web Services (AWS) for serverless cloud platforms.
We show that AWS overstates the security inherent to the Firecracker VMM and provides incomplete guidance for properly securing cloud systems that use Firecracker.
arXiv Detail & Related papers (2023-11-27T16:46:03Z) - Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game [86.66627242073724]
This paper presents a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based "defenses" against prompt injection.
To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs.
We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking.
arXiv Detail & Related papers (2023-11-02T06:13:36Z) - Adversarial ModSecurity: Countering Adversarial SQL Injections with
Robust Machine Learning [16.09513503181256]
ModSecurity is widely recognized as the standard open-source Web Application Firewall (WAF)
We develop a robust machine learning model, named AdvModSec, which uses the Core Rule Set ( CRS) rules as input features.
Our experiments show that AdvModSec, being trained on the traffic directed towards the protected web services, achieves a better trade-off between detection and false positive rates.
arXiv Detail & Related papers (2023-08-09T13:58:03Z) - From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application? [4.361862281841999]
We present a comprehensive examination of P$$ injections targeting web applications based on the Langchain framework.
Our findings indicate that LLM-integrated applications based on Langchain are highly susceptible to P$$ injection attacks, warranting the adoption of robust defenses.
We propose four effective defense techniques that can be integrated as extensions to the Langchain framework.
arXiv Detail & Related papers (2023-08-03T19:03:18Z) - The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks [91.56314751983133]
$A5$ is a framework to craft a defensive perturbation to guarantee that any attack towards the input in hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
backdoor attack is an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA)
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - On the Security Vulnerabilities of Text-to-SQL Models [34.749129843281196]
We show that modules within six commercial applications can be manipulated to produce malicious code.
This is the first demonstration that NLP models can be exploited as attack vectors in the wild.
The aim of this work is to draw the community's attention to potential software security issues associated with NLP algorithms.
arXiv Detail & Related papers (2022-11-28T14:38:45Z) - Invisible Backdoor Attack with Dynamic Triggers against Person
Re-identification [71.80885227961015]
Person Re-identification (ReID) has rapidly progressed with wide real-world applications, but also poses significant risks of adversarial attacks.
We propose a novel backdoor attack on ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA)
We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against our attack.
arXiv Detail & Related papers (2022-11-20T10:08:28Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNNs) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against the textbfunseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing
Website Classifiers [12.760638960844249]
We show that evasion attacks can be launched on ML-based anti-phishing classifiers even in the grey-, and black-box scenarios.
We propose three mutation-based attacks, differing in the knowledge of the target classifier, addressing a key technical challenge.
We demonstrate the effectiveness and efficiency of our evasion attacks on the state-of-the-art, Google's phishing page filter, achieved 100% attack success rate in less than one second per website.
arXiv Detail & Related papers (2020-04-15T09:04:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.