Evolutionary Multi-Task Injection Testing on Web Application Firewalls
- URL: http://arxiv.org/abs/2206.05743v1
- Date: Sun, 12 Jun 2022 14:11:55 GMT
- Title: Evolutionary Multi-Task Injection Testing on Web Application Firewalls
- Authors: Ke Li, Heng Yang, Willem Visser
- Abstract summary: DaNuoYi is an automatic injection testing tool that simultaneously generates test inputs for multiple types of injection attacks on a WAF.
We conduct experiments on three real-world open-source WAFs and six types of injection attacks.
DaNuoYi generates up to 3.8x and 5.78x more valid test inputs (i.e., bypassing the underlying WAF) than its state-of-the-art single-task counterparts.
- Score: 11.037455973709532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A web application firewall (WAF) nowadays plays an integral role in
protecting web applications from various malicious injection attacks such as SQL injection,
XML injection, and PHP injection, to name a few. However, given the evolving
sophistication of injection attacks and the increasing complexity of tuning a
WAF, it is challenging to ensure that the WAF is free of injection
vulnerabilities such that it will block all malicious injection attacks without
wrongly affecting legitimate messages. Automatically testing the WAF is,
therefore, a timely and essential task. In this paper, we propose DaNuoYi, an
automatic injection testing tool that simultaneously generates test inputs for
multiple types of injection attacks on a WAF. Our basic idea derives from
cross-lingual translation in the natural language processing domain. In
particular, test inputs for different types of injection attacks are
syntactically different but may be semantically similar. Sharing semantic
knowledge across multiple programming languages can thus stimulate the
generation of more sophisticated test inputs and the discovery of injection
vulnerabilities in the WAF that are otherwise difficult to find. To this end,
in DaNuoYi, we use multi-task learning to train several injection translation
models that translate test inputs between any pair of injection attack types.
The model is then used by a novel multi-task evolutionary algorithm to
co-evolve test inputs for different types of injection attacks facilitated by a
shared mating pool and domain-specific mutation operators at each generation.
We conduct experiments on three real-world open-source WAFs and six types of
injection attacks. The results reveal that DaNuoYi generates up to 3.8x and
5.78x more valid test inputs (i.e., inputs that bypass the underlying WAF) than
its state-of-the-art single-task counterparts and the context-free grammar-based
injection construction baseline, respectively.
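
As an illustration of the co-evolution scheme described above, the sketch below shows what one generation of the multi-task loop could look like: payloads are translated into every other injection "language" to form a shared mating pool, varied with task-specific mutation operators, and filtered by a fitness score. This is a minimal sketch under assumptions; the helper functions (translate, mutate, fitness) and the task labels are hypothetical placeholders for illustration, not DaNuoYi's actual API.

```python
import random

# Hypothetical injection task labels; the paper covers six injection types,
# but the exact names here are placeholders for illustration.
TASKS = ["sql", "xml", "php", "html", "os_cmd", "ldap"]

def translate(payload: str, src: str, dst: str) -> str:
    """Stand-in for a learned multi-task translation model that rewrites a
    test input from one injection 'language' into another."""
    return payload  # a real model would produce a dst-style payload

def mutate(payload: str, task: str) -> str:
    """Stand-in for a domain-specific mutation operator (e.g., inserting
    inline comments or encoded characters for SQL injection)."""
    return payload + random.choice(["/**/", "%00", "  "])

def fitness(payload: str) -> float:
    """Stand-in fitness score; a real run would submit the payload to the
    WAF under test and measure whether (or how nearly) it bypasses the rules."""
    return random.random()

def co_evolve(populations: dict, generations: int = 10, pop_size: int = 20) -> dict:
    """Co-evolve one population of test inputs per injection task."""
    for _ in range(generations):
        # Shared mating pool: every payload is translated into every task,
        # so semantic knowledge gained for one attack type can seed the others.
        pool = {t: [] for t in TASKS}
        for src, payloads in populations.items():
            for p in payloads:
                for dst in TASKS:
                    pool[dst].append(translate(p, src, dst))
        # Per-task variation and survivor selection.
        for task in TASKS:
            parents = random.sample(pool[task], min(pop_size, len(pool[task])))
            offspring = [mutate(p, task) for p in parents]
            merged = populations.get(task, []) + offspring
            populations[task] = sorted(merged, key=fitness, reverse=True)[:pop_size]
    return populations

# Example usage with a toy seed payload per task:
seeds = {t: ["' OR 1=1 --"] for t in TASKS}
evolved = co_evolve(seeds)
```

In the actual tool, the fitness evaluation would involve sending each candidate through the WAF under test, and the translation step would use the multi-task seq2seq models described in the abstract rather than the identity placeholder shown here.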
Related papers
- DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks [101.52204404377039]
LLM-integrated applications and agents are vulnerable to prompt injection attacks.
A detection method aims to determine whether a given input is contaminated by an injected prompt.
We propose DataSentinel, a game-theoretic method to detect prompt injection attacks.
arXiv Detail & Related papers (2025-04-15T16:26:21Z)
- Can Indirect Prompt Injection Attacks Be Detected and Removed? [68.6543680065379]
We investigate the feasibility of detecting and removing indirect prompt injection attacks.
For detection, we assess the performance of existing LLMs and open-source detection models.
For removal, we evaluate two intuitive methods: (1) the segmentation removal method, which segments the injected document and removes parts containing injected instructions, and (2) the extraction removal method, which trains an extraction model to identify and remove injected instructions.
arXiv Detail & Related papers (2025-02-23T14:02:16Z)
- Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks arise.
Recent attack methods leverage LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z)
- UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models [19.46962670935554]
Diffusion models are vulnerable to backdoor attacks.
We propose a black-box input-level backdoor detection framework on diffusion models, called UFID.
Our method achieves superb performance on detection effectiveness and run-time efficiency.
arXiv Detail & Related papers (2024-04-01T13:21:05Z)
- Automatic and Universal Prompt Injection Attacks against Large Language Models [38.694912482525446]
Large Language Models (LLMs) excel in processing and generating human language, powered by their ability to interpret and follow instructions.
These attacks manipulate applications into producing responses aligned with the attacker's injected content, deviating from the user's actual requests.
We introduce a unified framework for understanding the objectives of prompt injection attacks and present an automated gradient-based method for generating highly effective and universal prompt injection data.
arXiv Detail & Related papers (2024-03-07T23:46:20Z)
- Test-Time Backdoor Attacks on Multimodal Large Language Models [41.601029747738394]
We present AnyDoor, a test-time backdoor attack against multimodal large language models (MLLMs).
AnyDoor employs techniques similar to those used in universal adversarial attacks, but distinguishes itself by its ability to decouple the timing of setup and activation of harmful effects.
arXiv Detail & Related papers (2024-02-13T16:28:28Z)
- RAT: Reinforcement-Learning-Driven and Adaptive Testing for Vulnerability Discovery in Web Application Firewalls [1.6903270584134351]
RAT clusters similar attack samples together to discover almost all bypassing attack patterns efficiently.
RAT performs 33.53% and 63.16% better on average than its counterparts in discovering bypassing payloads.
arXiv Detail & Related papers (2023-12-13T04:07:29Z)
- Maatphor: Automated Variant Analysis for Prompt Injection Attacks [7.93367270029538]
Current best-practice for defending against prompt injection techniques is to add additional guardrails to the system.
We present a tool to assist defenders in performing automated variant analysis of known prompt injection attacks.
arXiv Detail & Related papers (2023-12-12T14:22:20Z)
- Instruct2Attack: Language-Guided Semantic Adversarial Attacks [76.83548867066561]
Instruct2Attack (I2A) is a language-guided semantic attack that generates meaningful perturbations according to free-form language instructions.
We make use of state-of-the-art latent diffusion models, where we adversarially guide the reverse diffusion process to search for an adversarial latent code conditioned on the input image and text instruction.
We show that I2A can successfully break state-of-the-art deep neural networks even under strong adversarial defenses.
arXiv Detail & Related papers (2023-11-27T05:35:49Z)
- Formalizing and Benchmarking Prompt Injection Attacks and Defenses [59.57908526441172]
We propose a framework to formalize prompt injection attacks.
Based on our framework, we design a new attack by combining existing ones.
Our work provides a common benchmark for quantitatively evaluating future prompt injection attacks and defenses.
arXiv Detail & Related papers (2023-10-19T15:12:09Z)
- Backdoor Learning on Sequence to Sequence Models [94.23904400441957]
In this paper, we study whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks.
Specifically, we find that by injecting only 0.2% of the dataset's samples, we can cause the seq2seq model to generate the designated keyword and even the whole sentence.
Extensive experiments on machine translation and text summarization have been conducted to show our proposed methods could achieve over 90% attack success rate on multiple datasets and models.
arXiv Detail & Related papers (2023-05-03T20:31:13Z)
- Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models [41.1058288041033]
We propose ProAttack, a novel and efficient method for performing clean-label backdoor attacks based on the prompt.
Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.
arXiv Detail & Related papers (2023-05-02T06:19:36Z)
- Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger [48.59965356276387]
We propose to use syntactic structure as the trigger in textual backdoor attacks.
We conduct extensive experiments to demonstrate that the trigger-based attack method can achieve comparable attack performance.
These results also reveal the significant insidiousness and harmfulness of textual backdoor attacks.
arXiv Detail & Related papers (2021-05-26T08:54:19Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.