Adversarial ModSecurity: Countering Adversarial SQL Injections with
Robust Machine Learning
- URL: http://arxiv.org/abs/2308.04964v2
- Date: Thu, 17 Aug 2023 09:08:49 GMT
- Title: Adversarial ModSecurity: Countering Adversarial SQL Injections with
Robust Machine Learning
- Authors: Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide
Ariu, Luca Piras, Davide Balzarotti, Battista Biggio
- Abstract summary: ModSecurity is widely recognized as the standard open-source Web Application Firewall (WAF).
We develop a robust machine learning model, named AdvModSec, which uses the Core Rule Set (CRS) rules as input features.
Our experiments show that AdvModSec, being trained on the traffic directed towards the protected web services, achieves a better trade-off between detection and false positive rates.
- Score: 16.09513503181256
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: ModSecurity is widely recognized as the standard open-source Web Application
Firewall (WAF), maintained by the OWASP Foundation. It detects malicious
requests by matching them against the Core Rule Set (CRS), identifying well-known
attack patterns. Each rule in the CRS is manually assigned a weight, based on
the severity of the corresponding attack, and a request is detected as
malicious if the sum of the weights of the firing rules exceeds a given
threshold. In this work, we show that this simple strategy is largely
ineffective for detecting SQL injection (SQLi) attacks, as it tends to block
many legitimate requests, while also being vulnerable to adversarial SQLi
attacks, i.e., attacks intentionally manipulated to evade detection. To
overcome these issues, we design a robust machine learning model, named
AdvModSec, which uses the CRS rules as input features, and it is trained to
detect adversarial SQLi attacks. Our experiments show that AdvModSec, being
trained on the traffic directed towards the protected web services, achieves a
better trade-off between detection and false positive rates, improving the
detection rate of the vanilla version of ModSecurity with CRS by 21%. Moreover,
our approach is able to improve its adversarial robustness against adversarial
SQLi attacks by 42%, thereby taking a step forward towards building more robust
and trustworthy WAFs.
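
The decision rule described in the abstract lends itself to a short illustration. Below is a minimal, hedged Python sketch, not the authors' implementation: a hard-coded weighted sum stands in for vanilla ModSecurity with CRS, and a logistic-regression classifier stands in for an AdvModSec-style model that consumes the same CRS rule matches as binary input features and learns each rule's contribution from traffic of the protected service. All rule IDs, weights, thresholds, and training data are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): the vanilla CRS decision rule, and an
# AdvModSec-style classifier that treats the same CRS rule matches as binary
# features and learns per-rule contributions. Rule IDs, weights, the threshold,
# and the toy training data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

CRS_WEIGHTS = {"942100": 5, "942190": 4, "942260": 4}  # rule ID -> severity weight (illustrative)
ANOMALY_THRESHOLD = 5                                   # inbound anomaly threshold (illustrative)

def vanilla_modsecurity_blocks(fired_rules):
    """Vanilla CRS: block when the summed severity of firing rules reaches the threshold."""
    return sum(CRS_WEIGHTS.get(rule, 0) for rule in fired_rules) >= ANOMALY_THRESHOLD

def rule_features(fired_rules):
    """Encode a request as a binary vector over CRS rules (the model's input features)."""
    return np.array([int(rule in fired_rules) for rule in CRS_WEIGHTS])

# Instead of fixed manual weights, a classifier trained on traffic directed towards the
# protected web service (including adversarial SQLi samples) learns how much each rule matters.
X = np.vstack([rule_features(r) for r in (["942100"], ["942100", "942190"], [], ["942260"])])
y = np.array([1, 1, 0, 0])  # toy labels: 1 = SQLi, 0 = legitimate
clf = LogisticRegression().fit(X, y)

print(vanilla_modsecurity_blocks(["942190"]))                 # False: score 4 < threshold 5
print(clf.predict(rule_features(["942190"]).reshape(1, -1)))  # learned decision for the same request
```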
Related papers
- Enhancing SQL Injection Detection and Prevention Using Generative Models [4.424836140281847]
This paper introduces an innovative approach that leverages generative models to enhance SQLi detection and prevention mechanisms.
By incorporating Variational Autoencoders (VAE), Conditional Wasserstein GAN with Gradient Penalty (CWGAN-GP), and U-Net, synthetic SQL queries were generated to augment training datasets for machine learning models.
arXiv Detail & Related papers (2025-02-07T09:43:43Z)
- Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning [93.44927301021688]
Website fingerprinting (WF) attacks covertly monitor user communications to identify the web pages they visit.
Existing WF defenses attempt to reduce the attacker's accuracy by disrupting unique traffic patterns.
We introduce Controllable Website Fingerprint Defense (CWFD), a novel defense perspective based on backdoor learning.
arXiv Detail & Related papers (2024-12-16T06:12:56Z)
- Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs [6.936401700600395]
Large Language Models (LLMs) are increasingly used in software development to generate functions, such as attack detectors, that implement security requirements.
However, the resulting detectors often show limited robustness; this is most likely due to the LLM lacking knowledge about some existing attacks and to the generated code not being evaluated in real usage scenarios.
We propose a novel approach integrating Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline.
arXiv Detail & Related papers (2024-11-27T10:48:37Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- ModSec-Learn: Boosting ModSecurity with Machine Learning [14.392409275321528]
ModSecurity is widely recognized as the standard open-source Web Application Firewall (WAF).
We propose a machine-learning model that uses the Core Rule Set (CRS) rules as input features.
ModSec-Learn is able to tune the contribution of each CRS rule to predictions, thus adapting the severity levels to the web applications being protected.
arXiv Detail & Related papers (2024-06-19T13:32:47Z)
- Dissecting Adversarial Robustness of Multimodal LM Agents [70.2077308846307]
We manually create 200 targeted adversarial tasks and evaluation scripts in a realistic threat model on top of VisualWebArena.
We find that we can successfully break the latest agents that use black-box frontier LMs, including those that perform reflection and tree search.
We also use ARE to rigorously evaluate how the robustness changes as new components are added.
arXiv Detail & Related papers (2024-06-18T17:32:48Z)
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- AdvSQLi: Generating Adversarial SQL Injections against Real-world WAF-as-a-service [41.557003808027204]
With the development of cloud computing, WAF-as-a-service has been proposed to facilitate the deployment, configuration, and update of WAFs in the cloud.
Despite its tremendous popularity, the security vulnerabilities of WAF-as-a-service are still largely unknown.
With AdvSQLi, we make it feasible to inspect and understand the security vulnerabilities of WAFs automatically, helping vendors make their products more secure.
arXiv Detail & Related papers (2024-01-05T03:21:29Z)
- RAT: Reinforcement-Learning-Driven and Adaptive Testing for Vulnerability Discovery in Web Application Firewalls [1.6903270584134351]
RAT clusters similar attack samples together to discover almost all bypassing attack patterns efficiently.
RAT performs 33.53% and 63.16% better on average than its counterparts in discovering the most possible bypassing payloads.
arXiv Detail & Related papers (2023-12-13T04:07:29Z)
- Confidence-driven Sampling for Backdoor Attacks [49.72680157684523]
Backdoor attacks aim to surreptitiously insert malicious triggers into DNN models, granting unauthorized control during testing scenarios.
Existing methods lack robustness against defense strategies and predominantly focus on enhancing trigger stealthiness while randomly selecting poisoned samples.
We introduce a straightforward yet highly effective sampling methodology that leverages confidence scores. Specifically, it selects samples with lower confidence scores, significantly increasing the challenge for defenders in identifying and countering these attacks.
arXiv Detail & Related papers (2023-10-08T18:57:36Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework [22.259444589459513]
Malicious advertisement URLs pose a security risk since they are the source of cyber-attacks.
Existing malicious URL detection techniques are limited in their ability to handle unseen features and to generalize to test data.
In this study, we extract a novel set of lexical and web-scraped features and employ machine learning techniques to set up a system for detecting fraudulent advertisement URLs.
arXiv Detail & Related papers (2022-04-27T20:06:22Z)
- Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z)
- Learning to Detect Malicious Clients for Robust Federated Learning [20.5238037608738]
Federated learning systems are vulnerable to attacks from malicious clients.
We propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates.
arXiv Detail & Related papers (2020-02-01T14:09:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.