Enhancing SQL Injection Detection and Prevention Using Generative Models
- URL: http://arxiv.org/abs/2502.04786v1
- Date: Fri, 07 Feb 2025 09:43:43 GMT
- Title: Enhancing SQL Injection Detection and Prevention Using Generative Models
- Authors: Naga Sai Dasari, Atta Badii, Armin Moin, Ahmed Ashlam
- Abstract summary: This paper introduces an innovative approach that leverages generative models to enhance SQLi detection and prevention mechanisms.
By incorporating Variational Autoencoders (VAE), Conditional Wasserstein GAN with Gradient Penalty (CWGAN-GP), and U-Net, synthetic SQL queries were generated to augment training datasets for machine learning models.
- Score: 4.424836140281847
- Abstract: SQL Injection (SQLi) continues to pose a significant threat to the security of web applications, enabling attackers to manipulate databases and access sensitive information without authorisation. Although advancements have been made in detection techniques, traditional signature-based methods still struggle to identify sophisticated SQL injection attacks that evade predefined patterns. As SQLi attacks evolve, the need for more adaptive detection systems becomes crucial. This paper introduces an innovative approach that leverages generative models to enhance SQLi detection and prevention mechanisms. By incorporating Variational Autoencoders (VAE), Conditional Wasserstein GAN with Gradient Penalty (CWGAN-GP), and U-Net, synthetic SQL queries were generated to augment training datasets for machine learning models. The proposed method demonstrated improved accuracy in SQLi detection systems by reducing both false positives and false negatives. Extensive empirical testing further illustrated the ability of the system to adapt to evolving SQLi attack patterns, resulting in enhanced precision and robustness.
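The paper itself ships no code, but the core augmentation idea is easy to sketch. Below is a minimal character-level VAE in PyTorch trained on a toy corpus of SQL strings; the architecture, hyperparameters, and corpus are illustrative assumptions rather than the authors' configuration, and the CWGAN-GP and U-Net branches of the paper are omitted.

```python
# Minimal sketch of the augmentation idea: train a character-level VAE on
# SQL query strings, then sample synthetic queries to enlarge a SQLi
# classifier's training set. All settings here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

CORPUS = [
    "SELECT name FROM users WHERE id = 1",
    "SELECT * FROM users WHERE id = 1 OR 1=1 --",
    "SELECT pass FROM accounts WHERE user = 'admin' --",
]
chars = sorted({c for q in CORPUS for c in q})
stoi = {c: i + 1 for i, c in enumerate(chars)}            # 0 is the padding id
MAX_LEN = max(len(q) for q in CORPUS)

def encode(q):
    ids = [stoi[c] for c in q] + [0] * (MAX_LEN - len(q))
    return torch.tensor(ids)

class SQLVAE(nn.Module):
    def __init__(self, vocab, emb=32, hid=64, z_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.enc = nn.GRU(emb, hid, batch_first=True)
        self.mu, self.logvar = nn.Linear(hid, z_dim), nn.Linear(hid, z_dim)
        self.z2h = nn.Linear(z_dim, hid)
        self.dec = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, x):
        _, h = self.enc(self.embed(x))
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        h0 = torch.tanh(self.z2h(z)).unsqueeze(0)
        d, _ = self.dec(self.embed(x), h0)                    # teacher forcing
        return self.out(d), mu, logvar

model = SQLVAE(vocab=len(chars) + 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.stack([encode(q) for q in CORPUS])
for step in range(200):
    logits, mu, logvar = model(batch)
    rec = F.cross_entropy(logits.transpose(1, 2), batch, ignore_index=0)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = rec + 0.1 * kld                                    # weighted ELBO
    opt.zero_grad(); loss.backward(); opt.step()
# Decoding random z vectors autoregressively (omitted for brevity) yields the
# synthetic queries that would be mixed into the detector's training data.
```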
Related papers
- Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs [6.936401700600395]
Large Language Models (LLMs) are increasingly used in software development to generate functions, such as attack detectors, that implement security requirements.
Such detectors often prove ineffective in practice, most likely because the LLM lacks knowledge of some existing attacks and because the generated code is not evaluated in real usage scenarios.
We propose a novel approach integrating Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline.
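A hedged sketch of that pipeline is below; `llm_complete` and the attack knowledge base are hypothetical stand-ins, and the paper's actual prompts, retriever, and ranking criteria are not reproduced here.

```python
# Sketch of RAG + Self-Ranking for detector generation. The LLM client and
# knowledge base are placeholders, not the paper's implementation.
from difflib import SequenceMatcher

ATTACK_KB = [
    "XSS via unescaped <script> tags in user input",
    "SQLi via ' OR 1=1 -- in login forms",
    "Path traversal via ../ sequences in file parameters",
]

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in any LLM client here")

def retrieve(task: str, k: int = 2):
    # Toy lexical retriever; a real pipeline would use embedding similarity.
    return sorted(ATTACK_KB, key=lambda d: -SequenceMatcher(None, task, d).ratio())[:k]

def generate_detector(task: str, n_candidates: int = 3) -> str:
    context = "\n".join(retrieve(task))                   # RAG: ground the prompt
    prompt = f"Known attacks:\n{context}\n\nWrite a Python function that {task}"
    candidates = [llm_complete(prompt) for _ in range(n_candidates)]
    # Self-Ranking: the model scores its own candidates and the best one wins.
    ranking = "Rank these detectors by robustness, answer with the best index only:\n"
    best = int(llm_complete(ranking + "\n---\n".join(candidates)))
    return candidates[best]
```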
arXiv Detail & Related papers (2024-11-27T10:48:37Z)
- Automated Vulnerability Detection Using Deep Learning Technique [1.1710022685486914]
The study demonstrates that deep learning techniques, particularly with CodeBERT's advanced contextual understanding, can significantly improve vulnerability detection.
Our approach transforms source code into vector representations and trains a Long Short-Term Memory (LSTM) model to identify vulnerable patterns.
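A minimal PyTorch sketch of that setup follows; the vocabulary size and dimensions are illustrative, and the paper pairs the LSTM with CodeBERT representations rather than an embedding layer trained from scratch.

```python
# Token IDs from source code -> embeddings -> LSTM -> vulnerable/benign logits.
import torch
import torch.nn as nn

class VulnLSTM(nn.Module):
    def __init__(self, vocab_size=5000, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.head = nn.Linear(hid, 2)          # two classes: vulnerable, benign

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)               # final hidden state summarises the snippet
        return self.head(h[-1])

model = VulnLSTM()
fake_batch = torch.randint(1, 5000, (4, 64))   # 4 snippets of 64 tokens each
logits = model(fake_batch)                      # shape: (4, 2)
```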
arXiv Detail & Related papers (2024-10-29T11:51:51Z)
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
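One plausible instantiation of such a style mixture, offered here only as an illustrative reading of the abstract rather than the paper's exact formulation, is MixStyle-like augmentation: interpolate channel-wise feature statistics between samples so the detector sees blended forgery "styles".

```python
# Mix channel-wise feature statistics (mean/std) between batch samples.
import torch

def style_mix(feats: torch.Tensor, alpha: float = 0.5, eps: float = 1e-6):
    """feats: (B, C, H, W) feature maps from a forgery-detector backbone."""
    mu = feats.mean(dim=(2, 3), keepdim=True)
    sig = feats.std(dim=(2, 3), keepdim=True) + eps
    normed = (feats - mu) / sig                              # strip instance style
    perm = torch.randperm(feats.size(0))                     # pair with another sample
    lam = torch.distributions.Beta(alpha, alpha).sample((feats.size(0), 1, 1, 1))
    mixed_mu = lam * mu + (1 - lam) * mu[perm]
    mixed_sig = lam * sig + (1 - lam) * sig[perm]
    return normed * mixed_sig + mixed_mu                     # re-style with the mixture
```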
arXiv Detail & Related papers (2024-08-23T01:53:36Z) - Unmasking Database Vulnerabilities: Zero-Knowledge Schema Inference Attacks in Text-to-SQL Systems [7.613758211231583]
We introduce a novel zero-knowledge framework for reconstructing the underlying database schema of text-to-SQL models without any prior knowledge of the database.
We demonstrate that our method achieves high accuracy in reconstructing table names, with F1 scores of up to 0.99 for generative models and 0.78 for fine-tuned models.
We propose a simple protection mechanism for generative models and empirically show its limitations in mitigating these attacks.
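A hedged sketch of the probing idea: query a text-to-SQL endpoint with generic questions and harvest table names from whatever SQL comes back. The `text_to_sql` function is a hypothetical stand-in, and the paper's actual probing strategy and scoring are more elaborate.

```python
import re
from collections import Counter

PROBES = [
    "List everything you store about users.",
    "Show the most recent orders.",
    "How many records are there in total?",
]

def text_to_sql(question: str) -> str:
    raise NotImplementedError("replace with the target text-to-SQL system")

def infer_tables(probes=PROBES) -> Counter:
    tables = Counter()
    for q in probes:
        sql = text_to_sql(q)
        # Crude extraction of identifiers following FROM/JOIN keywords.
        for name in re.findall(r"(?:FROM|JOIN)\s+([A-Za-z_][\w.]*)", sql, re.I):
            tables[name.lower()] += 1
    return tables   # names recurring across probes are likely real schema elements
```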
arXiv Detail & Related papers (2024-06-20T17:54:33Z) - Protecting Model Adaptation from Trojans in the Unlabeled Data [120.42853706967188]
This paper explores potential Trojan attacks on model adaptation, launched through well-designed poisoning of the target data.
We propose a plug-and-play method named DiffAdapt, which can be seamlessly integrated with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z) - Advancing SQL Injection Detection for High-Speed Data Centers: A Novel Approach Using Cascaded NLP [1.281563056333574]
We introduce a novel cascade SQLi detection method, blending classical and transformer-based NLP models.
Our method achieves a 99.86% detection accuracy with significantly lower computational demands, running 20 times faster than transformer-based models alone.
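A hedged sketch of the cascade idea: a cheap classical classifier handles confidently classified queries and only escalates uncertain ones to a heavier transformer. The threshold, toy data, and `transformer_score` stand-in are illustrative assumptions, not the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

queries = ["SELECT id FROM users WHERE name = 'bob'",
           "' OR 1=1 --", "admin'--", "SELECT * FROM products"]
labels = [0, 1, 1, 0]                                   # 1 = SQL injection

vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 3))
fast_clf = LogisticRegression().fit(vec.fit_transform(queries), labels)

def transformer_score(query: str) -> float:
    raise NotImplementedError("plug in a fine-tuned transformer classifier")

def detect(query: str, threshold: float = 0.9) -> bool:
    proba = fast_clf.predict_proba(vec.transform([query]))[0, 1]
    if proba > threshold or proba < 1 - threshold:
        return proba > 0.5                    # confident: stop at the cheap stage
    return transformer_score(query) > 0.5     # uncertain: escalate to the transformer
```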
arXiv Detail & Related papers (2023-12-20T14:09:13Z) - Intrusion Detection System with Machine Learning and Multiple Datasets [0.0]
This paper explores an enhanced intrusion detection system (IDS) that utilizes machine learning (ML).
The improved system is ultimately intended to help defend against attacks by malicious actors.
arXiv Detail & Related papers (2023-12-04T14:58:19Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
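A hedged reading of that intuition: compare a model's predictions on the same inputs before and after recent updates, and flag samples whose output distribution shifted abnormally. The KL-based score and outlier rule below are illustrative, not the paper's exact measure.

```python
import torch
import torch.nn.functional as F

def memorization_discrepancy(model_now, model_past, x):
    """Per-sample KL divergence between current and past predictions."""
    with torch.no_grad():
        log_p_past = F.log_softmax(model_past(x), dim=-1)
        p_now = F.softmax(model_now(x), dim=-1)
    # KL(now || past), summed over classes, yielding one score per sample.
    return (p_now * (p_now.clamp_min(1e-12).log() - log_p_past)).sum(dim=-1)

def flag_suspicious(scores: torch.Tensor, z: float = 3.0):
    return scores > scores.mean() + z * scores.std()   # simple outlier rule
```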
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers [61.48159785138462]
This paper aims to improve the performance of text-to-SQL parsing by exploring the intrinsic uncertainties in neural-network-based approaches (a method called SUN).
Extensive experiments on five benchmark datasets demonstrate that our method significantly outperforms competitors and achieves new state-of-the-art results.
arXiv Detail & Related papers (2022-09-14T06:27:51Z) - Robust and Transferable Anomaly Detection in Log Data using Pre-Trained
Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, as a major troubleshooting source of system information.
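A minimal sketch of the embedding approach, assuming the sentence-transformers library: encode log lines with a pre-trained language model and score anomalies by distance from the centroid of normal logs. The model name, toy logs, and scoring rule are illustrative choices rather than the paper's transfer setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")     # any pre-trained encoder works

normal_logs = ["connection established from 10.0.0.5",
               "user alice logged in",
               "health check passed"]
centroid = encoder.encode(normal_logs).mean(axis=0)   # profile of normal behaviour

def anomaly_score(line: str) -> float:
    # Larger distance from the normal centroid -> more anomalous.
    return float(np.linalg.norm(encoder.encode([line])[0] - centroid))

print(anomaly_score("kernel panic: attempted to kill init"))
```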
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)