Advancing Security in AI Systems: A Novel Approach to Detecting
Backdoors in Deep Neural Networks
- URL: http://arxiv.org/abs/2403.08208v1
- Date: Wed, 13 Mar 2024 03:10:11 GMT
- Authors: Khondoker Murad Hossain, Tim Oates
- Abstract summary: The growing reliance on deep neural networks (DNNs) and cloud services for data processing opens the door to backdoors that can be exploited by malicious actors.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
- Score: 3.489779105594534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the rapidly evolving landscape of communication and network security, the
increasing reliance on deep neural networks (DNNs) and cloud services for data
processing presents a significant vulnerability: the potential for backdoors
that can be exploited by malicious actors. Our approach leverages advanced
tensor decomposition algorithms, namely Independent Vector Analysis (IVA), Multiset
Canonical Correlation Analysis (MCCA), and Parallel Factor Analysis (PARAFAC2)
to meticulously analyze the weights of pre-trained DNNs and distinguish between
backdoored and clean models effectively. The key strengths of our method lie in
its domain independence, adaptability to various network architectures, and
ability to operate without access to the training data of the scrutinized
models. This not only ensures versatility across different application
scenarios but also addresses the challenge of identifying backdoors without
prior knowledge of the specific triggers employed to alter network behavior. We
have applied our detection pipeline to three distinct computer vision datasets,
encompassing both image classification and object detection tasks. The results
demonstrate a marked improvement in both accuracy and efficiency over existing
backdoor detection methods. This advancement enhances the security of deep
learning and AI in networked systems, providing essential cybersecurity against
evolving threats in emerging technologies.
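The pipeline's core idea, factorizing the weights of a pre-trained model into a fixed-length feature vector and then separating backdoored from clean models, can be sketched in a few lines of NumPy. The example below is illustrative only: it substitutes a plain SVD spectrum for the paper's IVA/MCCA/PARAFAC2 decompositions, and plants a synthetic rank-1 weight perturbation as a crude stand-in for a real backdoor's weight signature.

```python
import numpy as np

def weight_features(weight_matrices, k=3):
    """Build a fixed-length feature vector from a model's weight matrices by
    keeping the top-k normalized singular values of each layer (a simplified
    stand-in for the tensor decompositions used in the paper)."""
    feats = []
    for W in weight_matrices:
        s = np.linalg.svd(W, compute_uv=False)
        s = s / (s.sum() + 1e-12)  # normalize the spectrum per layer
        feats.extend(s[:k])
    return np.asarray(feats)

# Two toy "models": clean random weights vs. the same weights with a planted
# low-rank perturbation (a crude proxy for a backdoor's weight signature).
rng = np.random.default_rng(0)
clean = [rng.standard_normal((32, 32)) for _ in range(2)]
trigger = 5.0 * rng.standard_normal((32, 1)) @ rng.standard_normal((1, 32))
backdoored = [W + trigger for W in clean]

f_clean = weight_features(clean)
f_bad = weight_features(backdoored)
# The rank-1 perturbation concentrates spectral mass in the top singular
# value, so the two feature vectors separate; a downstream classifier
# (as in the paper) would learn this boundary from many models.
```

In the actual method the features come from joint decompositions across layers rather than per-layer spectra, and the clean/backdoored decision is made by a trained classifier rather than a single threshold.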
Related papers
- Enhanced Convolution Neural Network with Optimized Pooling and Hyperparameter Tuning for Network Intrusion Detection [0.0]
We propose an Enhanced Convolutional Neural Network (EnCNN) for Network Intrusion Detection Systems (NIDS).
We compare EnCNN with various machine learning algorithms, including Logistic Regression, Decision Trees, Support Vector Machines (SVM), and ensemble methods like Random Forest, AdaBoost, and Voting Ensemble.
The results show that EnCNN significantly improves detection accuracy, with a notable 10% increase over state-of-the-art approaches.
arXiv Detail & Related papers (2024-09-27T11:20:20Z) - Deep Learning Algorithms Used in Intrusion Detection Systems -- A Review [0.0]
This review paper studies recent advancements in the application of deep learning techniques, including CNN, Recurrent Neural Networks (RNN), Deep Belief Networks (DBN), Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), autoencoders (AE), Multi-Layer Perceptrons (MLP), Self-Normalizing Networks (SNN) and hybrid models, within network intrusion detection systems.
arXiv Detail & Related papers (2024-02-26T20:57:35Z) - Generative AI for Secure Physical Layer Communications: A Survey [80.0638227807621]
Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content.
In this paper, we offer an extensive survey on the various applications of GAI in enhancing security within the physical layer of communication networks.
We delve into the roles of GAI in addressing challenges of physical layer security, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
arXiv Detail & Related papers (2024-02-21T06:22:41Z) - X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves high accuracy with 99.47% in threat detection and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z) - Backdoor Attack Detection in Computer Vision by Applying Matrix
Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from pre-trained DNN's weights.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
arXiv Detail & Related papers (2022-12-15T20:20:18Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
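Interval bound propagation, the technique QA-IBP builds on, can be sketched in a few lines. The NumPy example below is a generic illustration, not the paper's quantization-aware variant: it pushes elementwise input bounds through one linear layer and a ReLU, yielding sound output bounds for any perturbation within the input box.

```python
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Propagate elementwise bounds [lo, hi] through x -> W @ x + b.
    Positive weights pass lower bounds to lower bounds; negative
    weights swap them."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    lower = W_pos @ lo + W_neg @ hi + b
    upper = W_pos @ hi + W_neg @ lo + b
    return lower, upper

def ibp_relu(lo, hi):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny network: one linear layer + ReLU, input perturbed by eps in L-inf.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)
x = np.array([0.5, -0.2])
eps = 0.1
lo, hi = ibp_linear(W, b, x - eps, x + eps)
lo, hi = ibp_relu(lo, hi)
# Soundness: the output for the unperturbed input lies inside [lo, hi].
y = np.maximum(W @ x + b, 0.0)
assert np.all(lo <= y + 1e-9) and np.all(y <= hi + 1e-9)
```

Certified training methods like QA-IBP additionally optimize the network so these bounds stay tight enough to prove robustness, and account for the rounding introduced by quantization.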
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - An anomaly detection approach for backdoored neural networks: face
recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Noise-Response Analysis of Deep Neural Networks Quantifies Robustness
and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can contain 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches (seconds versus
arXiv Detail & Related papers (2020-07-31T23:52:58Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer networks security.
This paper proposes a new approach based on a Deep Neural Network and a Support Vector Machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all listed papers) and is not responsible for any consequences.