Topological safeguard for evasion attack interpreting the neural
networks' behavior
- URL: http://arxiv.org/abs/2402.07480v2
- Date: Tue, 13 Feb 2024 09:09:41 GMT
- Title: Topological safeguard for evasion attack interpreting the neural
networks' behavior
- Authors: Xabier Echeberria-Barrio, Amaia Gil-Lerchundi, Iñigo Mendialdua,
Raul Orduna-Urrutia
- Abstract summary: In this work, a novel detector of evasion attacks is developed.
It focuses on the neuron activations produced by the model when an input sample is injected.
For this purpose, a substantial data preprocessing stage is required to feed all this information into the detector.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Deep Learning technology has been adopted in many
fields, bringing considerable advances to each of them, but also introducing new
cybersecurity threats into these solutions. The deployed models carry several
vulnerabilities associated with Deep Learning technology, which allow an
adversary to exploit the model, obtain private information, and even alter its
decision-making. Consequently, interest in studying these vulnerabilities and
attacks, and in designing defenses to avoid or mitigate them, is growing among
researchers. In particular, the widely known evasion attack has been extensively
analyzed, and several defenses against this threat can be found in the
literature. Since the presentation of the L-BFGS algorithm, this threat has
concerned the research community, which keeps developing new and ingenious
countermeasures because no single defense covers all known evasion algorithms.
In this work, a novel detector of evasion attacks is developed. It focuses on
the neuron activations produced by the model when an input sample is injected.
Moreover, it pays attention to the topology of the targeted deep learning model,
analyzing the activations according to how the neurons are connected. This
approach was chosen because the literature shows that the targeted model's
topology contains essential information about whether an evasion attack is
occurring. For this purpose, a substantial data preprocessing stage is required
to feed all this information into the detector, which uses Graph Convolutional
Network (GCN) technology. Thus, it captures the topology of the target model,
obtaining promising results and improving the outcomes reported in the
literature for similar defenses.
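To make the idea concrete, below is a minimal sketch, assuming PyTorch Geometric, of how per-neuron activations could be arranged as node features on a graph whose edges mirror the target model's wiring and then classified by a small GCN-based detector. The class name `EvasionDetectorGCN`, the toy 5-neuron topology, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (PyTorch Geometric assumed); NOT the authors' code.
# Node features hold the activation of each neuron of the target model for one
# injected sample; edges mirror the target model's topology, and a small GCN
# classifies the resulting activation graph as benign or adversarial.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class EvasionDetectorGCN(torch.nn.Module):
    """Hypothetical GCN-based detector operating on activation graphs."""

    def __init__(self, in_dim: int = 1, hidden_dim: int = 32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.readout = torch.nn.Linear(hidden_dim, 2)  # benign vs. adversarial

    def forward(self, x, edge_index, batch):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)  # one embedding per activation graph
        return self.readout(h)


# Toy target model: 3 neurons fully connected to 2 neurons (5 graph nodes).
activations = torch.tensor([[0.7], [0.0], [1.3], [0.4], [0.9]])  # per-neuron activations
edge_index = torch.tensor([[0, 0, 1, 1, 2, 2],   # source neurons (layer 1)
                           [3, 4, 3, 4, 3, 4]],  # target neurons (layer 2)
                          dtype=torch.long)
batch = torch.zeros(5, dtype=torch.long)  # all nodes belong to one graph

detector = EvasionDetectorGCN()
logits = detector(activations, edge_index, batch)  # shape: [1, 2]
```

Pooling over the activation graph lets the detector use which neurons are connected to which, i.e., the topological information the abstract argues is essential for spotting an evasion attack.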
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - Understanding Deep Learning defenses Against Adversarial Examples
Through Visualizations for Dynamic Risk Assessment [0.0]
Adversarial training, dimensionality reduction, and prediction similarity were selected as defenses against adversarial example attacks.
For each defense, the behavior of the original model is compared with that of the defended model, representing the target model as a graph in a visualization.
arXiv Detail & Related papers (2024-02-12T09:05:01Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Intrusion Detection: A Deep Learning Approach [0.0]
The paper proposes a novel architecture for intrusion detection that combines a Convolutional Neural Network (CNN) module, a Long Short-Term Memory (LSTM) module, and a Support Vector Machine (SVM) classification function.
The analysis is followed by a comparison of both conventional machine learning techniques and deep learning methodologies, which highlights areas that could be further explored.
arXiv Detail & Related papers (2023-06-13T07:58:40Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Adversarial Machine Learning In Network Intrusion Detection Domain: A
Systematic Review [0.0]
It has been found that deep learning models are vulnerable to data instances that can mislead the model into making incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z) - Searching for an Effective Defender: Benchmarking Defense against
Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D
Models [3.9962751777898955]
Deep learning algorithms have been recently targeted by attackers due to their vulnerability.
Non-continuous deep models are still not robust against adversarial attacks.
We propose a novel objective/loss function, which enforces the features to lie under a specified margin to facilitate their prediction.
arXiv Detail & Related papers (2020-12-08T20:51:43Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)