Anomaly Detection of Test-Time Evasion Attacks using Class-conditional
Generative Adversarial Networks
- URL: http://arxiv.org/abs/2105.10101v1
- Date: Fri, 21 May 2021 02:51:58 GMT
- Title: Anomaly Detection of Test-Time Evasion Attacks using Class-conditional
Generative Adversarial Networks
- Authors: Hang Wang, David J. Miller, George Kesidis
- Abstract summary: We propose an attack detector based on class-conditional Generative Adversarial Networks (GANs).
We model the distribution of clean data conditioned on the predicted class label by an Auxiliary Classifier GAN (ACGAN).
Experiments on image classification datasets under different TTE attack methods show that our method outperforms state-of-the-art detection methods.
- Score: 21.023722317810805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been shown vulnerable to adversarial
(Test-Time Evasion (TTE)) attacks which, by making small changes to the input,
alter the DNN's decision. We propose an attack detector based on
class-conditional Generative Adversarial Networks (GANs). We model the
distribution of clean data conditioned on the predicted class label by an
Auxiliary Classifier GAN (ACGAN). Given a test sample and its predicted class,
three detection statistics are calculated using the ACGAN Generator and
Discriminator. Experiments on image classification datasets under different TTE
attack methods show that our method outperforms state-of-the-art detection
methods. We also investigate the effectiveness of anomaly detection using
different DNN layers (input features or internal-layer features) and
demonstrate that anomalies are harder to detect using features closer to the
DNN's output layer.
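The abstract's detection pipeline can be illustrated with a minimal sketch. The toy discriminator, generator, and the exact form of the three statistics below are assumptions of this sketch (the paper's pretrained ACGAN models are stood in by small numpy functions); it shows one plausible trio: a discriminator realness score, the auxiliary-classifier posterior on the predicted class, and a generator-based reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a pretrained ACGAN; the real architectures and the exact
# statistics in the paper are assumptions of this sketch.
NUM_CLASSES, DIM = 10, 16
W = rng.normal(size=(DIM, NUM_CLASSES))

def discriminator(x):
    """Toy ACGAN discriminator: a real/fake probability plus a class posterior."""
    p_real = 1.0 / (1.0 + np.exp(-x.mean()))   # real/fake head
    logits = x @ W                             # auxiliary-classifier head
    post = np.exp(logits - logits.max())
    return p_real, post / post.sum()

def generator(z, label):
    """Toy class-conditional generator."""
    return np.tanh(z + 0.1 * label)

def detection_stats(x, y_pred, n_latents=256):
    """Three anomaly scores for input x under its predicted class y_pred;
    higher values suggest a test-time evasion attack."""
    p_real, post = discriminator(x)
    s_real = -np.log(p_real + 1e-12)           # 1) discriminator "realness"
    s_post = -np.log(post[y_pred] + 1e-12)     # 2) auxiliary class posterior
    zs = rng.normal(size=(n_latents, x.size))  # 3) generator reconstruction error
    s_rec = np.min(np.sum((generator(zs, y_pred) - x) ** 2, axis=1))
    return s_real, s_post, s_rec

print(detection_stats(rng.normal(size=DIM), y_pred=3))
```

In practice the three statistics would be combined (e.g. thresholded jointly) to decide whether to reject a test sample.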
Related papers
- DOC-NAD: A Hybrid Deep One-class Classifier for Network Anomaly
Detection [0.0]
Machine Learning approaches have been used to enhance the detection capabilities of Network Intrusion Detection Systems (NIDSs)
Recent work has achieved near-perfect performance on binary- and multi-class network anomaly detection tasks.
This paper proposes a Deep One-Class (DOC) classifier for network intrusion detection by only training on benign network data samples.
arXiv Detail & Related papers (2022-12-15T00:08:05Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial
Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of our proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Training a Bidirectional GAN-based One-Class Classifier for Network
Intrusion Detection [8.158224495708978]
Existing generative adversarial networks (GANs) are primarily used for creating synthetic samples from real ones.
In our proposed method, we construct the trained encoder-discriminator as a one-class classifier based on a Bidirectional GAN (Bi-GAN).
Our experimental results illustrate that our proposed method is highly effective for network intrusion detection tasks.
arXiv Detail & Related papers (2022-02-02T23:51:11Z)
- AntidoteRT: Run-time Detection and Correction of Poison Attacks on
Neural Networks [18.461079157949698]
We study backdoor poisoning attacks against image classification networks.
We propose lightweight automated detection and correction techniques against poisoning attacks.
Our technique outperforms existing defenses such as NeuralCleanse and STRIP on popular benchmarks.
arXiv Detail & Related papers (2022-01-31T23:42:32Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D
Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
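The density-estimation idea behind DAAIN can be sketched briefly. DAAIN fits a normalizing flow to internal activations; the sketch below substitutes a diagonal Gaussian density estimator (an assumption, not the paper's model) and flags inputs whose activation vectors have low log-density under the clean-data fit.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_gaussian(activations):
    """Fit a diagonal Gaussian to clean-data activations."""
    return activations.mean(axis=0), activations.var(axis=0) + 1e-6

def log_density(a, mu, var):
    """Unnormalized-free diagonal-Gaussian log-density of one activation vector."""
    return -0.5 * np.sum((a - mu) ** 2 / var + np.log(2 * np.pi * var))

train_acts = rng.normal(size=(500, 32))        # stand-in clean activations
mu, var = fit_gaussian(train_acts)
# Threshold at the 5% quantile of the training log-densities.
threshold = np.quantile([log_density(a, mu, var) for a in train_acts], 0.05)

def is_anomalous(a):
    """Flag an activation vector whose density falls below the threshold."""
    return log_density(a, mu, var) < threshold

print(is_anomalous(rng.normal(size=32)))        # in-distribution sample
print(is_anomalous(rng.normal(size=32) + 5.0))  # shifted sample, likely flagged
```

A normalizing flow would replace the Gaussian to capture multi-modal activation distributions, but the monitor-and-threshold structure is the same.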
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses selective prediction, processing of model-layer outputs, and knowledge-transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves results comparable to state-of-the-art methods against the tested attacks in the white-box scenario, and better results in the black- and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data
Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection (fPAD).
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation-learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
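The gradient-norm statistic can be made concrete with a small sketch. This is not GraN's exact layer-wise scheme but an illustrative assumption: the norm of the cross-entropy gradient with respect to the input, computed in closed form for a toy linear softmax classifier, where large norms suggest adversarial or misclassified inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

C, D = 10, 16                      # toy classifier: C classes, D input dims
W = rng.normal(size=(C, D))

def gradient_norm(x):
    """Norm of the input gradient of the cross-entropy loss against the
    predicted label, for a linear softmax model (a GraN-style statistic)."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    y_pred = int(np.argmax(p))
    # For loss L = -log p[y]: dL/dlogits = p - e_y, so dL/dx = W^T (p - e_y).
    g = W.T @ (p - np.eye(C)[y_pred])
    return np.linalg.norm(g)

print(gradient_norm(rng.normal(size=D)))
```

For a deep network the same quantity would be obtained by backpropagation; thresholding the norm then yields the detector.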
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Contextual-Bandit Anomaly Detection for IoT Data in Distributed
Hierarchical Edge Computing [65.78881372074983]
IoT devices can hardly afford complex deep neural network (DNN) models, and offloading anomaly detection tasks to the cloud incurs long delays.
We propose and build a demo for an adaptive anomaly detection approach for distributed hierarchical edge computing (HEC) systems.
We show that our proposed approach significantly reduces detection delay without sacrificing accuracy, as compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-04-15T06:13:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.