Real-Time Detectors for Digital and Physical Adversarial Inputs to
Perception Systems
- URL: http://arxiv.org/abs/2002.09792v2
- Date: Thu, 21 Apr 2022 22:20:39 GMT
- Title: Real-Time Detectors for Digital and Physical Adversarial Inputs to
Perception Systems
- Authors: Yiannis Kantaros, Taylor Carpenter, Kaustubh Sridhar, Yahan Yang,
Insup Lee, James Weimer
- Abstract summary: Deep neural network (DNN) models have proven to be vulnerable to adversarial digital and physical attacks.
We propose a novel attack- and dataset-agnostic, real-time detector for both types of adversarial inputs to DNN-based perception systems.
In particular, the proposed detector relies on the observation that adversarial images are sensitive to certain label-invariant transformations.
- Score: 11.752184033538636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network (DNN) models have proven to be vulnerable to adversarial
digital and physical attacks. In this paper, we propose VisionGuard, a novel
attack- and dataset-agnostic, real-time detector for both types of adversarial
inputs to DNN-based perception systems. In particular, the detector relies on
the observation that adversarial images are sensitive to certain
label-invariant transformations. Specifically, to determine if an image has
been adversarially manipulated, the proposed detector checks if the output of
the target classifier on a given input image changes significantly after
feeding it a transformed version of the image under investigation. Moreover, we
show that the proposed detector is computationally-light both at runtime and
design-time which makes it suitable for real-time applications that may also
involve large-scale image domains. To highlight this, we demonstrate the
efficiency of the proposed detector on ImageNet, a task that is computationally
challenging for the majority of relevant defenses, and on physically attacked
traffic signs that may be encountered in real-time autonomy applications.
Finally, we propose the first adversarial dataset, called AdvNet, which
includes both clean and physically attacked traffic sign images. Our extensive
comparative experiments on the MNIST, CIFAR10, ImageNet, and AdvNet datasets
show that VisionGuard outperforms existing defenses in terms of scalability and
detection performance. We have also evaluated the proposed detector on
field-test data collected from a moving vehicle whose perception DNN was under
attack.
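The core detection check described above can be sketched as follows. Note that the specific divergence measure (KL divergence between softmax outputs) and the transformation (any label-invariant operation, e.g. lossy compression) used here are illustrative assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def is_adversarial(model, image, transform, threshold=0.01):
    """Flag `image` as adversarial if the classifier's softmax output
    changes significantly under a label-invariant transformation.

    model:     callable mapping an image to a softmax probability vector
    transform: label-invariant image transformation (assumed here)
    threshold: divergence cutoff, tuned offline on clean validation data
    """
    p = np.asarray(model(image), dtype=float)
    q = np.asarray(model(transform(image)), dtype=float)
    eps = 1e-12  # numerical floor to keep the logarithm finite
    kl = float(np.sum(p * np.log((p + eps) / (q + eps))))  # KL(p || q)
    return kl > threshold
```

Because only two forward passes and one divergence computation are needed per image, a check of this shape stays cheap at runtime, which is consistent with the real-time claim above.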
Related papers
- PrObeD: Proactive Object Detection Wrapper [15.231600709902127]
We propose PrObeD, a wrapper based on proactive schemes that enhances the performance of object detectors by learning a signal.
PrObeD consists of an encoder-decoder architecture, where the encoder network generates image-dependent signal templates to encrypt the input images.
Our experiments on MS-COCO, CAMO, COD$10$K, and NC$4$K datasets show improvement over different detectors after applying PrObeD.
arXiv Detail & Related papers (2023-10-28T19:25:01Z)
- Investigating the Robustness and Properties of Detection Transformers (DETR) Toward Difficult Images [1.5727605363545245]
Transformer-based object detectors (DETR) have shown strong performance across machine vision tasks.
The critical issue to be addressed is how this model architecture handles different image nuisances.
We studied this issue by measuring the performance of DETR in different experiments and by benchmarking the network.
arXiv Detail & Related papers (2023-10-12T23:38:52Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) measures convergence time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Detection of Adversarial Physical Attacks in Time-Series Image Data [12.923271427789267]
We propose VisionGuard* (VG*), which couples the single-image detector VisionGuard (VG) with majority-vote methods, to detect adversarial physical attacks in time-series image data.
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
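Coupling a single-image detector with a majority vote over consecutive frames, as VG* does, can be sketched as follows; the strict-majority rule and the notion of a fixed frame window are assumptions for illustration.

```python
def window_is_attacked(frame_flags):
    """Majority vote over per-frame detector decisions.

    frame_flags: list of booleans, one per frame in the window,
    produced by a single-image detector such as VisionGuard.
    Returns True when strictly more than half the frames are flagged,
    so an exact tie resolves to "clean" in this sketch.
    """
    return sum(frame_flags) * 2 > len(frame_flags)
```

Aggregating over frames of the same scene suppresses spurious single-frame detections, which matters when images are collected continuously by onboard sensors.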
arXiv Detail & Related papers (2023-04-27T02:08:13Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to accurate and efficient models for various computer vision applications.
Learning highly accurate models, however, relies on the availability of datasets with large numbers of annotated images.
As a result, model performance drops drastically when such models are evaluated on label-scarce datasets with visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- D-Unet: A Dual-encoder U-Net for Image Splicing Forgery Detection and Localization [108.8592577019391]
Image splicing forgery detection is a global binary classification task that distinguishes the tampered and non-tampered regions by image fingerprints.
We propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder.
In an experimental comparison study of D-Unet and state-of-the-art methods, D-Unet outperformed the other methods in image-level and pixel-level detection.
arXiv Detail & Related papers (2020-12-03T10:54:02Z)
- Background Adaptive Faster R-CNN for Semi-Supervised Convolutional Object Detection of Threats in X-Ray Images [64.39996451133268]
We present a semi-supervised approach for threat recognition which we call Background Adaptive Faster R-CNN.
This approach is a training method for two-stage object detectors which uses Domain Adaptation methods from the field of deep learning.
Two domain discriminators, one for discriminating object proposals and one for image features, are adversarially trained to prevent encoding domain-specific information.
This can reduce threat-detection false-alarm rates by matching the statistics of features extracted from hand-collected backgrounds to real-world data.
arXiv Detail & Related papers (2020-10-02T21:05:13Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Efficient detection of adversarial images [2.6249027950824506]
Some or all pixel values of an image are modified by an external attacker, so that the change is almost invisible to the human eye.
This paper first proposes a novel pre-processing technique that facilitates the detection of such modified images.
An adaptive version of this algorithm is also proposed, in which a random number of perturbations is chosen adaptively.
arXiv Detail & Related papers (2020-07-09T05:35:49Z)
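A label-stability check under random perturbations, in the spirit of the adaptive algorithm above, might look like the following; the noise range, trial count, and flip-ratio threshold are all illustrative assumptions rather than the paper's actual parameters.

```python
import random

def unstable_under_noise(classify, image, n_trials=10, flip_ratio=0.3, rng=None):
    """Flag an image whose predicted label flips frequently under small
    random pixel perturbations (hypothetical sketch, illustrative parameters).

    classify: callable mapping a list of pixel values in [0, 255] to a label
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    base = classify(image)
    flips = sum(
        # perturb each pixel by small uniform integer noise, clamped to [0, 255]
        classify([min(255, max(0, p + rng.randint(-8, 8))) for p in image]) != base
        for _ in range(n_trials)
    )
    return flips / n_trials > flip_ratio
```

The intuition matches the section's opening observation: clean images tend to keep their label under such small changes, while adversarially manipulated ones often do not.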
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.