Detection of Adversarial Physical Attacks in Time-Series Image Data
- URL: http://arxiv.org/abs/2304.13919v1
- Date: Thu, 27 Apr 2023 02:08:13 GMT
- Title: Detection of Adversarial Physical Attacks in Time-Series Image Data
- Authors: Ramneet Kaur, Yiannis Kantaros, Wenwen Si, James Weimer, Insup Lee
- Abstract summary: We propose VisionGuard* (VG*), which couples VG with majority-vote methods, to detect adversarial physical attacks in time-series image data.
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
- Score: 12.923271427789267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNN) have become a common sensing modality in
autonomous systems as they allow for semantically perceiving the ambient
environment given input images. Nevertheless, DNN models have proven to be
vulnerable to adversarial digital and physical attacks. To mitigate this issue,
several detection frameworks have been proposed to detect whether a single
input image has been manipulated by adversarial digital noise or not. In our
prior work, we proposed a real-time detector, called VisionGuard (VG), for
adversarial physical attacks against single input images to DNN models.
Building upon that work, we propose VisionGuard* (VG*), which couples VG with
majority-vote methods, to detect adversarial physical attacks in time-series
image data, e.g., videos. This is motivated by autonomous systems applications
where images are collected over time using onboard sensors for decision-making
purposes. We emphasize that majority-vote mechanisms are quite common in
autonomous system applications (among many other applications), e.g., in
autonomous driving stacks for object detection. In this paper, we investigate,
both theoretically and experimentally, how this widely used mechanism can be
leveraged to enhance the performance of adversarial detectors. We have
evaluated VG* on videos of both clean and physically attacked traffic signs
generated by a state-of-the-art robust physical attack. We provide extensive
comparative experiments against detectors that have been designed originally
for out-of-distribution data and digitally attacked images.
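
To make the mechanism concrete, here is a minimal sketch of coupling a single-image detector with a sliding-window majority vote over video frames. The per-frame detector interface (`frame_is_adversarial`), the window size, and the strict-majority rule are illustrative assumptions, not the exact construction used by VG*.

```python
from collections import deque
from typing import Callable, Iterable, Iterator

import numpy as np


def majority_vote_alarms(
    frames: Iterable[np.ndarray],
    frame_is_adversarial: Callable[[np.ndarray], bool],
    window_size: int = 5,
) -> Iterator[bool]:
    """Aggregate per-frame detector decisions with a sliding majority vote.

    `frame_is_adversarial` stands in for a single-image detector such as VG;
    the window size and the strict-majority threshold are illustrative choices.
    """
    window = deque(maxlen=window_size)  # most recent per-frame decisions
    for frame in frames:
        window.append(frame_is_adversarial(frame))
        # Raise an alarm when more than half of the recent decisions
        # flag the stream as adversarial.
        yield sum(window) > len(window) / 2
```

A larger window or a stricter quorum suppresses spurious single-frame detections at the cost of reacting more slowly to an attack; the paper investigates, both theoretically and experimentally, how such vote-based aggregation affects detector performance.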
Related papers
- Exploring the Adversarial Robustness of CLIP for AI-generated Image Detection [9.516391314161154]
We study the adversarial robustness of AI-generated image detectors, focusing on Contrastive Language-Image Pretraining (CLIP)-based methods.
CLIP-based detectors are found to be vulnerable to white-box attacks just like CNN-based detectors.
This analysis provides new insights into the properties of forensic detectors that can help to develop more effective strategies.
arXiv Detail & Related papers (2024-07-28T18:20:08Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations is widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles [0.0]
Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show our method's capability to generate the image attack in real-time while monitoring when the attacker is proficient given state estimates.
arXiv Detail & Related papers (2022-12-28T02:36:58Z)
- Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation [0.4297070083645049]
We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder-based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
arXiv Detail & Related papers (2022-11-07T12:56:14Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Exploring Frequency Adversarial Attacks for Face Forgery Detection [59.10415109589605]
We propose a frequency adversarial attack method against face forgery detectors.
Inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains.
arXiv Detail & Related papers (2022-03-29T15:34:13Z)
- Signal Injection Attacks against CCD Image Sensors [20.892354746682223]
We show how electromagnetic emanation can be used to manipulate the image information captured by a CCD image sensor.
Our results indicate that the injected distortion can disrupt automated vision-based intelligent systems.
arXiv Detail & Related papers (2021-08-19T19:05:28Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems [11.752184033538636]
Deep neural network (DNN) models have proven to be vulnerable to adversarial digital and physical attacks.
We propose a novel attack- and dataset-agnostic and real-time detector for both types of adversarial inputs to DNN-based perception systems.
In particular, the proposed detector relies on the observation that adversarial images are sensitive to certain label-invariant transformations (see the sketch after this list).
arXiv Detail & Related papers (2020-02-23T00:03:57Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The overall output of the system, given by the aggregation of the outputs of the individual networks, can be tuned by the user to trade off false positives and false negatives.
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
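
The prior-work detector summarized above (Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems) is described as relying on the sensitivity of adversarial images to label-invariant transformations. A minimal sketch of that idea follows; the choice of JPEG re-compression as the transformation, the KL-divergence score, and the threshold value are illustrative assumptions rather than the paper's exact construction.

```python
import io

import numpy as np
from PIL import Image


def jpeg_recompress(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Apply a label-invariant transformation (here, JPEG re-compression).

    Assumes `image` is an HxWx3 uint8 array.
    """
    buffer = io.BytesIO()
    Image.fromarray(image).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer))


def softmax(logits: np.ndarray) -> np.ndarray:
    shifted = logits - logits.max()
    exp = np.exp(shifted)
    return exp / exp.sum()


def frame_is_adversarial(model, image: np.ndarray, threshold: float = 0.05) -> bool:
    """Flag an image whose prediction shifts noticeably under the transformation.

    `model` is assumed to map an image to class logits; the KL-divergence
    score and the threshold value are illustrative, not taken from the paper.
    """
    p = softmax(model(image))
    q = softmax(model(jpeg_recompress(image)))
    kl = float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
    return kl > threshold
```

A per-frame check of this kind can serve as the `frame_is_adversarial` callable in the majority-vote sketch earlier in this page, which is the coupling the abstract describes for VG*.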