AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision
Systems
- URL: http://arxiv.org/abs/2303.01338v2
- Date: Thu, 5 Oct 2023 11:55:37 GMT
- Title: AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision
Systems
- Authors: Amira Guesmi, Muhammad Abdullah Hanif, and Muhammad Shafique
- Abstract summary: Studies have shown that "printed adversarial attacks", known as physical adversarial attacks, can successfully mislead perception models.
We propose a camera-based adversarial attack capable of fooling camera-based perception systems over all objects of the same class.
We achieve a drop in average model accuracy of more than $45\%$ on VGG19 for ImageNet and $40\%$ on Resnet34 for Caltech-101.
- Score: 5.476763798688862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based perception modules are increasingly deployed in many
applications, especially autonomous vehicles and intelligent robots. These
modules are being used to acquire information about the surroundings and
identify obstacles. Hence, accurate detection and classification are essential
to reach appropriate decisions and take appropriate and safe actions at all
times. Current studies have demonstrated that "printed adversarial attacks",
known as physical adversarial attacks, can successfully mislead perception
models such as object detectors and image classifiers. However, most of these
physical attacks rely on noticeable, eye-catching perturbation patterns, making
them identifiable by the human eye or detectable during test drives. In this
paper, we propose a camera-based inconspicuous adversarial
attack (\textbf{AdvRain}) capable of fooling camera-based perception systems
over all objects of the same class. Unlike mask-based fake-weather attacks that
require access to the underlying computing hardware or image memory, our attack
is based on emulating the effects of a natural weather condition (i.e.,
raindrops) that can be printed on a translucent sticker, which is externally
placed over the lens of a camera. To accomplish this, we use an iterative
random-search procedure to identify critical raindrop positions that make the
resulting transformation adversarial for a target classifier. Our
transformation blurs predefined parts of
the captured image corresponding to the areas covered by the raindrop. We
achieve a drop in average model accuracy of more than $45\%$ and $40\%$ on
VGG19 for ImageNet and Resnet34 for Caltech-101, respectively, using only $20$
raindrops.
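To make this concrete, the following is a minimal sketch of the kind of raindrop emulation and random-search loop the abstract describes, assuming each raindrop is approximated by Gaussian-blurring a circular region of the captured image; the `predict_proba` interface, drop radius, and search budget are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the authors' code): emulate raindrops by blurring
# circular regions of the image and use random search to find drop positions
# that lower the classifier's confidence in the true class.
import cv2
import numpy as np


def apply_raindrops(image, centers, radius=15, blur_ksize=21):
    """Blur circular patches of `image` at the given centers (raindrop emulation)."""
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for cx, cy in centers:
        cv2.circle(mask, (int(cx), int(cy)), radius, 255, thickness=-1)
    out = image.copy()
    out[mask == 255] = blurred[mask == 255]
    return out


def random_search_raindrops(image, true_label, predict_proba,
                            n_drops=20, n_iters=200, rng=None):
    """Iteratively perturb raindrop positions, keeping any change that further
    reduces the probability the (hypothetical) `predict_proba` classifier
    assigns to the true class."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    best = rng.uniform([0, 0], [w, h], size=(n_drops, 2))
    best_score = predict_proba(apply_raindrops(image, best))[true_label]
    for _ in range(n_iters):
        candidate = best.copy()
        candidate[rng.integers(n_drops)] = rng.uniform([0, 0], [w, h])  # move one drop
        score = predict_proba(apply_raindrops(image, candidate))[true_label]
        if score < best_score:  # transformation became more adversarial
            best, best_score = candidate, score
    return best, best_score
```

Run with $20$ drops, the loop mirrors the abstract's setting: the blur stands in for the optical distortion of a printed translucent sticker, and only the drop positions are searched.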
Related papers
- Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection [33.819549876354515]
This paper quantifies and analyzes the impacts of cyber-physical attacks on object detection models in practice.
Images captured by image sensors may be affected by different factors in real applications, including cyber-physical attacks.
arXiv Detail & Related papers (2024-07-23T09:22:06Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE)
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model architectures, trigger patterns, and image benignity.
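As a rough, image-space illustration of that test-time check (the actual BDMAE operates on MAE token representations), the sketch below combines structural similarity with label consistency; `mae_restore` and `classify` are hypothetical interfaces.

```python
# Simplified sketch, not the paper's implementation: compare a test image with
# masked-autoencoder restorations and flag it when the restorations differ
# structurally or change the predicted label.
from skimage.metrics import structural_similarity as ssim


def suspicion_score(image, mae_restore, classify, n_rounds=5):
    """Return (mean structural dissimilarity, label-flip rate) for a uint8 image."""
    original_label = classify(image)
    dissimilarity, label_flips = 0.0, 0
    for _ in range(n_rounds):
        restored = mae_restore(image)  # restoration after random masking
        dissimilarity += 1.0 - ssim(image, restored, channel_axis=-1)
        label_flips += int(classify(restored) != original_label)
    return dissimilarity / n_rounds, label_flips / n_rounds
```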
arXiv Detail & Related papers (2023-03-27T19:23:33Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks.
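A hedged sketch of that mechanism is shown below: a small fraction of training images is rotated and relabeled to an attacker-chosen class, so the rotation itself becomes the trigger; the poisoning rate, angle, and dataset format are assumptions for illustration only.

```python
# Illustrative rotation-trigger poisoning (assumed setup, not the paper's exact one):
# rotated training images are relabeled so the model learns to associate the
# rotation itself with the attacker's target class.
import random

import torchvision.transforms.functional as TF


def poison_with_rotation(dataset, target_class, angle=90.0, poison_rate=0.05):
    """`dataset` is assumed to be an iterable of (PIL image, label) pairs."""
    poisoned = []
    for image, label in dataset:
        if random.random() < poison_rate:
            image = TF.rotate(image, angle)  # the rotation acts as the trigger
            label = target_class             # relabel to the target class
        poisoned.append((image, label))
    return poisoned
```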
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond [85.06231315901505]
Rain removal aims to remove rain streaks from images/videos and reduce the disruptive effects caused by rain.
This paper makes the first attempt to conduct a comprehensive study on the robustness of deep learning-based rain removal methods against adversarial attacks.
arXiv Detail & Related papers (2022-03-31T10:22:24Z)
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
- Robust SleepNets [7.23389716633927]
In this study, we investigate eye closedness detection to prevent vehicle accidents related to driver disengagements and driver drowsiness.
We develop two models to detect eye closedness: a first model on eye images and a second model on face images.
We adversarially attack the models with the Projected Gradient Descent, Fast Gradient Sign, and DeepFool methods and report the adversarial success rate.
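For reference, a generic $L_\infty$ PGD attack of the kind named there can be sketched as follows; this is the standard formulation in PyTorch, not the paper's specific configuration or hyperparameters.

```python
# Standard L-infinity PGD sketch (generic, not the paper's exact settings):
# repeatedly step along the sign of the loss gradient and project the result
# back into the epsilon-ball around the original input.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```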
arXiv Detail & Related papers (2021-02-24T20:48:13Z)
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improve the attack by generating patches that consider the semantic distance between the target object and its classification.
arXiv Detail & Related papers (2020-10-25T08:55:40Z)
- GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems [6.637193297008101]
In vision-based object classification systems, imaging sensors perceive the environment, and machine learning is then used to detect and classify objects for decision-making purposes.
We demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object.
arXiv Detail & Related papers (2020-01-21T21:58:45Z)