GhostImage: Remote Perception Attacks against Camera-based Image
Classification Systems
- URL: http://arxiv.org/abs/2001.07792v3
- Date: Tue, 23 Jun 2020 20:13:52 GMT
- Title: GhostImage: Remote Perception Attacks against Camera-based Image
Classification Systems
- Authors: Yanmao Man, Ming Li, Ryan Gerdes
- Abstract summary: In vision-based object classification systems, imaging sensors perceive the environment, and machine learning is then used to detect and classify objects for decision-making purposes.
We demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object.
- Score: 6.637193297008101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In vision-based object classification systems, imaging sensors perceive the
environment, and machine learning is then used to detect and classify objects
for decision-making purposes; e.g., to maneuver an automated vehicle around an
obstacle or to raise an alarm to indicate the presence of an intruder in
surveillance settings. In this work we demonstrate how the perception domain
can be remotely and unobtrusively exploited to enable an attacker to create
spurious objects or alter an existing object. An automated system relying on a
detection/classification framework subject to our attack could be made to
undertake actions with catastrophic results due to attacker-induced
misperception.
We focus on camera-based systems and show that it is possible to remotely
project adversarial patterns into camera systems by exploiting two common
effects in optical imaging systems, viz., lens flare/ghost effects and
auto-exposure control. To improve the robustness of the attack to channel
effects, we generate optimal patterns by integrating adversarial machine
learning techniques with a trained end-to-end channel model. We experimentally
demonstrate our attacks using a low-cost projector, on three different image
datasets, in indoor and outdoor environments, and with three different cameras.
Experimental results show that, depending on the projector-camera distance,
attack success rates can reach as high as 100%, even under targeted conditions.
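The attack described in the abstract is, in essence, an end-to-end optimization: a candidate projection pattern is passed through a model of the projector-to-camera channel (capturing lens flare/ghost effects and auto-exposure control), the simulated captured image is fed to the target classifier, and the pattern is updated toward a chosen misclassification. The following is a minimal sketch of that idea, assuming a differentiable channel model `channel(pattern, scene)` and a pretrained `classifier`; the names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def optimize_projection_pattern(channel, classifier, scene, target_class,
                                steps=300, lr=0.05):
    """Sketch of targeted pattern optimization through an end-to-end
    channel model (illustrative; not the GhostImage authors' code).

    channel(pattern, scene) -> simulated captured image (differentiable),
                               modeling flare/ghost effects and auto-exposure.
    classifier(images)      -> class logits for a batch of images.
    scene                   -> benign background image tensor of shape (3, H, W).
    """
    # Optimize in an unconstrained space; a sigmoid keeps projected
    # intensities within [0, 1].
    raw = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([raw], lr=lr)

    for _ in range(steps):
        captured = channel(torch.sigmoid(raw), scene)   # apply channel effects
        logits = classifier(captured.unsqueeze(0))      # (1, num_classes)
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(raw).detach()  # pattern to feed the projector
```

In the paper's setting the channel model itself is trained from projector-camera measurements, which is what makes the optimized pattern robust to real-world channel effects.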
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- Understanding Impacts of Electromagnetic Signal Injection Attacks on Object Detection [33.819549876354515]
Images captured by image sensors may be affected by different factors in real applications, including cyber-physical attacks.
This paper quantifies and analyzes the impacts of such attacks on object detection models in practice.
arXiv Detail & Related papers (2024-07-23T09:22:06Z)
- Physical Adversarial Examples for Multi-Camera Systems [2.3759432635713895]
We evaluate the robustness of multi-camera setups against physical adversarial examples.
Transcender-MC, the proposed attack method, is 11% more effective at attacking multi-camera setups than state-of-the-art methods.
arXiv Detail & Related papers (2023-11-14T21:04:49Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- On the Adversarial Robustness of Camera-based 3D Object Detection [21.091078268929667]
We investigate the robustness of leading camera-based 3D object detection approaches under various adversarial conditions.
We find that bird's-eye-view-based representations exhibit stronger robustness against localization attacks.
We also find that depth-estimation-free approaches have the potential to show stronger robustness, and that incorporating multi-frame benign inputs can effectively mitigate adversarial attacks.
arXiv Detail & Related papers (2023-01-25T18:59:15Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Coordinate-Aligned Multi-Camera Collaboration for Active Multi-Object Tracking [114.16306938870055]
We propose a coordinate-aligned multi-camera collaboration system for AMOT.
In our approach, we regard each camera as an agent and address AMOT with a multi-agent reinforcement learning solution.
Our system achieves a coverage of 71.88%, outperforming the baseline method by 8.9%.
arXiv Detail & Related papers (2022-02-22T13:28:40Z)
- They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors [21.5487020124302]
A camera's electronic rolling shutter can be exploited to inject fine-grained image disruptions.
We show how an adversary can modulate a laser to hide up to 75% of objects perceived by state-of-the-art detectors.
Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems.
arXiv Detail & Related papers (2021-01-25T11:14:25Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can significantly boost robustness to such attacks (a minimal sketch of this defense appears after the list below).
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Unadversarial Examples: Designing Objects for Robust Vision [100.4627585672469]
We develop a framework that exploits the sensitivity of modern machine learning algorithms to input perturbations in order to design "robust objects".
We demonstrate the efficacy of the framework on a wide variety of vision-based tasks ranging from standard benchmarks to (in-simulation) robotics.
arXiv Detail & Related papers (2020-12-22T18:26:07Z)
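As referenced above, the multi-sensor perception entry names adversarial training with feature denoising as its main defense. Below is a minimal, generic sketch of that combination, assuming a backbone that exposes intermediate feature maps; the fixed mean-filter denoising block, PGD parameters, and all names are illustrative assumptions, not the cited papers' implementations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent attack (illustrative sketch)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()           # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project onto eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                 # keep valid pixel range
    return x_adv.detach()

class DenoisedClassifier(nn.Module):
    """Backbone + a simple residual feature-denoising block + head.
    Published variants use non-local means or learned filters; a fixed
    depthwise 3x3 mean filter is used here purely for illustration."""

    def __init__(self, backbone, head, channels):
        super().__init__()
        self.backbone = backbone
        self.denoise = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels, bias=False)
        nn.init.constant_(self.denoise.weight, 1.0 / 9.0)
        self.head = head

    def forward(self, x):
        feats = self.backbone(x)
        feats = feats + self.denoise(feats)           # residual denoising
        return self.head(feats)

def adversarial_training_step(model, optimizer, x, y):
    """One adversarial-training step: fit the model on PGD examples."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The intuition is that the denoising block suppresses the high-frequency feature perturbations that adversarial inputs induce, while training on PGD examples teaches the rest of the network to tolerate whatever remains.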
This list is automatically generated from the titles and abstracts of the papers on this site.