Physical Adversarial Examples for Multi-Camera Systems
- URL: http://arxiv.org/abs/2311.08539v1
- Date: Tue, 14 Nov 2023 21:04:49 GMT
- Title: Physical Adversarial Examples for Multi-Camera Systems
- Authors: Ana Răduțoiu and Jan-Philipp Schulze and Philip Sperl and Konstantin Böttinger
- Abstract summary: We evaluate the robustness of multi-camera setups against physical adversarial examples.
Transcender-MC is 11% more effective in successfully attacking multi-camera setups than state-of-the-art methods.
- Score: 2.3759432635713895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks build the foundation of several intelligent systems, which,
however, are known to be easily fooled by adversarial examples. Recent advances
made these attacks possible even in air-gapped scenarios, where the autonomous
system observes its surroundings by, e.g., a camera. We extend these ideas in
our research and evaluate the robustness of multi-camera setups against such
physical adversarial examples. This scenario becomes ever more important with
the rise in popularity of autonomous vehicles, which fuse the information of
several cameras for their driving decision. While we find that multi-camera
setups provide some robustness towards past attack methods, we see that this
advantage diminishes when optimizing on multiple perspectives at once. We propose
a novel attack method that we call Transcender-MC, where we incorporate online
3D renderings and perspective projections in the training process. Moreover, we
motivate that certain data augmentation techniques can facilitate the
generation of successful adversarial examples even further. Transcender-MC is
11% more effective in successfully attacking multi-camera setups than
state-of-the-art methods. Our findings offer valuable insights regarding the
resilience of object detection in a setup with multiple cameras and motivate
the need to develop adequate defense mechanisms against such attacks.
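The abstract does not include the authors' implementation, but the core idea of optimizing one patch jointly over several camera perspectives can be sketched as follows. This is an illustrative expectation-over-transformation-style loop, assuming a PyTorch classifier that returns logits; the random affine warps stand in for the paper's online 3D renderings and perspective projections, and the fixed paste location and brightness jitter are simplifications, not the Transcender-MC method.

```python
# Illustrative multi-view patch optimization (not the authors' Transcender-MC code).
# A patch is warped into several camera perspectives per step and optimized so the
# target-class loss drops in *all* views at once.
import torch
import torch.nn.functional as F

def optimize_multiview_patch(model, scenes, target_class, views_per_step=4,
                             patch_size=64, steps=500, lr=0.01):
    """scenes: (N, 3, H, W) background images; model: classifier returning logits."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, scenes.size(0), (views_per_step,))
        batch = scenes[idx]
        loss = 0.0
        for i in range(views_per_step):
            # Random affine warp as a stand-in for the per-camera perspective projection.
            theta = torch.eye(2, 3).unsqueeze(0)
            theta[:, :, :2] += 0.1 * torch.randn(1, 2, 2)
            grid = F.affine_grid(theta, patch.shape, align_corners=False)
            warped = F.grid_sample(patch, grid, align_corners=False)
            # Simple augmentation: brightness jitter on the warped patch.
            warped = torch.clamp(warped * (0.8 + 0.4 * torch.rand(1)), 0, 1)
            # Paste the warped patch into the scene at a fixed location (illustrative).
            img = batch[i:i + 1].clone()
            img[:, :, :patch_size, :patch_size] = warped
            loss = loss + F.cross_entropy(model(img), torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)
    return patch.detach()
```

The design point mirrored from the abstract is that the loss is accumulated over several views of the same patch in every step, so the optimizer cannot overfit to a single perspective.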
Related papers
- Modeling Electromagnetic Signal Injection Attacks on Camera-based Smart Systems: Applications and Mitigation [18.909937495767313]
Electromagnetic signal injection attacks pose a threat to safety- or security-critical camera-based systems.
Such attacks enable adversaries to manipulate captured images remotely, leading to incorrect AI decisions.
We present a pilot study on adversarial training to improve the robustness of such systems against these attacks.
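The summary does not describe the pilot study's training recipe; a minimal FGSM-based adversarial training loop, as commonly implemented, gives the flavor of such robustness training (PyTorch, all hyperparameters assumed).

```python
# Generic FGSM-based adversarial training epoch; an illustration only, not the
# paper's injection model or training recipe.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, eps=8 / 255):
    model.train()
    for x, y in loader:
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # FGSM example
        optimizer.zero_grad()
        # Train on a mix of clean and adversarial inputs.
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```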
arXiv Detail & Related papers (2024-08-09T15:33:28Z) - Data Forensics in Diffusion Models: A Systematic Analysis of Membership
Privacy [62.16582309504159]
We develop a systematic analysis of membership inference attacks on diffusion models and propose novel attack methods tailored to each attack scenario.
Our approach exploits easily obtainable quantities and is highly effective, achieving near-perfect attack performance (>0.9 AUCROC) in realistic scenarios.
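The tailored attacks themselves are not described above; a minimal loss-threshold membership inference baseline on a diffusion model illustrates the "easily obtainable quantities" idea. The `q_sample` and `predict_noise` helpers are hypothetical placeholders for a DDPM-style model, not a real library API.

```python
# Minimal loss-threshold membership inference baseline for a diffusion model.
# Lower one-step denoising loss -> more likely the sample was in the training set.
import torch

@torch.no_grad()
def denoising_loss(model, x0, t, noise=None):
    """One-step denoising MSE at timestep t (hypothetical DDPM-style model API)."""
    noise = torch.randn_like(x0) if noise is None else noise
    x_t = model.q_sample(x0, t, noise)            # forward-noised sample (assumed helper)
    return ((model.predict_noise(x_t, t) - noise) ** 2).mean(dim=(1, 2, 3))

def membership_scores(model, candidates, t):
    # Higher score = more likely member; threshold or rank to attack.
    return -denoising_loss(model, candidates, t)
```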
arXiv Detail & Related papers (2023-02-15T17:37:49Z) - On the Adversarial Robustness of Camera-based 3D Object Detection [21.091078268929667]
We investigate the robustness of leading camera-based 3D object detection approaches under various adversarial conditions.
We find that bird's-eye-view-based representations exhibit stronger robustness against localization attacks.
Depth-estimation-free approaches have the potential to show stronger robustness.
Incorporating multi-frame benign inputs can effectively mitigate adversarial attacks.
arXiv Detail & Related papers (2023-01-25T18:59:15Z) - SurroundDepth: Entangling Surrounding Views for Self-Supervised
Multi-Camera Depth Estimation [101.55622133406446]
We propose SurroundDepth, a method that incorporates information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
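A minimal sketch of cross-view attention fusing per-camera feature maps, in the spirit of the cross-view transformer mentioned above; the token layout, dimensions, and single attention layer are assumptions, not the SurroundDepth implementation.

```python
# Every camera view's feature tokens attend to the tokens of all surrounding views.
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        """feats: (B, V, C, H, W) per-camera feature maps for V surrounding views."""
        B, V, C, H, W = feats.shape
        tokens = feats.flatten(3).permute(0, 1, 3, 2).reshape(B, V * H * W, C)
        fused, _ = self.attn(tokens, tokens, tokens)   # cross-view information exchange
        tokens = self.norm(tokens + fused)
        return tokens.reshape(B, V, H * W, C).permute(0, 1, 3, 2).reshape(B, V, C, H, W)
```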
arXiv Detail & Related papers (2022-04-07T17:58:47Z) - Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that perturbations and prediction confidence are closely coupled, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a detection method that goes beyond image space using a two-stream architecture, in which the image stream focuses on pixel artifacts and the gradient stream copes with confidence artifacts.
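A minimal sketch of such a two-stream detector, assuming a frozen PyTorch classifier and two small placeholder CNN backbones; the exact streams and fusion used in the paper may differ.

```python
# Two-stream adversarial-example detector: one stream sees the image (pixel
# artifacts), the other sees the input gradient of the top-class confidence
# (confidence artifacts). Backbones are placeholders returning (B, feat_dim).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamDetector(nn.Module):
    def __init__(self, classifier, image_backbone, grad_backbone, feat_dim=128):
        super().__init__()
        self.classifier = classifier            # frozen model under attack
        self.image_stream = image_backbone
        self.grad_stream = grad_backbone
        self.head = nn.Linear(2 * feat_dim, 2)  # benign vs. adversarial

    def forward(self, x):
        x = x.clone().requires_grad_(True)
        conf = F.softmax(self.classifier(x), dim=1).max(dim=1).values.sum()
        grad, = torch.autograd.grad(conf, x)    # per-sample confidence gradients
        f = torch.cat([self.image_stream(x.detach()), self.grad_stream(grad)], dim=1)
        return self.head(f)
```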
arXiv Detail & Related papers (2021-02-23T09:55:03Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different kinds of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
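A minimal activation-range monitor illustrates the coverage idea, assuming hidden activations of one layer are recorded via a forward hook; the paper's coverage paradigms are richer than this single-layer range check.

```python
# Record per-neuron activation bounds on trusted data, then flag inputs whose
# hidden activations leave those bounds.
import torch

class RangeMonitor:
    def __init__(self, model, layer):
        self.lo, self.hi, self.act = None, None, None
        layer.register_forward_hook(lambda m, i, o: setattr(self, "act", o.detach()))
        self.model = model

    @torch.no_grad()
    def calibrate(self, loader):
        for x, _ in loader:
            self.model(x)
            a = self.act.flatten(1)
            lo, hi = a.min(0).values, a.max(0).values
            self.lo = lo if self.lo is None else torch.minimum(self.lo, lo)
            self.hi = hi if self.hi is None else torch.maximum(self.hi, hi)

    @torch.no_grad()
    def out_of_range_fraction(self, x):
        self.model(x)
        a = self.act.flatten(1)
        return ((a < self.lo) | (a > self.hi)).float().mean(dim=1)  # per-input score
```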
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in
CMOS Image Sensors [21.5487020124302]
A camera's electronic rolling shutter can be exploited to inject fine-grained image disruptions.
We show how an adversary can modulate the laser to hide up to 75% of objects perceived by state-of-the-art detectors.
Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems.
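A toy NumPy simulation of the row-wise coupling that the attack exploits: because each image row is exposed in its own time slot, a laser modulated over time appears as stripes. The timing constants and the additive light model are assumptions for illustration only.

```python
# Each row's exposure starts at a different time, so a time-modulated light source
# shows up as horizontal stripes in the captured frame.
import numpy as np

def rolling_shutter_inject(image, laser_waveform, row_time_us=30.0):
    """image: (H, W, 3) floats in [0, 1]; laser_waveform(t_us) -> intensity in [0, 1]."""
    out = image.copy()
    for row in range(image.shape[0]):
        t = row * row_time_us                       # exposure start time of this row
        out[row] = np.clip(out[row] + laser_waveform(t), 0.0, 1.0)
    return out

# Example: a laser pulsed on for 200 us every 1000 us produces periodic stripes.
striped = rolling_shutter_inject(np.zeros((480, 640, 3)),
                                 lambda t: 0.8 if (t % 1000.0) < 200.0 else 0.0)
```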
arXiv Detail & Related papers (2021-01-25T11:14:25Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
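A minimal sketch of a feature-denoising block of the kind referenced above: hidden feature maps are smoothed before the downstream head, with a residual connection to preserve clean-input performance. The filter choice (a 3x3 mean filter) is an assumption, not the paper's design.

```python
# Feature denoising block: smooth feature maps, then add back via a residual.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFilterDenoise(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat):
        smoothed = F.avg_pool2d(feat, kernel_size=3, stride=1, padding=1)
        return feat + self.proj(smoothed)   # residual keeps clean-feature performance
```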
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
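A generic Hamiltonian Monte Carlo sketch over an adversarial perturbation, treating the negative classification loss as the potential energy; this is not the HMCAM algorithm with accumulated momentum, and the epsilon-ball clamp is a projection heuristic that breaks exact detailed balance.

```python
# Sample a sequence of adversarial perturbations with leapfrog HMC steps and a
# Metropolis correction; high adversarial loss = low potential = high probability.
import torch
import torch.nn.functional as F

def hmc_adversarial(model, x, y, eps=8 / 255, step=0.01, leapfrog=10, samples=20):
    delta = torch.zeros_like(x)

    def potential(d):
        return -F.cross_entropy(model((x + d).clamp(0, 1)), y)

    def grad_U(d):
        d = d.clone().requires_grad_(True)
        g, = torch.autograd.grad(potential(d), d)
        return g

    out = []
    for _ in range(samples):
        p = torch.randn_like(delta)
        d, p_new = delta.clone(), p - 0.5 * step * grad_U(delta)
        for _ in range(leapfrog):
            d = (d + step * p_new).clamp(-eps, eps)   # projection heuristic
            p_new = p_new - step * grad_U(d)
        p_new = p_new + 0.5 * step * grad_U(d)        # make the last update a half step
        h_old = potential(delta) + 0.5 * (p ** 2).sum()
        h_new = potential(d) + 0.5 * (p_new ** 2).sum()
        if torch.rand(()) < torch.exp(h_old - h_new): # Metropolis accept/reject
            delta = d
        out.append((x + delta).clamp(0, 1))
    return out
```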
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - GhostImage: Remote Perception Attacks against Camera-based Image
Classification Systems [6.637193297008101]
In vision-based object classification systems, imaging sensors perceive the environment, and machine learning is then used to detect and classify objects for decision-making purposes.
We demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object.
arXiv Detail & Related papers (2020-01-21T21:58:45Z)