Flying Adversarial Patches: Manipulating the Behavior of Deep
Learning-based Autonomous Multirotors
- URL: http://arxiv.org/abs/2305.12859v2
- Date: Mon, 31 Jul 2023 10:25:02 GMT
- Title: Flying Adversarial Patches: Manipulating the Behavior of Deep
Learning-based Autonomous Multirotors
- Authors: Pia Hanfeld and Marina M.-C. Höhne and Michael Bussmann and Wolfgang Hönig
- Abstract summary: Adversarial attacks exploit the surprising results a neural network can produce when it is applied to input images outside its training domain.
We introduce flying adversarial patches, where an image is mounted on another flying robot and therefore can be placed anywhere in the field of view of a victim multirotor.
For an effective attack, we compare three methods that simultaneously optimize the adversarial patch and its position in the input image.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous flying robots, e.g. multirotors, often rely on a neural network
that makes predictions based on a camera image. These deep learning (DL) models
can compute surprising results if applied to input images outside the training
domain. Adversarial attacks exploit this fault, for example, by computing small
images, so-called adversarial patches, that can be placed in the environment to
manipulate the neural network's prediction. We introduce flying adversarial
patches, where an image is mounted on another flying robot and therefore can be
placed anywhere in the field of view of a victim multirotor. For an effective
attack, we compare three methods that simultaneously optimize the adversarial
patch and its position in the input image. We perform an empirical validation
on a publicly available DL model and dataset for autonomous multirotors.
Ultimately, our attacking multirotor would be able to gain full control over
the motions of the victim multirotor.
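The abstract leaves the optimization details open, but the basic idea of optimizing a patch together with its placement can be sketched as gradient descent with a differentiable, spatial-transformer-style pasting step. The sketch below is hypothetical: the victim network, image sizes, data, and loss are stand-ins, and it does not reproduce any of the three methods compared in the paper.

```python
# Minimal, hypothetical sketch: jointly optimize an adversarial patch and its
# placement (scale and position) in the victim's camera image.
import torch
import torch.nn as nn
import torch.nn.functional as F

def place_patch(image, patch, scale, tx, ty):
    """Differentiably paste `patch` into `image` via a spatial transformer.

    `scale`, `tx`, `ty` are scalar tensors: patch size and centre position in
    normalized [-1, 1] image coordinates. Gradients flow to all of them.
    """
    b, c, h, w = image.shape
    zero = torch.zeros_like(scale)
    theta = torch.stack([
        torch.stack([1.0 / scale, zero, -tx / scale]),
        torch.stack([zero, 1.0 / scale, -ty / scale]),
    ]).unsqueeze(0).expand(b, -1, -1)                       # (b, 2, 3)
    grid = F.affine_grid(theta, (b, c, h, w), align_corners=False)
    patch_b = patch.expand(b, -1, -1, -1)
    warped = F.grid_sample(patch_b, grid, align_corners=False)
    mask = F.grid_sample(torch.ones_like(patch_b), grid, align_corners=False)
    return image * (1 - mask) + warped * mask               # composite image

# Stand-ins for the victim pose-prediction network and its data.
victim = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
)
images = torch.rand(8, 3, 96, 160)                           # fake camera frames
target = torch.tensor([[1.0, 0.0, 0.0, 0.0]]).expand(8, -1)  # attacker's goal

patch = torch.rand(1, 3, 32, 32, requires_grad=True)      # patch pixels
pos = torch.tensor([0.4, 0.0, 0.0], requires_grad=True)   # scale, tx, ty

opt = torch.optim.Adam([patch, pos], lr=1e-2)
for step in range(200):
    scale = pos[0].clamp(0.1, 1.0)
    tx, ty = pos[1].clamp(-1.0, 1.0), pos[2].clamp(-1.0, 1.0)
    attacked = place_patch(images, patch.clamp(0.0, 1.0), scale, tx, ty)
    loss = F.mse_loss(victim(attacked), target)  # drive prediction to target
    opt.zero_grad(); loss.backward(); opt.step()
```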
Related papers
- Protecting Feed-Forward Networks from Adversarial Attacks Using Predictive Coding [0.20718016474717196]
An adversarial example is a modified input image designed to cause a Machine Learning (ML) model to make a mistake.
This study presents a practical and effective solution -- using predictive coding networks (PCnets) as an auxiliary step for adversarial defence.
arXiv Detail & Related papers (2024-10-31T21:38:05Z) - Kidnapping Deep Learning-based Multirotors using Optimized Flying
Adversarial Patches [0.0]
We introduce flying adversarial patches, where multiple images are mounted on at least one other flying robot.
By introducing the attacker robots, the system is extended to an adversarial multi-robot system.
We show that our methods scale well with the number of adversarial patches.
arXiv Detail & Related papers (2023-08-01T07:38:31Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
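As a rough illustration of the pre-training idea summarized above (a Transformer over sensorimotor token sequences), the following hypothetical sketch trains a small encoder to reconstruct randomly masked tokens. The token dimensions, masking ratio, and architecture are illustrative only, not RPT's actual configuration, and positional embeddings are omitted for brevity.

```python
# Hypothetical sketch of masked pre-training on sensorimotor token sequences.
import torch
import torch.nn as nn

class SensorimotorMaskedModel(nn.Module):
    def __init__(self, token_dim=32, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)        # project raw tokens
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, token_dim)         # reconstruct tokens

    def forward(self, tokens, mask):
        x = self.embed(tokens)
        # Replace masked positions with a learned mask embedding.
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.head(self.encoder(x))

model = SensorimotorMaskedModel()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

for _ in range(100):
    # Fake interleaved camera/proprioception/action tokens: (batch, seq, dim).
    tokens = torch.randn(16, 64, 32)
    mask = torch.rand(16, 64) < 0.5           # mask ~50% of the tokens
    pred = model(tokens, mask)
    # Reconstruction loss only on the masked positions.
    loss = ((pred - tokens)[mask] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```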
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Learning When to Use Adaptive Adversarial Image Perturbations against
Autonomous Vehicles [0.0]
Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show that our method can generate the image attack in real time while monitoring, from state estimates, when the attacker has become proficient.
arXiv Detail & Related papers (2022-12-28T02:36:58Z) - CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of
Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches with a single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
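Floyd-Steinberg error diffusion is one classic instance of the halftoning transform mentioned above; a minimal sketch of applying it to a grayscale image before classification is shown below. The paper's exact halftoning variant and training pipeline may differ.

```python
# Illustrative Floyd-Steinberg error diffusion halftoning of a grayscale image.
import numpy as np

def floyd_steinberg_halftone(img):
    """Binarize `img` (float array in [0, 1]) by diffusing the quantization
    error of each pixel to its neighbours with the Floyd-Steinberg weights."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# Example: halftone a random "image" before feeding it to a classifier.
halftoned = floyd_steinberg_halftone(np.random.rand(96, 96))
```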
arXiv Detail & Related papers (2021-01-23T07:55:02Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noise to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp
Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images, and achieves comparable robustness to the standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)