Kidnapping Deep Learning-based Multirotors using Optimized Flying
Adversarial Patches
- URL: http://arxiv.org/abs/2308.00344v2
- Date: Mon, 23 Oct 2023 11:25:02 GMT
- Title: Kidnapping Deep Learning-based Multirotors using Optimized Flying
Adversarial Patches
- Authors: Pia Hanfeld, Khaled Wahba, Marina M.-C. Höhne, Michael Bussmann,
Wolfgang Hönig
- Abstract summary: We introduce flying adversarial patches, where multiple images are mounted on at least one other flying robot.
By introducing the attacker robots, the system is extended to an adversarial multi-robot system.
We show that our methods scale well with the number of adversarial patches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous flying robots, such as multirotors, often rely on deep learning
models that make predictions based on a camera image, e.g. for pose estimation.
These models can produce surprising predictions when applied to input images
outside the training domain. This fault can be exploited by adversarial attacks, for
example, by computing small images, so-called adversarial patches, that can be
placed in the environment to manipulate the neural network's prediction. We
introduce flying adversarial patches, where multiple images are mounted on at
least one other flying robot and therefore can be placed anywhere in the field
of view of a victim multirotor. By introducing the attacker robots, the system
is extended to an adversarial multi-robot system. For an effective attack, we
compare three methods that simultaneously optimize multiple adversarial patches
and their position in the input image. We show that our methods scale well with
the number of adversarial patches. Moreover, we demonstrate physical flights
with two robots, where we employ a novel attack policy that uses the computed
adversarial patches to kidnap a robot that was supposed to follow a human.
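The abstract describes jointly optimizing the adversarial patch pixels and the patch's position in the victim's camera image. A minimal sketch of that idea, assuming a toy linear pose estimator in place of the paper's deep network (the model, sizes, and greedy position search here are illustrative assumptions, not the authors' actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the victim's pose-estimation network: a fixed linear map
# from the flattened image to a 2-D pose. (The paper attacks a deep network;
# a linear model keeps the gradient analytic for this sketch.)
H = W = 16   # image size
P = 4        # patch side length
weights = rng.normal(scale=0.1, size=(2, H * W))

def predict_pose(image):
    return weights @ image.ravel()

def apply_patch(image, patch, pos):
    out = image.copy()
    y, x = pos
    out[y:y + P, x:x + P] = patch
    return out

def attack(image, target_pose, steps=200, lr=0.5):
    """Jointly optimize patch pixels (gradient descent) and patch position
    (greedy search over a coarse placement grid), loosely mirroring the idea
    of optimizing both the patch and where it appears in the input image."""
    patch = rng.uniform(0, 1, size=(P, P))
    positions = [(y, x) for y in range(0, H - P, 4)
                        for x in range(0, W - P, 4)]
    pos = positions[0]
    for _ in range(steps):
        # re-pick the placement that currently yields the lowest loss
        pos = min(positions, key=lambda p: np.sum(
            (predict_pose(apply_patch(image, patch, p)) - target_pose) ** 2))
        # analytic gradient of the squared error w.r.t. the patch pixels
        err = predict_pose(apply_patch(image, patch, pos)) - target_pose
        grad_img = (err @ weights).reshape(H, W)
        y, x = pos
        patch = np.clip(patch - lr * grad_img[y:y + P, x:x + P], 0.0, 1.0)
    return patch, pos

image = rng.uniform(0, 1, size=(H, W))
target = np.zeros(2)  # pose the attacker wants the victim to predict
patch, pos = attack(image, target)
before = np.sum((predict_pose(image) - target) ** 2)
after = np.sum((predict_pose(apply_patch(image, patch, pos)) - target) ** 2)
```

The paper compares three such joint optimization methods and scales to multiple patches carried by attacker robots; this sketch shows only the core loop of steering the victim's prediction toward an attacker-chosen pose by alternating between placement search and pixel updates.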
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z) - Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z) - Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Flying Adversarial Patches: Manipulating the Behavior of Deep
Learning-based Autonomous Multirotors [0.0]
Adversarial attacks exploit the surprising predictions a neural network produces when applied to input images outside its training domain.
We introduce flying adversarial patches, where an image is mounted on another flying robot and therefore can be placed anywhere in the field of view of a victim multirotor.
For an effective attack, we compare three methods that simultaneously optimize the adversarial patch and its position in the input image.
arXiv Detail & Related papers (2023-05-22T09:35:21Z) - Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z) - Adversarial joint attacks on legged robots [3.480626767752489]
We address adversarial attacks on the actuators at the joints of legged robots trained by deep reinforcement learning.
In this study, we demonstrate that adversarial perturbations to the torque control signals of the actuators can significantly reduce the rewards and cause walking instability in robots.
arXiv Detail & Related papers (2022-05-20T11:30:23Z) - Inconspicuous Adversarial Patches for Fooling Image Recognition Systems
on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called the adversarial patch, has drawn researchers' attention due to its strong attack capability.
We propose an approach to generate adversarial patches from a single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z) - Automating Defense Against Adversarial Attacks: Discovery of
Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed
Models [0.0]
We evaluate the use of multi-spectral image arrays and ensemble learners to combat adversarial attacks.
In rough analogy to defending cyber-networks, we combine techniques from both offensive ("red team") and defensive ("blue team") approaches.
arXiv Detail & Related papers (2021-03-29T19:07:55Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.