Adversarial joint attacks on legged robots
- URL: http://arxiv.org/abs/2205.10098v1
- Date: Fri, 20 May 2022 11:30:23 GMT
- Title: Adversarial joint attacks on legged robots
- Authors: Takuto Otomo, Hiroshi Kera, Kazuhiko Kawamoto
- Abstract summary: We address adversarial attacks on the actuators at the joints of legged robots trained by deep reinforcement learning.
In this study, we demonstrate that the adversarial perturbations to the torque control signals of the actuators can significantly reduce the rewards and cause walking instability in robots.
- Score: 3.480626767752489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address adversarial attacks on the actuators at the joints of legged
robots trained by deep reinforcement learning. The vulnerability to the joint
attacks can significantly impact the safety and robustness of legged robots. In
this study, we demonstrate that the adversarial perturbations to the torque
control signals of the actuators can significantly reduce the rewards and cause
walking instability in robots. To find the adversarial torque perturbations, we
develop black-box adversarial attacks, where the adversary cannot access the
neural networks trained by deep reinforcement learning. The black-box attack
can be applied to legged robots regardless of the architecture and algorithms
of deep reinforcement learning. We employ three search methods for the
black-box adversarial attacks: random search, differential evolution, and
numerical gradient descent methods. In experiments with the quadruped robot
Ant-v2 and the bipedal robot Humanoid-v2 in OpenAI Gym environments, we find
that differential evolution can efficiently find the strongest torque
perturbations among the three methods. In addition, we find that the
quadruped robot Ant-v2 is vulnerable to the adversarial perturbations, whereas
the bipedal robot Humanoid-v2 is robust to the perturbations. Consequently, the
joint attacks can be used for proactive diagnosis of robot walking instability.
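The paper itself does not include code, but the black-box setting described above can be illustrated with a short sketch. Below is a minimal, hypothetical example of the simplest of the three search methods, random search, applied to a fixed additive perturbation of the joint-torque commands in a classic OpenAI Gym environment (four-value `env.step` API). The `policy` callable, the perturbation bound `eps`, and the trial budget are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import gym


def episode_reward(env, policy, delta, max_steps=1000):
    """Roll out one episode with a fixed torque offset `delta` added to every
    action and return the total reward (one black-box query)."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = np.clip(policy(obs) + delta,
                         env.action_space.low, env.action_space.high)
        obs, reward, done, _ = env.step(action)
        total += reward
        if done:
            break
    return total


def random_search_attack(env, policy, eps=0.3, n_trials=100, seed=0):
    """Black-box random search: sample bounded torque perturbations and keep
    the one that degrades the episode reward the most."""
    rng = np.random.default_rng(seed)
    dim = env.action_space.shape[0]
    best_delta = np.zeros(dim)
    best_reward = episode_reward(env, policy, best_delta)
    for _ in range(n_trials):
        delta = rng.uniform(-eps, eps, size=dim)   # candidate joint-torque offset
        r = episode_reward(env, policy, delta)
        if r < best_reward:                        # stronger attack = lower reward
            best_delta, best_reward = delta, r
    return best_delta, best_reward


if __name__ == "__main__":
    env = gym.make("Ant-v2")                        # quadruped used in the paper
    policy = lambda obs: env.action_space.sample()  # placeholder; substitute a trained policy
    delta, reward = random_search_attack(env, policy)
    print("strongest torque perturbation found:", delta, "reward:", reward)
```

The differential evolution variant, which the paper reports as the strongest of the three search methods, could be slotted in by replacing random_search_attack with an optimizer such as scipy.optimize.differential_evolution, minimizing episode_reward over the same per-joint bounds.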
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a wide variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z) - HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z) - Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg [11.129918951736052]
Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios.
In this work, we explore pedipulation - using the legs of a legged robot for manipulation.
arXiv Detail & Related papers (2024-02-16T17:20:45Z) - HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Adversarial Body Shape Search for Legged Robots [3.480626767752489]
We propose an evolutionary computation method for an adversarial attack on the length and thickness of parts of legged robots.
Finding adversarial body shape can be used to proactively diagnose the vulnerability of legged robot walking.
arXiv Detail & Related papers (2022-05-20T13:55:47Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not yield a fair robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - A Transferable Legged Mobile Manipulation Framework Based on Disturbance Predictive Control [15.044159090957292]
Legged mobile manipulation, where a quadruped robot is equipped with a robotic arm, can greatly enhance the performance of the robot.
We propose a unified framework disturbance predictive control where a reinforcement learning scheme with a latent dynamic adapter is embedded into our proposed low-level controller.
arXiv Detail & Related papers (2022-03-02T14:54:10Z) - Learning Control Policies for Fall Prevention and Safety in Bipedal Locomotion [0.0]
We develop learning-based algorithms capable of synthesizing push recovery control policies for two different kinds of robots.
Our work branches into two closely related directions: 1) learning safe falling and fall prevention strategies for humanoid robots, and 2) learning fall prevention strategies for humans using robotic assistive devices.
arXiv Detail & Related papers (2022-01-04T22:00:21Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Fault-Aware Robust Control via Adversarial Reinforcement Learning [35.16413579212691]
We propose an adversarial reinforcement learning framework, which significantly increases robot robustness over joint damage cases.
We validate our algorithm on a three-fingered robot hand and a quadruped robot.
Our algorithm can be trained only in simulation and directly deployed on a real robot without any fine-tuning.
arXiv Detail & Related papers (2020-11-17T16:01:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.