Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion
Attacks in Deep RL
- URL: http://arxiv.org/abs/2106.05087v5
- Date: Mon, 20 Mar 2023 12:43:45 GMT
- Title: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion
Attacks in Deep RL
- Authors: Yanchao Sun, Ruijie Zheng, Yongyuan Liang, Furong Huang
- Abstract summary: This paper introduces a novel attack method that finds optimal attacks through collaboration between a designed function named "actor" and an RL-based learner named "director".
Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces.
- Score: 14.702446153750497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the worst-case performance of a reinforcement learning (RL) agent
under the strongest/optimal adversarial perturbations on state observations
(within some constraints) is crucial for understanding the robustness of RL
agents. However, finding the optimal adversary is challenging, in terms of both
whether we can find the optimal attack and how efficiently we can find it.
Existing works on adversarial RL either use heuristic methods that may not
find the strongest adversary, or directly train an RL-based adversary by
treating the agent as part of the environment, which can find the optimal
adversary but may become intractable in a large state space. This paper
introduces a novel attack method that finds optimal attacks through
collaboration between a designed function named "actor" and an RL-based learner
named "director". The actor crafts state perturbations for a given policy
perturbation direction, and the director learns to propose the best policy
perturbation directions. Our proposed algorithm, PA-AD, is theoretically
optimal and significantly more efficient than prior RL-based works in
environments with large state spaces. Empirical results show that our proposed
PA-AD universally outperforms state-of-the-art attacking methods in various
Atari and MuJoCo environments. By applying PA-AD to adversarial training, we
achieve state-of-the-art empirical robustness in multiple tasks under strong
adversaries. The codebase is released at
https://github.com/umd-huang-lab/paad_adv_rl.
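The abstract's division of labor, a director that proposes policy perturbation directions and an actor that realizes each direction as a bounded state perturbation, can be sketched in a few lines. The toy below is our own illustration rather than the released PA-AD code: the victim network, the PGD-style actor, and the random stand-in for the learned director are all assumptions.

```python
# Illustrative sketch only; names and the PGD-style actor are assumptions,
# not the released PA-AD implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VictimPolicy(nn.Module):
    """Toy discrete-action policy mapping states to action logits."""
    def __init__(self, state_dim=8, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)

def actor(victim, state, direction, eps=0.1, steps=10, lr=0.02):
    """'Actor': craft an L-inf bounded state perturbation that pushes the
    victim's action distribution toward the director's proposed direction."""
    delta = torch.zeros_like(state, requires_grad=True)
    for _ in range(steps):
        log_probs = F.log_softmax(victim(state + delta), dim=-1)
        loss = -(direction * log_probs).sum()  # cross-entropy to the target direction
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()    # descend: match the direction
            delta.clamp_(-eps, eps)            # project back into the eps-ball
        delta.grad.zero_()
    return (state + delta).detach()

victim = VictimPolicy()
state = torch.randn(1, 8)
# A real director is an RL learner whose actions are perturbation directions;
# a random distribution over the victim's actions stands in for it here.
direction = F.softmax(torch.randn(1, 4), dim=-1)
adv_state = actor(victim, state, direction)
print("clean policy:   ", F.softmax(victim(state), dim=-1))
print("attacked policy:", F.softmax(victim(adv_state), dim=-1))
```

The point of the split is that the director's RL problem lives in the policy perturbation space rather than the raw state space, which is where the claimed efficiency gain in large-state-space environments comes from.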
Related papers
- Efficient Adversarial Training without Attacking: Worst-Case-Aware
Robust Reinforcement Learning [14.702446153750497]
Worst-case-aware Robust RL (WocaR-RL) is a robust training framework for deep reinforcement learning.
We show that WocaR-RL achieves state-of-the-art performance under various strong attacks.
arXiv Detail & Related papers (2022-10-12T05:24:46Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning
We study reward poisoning attacks on online deep reinforcement learning (DRL).
We demonstrate the intrinsic vulnerability of state-of-the-art DRL algorithms by designing a general, black-box reward poisoning framework called adversarial MDP attacks.
Our results show that our attacks efficiently poison agents learning in several popular classical control and MuJoCo environments.
arXiv Detail & Related papers (2022-05-30T04:07:19Z)
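The entry above only names its black-box reward-poisoning framework, so the sketch below is a generic, hypothetical illustration of the attack class rather than the paper's adversarial-MDP construction: an adversary sitting between the environment and the learner perturbs the reward channel, within a per-step budget, until an epsilon-greedy learner settles on the worse arm of a two-armed bandit.

```python
# Generic toy illustration of reward poisoning; not the paper's framework.
import random

class PoisonedEnv:
    """Two-armed bandit: arm 1 truly pays 1.0, arm 0 pays 0.0."""
    def __init__(self, budget=1.5):
        self.budget = budget  # maximum absolute reward change per step

    def step(self, action):
        true_reward = 1.0 if action == 1 else 0.0
        # Poisoning rule: push the reward for the good arm down by at most
        # `budget`; all other feedback is left untouched.
        poison = -self.budget if action == 1 else 0.0
        return true_reward + poison

def epsilon_greedy(q, eps=0.1):
    return random.randrange(2) if random.random() < eps else max((0, 1), key=q.__getitem__)

random.seed(0)
q, counts = [0.0, 0.0], [0, 0]
env = PoisonedEnv()
for _ in range(2000):
    a = epsilon_greedy(q)
    r = env.step(a)  # the learner only ever sees the poisoned reward
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]  # incremental mean update
print("poisoned Q-values:", q)  # arm 0 now looks better than the truly better arm 1
```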
- Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills, manifested as clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)
- Real-time Attacks Against Deep Reinforcement Learning Policies [14.085247099075628]
We propose a new attack on DRL policies that is both effective and efficient enough to be mounted in real time.
We utilize the Universal Adversarial Perturbation (UAP) method to compute effective perturbations independent of the individual inputs to which they are applied.
arXiv Detail & Related papers (2021-06-16T12:44:59Z)
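Because a universal perturbation is input-agnostic, the run-time cost of the attack above is a single addition per observation, which is what makes it feasible in real time. The sketch below is a rough, assumed reconstruction (a batched FGSM-style ascent on one shared delta, with made-up names), not the authors' exact procedure or the original DeepFool-based UAP algorithm.

```python
# Rough, assumed reconstruction of an input-agnostic perturbation; not the
# paper's exact procedure or the original DeepFool-based UAP algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 4))
states = torch.randn(256, 8)   # observations collected from past rollouts
eps = 0.1                      # L-inf budget for the shared perturbation

with torch.no_grad():
    clean_actions = policy(states).argmax(dim=-1)  # actions the clean policy takes

delta = torch.zeros(8, requires_grad=True)         # one delta shared by every state
for _ in range(20):
    loss = F.cross_entropy(policy(states + delta), clean_actions)
    loss.backward()
    with torch.no_grad():
        delta += 0.02 * delta.grad.sign()  # ascend: make the clean actions unlikely
        delta.clamp_(-eps, eps)            # stay within the L-inf budget
    delta.grad.zero_()
delta = delta.detach()

# At attack time there is no per-state optimization, just one addition.
flipped = (policy(states + delta).argmax(dim=-1) != clean_actions).float().mean().item()
print(f"fraction of states whose action changed: {flipped:.2f}")
```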
- Provably Efficient Algorithms for Multi-Objective Competitive RL [54.22598924633369]
We study multi-objective reinforcement learning (RL) where an agent's reward is represented as a vector.
In settings where an agent competes against opponents, its performance is measured by the distance of its average return vector to a target set.
We develop statistically and computationally efficient algorithms to approach the associated target set.
arXiv Detail & Related papers (2021-02-05T14:26:00Z)
- Robust Reinforcement Learning on State Observations with Learned Optimal Adversary [86.0846119254031]
We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones.
arXiv Detail & Related papers (2021-01-21T05:38:52Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its state through observations, which may contain natural measurement errors or adversarial noise.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.