Robust Reinforcement Learning on State Observations with Learned Optimal
Adversary
- URL: http://arxiv.org/abs/2101.08452v1
- Date: Thu, 21 Jan 2021 05:38:52 GMT
- Title: Robust Reinforcement Learning on State Observations with Learned Optimal
Adversary
- Authors: Huan Zhang, Hongge Chen, Duane Boning, Cho-Jui Hsieh
- Abstract summary: We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones.
- Score: 86.0846119254031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the robustness of reinforcement learning (RL) with adversarially
perturbed state observations, which aligns with the setting of many adversarial
attacks on deep reinforcement learning (DRL) and is also important for deploying
real-world RL agents under unpredictable sensing noise. With a fixed agent
policy, we demonstrate that an optimal adversary to perturb state observations
can be found, which is guaranteed to obtain the worst-case agent reward. For
DRL settings, this leads to a novel empirical adversarial attack on RL agents
via a learned adversary that is much stronger than previous ones. To enhance
the robustness of an agent, we propose a framework of alternating training with
learned adversaries (ATLA), which trains an adversary online together with the
agent using policy gradient following the optimal adversarial attack framework.
Additionally, inspired by the analysis of state-adversarial Markov decision
process (SA-MDP), we show that past states and actions (history) can be useful
for learning a robust agent, and we empirically find that an LSTM-based policy can be
more robust under adversaries. Empirical evaluations on a few continuous
control environments show that ATLA achieves state-of-the-art performance under
strong adversaries. Our code is available at
https://github.com/huanzhang12/ATLA_robust_RL.
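The abstract describes the ATLA loop only at a high level. Below is a minimal, self-contained sketch of the alternating-training idea on a toy 1-D control task. The environment, the linear Gaussian policies, and the plain REINFORCE updates are illustrative assumptions chosen for readability, not the paper's actual setup (which trains policy-gradient agents, including LSTM policies, on continuous-control benchmarks).

# Sketch of alternating training with a learned adversary (ATLA-style),
# on a toy 1-D task. Toy environment and linear policies are assumptions.
import numpy as np

rng = np.random.default_rng(0)
EPS = 0.2        # l_inf budget for observation perturbations
HORIZON = 20     # episode length
LR = 0.05        # learning rate for both players

def run_episode(agent_w, adv_w):
    """Roll out one episode; the adversary perturbs what the agent observes."""
    s = rng.uniform(-1.0, 1.0)
    agent_traj, adv_traj, total_r = [], [], 0.0
    for _ in range(HORIZON):
        # Adversary sees the true state and proposes a bounded perturbation.
        adv_mean = np.tanh(adv_w * s) * EPS
        delta = np.clip(adv_mean + 0.05 * rng.standard_normal(), -EPS, EPS)
        obs = s + delta
        # Agent acts on the perturbed observation, not the true state.
        a_mean = agent_w * obs
        a = a_mean + 0.1 * rng.standard_normal()
        r = -s**2 - 0.01 * a**2          # reward: drive the true state to 0
        agent_traj.append((obs, a, a_mean))
        adv_traj.append((s, delta, adv_mean))
        s = np.clip(s + 0.5 * a, -2.0, 2.0)
        total_r += r
    return agent_traj, adv_traj, total_r

def reinforce_grad(traj, scale, sign):
    """Crude REINFORCE gradient for a linear-Gaussian policy mean = w * x."""
    g = 0.0
    for x, u, mean in traj:
        g += sign * scale * (u - mean) * x   # d log pi / d w, up to variance
    return g / len(traj)

agent_w, adv_w = 0.0, 0.0
for it in range(200):
    # Phase 1: fix the adversary, improve the agent (maximize return).
    traj_a, _, R = run_episode(agent_w, adv_w)
    agent_w += LR * reinforce_grad(traj_a, R, sign=+1.0)
    # Phase 2: fix the agent, improve the adversary (minimize agent return).
    _, traj_d, R = run_episode(agent_w, adv_w)
    adv_w += LR * reinforce_grad(traj_d, R, sign=-1.0)
    if it % 50 == 0:
        print(f"iter {it:3d}  return under learned adversary: {R:.3f}")

Because the adversary is re-optimized against the agent's current policy in every round, the agent is trained against an (approximately) optimal perturbation strategy rather than a fixed heuristic attack, which is the core idea the abstract refers to.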
Related papers
- Toward Optimal LLM Alignments Using Two-Player Games [86.39338084862324]
In this paper, we investigate alignment through the lens of two-agent games, involving iterative interactions between an adversarial and a defensive agent.
We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.
Experimental results in safety scenarios demonstrate that learning in such a competitive environment not only fully trains agents but also leads to policies with enhanced generalization capabilities for both adversarial and defensive agents.
arXiv Detail & Related papers (2024-06-16T15:24:50Z)
- Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations [5.076419064097735]
Recent work shows that a well-trained RL agent can be easily manipulated by strategically perturbing its state observations at the test stage.
Existing solutions either introduce a regularization term to improve the smoothness of the trained policy against perturbations or alternately train the agent's policy and the attacker's policy.
We propose a new robust RL algorithm for deriving a pessimistic policy to safeguard against an agent's uncertainty about true states.
arXiv Detail & Related papers (2024-03-06T20:52:49Z)
- Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey [8.463282079069362]
Deep Reinforcement Learning (DRL) is an approach for training autonomous agents across various complex environments.
It remains susceptible to minor variations in conditions, raising concerns about its reliability in real-world applications.
One way to improve the robustness of DRL to unknown changes in conditions is Adversarial Training.
arXiv Detail & Related papers (2024-03-01T10:16:46Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large numbers of interactions between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL [14.702446153750497]
This paper introduces a novel attacking method to find the optimal attacks through collaboration between a designed function named "actor" and an RL-based learner named "director".
Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces.
arXiv Detail & Related papers (2021-06-09T14:06:53Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its state through observations, which may contain natural measurement errors or adversarial noise.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)