Adversary Agnostic Robust Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2008.06199v2
- Date: Thu, 24 Dec 2020 06:38:19 GMT
- Title: Adversary Agnostic Robust Deep Reinforcement Learning
- Authors: Xinghua Qu, Yew-Soon Ong, Abhishek Gupta, Zhu Sun
- Abstract summary: Deep reinforcement learning (DRL) policies can be deceived by perturbations on state observations that appear at test time but are unknown during training.
Previous approaches assume that knowledge of the adversaries can be incorporated into the training process.
We propose an adversary robust DRL paradigm that does not require learning from adversaries.
- Score: 23.9114110755044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (DRL) policies have been shown to be deceived by
perturbations (e.g., random noise or intentional adversarial attacks) on state
observations that appear at test time but are unknown during training. To
increase the robustness of DRL policies, previous approaches assume that the
knowledge of adversaries can be added into the training process to achieve the
corresponding generalization ability on these perturbed observations. However,
such an assumption not only makes the robustness improvement more expensive but
may also leave a model less effective against other kinds of attacks in the wild. In
contrast, we propose an adversary agnostic robust DRL paradigm that does not
require learning from adversaries. To this end, we first theoretically derive
that robustness could indeed be achieved independently of the adversaries based
on a policy distillation setting. Motivated by this finding, we propose a new
policy distillation loss with two terms: 1) a prescription gap maximization
loss aiming at simultaneously maximizing the likelihood of the action selected
by the teacher policy and the entropy over the remaining actions; 2) a
corresponding Jacobian regularization loss that minimizes the magnitude of the
gradient with respect to the input state. The theoretical analysis shows that
our distillation loss is guaranteed to increase the prescription gap and the
adversarial robustness. Furthermore, experiments on five Atari games firmly
verify the superiority of our approach in terms of boosting adversarial
robustness compared to other state-of-the-art methods.
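To make the two loss terms concrete, here is a minimal PyTorch sketch of the distillation objective described above, assuming a discrete-action student network that outputs logits and a teacher that prescribes one action per state; the coefficients lambda_ent and lambda_jac, the log-likelihood form of the first term, and the function signature are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student, states, teacher_actions, lambda_ent=0.1, lambda_jac=0.01):
    """Sketch of a two-term distillation loss: prescription gap maximization
    plus Jacobian regularization (hypothetical interface and coefficients).

    student         -- module mapping a batch of states to action logits
    states          -- batch of state observations, shape (B, ...)
    teacher_actions -- long tensor of teacher-prescribed action indices, shape (B,)
    """
    states = states.clone().requires_grad_(True)
    probs = F.softmax(student(states), dim=-1)               # (B, A)

    # Likelihood of the teacher-prescribed action (to be maximized).
    teacher_prob = probs.gather(1, teacher_actions.unsqueeze(1)).squeeze(1)
    log_teacher = teacher_prob.clamp_min(1e-8).log()

    # Entropy over the remaining (non-prescribed) actions (to be maximized).
    mask = torch.ones_like(probs).scatter_(1, teacher_actions.unsqueeze(1), 0.0)
    rest = probs * mask
    rest = rest / rest.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    rest_entropy = -(rest * rest.clamp_min(1e-8).log() * mask).sum(dim=-1)

    # Prescription gap maximization term (negated so gradient descent maximizes it).
    pgm_loss = -(log_teacher + lambda_ent * rest_entropy).mean()

    # Jacobian regularization: penalize the gradient of the prescribed-action
    # log-probability with respect to the input state.
    grad = torch.autograd.grad(log_teacher.sum(), states, create_graph=True)[0]
    jac_loss = grad.flatten(1).norm(dim=1).mean()

    return pgm_loss + lambda_jac * jac_loss
```

Because the Jacobian term is built with create_graph=True, the penalty remains differentiable and shapes the student's parameters alongside the prescription gap term during training.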
Related papers
- Robust off-policy Reinforcement Learning via Soft Constrained Adversary [0.7583052519127079]
We introduce an f-divergence constrained problem with the prior knowledge distribution.
We derive two typical attacks and their corresponding robust learning frameworks.
Results demonstrate that our proposed methods achieve excellent performance in sample-efficient off-policy RL.
arXiv Detail & Related papers (2024-08-31T11:13:33Z) - The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Certifying Safety in Reinforcement Learning under Adversarial
Perturbation Attacks [23.907977144668838]
We propose a partially-supervised reinforcement learning (PSRL) framework that takes advantage of an additional assumption that the true state of the POMDP is known at training time.
We present the first approach for certifying safety of PSRL policies under adversarial input perturbations, and two adversarial training approaches that make direct use of PSRL.
arXiv Detail & Related papers (2022-12-28T22:33:38Z) - Improving Adversarial Robustness with Self-Paced Hard-Class Pair Reweighting [5.084323778393556]
Adversarial training with untargeted attacks is one of the most recognized methods.
We find that naturally imbalanced inter-class semantic similarity makes hard-class pairs become virtual targets of each other.
We propose to upweight hard-class pair loss in model optimization, which prompts learning discriminative features from hard classes.
arXiv Detail & Related papers (2022-10-26T22:51:36Z) - Off-policy Reinforcement Learning with Optimistic Exploration and Distribution Correction [73.77593805292194]
We train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework.
To mitigate the off-policy-ness, we adapt the recently introduced DICE framework to learn a distribution correction ratio for off-policy actor-critic training.
arXiv Detail & Related papers (2021-10-22T22:07:51Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input (a minimal sketch of the smoothing idea appears after this list).
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
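For intuition about the smoothing idea referenced above (Policy Smoothing for Provably Robust Reinforcement Learning), here is a minimal sketch, assuming a discrete-action base policy that maps a batch of observations to action logits; the noise scale sigma, the sample count, and the majority-vote rule are illustrative assumptions, not the paper's certified procedure.

```python
import torch

def smoothed_action(policy, state, sigma=0.1, n_samples=64):
    """Act with a noise-smoothed policy: sample Gaussian perturbations of the
    observation, query the base policy on each, and return the majority action.
    (Illustrative only; the certified procedure in the cited paper differs.)"""
    noisy = state.unsqueeze(0) + sigma * torch.randn(n_samples, *state.shape)
    with torch.no_grad():
        actions = policy(noisy).argmax(dim=-1)   # greedy action per noisy copy
    return torch.mode(actions).values.item()     # majority vote across samples
```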