Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey
- URL: http://arxiv.org/abs/2403.00420v2
- Date: Wed, 11 Dec 2024 15:03:08 GMT
- Title: Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey
- Authors: Lucas Schott, Josephine Delas, Hatem Hajri, Elies Gherbi, Reda Yaich, Nora Boulahia-Cuppens, Frederic Cuppens, Sylvain Lamprier
- Abstract summary: Deep Reinforcement Learning (DRL) is a subfield of machine learning for training autonomous agents that take sequential actions in complex environments.
It remains susceptible to minor condition variations, raising concerns about its reliability in real-world applications.
A way to improve the robustness of DRL to unknown changes in environmental conditions and possible perturbations is through Adversarial Training.
- Score: 8.1138182541639
- Abstract: Deep Reinforcement Learning (DRL) is a subfield of machine learning for training autonomous agents that take sequential actions in complex environments. Despite its strong performance in well-known environments, it remains susceptible to minor condition variations, raising concerns about its reliability in real-world applications. To improve usability, DRL must demonstrate trustworthiness and robustness. One way to improve the robustness of DRL to unknown changes in environmental conditions and possible perturbations is Adversarial Training: training the agent against well-suited adversarial attacks on the observations and on the dynamics of the environment. Addressing this critical issue, our work presents an in-depth analysis of contemporary adversarial attack and training methodologies, systematically categorizing them and comparing their objectives and operational mechanisms.
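To make the idea concrete, below is a minimal sketch of an observation-space attack of the kind such adversarial training builds on, assuming a PyTorch policy network that maps an observation to action logits (the network and its interface are placeholders, not the survey's reference implementation):

```python
import torch

def fgsm_perturb_obs(policy_net, obs, action, epsilon=0.01):
    """FGSM-style attack on an observation: shift it within an
    L-infinity ball of radius epsilon so as to reduce the probability
    the policy assigns to its chosen action."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy_net(obs)                      # placeholder policy network
    loss = -torch.log_softmax(logits, dim=-1)[action]
    loss.backward()
    # Gradient ascent on the loss = worst-case bounded perturbation.
    return (obs + epsilon * obs.grad.sign()).detach()
```

In adversarial training, the agent is updated on such perturbed observations instead of the clean ones; the surveyed methods vary mainly in where the attack is applied (observations vs. environment dynamics) and how the adversary is obtained.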
Related papers
- On the Effectiveness of Adversarial Training on Malware Classifiers [14.069462668836328]
Adversarial Training (AT) has been widely applied to harden learning-based classifiers against adversarial evasive attacks.
Previous work seems to suggest robustness is a task-dependent property of AT.
We argue it is a more complex problem that requires jointly exploring AT and the intertwined roles played by certain factors within the data.
arXiv Detail & Related papers (2024-12-24T06:55:53Z)
- Mitigating Adversarial Perturbations for Deep Reinforcement Learning via Vector Quantization [18.56608399174564]
Well-performing reinforcement learning (RL) agents often lack resilience against adversarial perturbations during deployment.
This highlights the importance of building a robust agent before deploying it in the real world.
In this work, we study an input transformation-based defense for RL; a generic sketch of such a defense follows this entry.
arXiv Detail & Related papers (2024-10-04T12:41:54Z)
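As a hedged illustration of what an input-transformation defense of this flavor can look like (a generic nearest-neighbor quantizer, not necessarily the paper's exact method): each observation is snapped to its nearest codebook vector before reaching the policy, so small adversarial perturbations that do not cross a cell boundary are erased.

```python
import numpy as np

class VQObservationDefense:
    """Quantize observations to a fixed codebook before the policy
    sees them. The codebook is random here for brevity; in practice
    it would be fit to clean observations."""
    def __init__(self, num_codes=256, obs_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.codebook = rng.standard_normal((num_codes, obs_dim))

    def __call__(self, obs):
        dists = np.linalg.norm(self.codebook - np.asarray(obs), axis=1)
        return self.codebook[np.argmin(dists)]
```

At deployment the policy acts on `defense(obs)` rather than `obs`, trading a small loss in observation fidelity for resilience to bounded perturbations.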
- Enhancing Autonomous Vehicle Training with Language Model Integration and Critical Scenario Generation [32.02261963851354]
CRITICAL is a novel closed-loop framework for autonomous vehicle (AV) training and testing.
The framework achieves this by integrating real-world traffic dynamics, driving behavior analysis, surrogate safety measures, and an optional Large Language Model (LLM) component.
arXiv Detail & Related papers (2024-04-12T16:13:10Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations; a crude empirical analogue is sketched after this entry.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
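The paper defines the metric via formal verification; as a rough, assumption-laden analogue, one can estimate the fraction of sampled states for which some bounded random perturbation flips the agent's action:

```python
import numpy as np

def empirical_adversarial_rate(act_fn, states, epsilon=0.05, trials=100, seed=0):
    """Monte Carlo stand-in for an adversarial-rate metric: fraction of
    states where a perturbation within an L-infinity ball of radius
    epsilon changes the selected action. Random search only, so this
    lower-bounds what a real attack or verifier would find."""
    rng = np.random.default_rng(seed)
    flipped = 0
    for s in states:
        base = act_fn(s)
        for _ in range(trials):
            noise = rng.uniform(-epsilon, epsilon, size=np.shape(s))
            if act_fn(s + noise) != base:
                flipped += 1
                break
    return flipped / len(states)
```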
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focused on aquatic navigation.
We consider both value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties; a minimal property check is sketched after this entry.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
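A minimal sketch of such a property check, assuming a Gymnasium-style environment that reports violations through a hypothetical `collision` flag in its info dict (both are assumptions, not the paper's actual benchmark API):

```python
def check_no_collision(env, policy, episodes=100):
    """Empirically check one desired property: the trained policy
    never triggers a collision across the evaluated episodes."""
    violations = 0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, _, terminated, truncated, info = env.step(policy(obs))
            if info.get("collision", False):   # hypothetical flag
                violations += 1
                break
            done = terminated or truncated
    return violations == 0, violations
```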
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates guaranteeing that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input; a sketch of the smoothing idea follows this entry.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
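A minimal sketch of the smoothing idea for a discrete-action policy (the certificate derivation is the paper's contribution and is omitted here): the smoothed policy acts by majority vote over Gaussian-noised copies of the observation, and the vote margin is what a certificate is built from.

```python
import numpy as np

def smoothed_action(act_fn, obs, sigma=0.1, samples=1000, seed=0):
    """Majority-vote action of a policy under Gaussian observation
    noise; larger vote margins admit larger certified radii."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(samples):
        noisy = np.asarray(obs) + rng.normal(0.0, sigma, size=np.shape(obs))
        a = act_fn(noisy)
        votes[a] = votes.get(a, 0) + 1
    return max(votes, key=votes.get)
```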
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that were not present at training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- Robust Reinforcement Learning on State Observations with Learned Optimal Adversary [86.0846119254031]
We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this yields a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones; a sketch of the learned-adversary loop follows this entry.
arXiv Detail & Related papers (2021-01-21T05:38:52Z)
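A hedged sketch of that learned-adversary loop: the victim policy is frozen, and the adversary, itself trained by RL, emits a bounded observation perturbation at each step and receives the negated victim reward. The interfaces below are placeholders, not the paper's implementation.

```python
def adversary_step(env, obs, victim_policy, adversary, epsilon=0.05):
    """One step of the adversary's MDP. The adversary sees the true
    observation, outputs a perturbation in [-1, 1]^d scaled to the
    epsilon ball, and is rewarded by the victim's negated reward."""
    delta = epsilon * adversary(obs)            # bounded perturbation
    action = victim_policy(obs + delta)         # victim sees the attack
    next_obs, reward, terminated, truncated, _ = env.step(action)
    return next_obs, -reward, terminated or truncated
```

Optimizing the adversary with any standard RL algorithm over this loop yields a learned attack of the kind the entry describes.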
- Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning [90.93035276307239]
We propose an information theoretic regularization objective and an annealing-based optimization method to achieve better generalization ability in RL agents.
We demonstrate the extreme generalization benefits of our approach in different domains ranging from maze navigation to robotic tasks.
This work provides a principled way to improve generalization in RL by gradually removing information that is redundant for task-solving; a generic sketch of such a regularizer follows this entry.
arXiv Detail & Related papers (2020-08-03T02:24:20Z)
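As a generic, hedged sketch of an information-bottleneck regularizer of this flavor (a standard variational-IB penalty, not necessarily the paper's exact objective): the encoder outputs a stochastic latent, and a KL term to a standard normal prior penalizes information kept about the state, with its weight annealed over training.

```python
import torch

def ib_regularized_loss(task_loss, mu, log_var, beta=1e-3):
    """Variational information-bottleneck penalty added to an RL loss.
    mu and log_var parameterize the stochastic latent q(z|s); the KL
    divergence to a standard normal prior bounds the information the
    latent keeps about the state. beta is typically annealed upward
    over training."""
    kl = 0.5 * torch.sum(mu.pow(2) + log_var.exp() - log_var - 1.0, dim=-1)
    return task_loss + beta * kl.mean()
```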
This list is automatically generated from the titles and abstracts of the papers on this site.