Evaluation of Safety Constraints in Autonomous Navigation with Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2307.14568v1
- Date: Thu, 27 Jul 2023 01:04:57 GMT
- Title: Evaluation of Safety Constraints in Autonomous Navigation with Deep
Reinforcement Learning
- Authors: Brian Angulo, Gregory Gorbov, Aleksandr Panov, Konstantin Yakovlev
- Abstract summary: We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy generates trajectories with more clearance (distance to the obstacles) and incurs fewer collisions during training, without sacrificing the overall performance.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While reinforcement learning algorithms have had great success in the field
of autonomous navigation, they cannot be applied straightforwardly to real
autonomous systems without considering the safety constraints. The latter are
crucial to avoid unsafe behaviors of the autonomous vehicle on the road. To
highlight the importance of these constraints, in this study we compare two
learnable navigation policies: safe and unsafe. The safe policy takes the
constraints into account, while the other does not. We show that the safe
policy generates trajectories with more clearance (distance to the
obstacles) and incurs fewer collisions during training, without sacrificing
the overall performance.
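To make the safe/unsafe distinction concrete, below is a minimal, hypothetical sketch of how a clearance term and a collision penalty can enter a navigation reward. This is not the authors' code; the goal position, obstacle layout, and all weights and thresholds are illustrative assumptions.

```python
import numpy as np

# Illustrative setup (not from the paper): a point robot navigating to a goal
# among circular obstacles.
GOAL = np.array([10.0, 10.0])
OBSTACLES = [np.array([4.0, 5.0]), np.array([7.0, 8.0])]
ROBOT_RADIUS = 0.5

def clearance(pos: np.ndarray) -> float:
    """Distance from the robot's hull to the nearest obstacle center."""
    return min(float(np.linalg.norm(pos - obs)) for obs in OBSTACLES) - ROBOT_RADIUS

def unsafe_reward(pos: np.ndarray, prev_pos: np.ndarray) -> float:
    """Progress-only reward: the unsafe policy optimizes goal progress alone."""
    return float(np.linalg.norm(prev_pos - GOAL) - np.linalg.norm(pos - GOAL))

def safe_reward(pos, prev_pos, d_safe=1.0, w_clear=0.5, w_collide=10.0):
    """Progress reward plus a penalty that grows as clearance drops below
    d_safe, and a large fixed penalty on collision (clearance <= 0)."""
    r = unsafe_reward(pos, prev_pos)
    c = clearance(pos)
    if c <= 0.0:
        return r - w_collide                    # collision
    if c < d_safe:
        r -= w_clear * (d_safe - c) / d_safe    # graded proximity penalty
    return r
```

Training the same policy architecture against `safe_reward` versus `unsafe_reward` reproduces, in spirit, the paper's comparison: the safety term trades a small amount of progress reward for larger clearance and fewer collisions during exploration.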
Related papers
- Safe Policy Exploration Improvement via Subgoals [44.07721205323709]
Reinforcement learning is a widely used approach to autonomous navigation, showing potential in various tasks and robotic setups.
One of the main reasons for poor performance in such setups is that respecting the safety constraints degrades the exploration capabilities of an RL agent.
We introduce a novel learnable algorithm based on decomposing the initial problem into smaller sub-problems via intermediate goals.
arXiv Detail & Related papers (2024-08-25T16:12:49Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- A Survey on Reinforcement Learning Security with Application to Autonomous Driving [23.2255446652987]
Reinforcement learning allows machines to learn from their own experience.
It is used in safety-critical applications, such as autonomous driving.
We discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
arXiv Detail & Related papers (2022-12-12T18:50:49Z)
- How to Learn from Risk: Explicit Risk-Utility Reinforcement Learning for Efficient and Safe Driving Strategies [1.496194593196997]
This paper proposes SafeDQN, which makes the behavior of autonomous vehicles safe and interpretable while remaining efficient.
We show that SafeDQN finds interpretable and safe driving policies for a variety of scenarios and demonstrate how state-of-the-art saliency techniques can help to assess both risk and utility.
arXiv Detail & Related papers (2022-03-16T05:51:22Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a learning framework with safety guarantees for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent; a minimal sketch of the underlying barrier-function idea appears at the end of this document.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition [59.94644674087599]
We propose SAFEty skill pRiors (SAFER), an algorithm that accelerates policy learning on complex control tasks under safety constraints.
Through principled training on an offline dataset, SAFER learns to extract safe primitive skills.
In the inference stage, policies trained with SAFER compose these safe skills into successful task policies.
arXiv Detail & Related papers (2022-02-10T05:43:41Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
- Conservative Safety Critics for Exploration [120.73241848565449]
We study the problem of safe exploration in reinforcement learning (RL).
We learn a conservative safety estimate of environment states through a critic.
We show that the proposed approach achieves competitive task performance while incurring significantly lower catastrophic failure rates; a sketch of such critic-based action filtering follows this list.
arXiv Detail & Related papers (2020-10-27T17:54:25Z)
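The last two entries share one mechanism: a learned critic that scores how likely a state-action pair is to lead to failure, used to veto risky actions during exploration. Below is a minimal, hypothetical sketch of that filtering step; the stub critic, the threshold `eps`, and the sampling scheme are our assumptions, not the papers' implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_safe(state: np.ndarray, action: np.ndarray) -> float:
    """Stub safety critic: estimated probability that (state, action) fails.
    In the papers this is learned (conservatively) from experience."""
    return float(np.clip(np.linalg.norm(action) / 4.0, 0.0, 1.0))

def filtered_action(state, policy_mean, eps=0.1, n_candidates=16, sigma=0.5):
    """Sample candidate actions around the policy output, keep only those the
    critic deems safe enough, and fall back to the least risky candidate."""
    noise = sigma * rng.standard_normal((n_candidates, policy_mean.shape[0]))
    candidates = policy_mean + noise
    risks = np.array([q_safe(state, a) for a in candidates])
    safe = candidates[risks <= eps]
    if len(safe) > 0:
        return safe[rng.integers(len(safe))]   # explore freely among safe options
    return candidates[np.argmin(risks)]        # no safe option: pick least risky

# Usage: gate a 2-D action proposed by some policy in some 4-D state.
action = filtered_action(np.zeros(4), policy_mean=np.array([0.2, 0.1]))
```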
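For the differentiable control barrier function entry above, here is a hypothetical one-dimensional illustration of the plain (non-learned) CBF safety filter that dCBFs build on; the dynamics, barrier choice, and `alpha` are illustrative assumptions.

```python
def cbf_filter(x: float, u_nominal: float, x_obstacle: float, alpha: float = 1.0) -> float:
    """Single integrator x' = u driving toward an obstacle at x_obstacle.
    The barrier h(x) = x_obstacle - x stays nonnegative whenever
    h' >= -alpha * h, i.e. -u >= -alpha * (x_obstacle - x), which bounds the
    forward velocity by u <= alpha * h."""
    h = x_obstacle - x
    u_max = alpha * h              # largest velocity the barrier condition allows
    return min(u_nominal, u_max)   # leave safe commands untouched, clamp the rest

print(cbf_filter(x=0.0, u_nominal=0.5, x_obstacle=5.0))  # 0.5: far away, unchanged
print(cbf_filter(x=4.8, u_nominal=0.5, x_obstacle=5.0))  # 0.2: near, clamped
```

The dCBF paper makes this filtering step differentiable so that gradients flow through it into a vision-based policy; this sketch only shows the safety condition itself.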