Enhancing Navigational Safety in Crowded Environments using
Semantic-Deep-Reinforcement-Learning-based Navigation
- URL: http://arxiv.org/abs/2109.11288v1
- Date: Thu, 23 Sep 2021 10:50:47 GMT
- Title: Enhancing Navigational Safety in Crowded Environments using
Semantic-Deep-Reinforcement-Learning-based Navigation
- Authors: Linh Kästner, Junhui Li, Zhengcheng Shen, and Jens Lambrecht
- Abstract summary: We propose a semantic Deep-reinforcement-learning-based navigation approach that teaches object-specific safety rules by considering high-level obstacle information.
We demonstrate that the agent could learn to navigate more safely by keeping an individual safety distance dependent on the semantic information.
- Score: 5.706538676509249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent navigation among social crowds is an essential aspect of mobile
robotics for applications such as delivery, health care, or assistance. Deep
Reinforcement Learning emerged as an alternative planning method to
conservative approaches and promises more efficient and flexible navigation.
However, in highly dynamic environments containing different obstacle
classes, safe navigation still presents a grand challenge. In this paper, we
propose a semantic Deep-reinforcement-learning-based navigation approach that
teaches object-specific safety rules by considering high-level obstacle
information. In particular, the agent learns object-specific behavior by
considering the specific danger zones to enhance safety around vulnerable
object classes. We tested the approach against a benchmark obstacle avoidance
approach and found an increase in safety. Furthermore, we demonstrate that the
agent could learn to navigate more safely by keeping an individual safety
distance dependent on the semantic information.
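To make the idea concrete, the reward shaping described above can be pictured as a penalty for intruding into class-dependent danger zones. The sketch below is not the authors' implementation; the class names, distance values, and linear penalty shape are illustrative assumptions:

```python
# Hypothetical per-class safety distances (metres); the paper learns
# object-specific behavior, but these exact values are illustrative only.
SAFETY_DISTANCE = {
    "adult": 1.0,
    "child": 1.5,    # vulnerable classes get larger danger zones
    "robot": 0.5,
    "static": 0.3,
}

def semantic_safety_penalty(obstacles, weight=10.0):
    """Reward term that penalizes intrusions into class-dependent danger zones.

    obstacles: list of (semantic_class, distance_to_robot) tuples.
    Returns a non-positive value to be added to the RL reward.
    """
    penalty = 0.0
    for cls, dist in obstacles:
        d_safe = SAFETY_DISTANCE.get(cls, 0.5)
        if dist < d_safe:
            # Penalty grows linearly the deeper the robot intrudes
            # into this obstacle class's danger zone.
            penalty -= weight * (d_safe - dist) / d_safe
    return penalty

# Example: a child at 0.8 m violates its 1.5 m zone; an adult at 1.2 m does not.
print(semantic_safety_penalty([("child", 0.8), ("adult", 1.2)]))  # ~ -4.67
```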
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
However, their potential to cause physical threats and harm in real-world applications remains unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- Evaluation of Safety Constraints in Autonomous Navigation with Deep Reinforcement Learning [62.997667081978825]
We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy is able to generate trajectories with more clearance (distance to the obstacles) and makes fewer collisions during training, without sacrificing overall performance.
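As an illustration of what such a constraint can look like, the sketch below defines a clearance-based cost signal of the kind used in constrained RL; the threshold value and function name are hypothetical, not taken from the paper:

```python
def clearance_cost(obstacle_distances, min_clearance=0.5):
    """Cost signal for constrained RL: counts clearance violations.

    obstacle_distances: distances (metres) from the robot to nearby
    obstacles at the current step. A constrained optimizer (e.g. a
    Lagrangian method) keeps the expected sum of this cost under a
    budget; an unconstrained ("unsafe") policy simply ignores it.
    """
    return float(sum(d < min_clearance for d in obstacle_distances))
```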
arXiv Detail & Related papers (2023-07-27T01:04:57Z)
- Holistic Deep-Reinforcement-Learning-based Training of Autonomous Navigation Systems [4.409836695738518]
Deep Reinforcement Learning emerged as a promising approach for autonomous navigation of ground vehicles.
In this paper, we propose a holistic Deep Reinforcement Learning training approach involving all entities of the navigation stack.
arXiv Detail & Related papers (2023-02-06T16:52:15Z)
- Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding [18.54615448101203]
We present a novel approach to safety-aware deep reinforcement learning in high-dimensional environments called latent shielding.
Latent shielding leverages internal representations of the environment learnt by model-based agents to "imagine" future trajectories and avoid those deemed unsafe.
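A minimal sketch of that mechanism, assuming a hypothetical world-model API (latent_step, is_unsafe) and simplifying by repeating one action over the imagined horizon:

```python
def shielded_action(policy, model, z, candidate_actions, horizon=5):
    """Choose an action whose imagined latent rollout stays safe.

    z: current latent state from the agent's world-model encoder.
    model.latent_step(z, a): imagined next latent state (assumed API).
    model.is_unsafe(z): safety predicate on latent states (assumed API).
    """
    # Try actions from most to least preferred by the policy.
    for a in sorted(candidate_actions, key=policy.score, reverse=True):
        z_img, safe = z, True
        for _ in range(horizon):
            z_img = model.latent_step(z_img, a)  # "imagine" one step ahead
            if model.is_unsafe(z_img):
                safe = False
                break
        if safe:
            return a
    return policy.fallback_action()  # no action passed the shield
```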
arXiv Detail & Related papers (2021-12-21T19:11:34Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider both value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
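Such a verification strategy can be pictured as property checking over sampled states; the example property below is invented for illustration and is not one of the paper's actual properties:

```python
def check_property(policy, sampled_states, property_fn):
    """Return counterexample states where the trained policy violates a property."""
    return [s for s in sampled_states if not property_fn(s, policy.act(s))]

# Illustrative property: with an obstacle within 0.3 m ahead, the chosen
# action must not command forward motion (field names are hypothetical).
def no_forward_into_obstacle(state, action):
    return not (state.front_distance < 0.3 and action.linear_velocity > 0)
```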
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation [145.84123197129298]
Language instructions play an essential role in natural-language-grounded navigation tasks.
We exploit adversarially perturbed instructions to train a more robust navigator that is capable of dynamically extracting crucial factors from long instructions.
Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to mislead the navigator into moving to the wrong target.
arXiv Detail & Related papers (2021-07-23T14:11:31Z)
- Connecting Deep-Reinforcement-Learning-based Obstacle Avoidance with Conventional Global Planners using Waypoint Generators [1.4680035572775534]
Deep Reinforcement Learning has emerged as an efficient method for dynamic obstacle avoidance in highly dynamic environments.
However, its integration into existing navigation systems is still an open frontier due to the myopic nature of Deep-Reinforcement-Learning-based navigation.
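The connecting layer can be pictured as a waypoint generator that subsamples the global planner's path and hands the DRL local planner one intermediate goal at a time. A minimal sketch, with an assumed spacing parameter:

```python
import math

def generate_waypoints(global_path, spacing=2.0):
    """Subsample a global path into waypoints for a DRL local planner.

    global_path: list of (x, y) poses from a conventional global planner.
    spacing: approximate distance (metres) between consecutive waypoints;
    the value here is an assumption, not taken from the paper.
    """
    waypoints, travelled = [global_path[0]], 0.0
    for (x0, y0), (x1, y1) in zip(global_path, global_path[1:]):
        travelled += math.hypot(x1 - x0, y1 - y0)
        if travelled >= spacing:
            waypoints.append((x1, y1))
            travelled = 0.0
    if waypoints[-1] != global_path[-1]:
        waypoints.append(global_path[-1])  # always keep the final goal
    return waypoints

# The DRL agent then treats each waypoint as a short-horizon goal,
# which mitigates its myopic, local-minima-prone behaviour.
```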
arXiv Detail & Related papers (2021-04-08T10:23:23Z)
- Towards Deployment of Deep-Reinforcement-Learning-Based Obstacle Avoidance into Conventional Autonomous Navigation Systems [10.349425078806751]
Deep reinforcement learning emerged as an alternative planning method to replace overly conservative approaches.
However, deep reinforcement learning approaches are not suitable for long-range navigation due to their proneness to local minima.
In this paper, we propose a navigation system incorporating deep-reinforcement-learning-based local planners into conventional navigation stacks for long-range navigation.
arXiv Detail & Related papers (2021-04-08T08:56:53Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
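That learned intuition is commonly a safety critic Q_safe(s, a) estimating the probability of eventual failure; at transfer time, risky actions are filtered out. A sketch under that assumption (the threshold and names are illustrative, not the paper's API):

```python
def safe_action(policy, q_safe, state, candidate_actions, eps=0.1):
    """Constrain a new task's policy with a pretrained safety critic.

    q_safe(state, action): estimated probability of eventual failure,
    learned on earlier tasks. Actions riskier than eps are filtered out.
    """
    safe_set = [a for a in candidate_actions if q_safe(state, a) <= eps]
    if not safe_set:
        # Fall back to the least risky action if none meets the threshold.
        return min(candidate_actions, key=lambda a: q_safe(state, a))
    return max(safe_set, key=lambda a: policy.score(state, a))
```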
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
- Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent carries out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
- Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks [0.0]
Safety concerns are the main reason why deep learning is not yet used at large scale for autonomous agents.
Deep learning approaches typically exhibit black-box behavior, which makes them hard to evaluate with respect to safety-critical aspects.
We present extensive discussions of possible mitigation methods and give an outlook on which mitigation methods are still missing in order to support a safety argument for a deep learning method.
arXiv Detail & Related papers (2020-01-22T13:22:59Z)