Reinforcement Learning for Autonomous Driving with Latent State
Inference and Spatial-Temporal Relationships
- URL: http://arxiv.org/abs/2011.04251v2
- Date: Wed, 24 Mar 2021 17:33:45 GMT
- Title: Reinforcement Learning for Autonomous Driving with Latent State
Inference and Spatial-Temporal Relationships
- Authors: Xiaobai Ma, Jiachen Li, Mykel J. Kochenderfer, David Isele, Kikuo
Fujimura
- Abstract summary: We show that explicitly inferring the latent state and encoding spatial-temporal relationships in a reinforcement learning framework can help address this difficulty.
We encode prior knowledge on the latent states of other drivers through a framework that combines the reinforcement learner with a supervised learner.
The proposed framework significantly improves performance in the context of navigating T-intersections compared with state-of-the-art baseline approaches.
- Score: 46.965260791099986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (DRL) provides a promising way for learning
navigation in complex autonomous driving scenarios. However, identifying the
subtle cues that can indicate drastically different outcomes remains an open
problem in designing autonomous systems that operate in human environments.
In this work, we show that explicitly inferring the latent state and encoding
spatial-temporal relationships in a reinforcement learning framework can help
address this difficulty. We encode prior knowledge on the latent states of
other drivers through a framework that combines the reinforcement learner with
a supervised learner. In addition, we model the influence passing between
different vehicles through graph neural networks (GNNs). The proposed framework
significantly improves performance in the context of navigating T-intersections
compared with state-of-the-art baseline approaches.
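The abstract describes a policy network in which per-vehicle features are exchanged through GNN message passing, a supervised head predicts each driver's latent state, and an RL head outputs the ego action. A minimal NumPy sketch of that shape is below; all dimensions, weight names, and the single message-passing round are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 vehicles, 6-d observations, 2 latent driver
# styles (e.g. aggressive vs. conservative), 3 discrete ego actions.
N_VEH, OBS_DIM, HID, N_LATENT, N_ACT = 4, 6, 8, 2, 3

# Random weights stand in for learned parameters.
W_msg = rng.normal(size=(OBS_DIM, HID))        # message function
W_upd = rng.normal(size=(OBS_DIM + HID, HID))  # node update function
W_lat = rng.normal(size=(HID, N_LATENT))       # supervised latent-state head
W_pi = rng.normal(size=(HID, N_ACT))           # RL policy head (ego vehicle)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(obs, adj):
    """One GNN message-passing round, then latent and policy heads.

    obs: (N_VEH, OBS_DIM) per-vehicle observed features.
    adj: (N_VEH, N_VEH) adjacency encoding which vehicles influence which.
    """
    msgs = np.tanh(obs @ W_msg)   # per-vehicle messages
    agg = adj @ msgs              # aggregate messages from neighbours
    h = np.tanh(np.concatenate([obs, agg], axis=-1) @ W_upd)
    latent_probs = softmax(h @ W_lat)  # supervised head: per-driver latent state
    policy = softmax(h[0] @ W_pi)      # RL head: ego (node 0) action distribution
    return latent_probs, policy

obs = rng.normal(size=(N_VEH, OBS_DIM))
adj = np.ones((N_VEH, N_VEH)) - np.eye(N_VEH)  # fully connected, no self-loops
latent_probs, policy = forward(obs, adj)
print(latent_probs.shape, policy.shape)  # (4, 2) (3,)
```

In training, the latent head would receive a supervised loss against labeled (or simulated) driver intentions while the policy head is optimized with the RL objective, so both losses shape the shared GNN encoding.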
Related papers
- Deep Attention Driven Reinforcement Learning (DAD-RL) for Autonomous Decision-Making in Dynamic Environment [2.3575550107698016]
We introduce an AV-centric spatio-temporal attention encoding (STAE) mechanism for learning dynamic interactions with different surrounding vehicles.
To understand map and route context, we employ a context encoder to extract context maps.
The resulting model is trained using the Soft Actor Critic (SAC) algorithm.
arXiv Detail & Related papers (2024-07-12T02:34:44Z) - Interactive Autonomous Navigation with Internal State Inference and
Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard deep reinforcement learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and
Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - GINK: Graph-based Interaction-aware Kinodynamic Planning via
Reinforcement Learning for Autonomous Driving [10.782043595405831]
There are many challenges in applying deep reinforcement learning (DRL) to autonomous driving in a structured environment such as an urban area.
In this paper, we suggest a new framework that effectively combines graph-based intention representation and reinforcement learning for dynamic planning.
The experiments show the state-of-the-art performance of our approach compared to the existing baselines.
arXiv Detail & Related papers (2022-06-03T10:37:25Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Improving Robustness of Learning-based Autonomous Steering Using
Adversarial Images [58.287120077778205]
We introduce a framework for analyzing robustness of the learning algorithm w.r.t varying quality in the image input for autonomous driving.
Using the results of sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z) - Affordance-based Reinforcement Learning for Urban Driving [3.507764811554557]
We propose a deep reinforcement learning framework to learn optimal control policy using waypoints and low-dimensional visual representations.
We demonstrate that our agents, when trained from scratch, learn the tasks of lane-following, driving around intersections, and stopping in front of other actors or traffic lights, even in dense traffic settings.
arXiv Detail & Related papers (2021-01-15T05:21:25Z) - Interpretable End-to-end Urban Autonomous Driving with Latent Deep
Reinforcement Learning [32.97789225998642]
We propose an interpretable deep reinforcement learning method for end-to-end autonomous driving.
A sequential latent environment model is introduced and learned jointly with the reinforcement learning process.
Our method is able to provide a better explanation of how the car reasons about the driving environment.
arXiv Detail & Related papers (2020-01-23T18:36:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.