Reinforced Imitative Trajectory Planning for Urban Automated Driving
- URL: http://arxiv.org/abs/2410.15607v1
- Date: Mon, 21 Oct 2024 03:04:29 GMT
- Title: Reinforced Imitative Trajectory Planning for Urban Automated Driving
- Authors: Di Zeng, Ling Zheng, Xiantong Yang, Yinong Li
- Abstract summary: This paper proposes a novel RL-based trajectory planning method that integrates RL with imitation learning to enable multi-step planning.
A transformer-based Bayesian reward function is developed, providing effective reward signals for RL in urban scenarios.
The proposed methods were validated on the large-scale real-world urban automated driving nuPlan dataset.
- Score: 3.2436298824947434
- Abstract: Reinforcement learning (RL) faces challenges in trajectory planning for urban automated driving due to the poor convergence of RL and the difficulty in designing reward functions. The convergence problem is alleviated by combining RL with supervised learning. However, most existing approaches only reason one step ahead and lack the capability to plan for multiple future steps. In addition, although inverse reinforcement learning holds promise for solving the reward function design issue, existing methods for automated driving impose a linear structure assumption on reward functions, making them difficult to apply to urban automated driving. In light of these challenges, this paper proposes a novel RL-based trajectory planning method that integrates RL with imitation learning to enable multi-step planning. Furthermore, a transformer-based Bayesian reward function is developed, providing effective reward signals for RL in urban scenarios. Moreover, a hybrid-driven trajectory planning framework is proposed to enhance safety and interpretability. The proposed methods were validated on the large-scale real-world urban automated driving nuPlan dataset. The results demonstrated the significant superiority of the proposed methods over the baselines in terms of closed-loop metrics. The code is available at https://github.com/Zigned/nuplan_zigned.
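The most concrete technical element above is the transformer-based Bayesian reward function. As a minimal sketch of what such a model could look like, assuming per-timestep trajectory features and a Gaussian output head (the class name, dimensions, and uncertainty head are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class BayesianRewardTransformer(nn.Module):
    """Scores a trajectory with a reward distribution (mean + variance)."""
    def __init__(self, feat_dim=32, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)   # per-timestep state/action features
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.mean_head = nn.Linear(d_model, 1)      # reward mean
        self.logvar_head = nn.Linear(d_model, 1)    # log-variance (uncertainty)

    def forward(self, traj):                        # traj: (B, T, feat_dim)
        h = self.encoder(self.embed(traj))          # (B, T, d_model)
        h = h.mean(dim=1)                           # pool over the planning horizon
        return self.mean_head(h), self.logvar_head(h)

def reward_nll(mean, logvar, target):
    """Gaussian negative log-likelihood; lets the model learn its own uncertainty."""
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()
```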
Related papers
- Navigation with QPHIL: Quantizing Planner for Hierarchical Implicit Q-Learning [17.760679318994384]
We present a novel hierarchical transformer-based approach leveraging a learned quantizer of the state space.
This quantization enables the training of a simpler zone-conditioned low-level policy and simplifies planning.
Our proposed approach achieves state-of-the-art results in complex long-distance navigation environments.
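To make the quantization idea concrete: a rough sketch assuming a codebook of 2D zone centers and nearest-neighbor assignment (names and shapes are illustrative, not QPHIL's actual quantizer):

```python
import torch

codebook = torch.randn(128, 2)        # 128 learned zone centers in 2D space

def to_zone(pos):                     # pos: (B, 2) agent positions
    d = torch.cdist(pos, codebook)    # distance from each position to every zone center
    return d.argmin(dim=1)            # nearest-zone token per position
```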
arXiv Detail & Related papers (2024-11-12T12:49:41Z)
- ReGentS: Real-World Safety-Critical Driving Scenario Generation Made Stable [88.08120417169971]
Machine learning based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data.
This work explores generating safety-critical driving scenarios by modifying complex real-world regular scenarios through trajectory optimization.
Our approach addresses unrealistic diverging trajectories and unavoidable collision scenarios that are not useful for training a robust planner.
arXiv Detail & Related papers (2024-09-12T08:26:33Z)
- Let Hybrid A* Path Planner Obey Traffic Rules: A Deep Reinforcement Learning-Based Planning Framework [0.0]
In this work, we combine low-level algorithms such as the hybrid A* path planning with deep reinforcement learning (DRL) to make high-level decisions.
The hybrid A* planner is able to generate a collision-free trajectory to be executed by a model predictive controller (MPC).
In addition, the DRL algorithm is able to keep the lane change command consistent within a chosen time-period.
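One simple way to realize the command-consistency behavior described above is to latch the DRL command for a fixed window; a sketch under that assumption (the wrapper and horizon are illustrative, not the paper's actual mechanism):

```python
class LatchedLaneChange:
    """Holds a discrete lane-change command for a fixed number of steps."""
    def __init__(self, hold_steps=20):
        self.hold_steps = hold_steps
        self.current = "keep_lane"
        self.remaining = 0

    def update(self, drl_command):
        if self.remaining > 0:              # a latched command is still active
            self.remaining -= 1
        elif drl_command != self.current:   # accept a new command, restart the timer
            self.current = drl_command
            self.remaining = self.hold_steps
        return self.current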
```

arXiv Detail & Related papers (2024-07-01T12:00:10Z)
- LLM-Assist: Enhancing Closed-Loop Planning with Language-Based Reasoning [65.86754998249224]
We develop a novel hybrid planner that leverages a conventional rule-based planner in conjunction with an LLM-based planner.
Our approach navigates complex scenarios which existing planners struggle with, produces well-reasoned outputs while also remaining grounded through working alongside the rule-based approach.
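A rough sketch of one plausible reading of this hybrid arrangement, in which the system defaults to the rule-based planner and defers to the LLM when confidence is low; both planners, the confidence signal, and the threshold are hypothetical stand-ins:

```python
def hybrid_plan(scene, rule_planner, llm_planner, threshold=0.5):
    traj, confidence = rule_planner(scene)     # conventional rule-based proposal
    if confidence >= threshold:
        return traj                            # stay grounded in the rule-based output
    return llm_planner(scene, hint=traj)       # defer hard scenarios to the LLM
```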
arXiv Detail & Related papers (2023-12-30T02:53:45Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
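As an illustrative stand-in for the paper's adaptive scheme (which is learned), a fixed k-means codebook over the dataset's continuous actions shows the basic quantize/dequantize interface:

```python
import numpy as np
from sklearn.cluster import KMeans

actions = np.random.uniform(-1, 1, size=(10000, 2))   # placeholder dataset actions
codebook = KMeans(n_clusters=64, n_init=10).fit(actions)

def quantize(a):
    """Map a continuous action to its nearest discrete code index."""
    return int(codebook.predict(a.reshape(1, -1))[0])

def dequantize(idx):
    """Map a discrete code index back to a representative continuous action."""
    return codebook.cluster_centers_[idx]
```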
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- Action and Trajectory Planning for Urban Autonomous Driving with Hierarchical Reinforcement Learning [1.3397650653650457]
We propose an action and trajectory planner based on the Hierarchical Reinforcement Learning (atHRL) method.
We empirically verify the efficacy of atHRL through extensive experiments in complex urban driving scenarios.
arXiv Detail & Related papers (2023-06-28T07:11:02Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful for improving the given planner.
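The core mechanism reads as latent-space adversarial optimization; a rough sketch under that reading, where `traffic_model` and `planner_cost` are hypothetical stand-ins for the learned VAE and the planner's failure cost:

```python
import torch

def adversarial_scenario(traffic_model, planner_cost, steps=200, beta=0.1):
    z = torch.randn(1, 64, requires_grad=True)   # latent code of a traffic scenario
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        scenario = traffic_model.decode(z)       # trajectories of the other agents
        # Drive the planner's cost up while a prior penalty keeps z (and hence
        # the decoded scenario) plausible under the learned traffic model.
        loss = -planner_cost(scenario) + beta * z.pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return traffic_model.decode(z).detach()
```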
arXiv Detail & Related papers (2021-12-09T18:03:27Z)
- Trajectory Planning for Autonomous Vehicles Using Hierarchical Reinforcement Learning [21.500697097095408]
Planning safe trajectories under uncertain and dynamic conditions makes the autonomous driving problem significantly complex.
Current sampling-based methods such as Rapidly-exploring Random Trees (RRTs) are not ideal for this problem because of their high computational cost.
We propose a Hierarchical Reinforcement Learning structure combined with a Proportional-Integral-Derivative (PID) controller for trajectory planning.
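The low-level tracking component is a standard PID loop; a minimal sketch, with gains and the tracked quantity chosen purely for illustration:

```python
class PID:
    """Discrete-time PID controller tracking a target value."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g., steering = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.1).control(lane_ref, lane_offset)
```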
arXiv Detail & Related papers (2020-11-09T20:49:54Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in optimal yet physically feasible robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
- Integrating Deep Reinforcement Learning with Model-based Path Planners for Automated Driving [0.0]
We propose a hybrid approach that integrates a path planning pipeline into a vision-based DRL framework.
In summary, the DRL agent is trained to follow the path planner's waypoints as closely as possible.
Experimental results show that the proposed method can plan its path and navigate between randomly chosen origin-destination points.
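A plausible shape for the waypoint-following training signal, assuming a distance plus heading-alignment penalty (the exact terms are assumptions, not the paper's reward):

```python
import numpy as np

def waypoint_reward(agent_pos, waypoint, heading, waypoint_heading):
    dist = np.linalg.norm(agent_pos - waypoint)   # gap to the planner's waypoint
    # Wrap the heading error into [-pi, pi] before penalizing misalignment.
    heading_err = abs((heading - waypoint_heading + np.pi) % (2 * np.pi) - np.pi)
    return -dist - 0.5 * heading_err              # closer and better aligned is better
```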
arXiv Detail & Related papers (2020-02-02T17:10:19Z)