LiloDriver: A Lifelong Learning Framework for Closed-loop Motion Planning in Long-tail Autonomous Driving Scenarios
- URL: http://arxiv.org/abs/2505.17209v1
- Date: Thu, 22 May 2025 18:33:08 GMT
- Title: LiloDriver: A Lifelong Learning Framework for Closed-loop Motion Planning in Long-tail Autonomous Driving Scenarios
- Authors: Huaiyuan Yao, Pengfei Li, Bu Jin, Yupeng Zheng, An Liu, Lisen Mu, Qing Su, Qian Zhang, Yilun Chen, Peng Li
- Abstract summary: LiloDriver is a lifelong learning framework for closed-loop motion planning in long-tail autonomous driving scenarios. It features a four-stage architecture including perception, scene encoding, memory-based strategy refinement, and LLM-guided reasoning. Our results highlight the effectiveness of combining structured memory and LLM reasoning to enable scalable, human-like motion planning in real-world autonomous driving.
- Score: 23.913788819453796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in autonomous driving research point towards motion planners that are robust, safe, and adaptive. However, existing rule-based and data-driven planners lack adaptability to long-tail scenarios, while knowledge-driven methods offer strong reasoning but face challenges in representation, control, and real-world evaluation. To address these challenges, we present LiloDriver, a lifelong learning framework for closed-loop motion planning in long-tail autonomous driving scenarios. By integrating large language models (LLMs) with a memory-augmented planner generation system, LiloDriver continuously adapts to new scenarios without retraining. It features a four-stage architecture including perception, scene encoding, memory-based strategy refinement, and LLM-guided reasoning. Evaluated on the nuPlan benchmark, LiloDriver achieves superior performance in both common and rare driving scenarios, outperforming static rule-based and learning-based planners. Our results highlight the effectiveness of combining structured memory and LLM reasoning to enable scalable, human-like motion planning in real-world autonomous driving. Our code is available at https://github.com/Hyan-Yao/LiloDriver.
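To make the four-stage pipeline described in the abstract more concrete, here is a minimal, hypothetical Python sketch of one closed-loop planning step. Every class, method, and parameter name in it (LiloDriverSketch, plan_step, retrieve, choose_strategy, and so on) is an illustrative assumption rather than the actual LiloDriver API; the real implementation is in the linked repository.

```python
# A minimal sketch of the four-stage loop described in the abstract. All class,
# method, and parameter names are hypothetical illustrations, not the actual
# LiloDriver API -- see https://github.com/Hyan-Yao/LiloDriver for the real code.

class LiloDriverSketch:
    """Perception -> scene encoding -> memory-based strategy refinement -> LLM-guided reasoning."""

    def __init__(self, perception, scene_encoder, memory, llm, planner_bank):
        self.perception = perception        # stage 1: detects agents, map, ego state
        self.scene_encoder = scene_encoder  # stage 2: scene -> embedding vector
        self.memory = memory                # experience store of (embedding, strategy) pairs
        self.llm = llm                      # stage 4: LLM used for strategy reasoning
        self.planner_bank = planner_bank    # candidate planner strategies/configurations

    def plan_step(self, raw_observation):
        scene = self.perception(raw_observation)   # stage 1: perception
        embedding = self.scene_encoder(scene)      # stage 2: scene encoding

        # Stage 3: memory-based strategy refinement -- retrieve experience from
        # similar (possibly long-tail) scenarios seen earlier in the vehicle's lifetime.
        neighbors = self.memory.retrieve(embedding, k=5)

        # Stage 4: LLM-guided reasoning -- the LLM picks or refines a planner
        # strategy from the bank, conditioned on the scene and retrieved cases.
        strategy = self.llm.choose_strategy(scene, neighbors, self.planner_bank)
        trajectory = strategy.plan(scene)

        # Lifelong adaptation: store the new experience instead of retraining weights.
        self.memory.insert(embedding, strategy)
        return trajectory
```

The design point this sketch tries to echo from the abstract is that adaptation happens through memory insertion and retrieval rather than weight updates, which is what lets the planner absorb new long-tail scenarios without retraining.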
Related papers
- Plan-R1: Safe and Feasible Trajectory Planning as Language Modeling [75.83583076519311]
Plan-R1 is a novel two-stage trajectory planning framework that formulates trajectory planning as a sequential prediction task. In the first stage, we train an autoregressive trajectory predictor via next motion token prediction on expert data. In the second stage, we design rule-based rewards (e.g., collision avoidance, speed limits) and fine-tune the model using Group Relative Policy Optimization.
arXiv Detail & Related papers (2025-05-23T09:22:19Z) - Diffusion-Based Planning for Autonomous Driving with Flexible Guidance [19.204115959760788]
We propose a novel transformer-based Diffusion Planner for closed-loop planning. Our model supports joint modeling of both prediction and planning tasks. It achieves state-of-the-art closed-loop performance with robust transferability in diverse driving styles.
arXiv Detail & Related papers (2025-01-26T15:49:50Z) - Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised learning approach using the LAtent World model (LAW) for end-to-end driving. LAW predicts future scene features based on current features and ego trajectories. This self-supervised task can be seamlessly integrated into perception-free and perception-based frameworks.
arXiv Detail & Related papers (2024-06-12T17:59:21Z) - Can Vehicle Motion Planning Generalize to Realistic Long-tail Scenarios? [11.917542484123134]
Real-world autonomous driving systems must make safe decisions in the face of rare and diverse traffic scenarios.
Current state-of-the-art planners are mostly evaluated on real-world datasets like nuScenes (open-loop) or nuPlan (closed-loop).
arXiv Detail & Related papers (2024-04-11T08:57:48Z) - VLP: Vision Language Planning for Autonomous Driving [52.640371249017335]
This paper presents a novel Vision-Language-Planning framework that exploits language models to bridge the gap between linguistic understanding and autonomous driving.
It achieves state-of-the-art end-to-end planning performance on the NuScenes dataset, with 35.9% and 60.5% reductions in average L2 error and collision rate, respectively.
arXiv Detail & Related papers (2024-01-10T23:00:40Z) - LLM-Assist: Enhancing Closed-Loop Planning with Language-Based Reasoning [65.86754998249224]
We develop a novel hybrid planner that leverages a conventional rule-based planner in conjunction with an LLM-based planner.
Our approach navigates complex scenarios that existing planners struggle with, producing well-reasoned outputs while remaining grounded by working alongside the rule-based approach.
arXiv Detail & Related papers (2023-12-30T02:53:45Z) - GPT-Driver: Learning to Drive with GPT [47.14350537515685]
We present a simple yet effective approach that can transform the OpenAI GPT-3.5 model into a reliable motion planner for autonomous vehicles.
We capitalize on the strong reasoning capabilities and generalization potential inherent to Large Language Models (LLMs).
We evaluate our approach on the large-scale nuScenes dataset, and extensive experiments substantiate the effectiveness, generalization ability, and interpretability of our GPT-based motion planner.
arXiv Detail & Related papers (2023-10-02T17:59:57Z) - Interpretable Motion Planner for Urban Driving via Hierarchical Imitation Learning [5.280496662905411]
We introduce a hierarchical planning architecture including a high-level grid-based behavior planner and a low-level trajectory planner.
While the high-level planner is responsible for finding a consistent route, the low-level planner generates a feasible trajectory.
We evaluate our method both in closed-loop simulation and in real-world driving, and demonstrate that the neural network planner has outstanding performance.
arXiv Detail & Related papers (2023-03-24T13:18:40Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
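The entry above describes a model-free deep reinforcement learning planner whose network directly outputs acceleration and steering angle, with a tiny network for on-board deployment. Below is a minimal, hypothetical PyTorch sketch of what such a compact policy head could look like; the layer sizes, activations, and normalization choices are assumptions for illustration, not details taken from the cited paper.

```python
# Illustrative sketch only: a tiny policy network mapping an observation vector
# to an (acceleration, steering angle) command, as described in the entry above.
# Architecture and sizes are assumptions, not taken from the cited paper.
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    def __init__(self, obs_dim: int = 64, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # outputs: [acceleration, steering angle]
            nn.Tanh(),                 # squash to [-1, 1]; rescale to physical units downstream
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Usage: one observation in, one normalized (acceleration, steering) command out.
policy = TinyDrivingPolicy()
action = policy(torch.randn(1, 64))
print(action.shape)  # torch.Size([1, 2])
```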