EnsembleFollower: A Hybrid Car-Following Framework Based On
Reinforcement Learning and Hierarchical Planning
- URL: http://arxiv.org/abs/2308.16008v1
- Date: Wed, 30 Aug 2023 12:55:02 GMT
- Authors: Xu Han, Xianda Chen, Meixin Zhu, Pinlong Cai, Jianshan Zhou, Xiaowen
Chu
- Abstract summary: We propose a hierarchical planning framework for achieving advanced human-like car-following.
The EnsembleFollower framework involves a high-level Reinforcement Learning-based agent responsible for judiciously managing multiple low-level car-following models.
We evaluate the proposed method based on real-world driving data from the HighD dataset.
- Score: 22.63087292154406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Car-following models have made significant contributions to our understanding
of longitudinal driving behavior. However, they often exhibit limited accuracy
and flexibility: they cannot fully capture the complexity inherent in
car-following processes, and they may falter in unseen scenarios because they
rely on the confined driving skills present in their training data. Notably,
each car-following model has its own strengths and weaknesses depending on the
driving scenario. We therefore propose
EnsembleFollower, a hierarchical planning framework for achieving advanced
human-like car-following. The EnsembleFollower framework involves a high-level
Reinforcement Learning-based agent responsible for judiciously managing
multiple low-level car-following models according to the current state, either
by selecting an appropriate low-level model to perform an action or by
allocating different weights across all low-level components. Moreover, we
propose a jerk-constrained kinematic model for more convincing car-following
simulations. We evaluate the proposed method based on real-world driving data
from the HighD dataset. The experimental results show that EnsembleFollower
reproduces human-like behavior more accurately and combines its constituent
models effectively, demonstrating that the proposed framework can handle
diverse car-following conditions by leveraging the strengths of various
low-level models.
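The abstract's weighting mode can be sketched in a few lines: a high-level policy emits weights over low-level car-following models, the weighted acceleration is blended, and a jerk-constrained kinematic step applies it. This is a minimal illustration, not the paper's implementation: the two low-level models (an Intelligent Driver Model and a simple proportional gap regulator), all parameter values, and the jerk limit `j_max` are assumed for the example.

```python
import math

def idm_accel(v, dv, s, v0=33.3, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model: one candidate low-level model.
    v: ego speed, dv: closing speed (ego - leader), s: gap (all SI units)."""
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (s_star / max(s, 0.1)) ** 2)

def gap_keeping_accel(v, dv, s, k_s=0.2, k_v=0.6, s_des=20.0):
    """Proportional gap/speed regulator: a second candidate model (illustrative)."""
    return k_s * (s - s_des) - k_v * dv

def jerk_constrained_step(x, v, a_prev, a_cmd, dt=0.1, j_max=2.0):
    """Kinematic update with the commanded acceleration rate-limited by jerk."""
    a = min(max(a_cmd, a_prev - j_max * dt), a_prev + j_max * dt)
    v_next = max(v + a * dt, 0.0)          # no reversing in car-following
    x_next = x + v * dt + 0.5 * a * dt ** 2
    return x_next, v_next, a

def ensemble_accel(weights, state):
    """Blend the low-level models with the high-level agent's weights."""
    v, dv, s = state
    candidates = [idm_accel(v, dv, s), gap_keeping_accel(v, dv, s)]
    return sum(w * c for w, c in zip(weights, candidates))

# One simulation step; the weights here stand in for the RL agent's output.
state = (25.0, 1.0, 30.0)                  # ego speed, closing speed, gap
a_cmd = ensemble_accel([0.8, 0.2], state)  # IDM-dominated blend
x, v, a = jerk_constrained_step(0.0, 25.0, 0.0, a_cmd)
```

In the selection mode described in the abstract, the weight vector would instead be one-hot, picking a single low-level model; the jerk limiter keeps either mode's acceleration profile physically plausible across switches.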
Related papers
- TeLL-Drive: Enhancing Autonomous Driving with Teacher LLM-Guided Deep Reinforcement Learning [61.33599727106222]
TeLL-Drive is a hybrid framework that integrates a Teacher LLM to guide an attention-based Student DRL policy.
A self-attention mechanism then fuses these strategies with the DRL agent's exploration, accelerating policy convergence and boosting robustness.
arXiv Detail & Related papers (2025-02-03T14:22:03Z)
- Diffusion-Based Planning for Autonomous Driving with Flexible Guidance [19.204115959760788]
We propose a novel transformer-based Diffusion Planner for closed-loop planning.
Our model supports joint modeling of both prediction and planning tasks.
It achieves state-of-the-art closed-loop performance with robust transferability in diverse driving styles.
arXiv Detail & Related papers (2025-01-26T15:49:50Z)
- DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers [61.92571851411509]
We introduce a multimodal driving language based on interleaved image and action tokens, and develop DrivingGPT to learn joint world modeling and planning.
Our DrivingGPT demonstrates strong performance in both action-conditioned video generation and end-to-end planning, outperforming strong baselines on large-scale nuPlan and NAVSIM benchmarks.
arXiv Detail & Related papers (2024-12-24T18:59:37Z)
- DrivingDojo Dataset: Advancing Interactive and Knowledge-Enriched Driving World Model [65.43473733967038]
We introduce DrivingDojo, the first dataset tailor-made for training interactive world models with complex driving dynamics.
Our dataset features video clips with a complete set of driving maneuvers, diverse multi-agent interplay, and rich open-world driving knowledge.
arXiv Detail & Related papers (2024-10-14T17:19:23Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- EditFollower: Tunable Car Following Models for Customizable Adaptive Cruise Control Systems [28.263763430300504]
We propose a data-driven car-following model that allows for adjusting driving discourtesy levels.
Our model provides valuable insights for the development of ACC systems that take into account drivers' social preferences.
arXiv Detail & Related papers (2024-06-23T15:04:07Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have uses in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
- Formulation and validation of a car-following model based on deep reinforcement learning [0.0]
We propose and validate a novel car-following model based on deep reinforcement learning.
Our model is trained to maximize externally given reward functions for the free and car-following regimes.
The parameters of these reward functions resemble those of traditional models such as the Intelligent Driver Model.
arXiv Detail & Related papers (2021-09-29T08:27:12Z)
- Solution Concepts in Hierarchical Games under Bounded Rationality with Applications to Autonomous Driving [8.500525426182115]
We create game theoretic models of driving behaviour using hierarchical games.
We evaluate the behaviour models on the basis of model fit to naturalistic data, as well as their predictive capacity.
Our results suggest that, at the level of maneuvers, an adaptation of the Quantal level-k model with level-0 behaviour modelled as pure rule-following provides the best fit to naturalistic driving behaviour among the models evaluated.
arXiv Detail & Related papers (2020-09-21T17:13:50Z)