DriveDPO: Policy Learning via Safety DPO For End-to-End Autonomous Driving
- URL: http://arxiv.org/abs/2509.17940v1
- Date: Mon, 22 Sep 2025 16:01:11 GMT
- Title: DriveDPO: Policy Learning via Safety DPO For End-to-End Autonomous Driving
- Authors: Shuyao Shang, Yuntao Chen, Yuqi Wang, Yingyan Li, Zhaoxiang Zhang
- Abstract summary: DriveDPO is a Safety Direct Preference Optimization policy learning framework. We distill a unified policy distribution from human imitation similarity and rule-based safety scores for direct policy optimization. Experiments on the NAVSIM benchmark demonstrate that DriveDPO achieves a new state-of-the-art PDMS of 90.0.
- Score: 31.336758241051374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: End-to-end autonomous driving has substantially progressed by directly predicting future trajectories from raw perception inputs, bypassing traditional modular pipelines. However, mainstream methods trained via imitation learning suffer from critical safety limitations, as they fail to distinguish between trajectories that appear human-like but are potentially unsafe. Some recent approaches attempt to address this by regressing multiple rule-driven scores, but they decouple supervision from policy optimization, resulting in suboptimal performance. To tackle these challenges, we propose DriveDPO, a Safety Direct Preference Optimization policy learning framework. First, we distill a unified policy distribution from human imitation similarity and rule-based safety scores for direct policy optimization. Further, we introduce an iterative Direct Preference Optimization stage formulated as trajectory-level preference alignment. Extensive experiments on the NAVSIM benchmark demonstrate that DriveDPO achieves a new state-of-the-art PDMS of 90.0. Furthermore, qualitative results across diverse challenging scenarios highlight DriveDPO's ability to produce safer and more reliable driving behaviors.
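Since the abstract describes the method only at a high level, here is a minimal PyTorch sketch of what a trajectory-level DPO objective of this kind looks like. The pairing of trajectories from a unified imitation-plus-safety score, and the temperature `beta`, are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def build_preference_pair(scores):
    """Pick (preferred, rejected) trajectory indices from a unified score.

    The paper distills imitation similarity and rule-based safety into a
    unified distribution; this argmax/argmin pairing is only an illustration.
    """
    return scores.argmax(dim=-1), scores.argmin(dim=-1)

def trajectory_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Trajectory-level DPO loss (w = preferred, l = rejected).

    logp_*     : summed log-probability of each trajectory under the policy
    ref_logp_* : the same quantities under a frozen reference policy
    """
    ratio_w = logp_w - ref_logp_w   # policy-to-reference log-ratio, preferred
    ratio_l = logp_l - ref_logp_l   # same for the rejected trajectory
    # Widen the margin between preferred and rejected trajectories
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()
```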
Related papers
- RAPiD: Real-time Deterministic Trajectory Planning via Diffusion Behavior Priors for Safe and Efficient Autonomous Driving [5.030754278104693]
RAPiD is a deterministic policy extraction framework that distills a pretrained diffusion-based planner into an efficient policy. To promote safety and passenger comfort, the policy is optimized using a critic trained to imitate a predictive driver controller.
arXiv Detail & Related papers (2026-02-07T03:44:50Z)
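As a rough illustration of the distillation idea in RAPiD, the sketch below regresses a one-step deterministic student onto trajectories sampled from a frozen diffusion planner and adds a critic term. All interfaces (`diffusion_planner.sample`, `critic`) and the weight `w_critic` are assumptions, not the paper's code.

```python
import torch

def distill_step(student, diffusion_planner, critic, obs, w_critic=0.1):
    """One distillation step: match the diffusion teacher, then refine via a critic."""
    with torch.no_grad():
        target_traj = diffusion_planner.sample(obs)    # teacher trajectory (frozen)
    pred_traj = student(obs)                           # fast one-step deterministic plan
    bc_loss = (pred_traj - target_traj).pow(2).mean()  # regress onto the teacher
    # Critic trained separately to imitate a predictive driver controller
    comfort_loss = -critic(obs, pred_traj).mean()
    return bc_loss + w_critic * comfort_loss
```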
- TakeAD: Preference-based Post-optimization for End-to-end Autonomous Driving with Expert Takeover Data [40.3157492247442]
Existing end-to-end autonomous driving methods typically rely on imitation learning (IL), whose objective can diverge from human driving preferences. This misalignment often triggers driver-initiated takeovers and system disengagements during closed-loop execution. We propose TakeAD, a preference-based post-optimization framework that fine-tunes the pre-trained IL policy with this disengagement data.
arXiv Detail & Related papers (2025-12-19T09:12:44Z)
- Model-Based Policy Adaptation for Closed-Loop End-to-End Autonomous Driving [54.46325690390831]
We propose Model-based Policy Adaptation (MPA), a general framework that enhances the robustness and safety of pretrained E2E driving agents during deployment. MPA first generates diverse counterfactual trajectories using a geometry-consistent simulation engine. It then trains a diffusion-based policy adapter to refine the base policy's predictions and a multi-step Q value model to evaluate long-term outcomes.
arXiv Detail & Related papers (2025-11-26T17:01:41Z)
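To make MPA's deployment-time loop concrete, here is a hedged sketch: an adapter proposes refinements of the base policy's plan and a multi-step Q model selects the candidate with the best long-term value. The interfaces (`adapter.refine`, `q_model`) and the candidate count are assumptions.

```python
import torch

def adapt_plan(base_policy, adapter, q_model, obs, n_candidates=8):
    base_traj = base_policy(obs)                   # pretrained E2E prediction
    # Diffusion-based adapter samples refined variants of the base plan
    candidates = [adapter.refine(obs, base_traj) for _ in range(n_candidates)]
    # Multi-step Q model scores each candidate's long-horizon outcome
    values = torch.stack([q_model(obs, c) for c in candidates])
    return candidates[int(values.argmax())]        # deploy the highest-value plan
```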
- EXPO: Stable Reinforcement Learning with Expressive Policies [74.30151915786233]
We propose a sample-efficient online reinforcement learning algorithm that maximizes value with two parameterized policies. Our approach yields up to a 2-3x average improvement in sample efficiency over prior methods.
arXiv Detail & Related papers (2025-07-10T17:57:46Z)
- POLAR: A Pessimistic Model-based Policy Learning Algorithm for Dynamic Treatment Regimes [15.681058679765277]
We propose POLAR, a pessimistic model-based policy learning algorithm for offline dynamic treatment regimes (DTRs). POLAR estimates the transition dynamics from offline data and quantifies uncertainty for each history-action pair. Unlike many existing methods that focus on average training performance, POLAR directly targets the suboptimality of the final learned policy and offers theoretical guarantees. Empirical results on both synthetic data and the MIMIC-III dataset demonstrate that POLAR outperforms state-of-the-art methods and yields near-optimal, history-aware treatment strategies.
arXiv Detail & Related papers (2025-06-25T13:22:57Z)
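The pessimism principle POLAR describes can be illustrated with a lower-confidence-bound Bellman target: penalize each history-action value by a model-uncertainty estimate so the policy avoids poorly covered regions of the offline data. The ensemble-disagreement proxy and the penalty weight `lam` below are assumptions.

```python
import torch

def ensemble_uncertainty(models, history, action):
    """Disagreement across learned transition models as an uncertainty proxy."""
    preds = torch.stack([m(history, action) for m in models])  # (n_models, batch, dim)
    return preds.std(dim=0).mean(dim=-1)                       # (batch,)

def pessimistic_target(reward, next_value, uncertainty, gamma=0.99, lam=1.0):
    """Bellman target penalized by uncertainty (a lower confidence bound)."""
    return reward + gamma * next_value - lam * uncertainty
```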
- DriveSuprim: Towards Precise Trajectory Selection for End-to-End Planning [43.284391163049236]
DriveSuprim is a selection-based paradigm for trajectory selection in autonomous vehicles. It achieves state-of-the-art performance, including in collision avoidance and rule compliance, and maintains high trajectory quality across diverse driving scenarios.
arXiv Detail & Related papers (2025-06-07T04:39:06Z)
- Plan-R1: Safe and Feasible Trajectory Planning as Language Modeling [74.41886258801209]
We propose a two-stage trajectory planning framework that decouples principle alignment from behavior learning. Plan-R1 significantly improves planning safety and feasibility, achieving state-of-the-art performance.
arXiv Detail & Related papers (2025-05-23T09:22:19Z)
- Efficient Safety Alignment of Large Language Models via Preference Re-ranking and Representation-based Reward Modeling [84.00480999255628]
Reinforcement learning algorithms for safety alignment of Large Language Models (LLMs) encounter the challenge of distribution shift. Current approaches typically address this issue through online sampling from the target policy. We propose a new framework that leverages the model's intrinsic safety judgment capability to extract reward signals.
arXiv Detail & Related papers (2025-03-13T06:40:34Z)
- Enhanced Safety in Autonomous Driving: Integrating Latent State Diffusion Model for End-to-End Navigation [5.928213664340974]
This research addresses the safety issue in the control optimization problem of autonomous driving. We propose a novel, model-based approach for policy optimization, utilizing a Conditional Value-at-Risk (CVaR)-based Soft Actor-Critic. Our method introduces a worst-case actor to guide safe exploration, ensuring rigorous adherence to safety requirements even in unpredictable scenarios.
arXiv Detail & Related papers (2024-07-08T18:32:40Z)
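The worst-case flavor of the CVaR-based Soft Actor-Critic above can be shown compactly: instead of maximizing the mean of sampled returns, the actor maximizes the average of the worst alpha-fraction. Obtaining per-state return samples (e.g., from a distributional critic) is assumed here; this is not the paper's code.

```python
import torch

def cvar(returns, alpha=0.1):
    """Average of the worst alpha-fraction of sampled returns (per batch row)."""
    k = max(1, int(alpha * returns.shape[-1]))
    worst, _ = torch.topk(returns, k, dim=-1, largest=False)  # lowest samples
    return worst.mean(dim=-1)

# Actor objective: maximize cvar(q_samples) instead of q_samples.mean(dim=-1),
# concentrating optimization pressure on worst-case outcomes.
```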
- Stable Policy Optimization via Off-Policy Divergence Regularization [50.98542111236381]
Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) are among the most successful policy gradient approaches in deep reinforcement learning (RL).
We propose a new algorithm which stabilizes the policy improvement through a proximity term that constrains the discounted state-action visitation distribution induced by consecutive policies to be close to one another.
Our proposed method can have a beneficial effect on stability and improve final performance in benchmark high-dimensional control tasks.
arXiv Detail & Related papers (2020-03-09T13:05:47Z)
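A compact way to see the proximity idea in this last paper: a policy-gradient surrogate penalized by a divergence between consecutive policies. Penalizing the action-distribution KL, as below, is a common stand-in for constraining the discounted state-action visitation distribution directly; the weight `beta` is an assumption.

```python
import torch

def regularized_surrogate(logp_new, logp_old, advantages, kl_per_state, beta=1.0):
    ratio = torch.exp(logp_new - logp_old)      # importance weight between policies
    surrogate = (ratio * advantages).mean()     # standard policy-gradient surrogate
    # Proximity term keeps consecutive policies close, stabilizing improvement
    return -(surrogate - beta * kl_per_state.mean())
```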
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.