ZTRS: Zero-Imitation End-to-end Autonomous Driving with Trajectory Scoring
- URL: http://arxiv.org/abs/2510.24108v1
- Date: Tue, 28 Oct 2025 06:26:36 GMT
- Title: ZTRS: Zero-Imitation End-to-end Autonomous Driving with Trajectory Scoring
- Authors: Zhenxin Li, Wenhao Yao, Zi Wang, Xinglong Sun, Jingde Chen, Nadine Chang, Maying Shen, Jingyu Song, Zuxuan Wu, Shiyi Lan, Jose M. Alvarez
- Abstract summary: ZTRS (Zero-Imitation End-to-End Autonomous Driving with Trajectory Scoring) is a framework that combines the strengths of both worlds: information-rich raw sensor inputs and RL training for robust planning. ZTRS demonstrates strong performance across three benchmarks: Navtest, Navhard, and HUGSIM.
- Score: 52.195295396336526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End-to-end autonomous driving maps raw sensor inputs directly into ego-vehicle trajectories to avoid cascading errors from perception modules and to leverage rich semantic cues. Existing frameworks largely rely on Imitation Learning (IL), which can be limited by sub-optimal expert demonstrations and covariate shift during deployment. On the other hand, Reinforcement Learning (RL) has recently shown potential in scaling up with simulations, but is typically confined to low-dimensional symbolic inputs (e.g., 3D objects and maps), falling short of full end-to-end learning from raw sensor data. We introduce ZTRS (Zero-Imitation End-to-End Autonomous Driving with Trajectory Scoring), a framework that combines the strengths of both worlds: information-rich raw sensor inputs and RL training for robust planning. To the best of our knowledge, ZTRS is the first framework that eliminates IL entirely by only learning from rewards while operating directly on high-dimensional sensor data. ZTRS utilizes offline reinforcement learning with our proposed Exhaustive Policy Optimization (EPO), a variant of policy gradient tailored for enumerable actions and rewards. ZTRS demonstrates strong performance across three benchmarks: Navtest (generic real-world open-loop planning), Navhard (open-loop planning in challenging real-world and synthetic scenarios), and HUGSIM (simulated closed-loop driving). Specifically, ZTRS achieves the state-of-the-art result on Navhard and outperforms IL-based baselines on HUGSIM. Code will be available at https://github.com/woxihuanjiangguo/ZTRS.
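The key property of EPO as described in the abstract is that the action space is an enumerable trajectory vocabulary with a reward available for every candidate, so the expected reward can be computed in closed form rather than estimated by sampling. The following is a minimal, hypothetical PyTorch sketch of that idea; the scorer architecture, vocabulary size, and reward source are assumptions, not the paper's implementation.

```python
# Minimal sketch of a policy gradient over an enumerable trajectory
# vocabulary, in the spirit of Exhaustive Policy Optimization (EPO).
# Assumptions (not from the paper): a linear scoring head, vocabulary
# size K, and rule-based rewards for every candidate in the offline data.
import torch
import torch.nn as nn

class TrajectoryScorer(nn.Module):
    """Scores every trajectory in a fixed vocabulary given scene features."""
    def __init__(self, feat_dim: int, num_trajs: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_trajs)  # one logit per candidate

    def forward(self, scene_feat: torch.Tensor) -> torch.Tensor:
        return self.head(scene_feat)  # (B, num_trajs) trajectory scores

def exhaustive_pg_loss(scores: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    # With enumerable actions and a reward for each one, the expected reward
    # J = sum_i pi_i * r_i is computable exactly -- no sampling needed.
    probs = torch.softmax(scores, dim=-1)          # pi_theta over candidates
    return -(probs * rewards).sum(dim=-1).mean()   # maximize expected reward

# Toy usage with placeholder features and rewards.
B, D, K = 4, 256, 4096
model = TrajectoryScorer(D, K)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
scene_feat = torch.randn(B, D)  # stand-in for sensor-derived features
rewards = torch.rand(B, K)      # stand-in for per-trajectory rule/sim rewards
opt.zero_grad()
loss = exhaustive_pg_loss(model(scene_feat), rewards)
loss.backward()
opt.step()
```

This differs from standard REINFORCE, which samples one action and weights its log-probability by the observed reward; enumerating all candidates removes the sampling variance entirely.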
Related papers
- Offline Reinforcement Learning for End-to-End Autonomous Driving [1.2891210250935148]
End-to-end (E2E) autonomous driving models take only camera images as input and directly predict a future trajectory. Online reinforcement learning (RL) could mitigate IL-induced issues. We introduce a camera-only E2E offline RL framework that performs no additional exploration and trains solely on a fixed simulator dataset.
arXiv Detail & Related papers (2025-12-21T09:21:04Z)
- RAP: 3D Rasterization Augmented End-to-End Planning [104.52778241744522]
Imitation learning for end-to-end driving trains policies only on expert demonstrations. We propose 3D Rasterization, which replaces costly rendering with lightweight rasterization of annotated primitives. RAP achieves state-of-the-art closed-loop and long-tail robustness, ranking first on four major benchmarks.
arXiv Detail & Related papers (2025-10-05T19:31:24Z)
- Raw2Drive: Reinforcement Learning with Aligned World Models for End-to-End Autonomous Driving (in CARLA v2) [54.185249897842034]
Reinforcement Learning (RL) can mitigate the causal confusion and distribution shift inherent to imitation learning (IL). Applying RL to end-to-end autonomous driving (E2E-AD) remains an open problem due to its training difficulty.
arXiv Detail & Related papers (2025-05-22T08:46:53Z)
- RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning [54.52545900359868]
We propose RAD, a 3DGS-based closed-loop Reinforcement Learning framework for end-to-end Autonomous Driving. To enhance safety, we design specialized rewards to guide the policy in effectively responding to safety-critical events and understanding real-world causal relationships. Compared to IL-based methods, RAD achieves stronger performance in most closed-loop metrics, particularly exhibiting a 3x lower collision rate.
arXiv Detail & Related papers (2025-02-18T18:59:21Z)
- Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised learning approach using the LAtent World model (LAW) for end-to-end driving. LAW predicts future scene features based on current features and ego trajectories. This self-supervised task can be seamlessly integrated into perception-free and perception-based frameworks; a minimal sketch of the objective appears after this list.
arXiv Detail & Related papers (2024-06-12T17:59:21Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Lite-HDSeg: LiDAR Semantic Segmentation Using Lite Harmonic Dense Convolutions [2.099922236065961]
We present Lite-HDSeg, a novel real-time convolutional neural network for semantic segmentation of full 3D LiDAR point clouds.
Our experimental results show that the proposed method outperforms state-of-the-art semantic segmentation approaches that run in real time.
arXiv Detail & Related papers (2021-03-16T04:54:57Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
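Since the IntentNet entry centers on a single model that shares computation between detection and intention forecasting, the following hypothetical PyTorch sketch illustrates that multi-task structure: one shared backbone over a bird's-eye-view (BEV) input feeds separate detection and intention heads, so the expensive feature extraction runs once. Channel counts, head outputs, and the input rasterization are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch of an IntentNet-style one-stage multi-task model:
# a shared BEV backbone feeding separate detection and intention heads.
import torch
import torch.nn as nn

class OneStageDetectorForecaster(nn.Module):
    def __init__(self, in_ch: int = 64, num_intents: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(  # shared feature extractor, run once
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(128, 1 + 6, 1)           # objectness + box params
        self.intent_head = nn.Conv2d(128, num_intents, 1)  # per-cell intention logits

    def forward(self, bev: torch.Tensor):
        feat = self.backbone(bev)  # sharing this computation is the latency saving
        return self.det_head(feat), self.intent_head(feat)

# The BEV tensor stands in for voxelized LiDAR stacked with rasterized map layers.
bev = torch.randn(1, 64, 200, 200)
detections, intentions = OneStageDetectorForecaster()(bev)
```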
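For the LAW entry above, here is the promised minimal sketch of a latent-world-model objective as the summary describes it: predict future scene features from current features plus the ego trajectory, supervised by the features actually extracted at the next frame. Module names and dimensions are assumptions, not LAW's code.

```python
# Minimal sketch of a self-supervised latent world model in the spirit
# of LAW: regress next-frame scene features from current features and
# the planned ego trajectory. All shapes here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentWorldModel(nn.Module):
    def __init__(self, feat_dim: int, traj_dim: int):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim + traj_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, cur_feat: torch.Tensor, ego_traj: torch.Tensor) -> torch.Tensor:
        return self.predictor(torch.cat([cur_feat, ego_traj], dim=-1))

model = LatentWorldModel(feat_dim=256, traj_dim=12)
cur_feat = torch.randn(4, 256)     # current scene features (placeholder)
ego_traj = torch.randn(4, 12)      # planned ego trajectory, flattened
future_feat = torch.randn(4, 256)  # features from the next observed frame
loss = F.mse_loss(model(cur_feat, ego_traj), future_feat)  # self-supervised target
```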