PriorFusion: Unified Integration of Priors for Robust Road Perception in Autonomous Driving
- URL: http://arxiv.org/abs/2507.23309v2
- Date: Mon, 04 Aug 2025 05:19:02 GMT
- Title: PriorFusion: Unified Integration of Priors for Robust Road Perception in Autonomous Driving
- Authors: Xuewei Tang, Mengmeng Yang, Tuopu Wen, Peijin Jia, Le Cui, Mingshang Luo, Kehua Sheng, Bo Zhang, Diange Yang, Kun Jiang
- Abstract summary: We propose PriorFusion, a unified framework that integrates semantic, geometric, and generative priors to enhance road element perception. We show that our method significantly improves perception accuracy, particularly under challenging conditions.
- Score: 12.699352594544166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growing interest in autonomous driving, there is an increasing demand for accurate and reliable road perception technologies. In complex environments without high-definition map support, autonomous vehicles must independently interpret their surroundings to ensure safe and robust decision-making. However, these scenarios pose significant challenges due to the large number, complex geometries, and frequent occlusions of road elements. A key limitation of existing approaches lies in their insufficient exploitation of the structured priors inherently present in road elements, resulting in irregular, inaccurate predictions. To address this, we propose PriorFusion, a unified framework that effectively integrates semantic, geometric, and generative priors to enhance road element perception. We introduce an instance-aware attention mechanism guided by shape-prior features, then construct a data-driven shape template space that encodes low-dimensional representations of road elements, enabling clustering to generate anchor points as reference priors. We design a diffusion-based framework that leverages these prior anchors to generate accurate and complete predictions. Experiments on large-scale autonomous driving datasets demonstrate that our method significantly improves perception accuracy, particularly under challenging conditions. Visualization results further confirm that our approach produces more accurate, regular, and coherent predictions of road elements.
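The abstract describes a pipeline in which road-element shapes are encoded into a low-dimensional template space and clustered to produce anchor points that serve as reference priors. The following is a minimal sketch of that idea, not the paper's implementation: the PCA-style template space, the plain k-means clustering, and all dimensions and names are illustrative assumptions.

```python
import numpy as np

def build_template_space(polylines, n_components=4):
    """Fit a low-dimensional shape template space via PCA (SVD).
    polylines: (N, P, 2) array of road-element shapes, each resampled to P points."""
    X = polylines.reshape(len(polylines), -1)           # flatten to (N, 2P)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                           # principal shape directions
    codes = (X - mean) @ basis.T                        # low-dimensional shape codes
    return mean, basis, codes

def cluster_anchors(codes, k=2, iters=50, seed=0):
    """Plain k-means over shape codes; the centroids act as anchor priors."""
    rng = np.random.default_rng(seed)
    centers = codes[rng.choice(len(codes), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(codes[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        centers = np.stack([codes[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return centers

# Toy demo: 20 noisy straight lanes and 20 curved lanes, 10 points each.
t = np.linspace(0.0, 1.0, 10)
rng = np.random.default_rng(0)
straight = np.stack([np.stack([t, 0.1 * i + 0.01 * rng.normal(size=10)], axis=1)
                     for i in range(20)])
curved = np.stack([np.stack([t, t ** 2 + 0.1 * i], axis=1) for i in range(20)])
shapes = np.concatenate([straight, curved])             # (40, 10, 2)

mean, basis, codes = build_template_space(shapes)
anchors = cluster_anchors(codes, k=2)
# Decode the cluster centroids back into polyline shapes (the anchor priors).
anchor_shapes = (anchors @ basis + mean).reshape(2, 10, 2)
```

In the paper's framing, such decoded anchor shapes would then serve as reference priors for the downstream diffusion-based prediction head; the sketch stops at anchor generation.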
Related papers
- LANet: A Lane Boundaries-Aware Approach For Robust Trajectory Prediction [4.096453902709292]
We propose an enhanced motion forecasting model informed by multiple vector map elements, including lane boundaries and road edges. An effective feature fusion strategy is developed to merge information in different vector map components, where the model learns holistic information on road structures. Our method provides a more informative and efficient representation of the driving environment and advances the state of the art for autonomous vehicle motion forecasting.
arXiv Detail & Related papers (2025-07-02T02:49:24Z) - Enhancing Lane Segment Perception and Topology Reasoning with Crowdsourcing Trajectory Priors [12.333249510969289]
We investigate prior augmentation from a novel perspective of trajectory priors. We design a confidence-based fusion module that takes alignment into account during the fusion process. Results indicate that our method significantly outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2024-11-26T07:05:05Z) - QuAD: Query-based Interpretable Neural Motion Planning for Autonomous Driving [33.609780917199394]
A self-driving vehicle must understand its environment to determine appropriate actions.
Traditional systems rely on object detection to find agents in the scene.
We present a unified, interpretable, and efficient autonomy framework that moves away from cascading modules that first perceive and then plan, instead querying occupancy at spatio-temporal points relevant to planning.
arXiv Detail & Related papers (2024-04-01T21:11:43Z) - Leveraging Driver Field-of-View for Multimodal Ego-Trajectory Prediction [69.29802752614677]
RouteFormer is a novel ego-trajectory prediction network combining GPS data, environmental context, and the driver's field-of-view. To tackle data scarcity and enhance diversity, we introduce GEM, a dataset of urban driving scenarios enriched with synchronized driver field-of-view and gaze data.
arXiv Detail & Related papers (2023-12-13T23:06:30Z) - KI-PMF: Knowledge Integrated Plausible Motion Forecasting [11.311561045938546]
Current trajectory forecasting approaches primarily concentrate on optimizing a loss function with a specific metric.
Our objective is to incorporate explicit knowledge priors that allow a network to forecast future trajectories in compliance with the kinematic constraints of a vehicle.
Our proposed method is designed to ensure reachability guarantees for traffic actors in both complex and dynamic situations.
arXiv Detail & Related papers (2023-10-18T14:40:52Z) - SEPT: Towards Efficient Scene Representation Learning for Motion Prediction [19.111948522155004]
This paper presents SEPT, a modeling framework that leverages self-supervised learning to develop powerful models for complex traffic scenes.
Experiments demonstrate that SEPT, without elaborate architectural design or feature engineering, achieves state-of-the-art performance on the Argoverse 1 and Argoverse 2 motion forecasting benchmarks.
arXiv Detail & Related papers (2023-09-26T21:56:03Z) - Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z) - Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance on par with, or better than, fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z) - Decentralized Vehicle Coordination: The Berkeley DeepDrive Drone Dataset and Consensus-Based Models [76.32775745488073]
We present a novel dataset and modeling framework designed to study motion planning in understructured environments. We demonstrate that a consensus-based modeling approach can effectively explain the emergence of priority orders observed in our dataset.
arXiv Detail & Related papers (2022-09-19T05:06:57Z) - Open-set Intersection Intention Prediction for Autonomous Driving [9.494867137826397]
We formulate the prediction of intention at intersections as an open-set prediction problem.
We capture map-centric features that correspond to intersection structures under a spatial-temporal graph representation.
We use two MAAMs (mutually auxiliary attention module) to predict a target that best matches intersection elements in map-centric feature space.
arXiv Detail & Related papers (2021-02-27T06:38:26Z) - Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z) - The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well-defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
arXiv Detail & Related papers (2020-06-04T03:56:11Z)
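The last related paper's idea of incorporating structured priors as a loss function can be sketched as a simple penalty that pulls predicted trajectories toward mapped lane geometry. The `lane_prior_loss` helper below is hypothetical, a minimal illustration of a prior-as-loss term, not the paper's actual formulation.

```python
import numpy as np

def lane_prior_loss(pred_traj, centerline, weight=1.0):
    """Hypothetical structured-prior loss: penalize each predicted waypoint's
    squared distance to its nearest lane-centerline point, encoding the prior
    that vehicles tend to stay near mapped lanes.
    pred_traj: (T, 2) predicted waypoints; centerline: (M, 2) lane points."""
    d = np.linalg.norm(pred_traj[:, None, :] - centerline[None, :, :], axis=-1)
    return weight * np.mean(d.min(axis=1) ** 2)

# Toy demo: a straight lane along the x-axis.
centerline = np.stack([np.linspace(0.0, 10.0, 50), np.zeros(50)], axis=1)
on_lane = np.stack([np.linspace(0.0, 10.0, 5), np.zeros(5)], axis=1)
off_lane = on_lane + np.array([0.0, 2.0])   # same shape, shifted off the lane

loss_on = lane_prior_loss(on_lane, centerline)
loss_off = lane_prior_loss(off_lane, centerline)
```

In training, such a term would be added to the usual regression loss, trading data fit against compliance with the road-structure prior via `weight`.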
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.