Robust Lane Detection with Wavelet-Enhanced Context Modeling and Adaptive Sampling
- URL: http://arxiv.org/abs/2503.18631v1
- Date: Mon, 24 Mar 2025 12:49:47 GMT
- Title: Robust Lane Detection with Wavelet-Enhanced Context Modeling and Adaptive Sampling
- Authors: Kunyang Li, Ming Hou
- Abstract summary: Lane detection is critical for autonomous driving and driver assistance systems. We propose a Wavelet-Enhanced Feature Pyramid Network to address these challenges. Experiments on CULane and TuSimple demonstrate that our approach significantly outperforms baselines in challenging scenarios.
- Score: 2.453824332203939
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Lane detection is critical for autonomous driving and advanced driver assistance systems (ADAS). While recent methods like CLRNet achieve strong performance, they struggle under adverse conditions such as extreme weather, illumination changes, occlusions, and complex curves. We propose a Wavelet-Enhanced Feature Pyramid Network (WE-FPN) to address these challenges. A wavelet-based non-local block is integrated before the feature pyramid to improve global context modeling, especially for occluded and curved lanes. Additionally, we design an adaptive preprocessing module to enhance lane visibility under poor lighting. An attention-guided sampling strategy further refines spatial features, boosting accuracy on distant and curved lanes. Experiments on CULane and TuSimple demonstrate that our approach significantly outperforms baselines in challenging scenarios, achieving better robustness and accuracy in real-world driving conditions.
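The abstract does not specify which wavelet the non-local block uses, but the core idea of wavelet-enhanced context modeling is that a low-frequency subband summarizes global structure at reduced resolution. A minimal sketch, assuming a single-level 2D Haar transform (the simplest common choice; the paper's actual wavelet and block design may differ):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar wavelet transform of a feature map.

    Splits x (H x W, with even H and W) into four half-resolution
    subbands: LL (low-frequency approximation, useful for global
    context) plus LH, HL, HH detail bands.
    """
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0   # approximation: coarse scene structure
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

# A constant feature map puts all its energy in the LL band.
x = np.ones((4, 4))
ll, lh, hl, hh = haar_dwt2(x)
print(ll)  # every entry equals 2.0; the three detail bands are zero
```

Running non-local attention on the LL band instead of the full-resolution map is what makes global context modeling cheap: the attention cost drops by a factor of 16 for a single decomposition level.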
Related papers
- SuperFlow++: Enhanced Spatiotemporal Consistency for Cross-Modal Data Pretraining [62.433137130087445]
SuperFlow++ is a novel framework that integrates pretraining and downstream tasks using consecutive LiDAR-camera pairs.
We show that SuperFlow++ outperforms state-of-the-art methods across diverse tasks and driving conditions.
With strong generalizability and computational efficiency, SuperFlow++ establishes a new benchmark for data-efficient LiDAR-based perception in autonomous driving.
arXiv Detail & Related papers (2025-03-25T17:59:57Z) - Enhancing autonomous vehicle safety in rain: a data-centric approach for clear vision [0.0]
We developed a vision model that processes live vehicle camera feeds to eliminate rain-induced visual hindrances.
We employed a classic encoder-decoder architecture with skip connections and concatenation operations.
The results demonstrated notable improvements in steering accuracy, underscoring the model's potential to enhance navigation safety and reliability in rainy weather conditions.
arXiv Detail & Related papers (2024-12-29T20:27:12Z) - Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised learning approach using the LAtent World model (LAW) for end-to-end driving. LAW predicts future scene features based on current features and ego trajectories. This self-supervised task can be seamlessly integrated into perception-free and perception-based frameworks.
arXiv Detail & Related papers (2024-06-12T17:59:21Z) - Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded in the presence of moving vehicles, shadow, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and ground plane assumption for cross-frame correspondence can lead to a light-weight network with significantly improved performances in speed and accuracy.
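The cross-frame correspondence HomoFusion relies on reduces, at its core, to mapping pixels through a 3x3 homography once the ground plane is assumed. A minimal sketch of that point-mapping step (the homography here is a toy translation for illustration; estimating it from camera intrinsics and the ground plane, as the paper does, is not shown):

```python
import numpy as np

def warp_points(H, pts):
    """Map 2D pixel coordinates through a 3x3 homography H.

    pts: (N, 2) array of points in the source frame; returns the
    corresponding (N, 2) locations in the target frame.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T                               # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# Toy homography: a pure 2-pixel horizontal translation.
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[10.0, 5.0], [0.0, 0.0]])
print(warp_points(H, pts))  # [[12. 5.] [2. 0.]]
```

Because ground-plane points in adjacent frames are related exactly by such a homography, temporally occluded lane pixels in one frame can be looked up at their warped locations in neighboring frames.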
arXiv Detail & Related papers (2024-04-11T10:26:40Z) - LDTR: Transformer-based Lane Detection with Anchor-chain Representation [11.184960972042406]
Lane detection scenarios with limited- or no-visual-clue of lanes remain challenging and crucial for automated driving.
Inspired by the DETR architecture, we propose LDTR, a transformer-based model to address these issues.
Experimental results demonstrate that LDTR achieves state-of-the-art performance on well-known datasets.
arXiv Detail & Related papers (2024-03-21T12:29:26Z) - Leveraging Driver Field-of-View for Multimodal Ego-Trajectory Prediction [69.29802752614677]
RouteFormer is a novel ego-trajectory prediction network combining GPS data, environmental context, and the driver's field-of-view.
To tackle data scarcity and enhance diversity, we introduce GEM, a dataset of urban driving scenarios enriched with synchronized driver field-of-view and gaze data.
arXiv Detail & Related papers (2023-12-13T23:06:30Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Rethinking Efficient Lane Detection via Curve Modeling [37.45243848960598]
The proposed method achieves a new state-of-the-art performance on the popular LLAMAS benchmark.
It also achieves favorable accuracy on the TuSimple and CULane datasets, while retaining both low latency (>150 FPS) and a small model size (<10M).
arXiv Detail & Related papers (2022-03-04T17:00:33Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
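The abstract does not detail how Hybrid Evidential Fusion extracts uncertainty, but single-forward-pass uncertainty estimates of this kind typically build on deep evidential regression, where the network outputs Normal-Inverse-Gamma (NIG) parameters and uncertainty follows in closed form. A hedged sketch of that standard decomposition (the parameter values are illustrative, not from the paper):

```python
def nig_uncertainty(gamma, nu, alpha, beta):
    """Uncertainty from Normal-Inverse-Gamma evidential parameters.

    gamma: predicted mean; nu, alpha, beta: evidence parameters
    (requires nu > 0, alpha > 1, beta > 0).
    Returns (aleatoric, epistemic) variance estimates.
    """
    aleatoric = beta / (alpha - 1.0)         # expected data noise E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))  # variance of the mean Var[mu]
    return aleatoric, epistemic

# One forward pass yields the NIG parameters; uncertainty is then free.
aleatoric, epistemic = nig_uncertainty(gamma=0.0, nu=2.0, alpha=3.0, beta=4.0)
print(aleatoric, epistemic)  # 2.0 1.0
```

The practical payoff is that no sampling or ensembling is needed: a fusion module can down-weight predictions whose epistemic variance is large using a single network evaluation.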
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Robust Lane Detection via Expanded Self Attention [3.616997653625528]
We propose Expanded Self Attention (ESA) module for lane detection.
The proposed method predicts the confidence of a lane along the vertical and horizontal directions in an image.
We achieve state-of-the-art performance in CULane and BDD100K and distinct improvement on TuSimple dataset.
arXiv Detail & Related papers (2021-02-14T00:29:55Z) - End-to-end Lane Shape Prediction with Transformers [13.103463647059634]
Lane detection is widely used for lane departure warning and adaptive cruise control in autonomous vehicles.
We propose an end-to-end method that directly outputs parameters of a lane shape model.
The proposed method is validated on the TuSimple benchmark and shows state-of-the-art accuracy with the most lightweight model size and fastest speed.
arXiv Detail & Related papers (2020-11-09T07:42:55Z) - CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending [102.98909328368481]
CurveLane-NAS is a novel lane-sensitive architecture search framework.
It captures both long-ranged coherent and accurate short-range curve information.
It unifies both architecture search and post-processing on curve lane predictions via point blending.
arXiv Detail & Related papers (2020-07-23T17:23:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.