Robust Lane Detection via Expanded Self Attention
- URL: http://arxiv.org/abs/2102.07037v1
- Date: Sun, 14 Feb 2021 00:29:55 GMT
- Title: Robust Lane Detection via Expanded Self Attention
- Authors: Minhyeok Lee, Junhyeop Lee, Dogyoon Lee, Woojin Kim, Sangwon Hwang,
Sangyoun Lee
- Abstract summary: We propose the Expanded Self Attention (ESA) module for lane detection.
The proposed method predicts the confidence of a lane along the vertical and horizontal directions in an image.
We achieve state-of-the-art performance on CULane and BDD100K and a distinct improvement on the TuSimple dataset.
- Score: 3.616997653625528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The image-based lane detection algorithm is one of the key technologies in
autonomous vehicles. Modern deep learning methods achieve high performance in
lane detection, but it is still difficult to accurately detect lanes in
challenging situations such as congested roads and extreme lighting conditions.
To be robust in such challenging situations, it is important to extract global
contextual information even from limited visual cues. In this paper, we propose
a simple but powerful self-attention mechanism optimized for lane detection
called the Expanded Self Attention (ESA) module. Inspired by the simple
geometric structure of lanes, the proposed method predicts the confidence of a
lane along the vertical and horizontal directions in an image. Predicting this
confidence enables the model to estimate occluded lane locations from global
contextual information. The ESA module can be easily implemented and applied to
any encoder-decoder-based model without increasing the inference time. The
performance of our method is evaluated on three popular lane detection
benchmarks (TuSimple, CULane, and BDD100K). We achieve state-of-the-art
performance on CULane and BDD100K and a distinct improvement on the TuSimple dataset.
The experimental results show that our approach is robust to occlusion and
extreme lighting conditions.
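The directional-confidence idea can be illustrated with a toy sketch. This is an assumption-laden simplification, not the authors' implementation: given a per-pixel lane probability map, reduce it along each axis to obtain a vertical profile (one confidence per row) and a horizontal profile (one confidence per column), the kind of signal an ESA-style module predicts.

```python
def directional_confidence(prob_map):
    """Toy illustration of per-direction lane confidence (hypothetical,
    not the paper's code).

    prob_map: H x W list of lists of per-pixel lane probabilities.
    Returns (vertical, horizontal): the maximum lane probability in each
    image row and each image column, respectively.
    """
    vertical = [max(row) for row in prob_map]            # one value per row
    horizontal = [max(col) for col in zip(*prob_map)]    # one value per column
    return vertical, horizontal

# A 4x4 map with a diagonal "lane"; the low value in row 2 mimics an
# occluded segment that global context could help recover.
p = [[0.0] * 4 for _ in range(4)]
for i, v in enumerate([0.9, 0.8, 0.2, 0.7]):
    p[i][i] = v
vert, horiz = directional_confidence(p)
```

The dip in the vertical profile at the occluded row is exactly the kind of location where surrounding high-confidence rows and columns provide the global context the abstract refers to.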
Related papers
- Annotation-Free Curb Detection Leveraging Altitude Difference Image [9.799565515089617]
Road curbs are essential for ensuring the safety of autonomous vehicles.
Current methods for detecting curbs rely on camera imagery or LiDAR point clouds.
This work proposes an annotation-free curb detection method leveraging Altitude Difference Image (ADI)
arXiv Detail & Related papers (2024-09-30T10:29:41Z)
- Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- LVLane: Deep Learning for Lane Detection and Classification in Challenging Conditions [2.5641096293146712]
We present an end-to-end lane detection and classification system based on deep learning methodologies.
In our study, we introduce a unique dataset meticulously curated to encompass scenarios that pose significant challenges for state-of-the-art (SOTA) lane localization models.
We propose a CNN-based classification branch, seamlessly integrated with the detector, facilitating the identification of distinct lane types.
arXiv Detail & Related papers (2023-07-13T16:09:53Z)
- Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet)
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- End-to-end Lane Shape Prediction with Transformers [13.103463647059634]
Lane detection is widely used for lane departure warning and adaptive cruise control in autonomous vehicles.
We propose an end-to-end method that directly outputs parameters of a lane shape model.
The proposed method is validated on the TuSimple benchmark and shows state-of-the-art accuracy with the most lightweight model size and fastest speed.
arXiv Detail & Related papers (2020-11-09T07:42:55Z)
- CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending [102.98909328368481]
CurveLane-NAS is a novel lane-sensitive architecture search framework.
It captures both long-ranged coherent and accurate short-range curve information.
It unifies both architecture search and post-processing on curve lane predictions via point blending.
arXiv Detail & Related papers (2020-07-23T17:23:26Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
- Ultra Fast Structure-aware Deep Lane Detection [15.738757958826998]
We propose a novel, simple, yet effective formulation aiming at extremely fast speed and challenging scenarios.
We treat the process of lane detection as a row-based selecting problem using global features.
Our method could achieve the state-of-the-art performance in terms of both speed and accuracy.
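The row-based selection formulation can be sketched in a few lines. This is a hypothetical simplification under the assumption of per-row classification over a grid of cells with an extra "no lane" class, not the paper's actual code:

```python
def row_based_selection(logits):
    """Toy sketch of row-anchored lane detection (hypothetical): for each
    image row, pick the grid cell with the highest score. By convention
    here, the last index is a "no lane in this row" class, mapped to None.
    """
    lane = []
    for row_scores in logits:  # one score vector per row anchor
        cell = row_scores.index(max(row_scores))
        lane.append(None if cell == len(row_scores) - 1 else cell)
    return lane

# Three row anchors, four grid cells plus a trailing "no lane" class.
scores = [
    [0.1, 0.7, 0.1, 0.0, 0.1],  # lane in cell 1
    [0.0, 0.1, 0.8, 0.0, 0.1],  # lane in cell 2
    [0.1, 0.1, 0.1, 0.1, 0.6],  # no lane in this row
]
positions = row_based_selection(scores)
```

Because each row needs only one selection over a coarse grid, the per-image cost is far lower than dense per-pixel segmentation, which is the source of the speed claim above.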
arXiv Detail & Related papers (2020-04-24T13:58:49Z) - Map-Enhanced Ego-Lane Detection in the Missing Feature Scenarios [26.016292792373815]
This paper exploits prior knowledge contained in digital maps, which can substantially enhance the performance of detection algorithms.
In this way, only a few lane features are needed to eliminate the position error between the road shape and the real lane.
Experiments show that the proposed method can be applied to various scenarios and can run in real-time at a frequency of 20 Hz.
arXiv Detail & Related papers (2020-04-02T16:06:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.