Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal
Classification
- URL: http://arxiv.org/abs/2206.07389v1
- Date: Wed, 15 Jun 2022 08:53:02 GMT
- Title: Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal
Classification
- Authors: Zequn Qin, Pengyi Zhang, Xi Li
- Abstract summary: We propose a novel, simple, yet effective formulation aiming at ultra-fast speed and robustness in challenging scenarios.
Specifically, we treat the process of lane detection as an anchor-driven ordinal classification problem using global features.
Our method achieves state-of-the-art performance in terms of both speed and accuracy.
- Score: 13.735482211178931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern methods mainly regard lane detection as a pixel-wise
segmentation problem, which struggles to address efficiency and challenging
scenarios such as severe occlusions and extreme lighting conditions.
Inspired by human perception, the recognition of lanes under severe occlusions
and extreme lighting conditions is mainly based on contextual and global
information. Motivated by this observation, we propose a novel, simple, yet
effective formulation aiming at ultra-fast speed and robustness in challenging
scenarios. Specifically, we treat the process of lane detection as an
anchor-driven ordinal classification problem using global features. First, we
represent lanes with sparse coordinates on a series of hybrid (row and column)
anchors. With the help of the anchor-driven representation, we then reformulate
the lane detection task as an ordinal classification problem to get the
coordinates of lanes. The anchor-driven representation significantly reduces
the computational cost, and the large receptive field of the ordinal
classification formulation allows us to handle challenging scenarios.
Extensive experiments on four lane detection datasets show that our method
achieves state-of-the-art performance in terms of both speed and accuracy. A
lightweight version can even run at more than 300 frames per second (FPS). Our
code is at
https://github.com/cfzd/Ultra-Fast-Lane-Detection-v2.
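To make the anchor-driven formulation concrete, the following is a minimal sketch (not the authors' implementation; anchor counts, grid size, and image width are hypothetical placeholders) of how a row-anchor classification head can be decoded into continuous lane x-coordinates by taking the expectation over per-anchor class probabilities:

```python
# Illustrative sketch only: decoding lane x-coordinates from a row-anchor
# classification head by taking the expectation over grid-cell probabilities.
import torch
import torch.nn.functional as F

num_lanes = 4          # hypothetical number of lane candidates
num_row_anchors = 72   # sparse rows at which each lane is sampled
num_grid_cells = 100   # horizontal bins per row anchor (+1 "no lane" bin)
image_width = 1640.0   # e.g. CULane frame width (placeholder)

# Stand-in for the head's output: logits over grid cells (plus a background
# bin) for every (lane, row anchor) pair.
logits = torch.randn(num_lanes, num_row_anchors, num_grid_cells + 1)

# Ordinal/expectation decoding: softmax over location bins, then the expected
# bin index gives a continuous horizontal position per row anchor.
probs = F.softmax(logits[..., :num_grid_cells], dim=-1)
bin_centers = torch.arange(num_grid_cells, dtype=torch.float32)
expected_bin = (probs * bin_centers).sum(dim=-1)            # (lanes, anchors)
x_coords = expected_bin / (num_grid_cells - 1) * image_width

# Anchors whose argmax falls on the extra bin are treated as "no lane here".
valid = logits.argmax(dim=-1) != num_grid_cells
```

A column-anchor branch would be handled symmetrically, classifying vertical positions at sparse column anchors; together these give the hybrid (row and column) representation described in the abstract.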
Related papers
- Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded in the presence of moving vehicles, shadow, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and the ground plane assumption for cross-frame correspondence leads to a lightweight network with significantly improved speed and accuracy.
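To illustrate how camera intrinsics and a ground-plane assumption yield cross-frame correspondence, here is a rough sketch of a plane-induced homography warp (a generic construction, not the paper's code; the intrinsics, relative pose, and plane parameters below are made-up placeholders):

```python
# Rough sketch of a plane-induced homography warp: H = K (R - t n^T / d) K^-1
# maps road-plane pixels of a neighbouring frame onto the current frame.
import numpy as np
import cv2

K = np.array([[1000.0,    0.0, 640.0],     # hypothetical camera intrinsics
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                               # relative rotation prev -> current
t = np.array([[0.0], [0.0], [1.5]])         # relative translation (metres)
n = np.array([[0.0], [-1.0], [0.0]])        # ground-plane normal (camera frame)
d = 1.6                                     # camera height above the road (m)

H = K @ (R - (t @ n.T) / d) @ np.linalg.inv(K)

prev_frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a frame
warped = cv2.warpPerspective(prev_frame, H, (1280, 720))
# Road-plane pixels of 'warped' now align with the current frame, so occluded
# lane markings can be filled in from temporal neighbours.
```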
arXiv Detail & Related papers (2024-04-11T10:26:40Z)
- Sketch and Refine: Towards Fast and Accurate Lane Detection [69.63287721343907]
Lane detection is a challenging task due to the complexity of real-world scenarios.
Existing approaches, whether proposal-based or keypoint-based, struggle to depict lanes both effectively and efficiently.
We present a "Sketch-and-Refine" paradigm that utilizes the merits of both keypoint-based and proposal-based methods.
Experiments show that our SRLane can run at a fast speed (i.e., 278 FPS) while yielding an F1 score of 78.9%.
arXiv Detail & Related papers (2024-01-26T09:28:14Z)
- Correlating sparse sensing for large-scale traffic speed estimation: A Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian-enhanced low-rank tensor (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
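As a hedged illustration of what this family of models optimizes (a generic graph-regularized low-rank tensor completion objective, not necessarily the paper's exact formulation):

```latex
\min_{\mathcal{X}} \;\sum_{k}\alpha_k\bigl\|\mathbf{X}_{(k)}\bigr\|_{*}
  \;+\; \lambda\,\operatorname{tr}\!\bigl(\mathbf{X}_{(s)}\,\mathbf{L}\,\mathbf{X}_{(s)}^{\top}\bigr)
  \quad \text{s.t.}\quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{Y})
```

Here X_(k) denotes a mode-k unfolding of the speed tensor, L is a road-network graph Laplacian enforcing spatial smoothness, and P_Omega restricts the fit to the sparsely sensed entries.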
arXiv Detail & Related papers (2022-10-21T07:25:57Z)
- Lane Detection with Versatile AtrousFormer and Local Semantic Guidance [92.83267435275802]
Lane detection is one of the core functions in autonomous driving.
Most existing methods tend to resort to CNN-based techniques.
We propose Atrous Transformer (AtrousFormer) to solve the problem.
arXiv Detail & Related papers (2022-03-08T13:25:35Z)
- Anchor-Free Person Search [127.88668724345195]
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images.
Most existing works employ two-stage detectors like Faster-RCNN, yielding encouraging accuracy but with high computational overhead.
We present the Feature-Aligned Person Search Network (AlignPS), the first anchor-free framework to efficiently tackle this challenging task.
arXiv Detail & Related papers (2021-03-22T07:04:29Z)
- RESA: Recurrent Feature-Shift Aggregator for Lane Detection [32.246537653680484]
We present a novel module named REcurrent Feature-Shift Aggregator (RESA) to enrich lane features after preliminary feature extraction with an ordinary CNN.
RESA can infer lanes accurately in challenging scenarios with weak appearance clues by aggregating sliced feature maps.
Our method achieves state-of-the-art results on two popular lane detection benchmarks (CULane and Tusimple).
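As a rough illustration of the feature-shift idea (a simplified stand-in, not RESA's actual module, which uses learned convolutions and both vertical and horizontal passes), the sketch below lets every row of a feature map absorb information from a row a fixed stride away:

```python
# Simplified stand-in for the feature-shift idea: every row of the feature map
# absorbs information from a row a fixed stride away, so thin lane responses
# propagate across the whole map after a few passes.
import torch
import torch.nn.functional as F

feat = torch.randn(1, 128, 36, 100)   # N, C, H, W from a backbone (placeholder)

def shift_add_rows(x, stride):
    # Roll along the height axis and add, i.e. each row receives features
    # from the row `stride` positions below (with wrap-around).
    return x + F.relu(torch.roll(x, shifts=-stride, dims=2))

for k in range(4):                     # passes with growing strides 1, 2, 4, 8
    feat = shift_add_rows(feat, stride=2 ** k)
```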
arXiv Detail & Related papers (2020-08-31T16:37:30Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
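For reference, a convolutional GRU cell of the kind such spatio-temporal models stack can be sketched as follows (a generic ConvGRU with hypothetical channel and feature-map sizes, not the paper's exact architecture):

```python
# Generic ConvGRU cell: GRU gating computed with 2D convolutions so the hidden
# state keeps its spatial layout while aggregating information across frames.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde     # blend old and candidate state

# Example: fold a short sequence of frame features into one hidden state.
cell = ConvGRUCell(in_ch=64, hid_ch=64)
h = torch.zeros(1, 64, 36, 100)                      # placeholder sizes
for frame_feat in torch.randn(5, 1, 64, 36, 100):    # 5 time steps
    h = cell(frame_feat, h)
```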
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
- CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending [102.98909328368481]
CurveLane-NAS is a novel lane-sensitive architecture search framework.
It captures both long-ranged coherent and accurate short-range curve information.
It unifies both architecture search and post-processing on curve lane predictions via point blending.
arXiv Detail & Related papers (2020-07-23T17:23:26Z)
- Ultra Fast Structure-aware Deep Lane Detection [15.738757958826998]
We propose a novel, simple, yet effective formulation aiming at extremely fast speed and robustness in challenging scenarios.
We treat the process of lane detection as a row-based selecting problem using global features.
Our method achieves state-of-the-art performance in terms of both speed and accuracy.
arXiv Detail & Related papers (2020-04-24T13:58:49Z)
- Map-Enhanced Ego-Lane Detection in the Missing Feature Scenarios [26.016292792373815]
This paper exploits prior knowledge contained in digital maps, which can substantially enhance the performance of detection algorithms.
In this way, only a few lane features are needed to eliminate the position error between the road shape and the real lane.
Experiments show that the proposed method can be applied to various scenarios and can run in real-time at a frequency of 20 Hz.
arXiv Detail & Related papers (2020-04-02T16:06:48Z)