Sketch and Refine: Towards Fast and Accurate Lane Detection
- URL: http://arxiv.org/abs/2401.14729v1
- Date: Fri, 26 Jan 2024 09:28:14 GMT
- Title: Sketch and Refine: Towards Fast and Accurate Lane Detection
- Authors: Chao Chen, Jie Liu, Chang Zhou, Jie Tang, Gangshan Wu
- Abstract summary: Lane detection is a challenging task due to the complexity of real-world scenarios.
Existing approaches, whether proposal-based or keypoint-based, struggle to depict lanes both effectively and efficiently.
We present a "Sketch-and-Refine" paradigm that utilizes the merits of both keypoint-based and proposal-based methods.
Experiments show that our SRLane runs at 278 FPS while yielding an F1 score of 78.9%.
- Score: 69.63287721343907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane detection aims to determine the precise location and shape of lanes
on the road. Despite the efforts of current methods, it remains a challenging task
due to the complexity of real-world scenarios. Existing approaches, whether
proposal-based or keypoint-based, struggle to depict lanes both effectively and
efficiently. Proposal-based methods detect lanes by classifying and regressing a
collection of proposals in a streamlined top-down way, yet they lack sufficient
flexibility in lane representation. Keypoint-based methods, on the other hand,
construct lanes flexibly from local descriptors, but typically entail complicated
post-processing. In this paper, we present a "Sketch-and-Refine" paradigm that
combines the merits of both keypoint-based and proposal-based methods. The
motivation is that the local directions of lanes are semantically simple and clear.
At the "Sketch" stage, the local directions of keypoints can be estimated cheaply by
fast convolutional layers, and a set of lane proposals can then be built from them
with moderate accuracy. At the "Refine" stage, these proposals are further optimized
via a novel Lane Segment Association Module (LSAM), which allows adaptive lane
segment adjustment. Finally, we propose multi-level feature integration to enrich
lane feature representations more efficiently. Based on this "Sketch-and-Refine"
paradigm, we build a fast yet effective lane detector dubbed "SRLane". Experiments
show that SRLane runs at 278 FPS while yielding an F1 score of 78.9%. The source
code is available at: https://github.com/passerer/SRLane.
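The two-stage idea lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch of the "Sketch" stage: a few convolutional layers predict a per-location unit direction field, and a coarse lane proposal is assembled by stepping from a seed keypoint along that field. Every name here (DirectionSketchHead, walk_proposal, step, n_points) is an assumption for illustration, not the authors' implementation; see the linked repository for the real one.

```python
# Hypothetical sketch of the "Sketch" stage; names/shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionSketchHead(nn.Module):
    """Predict a 2-channel (dx, dy) unit direction field from backbone features."""
    def __init__(self, in_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 1),  # two channels: (dx, dy) per location
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        direction = self.net(feats)                     # (B, 2, H, W)
        return F.normalize(direction, dim=1, eps=1e-6)  # unit direction vectors

def walk_proposal(direction, seed, n_points=8, step=2.0):
    """Assemble one coarse lane proposal by stepping along the direction field.

    direction: (2, H, W) unit direction field for a single image.
    seed: (x, y) starting keypoint in feature-map coordinates.
    """
    _, H, W = direction.shape
    x, y = float(seed[0]), float(seed[1])
    points = [(x, y)]
    for _ in range(n_points - 1):
        xi = min(max(int(round(x)), 0), W - 1)  # clamp lookup to the feature map
        yi = min(max(int(round(y)), 0), H - 1)
        dx = direction[0, yi, xi].item()
        dy = direction[1, yi, xi].item()
        x, y = x + step * dx, y + step * dy
        points.append((x, y))
    return points

# Shape check with random features:
feats = torch.randn(1, 64, 40, 100)
field = DirectionSketchHead(64)(feats)
proposal = walk_proposal(field[0], seed=(50.0, 35.0))
```

In the same hedged spirit, the "Refine" stage's adaptive lane segment adjustment could be pictured as per-segment features exchanging context through self-attention before predicting small coordinate offsets. This is an assumption-level illustration of the idea behind LSAM, not the module itself.

```python
# Assumption-level illustration of segment-wise refinement, not the real LSAM.
import torch
import torch.nn as nn

class SegmentRefiner(nn.Module):
    def __init__(self, dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.offset = nn.Linear(dim, 2)  # (dx, dy) offset per segment point

    def forward(self, seg_feats: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        """seg_feats: (B, S, C) per-segment features; points: (B, S, 2) coarse points."""
        mixed, _ = self.attn(seg_feats, seg_feats, seg_feats)  # segments exchange context
        return points + self.offset(mixed)                     # refined coordinates

# refined = SegmentRefiner()(torch.randn(1, 8, 64), torch.rand(1, 8, 2))
```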
Related papers
- LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving [60.55208681215818]
We introduce LaneSegNet, the first end-to-end mapping network generating lane segments to obtain a complete representation of the road structure.
Our algorithm features two key modifications. One is a lane attention module to capture pivotal region details within the long-range feature space (a hedged sketch of this idea follows the entry).
On the OpenLane-V2 dataset, LaneSegNet outperforms previous counterparts by a substantial gain across three tasks.
arXiv Detail & Related papers (2023-12-26T16:22:10Z)
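A generic picture of a "lane attention" module: learnable lane-segment queries cross-attend to a flattened long-range feature map. Shapes, names (LaneAttention, n_queries), and the use of nn.MultiheadAttention are assumptions for illustration, not LaneSegNet's actual design.

```python
# Hedged, generic cross-attention sketch; not LaneSegNet's implementation.
import torch
import torch.nn as nn

class LaneAttention(nn.Module):
    def __init__(self, dim: int = 256, n_queries: int = 50, n_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim))  # lane-segment queries
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """feats: (B, C, H, W) feature map -> (B, n_queries, C) lane features."""
        B, C, H, W = feats.shape
        kv = feats.flatten(2).transpose(1, 2)            # (B, H*W, C) long-range tokens
        q = self.queries.unsqueeze(0).expand(B, -1, -1)  # (B, n_queries, C)
        out, _ = self.attn(q, kv, kv)
        return out

# lane_feats = LaneAttention()(torch.randn(2, 256, 25, 50))
```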
- Decoupling the Curve Modeling and Pavement Regression for Lane Detection [67.22629246312283]
Curve-based lane representation is a popular approach in many lane detection methods.
We propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression.
arXiv Detail & Related papers (2023-09-19T11:24:14Z)
- BSNet: Lane Detection via Draw B-spline Curves Nearby [21.40607319558899]
We revisit curve-based lane detection methods from the perspective of the globality and locality of lane representations.
We design a simple yet efficient network, BSNet, to ensure the acquisition of both global and local features (a toy B-spline example follows this entry).
The proposed methods achieve state-of-the-art performance on the TuSimple, CULane, and LLAMAS datasets.
arXiv Detail & Related papers (2023-01-17T14:25:40Z)
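Since BSNet draws lanes as B-spline curves, a toy example of the underlying representation may help: a 2-D lane sampled from a cubic B-spline over made-up control points, using SciPy's standard BSpline class. The control points and sampling density are invented for illustration, not taken from the paper.

```python
# Toy B-spline lane representation; control points are invented.
import numpy as np
from scipy.interpolate import BSpline

# Cubic (k=3) B-spline with a clamped knot vector over [0, 1].
# Four made-up (x, y) control points roughly tracing a lane in image space.
ctrl = np.array([[400.0, 590.0], [430.0, 450.0], [480.0, 330.0], [560.0, 250.0]])
k = 3
t = np.concatenate([np.zeros(k + 1), np.ones(k + 1)])  # len(t) = n_ctrl + k + 1 = 8
lane = BSpline(t, ctrl, k)

u = np.linspace(0.0, 1.0, 72)  # 72 samples along the curve
points = lane(u)               # (72, 2) array of lane points
```

The appeal of such a representation is that a handful of control points fixes the whole curve globally, while each point still only bends the curve locally.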
- RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to set a new state of the art on four major benchmarks: TuSimple, CULane, CurveLanes, and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z)
- A Keypoint-based Global Association Network for Lane Detection [47.93323407661912]
Lane detection is a challenging task that requires predicting complex topology shapes of lane lines and distinguishing different types of lanes simultaneously.
We propose a Global Association Network (GANet) to formulate the lane detection problem from a new perspective (a hedged sketch of the association idea follows this entry).
Our method outperforms previous methods, with an F1 score of 79.63% on CULane and 97.71% on TuSimple, at high FPS.
arXiv Detail & Related papers (2022-04-15T05:24:04Z)
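The "global association" idea can be pictured as keypoints voting for the starting point of their lane. The toy function below groups keypoints by the start point their regressed offsets point to; the names and the nearest-start assignment rule are assumptions for illustration, not GANet's exact scheme.

```python
# Hedged sketch of keypoint-to-lane association via start-point voting.
import torch

def associate(keypoints: torch.Tensor, offsets: torch.Tensor,
              starts: torch.Tensor) -> torch.Tensor:
    """keypoints, offsets: (N, 2); starts: (M, 2) detected lane start points.

    Returns (N,) lane index for each keypoint (the nearest voted start point).
    """
    votes = keypoints + offsets    # (N, 2) locations the keypoints vote for
    d = torch.cdist(votes, starts) # (N, M) pairwise distances to start points
    return d.argmin(dim=1)         # assign each keypoint to its nearest start

# kps = torch.rand(100, 2) * 100
# offs = torch.randn(100, 2)
# starts = torch.tensor([[10.0, 90.0], [60.0, 90.0]])
# lane_ids = associate(kps, offs, starts)
```

Grouping by a single regressed vote is what removes the heavy pairwise post-processing that purely local keypoint methods need.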
- CLRNet: Cross Layer Refinement Network for Lane Detection [36.10035201796672]
We present the Cross Layer Refinement Network (CLRNet), which aims to fully utilize both high-level and low-level features in lane detection.
CLRNet first detects lanes with high-level semantic features, then performs refinement based on low-level features.
In addition to the novel network design, we introduce a Line IoU loss that regresses the lane line as a whole unit to improve localization accuracy (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-03-19T16:11:35Z)
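A minimal sketch of the Line IoU idea: each lane is a vector of x-coordinates at shared rows, each point is widened by a radius e into a 1-D segment, and intersections and unions are accumulated over the rows. The radius value and names below are illustrative assumptions.

```python
# Minimal Line IoU sketch; radius e and names are assumptions.
import torch

def line_iou(pred_x: torch.Tensor, gt_x: torch.Tensor, e: float = 7.5) -> torch.Tensor:
    """pred_x, gt_x: (N, R) x-coordinates of N lanes at R shared rows."""
    # Per row, each point becomes the segment [x - e, x + e].
    inter = (torch.min(pred_x + e, gt_x + e)
             - torch.max(pred_x - e, gt_x - e)).clamp(min=0)  # overlap per row
    union = (torch.max(pred_x + e, gt_x + e)
             - torch.min(pred_x - e, gt_x - e))               # joint extent per row
    return inter.sum(dim=1) / union.sum(dim=1).clamp(min=1e-9)

# The loss treats the lane as a whole unit: loss = 1 - line_iou(pred_x, gt_x)
```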
- Lane Detection with Versatile AtrousFormer and Local Semantic Guidance [92.83267435275802]
Lane detection is one of the core functions in autonomous driving.
Most existing methods tend to resort to CNN-based techniques.
We propose the Atrous Transformer (AtrousFormer) to address this.
arXiv Detail & Related papers (2022-03-08T13:25:35Z)
- CondLaneNet: A Top-to-down Lane Detection Framework Based on Conditional Convolution [39.62595444015094]
We propose CondLaneNet, a novel top-to-down lane detection framework.
We also introduce a conditional lane detection strategy based on conditional convolution and a row-wise formulation (a hedged sketch of row-wise decoding follows this entry).
Our method achieves state-of-the-art performance on three benchmark datasets.
arXiv Detail & Related papers (2021-05-11T13:10:34Z)
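The row-wise formulation admits a short illustration: for each image row, a softmax over columns gives a location distribution whose expectation is the lane's x-coordinate, with a per-row score deciding which rows the lane actually occupies. Function names and the threshold below are assumptions, not CondLaneNet's exact head.

```python
# Hedged sketch of row-wise lane decoding; threshold/names are assumptions.
import torch

def rowwise_decode(logits: torch.Tensor, row_score: torch.Tensor, thr: float = 0.5):
    """logits: (H, W) per-row column logits; row_score: (H,) row occupancy probs."""
    prob = logits.softmax(dim=1)                            # (H, W) per-row distribution
    cols = torch.arange(logits.shape[1], dtype=prob.dtype)  # column indices 0..W-1
    x = (prob * cols).sum(dim=1)                            # expected x per row
    valid = row_score > thr                                 # rows the lane passes through
    return x[valid], valid.nonzero(as_tuple=True)[0]        # x coords and row indices

# xs, rows = rowwise_decode(torch.randn(40, 100), torch.rand(40))
```

Taking the expectation rather than the argmax keeps the decoding differentiable and sub-pixel accurate.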