Lane Detection with Versatile AtrousFormer and Local Semantic Guidance
- URL: http://arxiv.org/abs/2203.04067v1
- Date: Tue, 8 Mar 2022 13:25:35 GMT
- Title: Lane Detection with Versatile AtrousFormer and Local Semantic Guidance
- Authors: Jiaxing Yang, Lihe Zhang, Huchuan Lu
- Abstract summary: Lane detection is one of the core functions in autonomous driving.
Most existing methods tend to resort to CNN-based techniques.
We propose Atrous Transformer (AtrousFormer) to solve the problem.
- Score: 92.83267435275802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane detection is one of the core functions in autonomous driving and has
attracted widespread attention recently. Networks that segment lane instances,
especially lanes with poor appearance, must be able to exploit lane distribution
properties. Most existing methods resort to CNN-based techniques; a few have tried
incorporating the recent seq2seq Transformer \cite{transformer}. However, weak global
information gathering and exorbitant computation overhead prohibit their wider
application. In this work, we propose the Atrous Transformer (AtrousFormer) to solve
this problem. Its variant, the local AtrousFormer, is interleaved into the feature
extractor to enhance feature extraction. By collecting information first by rows and
then by columns in a dedicated manner, they equip our network with stronger
information-gleaning ability and better computational efficiency. To further improve
performance, we also propose a local semantic guided decoder that delineates the
identities and shapes of lanes more accurately, in which a predicted Gaussian map of
each lane's starting point guides the process. Extensive results on three challenging
benchmarks (CULane, TuSimple, and BDD100K) show that our network performs
favorably against the state of the art.
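To make the row-then-column scheme and the starting-point guidance concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the module names, head counts, and the exact interleaving are assumptions, and the Gaussian helper only illustrates the kind of starting-point map the decoder is said to be guided by.

```python
# Sketch only: row-then-column self-attention over a CNN feature map, one
# plausible reading of "collecting information first by rows and then by
# columns"; shapes, head count, and naming are assumptions.
import torch
import torch.nn as nn


class RowColumnAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the CNN backbone.
        b, c, h, w = x.shape
        # Attend within each row: B*H sequences of length W.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # Attend within each column: B*W sequences of length H.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)


def start_point_gaussian(h, w, cy, cx, sigma=2.0):
    # Illustrative 2D Gaussian heat map centred on a lane's predicted starting
    # point, the signal that guides the local semantic guided decoder.
    ys = torch.arange(h, dtype=torch.float32).unsqueeze(1)
    xs = torch.arange(w, dtype=torch.float32).unsqueeze(0)
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
```

Attending along a row costs O(W^2) per row rather than O((HW)^2) over the whole map, which is where the claimed efficiency over full seq2seq attention would come from.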
Related papers
- Monocular Lane Detection Based on Deep Learning: A Survey [51.19079381823076]
Lane detection plays an important role in autonomous driving perception systems.
As deep learning algorithms gain popularity, monocular lane detection methods based on deep learning have demonstrated superior performance.
This paper presents a comprehensive overview of existing methods, encompassing both the increasingly mature 2D lane detection approaches and the developing 3D lane detection works.
arXiv Detail & Related papers (2024-11-25T12:09:43Z)
- Sketch and Refine: Towards Fast and Accurate Lane Detection [69.63287721343907]
Lane detection is a challenging task due to the complexity of real-world scenarios.
Existing approaches, whether proposal-based or keypoint-based, struggle to depict lanes both effectively and efficiently.
We present a "Sketch-and-Refine" paradigm that utilizes the merits of both keypoint-based and proposal-based methods.
Experiments show that our SRLane can run at a fast speed (i.e., 278 FPS) while yielding an F1 score of 78.9%.
arXiv Detail & Related papers (2024-01-26T09:28:14Z)
- HoughLaneNet: Lane Detection with Deep Hough Transform and Dynamic Convolution [8.97991745734826]
Lanes can present difficulties for detection, as they can be narrow, fragmented, and often obscured by heavy traffic.
We propose a hierarchical Deep Hough Transform (DHT) approach that combines all lane features in an image into the Hough parameter space.
Our proposed network structure demonstrates improved performance in detecting heavily occluded or worn lanes (a classical Hough-voting sketch follows this entry).
arXiv Detail & Related papers (2023-07-07T10:08:29Z)
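HoughLaneNet maps lane evidence into a Hough parameter space; as a purely illustrative sketch of that idea, here is classical Hough voting for lines (not the paper's deep, learned variant), with resolutions and ranges chosen arbitrarily.

```python
# Classical Hough voting: every lane pixel votes for all lines
# rho = x*cos(theta) + y*sin(theta) passing through it; peaks in the
# accumulator correspond to dominant lines. Bin counts are assumptions.
import numpy as np


def hough_line_votes(points, img_h, img_w, num_thetas=180, num_rhos=400):
    thetas = np.linspace(0.0, np.pi, num_thetas, endpoint=False)
    max_rho = np.hypot(img_h, img_w)
    acc = np.zeros((num_rhos, num_thetas), dtype=np.int64)
    for y, x in points:  # (row, col) coordinates of lane pixels
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + max_rho) / (2 * max_rho) * (num_rhos - 1)).astype(int)
        acc[bins, np.arange(num_thetas)] += 1
    return acc, thetas, max_rho


# The strongest accumulator cell gives the dominant line's (rho, theta):
# acc, thetas, max_rho = hough_line_votes(lane_pixels, 320, 800)
# rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```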
- BSNet: Lane Detection via Draw B-spline Curves Nearby [21.40607319558899]
We revisit curve-based lane detection methods from the perspective of the globality and locality of lane representations.
We design a simple yet efficient network, BSNet, to capture both global and local features (an illustrative B-spline sketch follows this entry).
The proposed methods achieve state-of-the-art performance on the TuSimple, CULane, and LLAMAS datasets.
arXiv Detail & Related papers (2023-01-17T14:25:40Z)
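BSNet describes each lane with a B-spline curve; the following sketch (SciPy, assumed, not from the paper) shows how a lane polyline can be sampled from a handful of control points.

```python
# Sketch: a lane modelled as a clamped cubic B-spline evaluated from predicted
# control points. The number of control points and the degree are assumptions.
import numpy as np
from scipy.interpolate import BSpline


def lane_from_control_points(ctrl_xy, degree=3, samples=50):
    """ctrl_xy: (N, 2) control points in image coordinates -> (samples, 2) lane points."""
    ctrl_xy = np.asarray(ctrl_xy, dtype=float)
    n = len(ctrl_xy)
    # Clamped uniform knot vector so the curve starts and ends at the end control points.
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0.0, 1.0, n - degree + 1),
                            np.ones(degree)])
    spline = BSpline(knots, ctrl_xy, degree)
    return spline(np.linspace(0.0, 1.0, samples))


# Example: four control points roughly tracing a gently curving lane.
# lane = lane_from_control_points([[400, 650], [380, 500], [350, 350], [310, 200]])
```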
- RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to establish new state-of-the-art on four major benchmarks including TuSimple, CULane, CurveLanes and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z)
- Laneformer: Object-aware Row-Column Transformers for Lane Detection [96.62919884511287]
Laneformer is a transformer-based architecture tailored for lane detection in autonomous driving.
Inspired by recent advances of the transformer encoder-decoder architecture in various vision tasks, we design a new end-to-end Laneformer architecture.
arXiv Detail & Related papers (2022-03-18T10:14:35Z)
- End-to-end Lane Shape Prediction with Transformers [13.103463647059634]
Lane detection is widely used for lane departure warning and adaptive cruise control in autonomous vehicles.
We propose an end-to-end method that directly outputs the parameters of a lane shape model (a toy parametric example follows this entry).
The proposed method is validated on the TuSimple benchmark and shows state-of-the-art accuracy with the most lightweight model size and fastest speed.
arXiv Detail & Related papers (2020-11-09T07:42:55Z)
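For the lane shape prediction entry above, the following toy example (assumed; the paper's actual shape model is more elaborate) shows what directly outputting the parameters of a lane shape model can look like: a few regressed coefficients define the curve, which is then sampled at image rows.

```python
# Toy parametric lane: x(y) = a*y**2 + b*y + c, with (a, b, c) regressed by the
# network. The coefficients and sample rows below are made-up illustration values.
import numpy as np


def sample_lane(coeffs, y_rows):
    a, b, c = coeffs
    y = np.asarray(y_rows, dtype=float)
    return a * y ** 2 + b * y + c


# xs = sample_lane((1.2e-4, -0.15, 400.0), np.arange(160, 720, 10))
```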
- RESA: Recurrent Feature-Shift Aggregator for Lane Detection [32.246537653680484]
We present a novel module named REcurrent Feature-Shift Aggregator (RESA) to enrich lane features after preliminary feature extraction with an ordinary CNN.
RESA can infer lanes accurately in challenging scenarios with weak appearance clues by aggregating sliced feature maps (a rough sketch of this shifting follows the entry).
Our method achieves state-of-the-art results on two popular lane detection benchmarks (CULane and TuSimple).
arXiv Detail & Related papers (2020-08-31T16:37:30Z)
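As a rough illustration of RESA's feature shifting (assumed, not the authors' code, which also applies learned convolutions and shifts in several directions), the sketch below repeatedly rolls the feature map along the vertical axis and fuses it back, so evidence propagates across the whole image height.

```python
# Sketch: recurrent vertical feature shifting. The stride schedule (halving each
# iteration) and the simple additive fusion are assumptions.
import torch


def vertical_shift_aggregate(feat: torch.Tensor, iters: int = 4) -> torch.Tensor:
    # feat: (B, C, H, W) feature map from a CNN backbone.
    b, c, h, w = feat.shape
    out = feat.clone()
    for k in range(iters):
        stride = max(h // (2 ** (k + 1)), 1)
        out = out + torch.roll(out, shifts=stride, dims=2)  # shift along H and fuse
    return out
```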
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scale pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes (a generic ConvGRU cell is sketched after this entry).
Our model can outperform the state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
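A convolutional GRU cell, the building block named in this entry, can be sketched as the standard GRU update with convolutions replacing matrix products (this is a generic ConvGRU, not the paper's exact double-ConvGRU design):

```python
# Generic ConvGRU cell: gates and candidate state are computed by convolutions,
# so the recurrent state keeps its spatial layout across video frames.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    def __init__(self, in_ch: int, hidden_ch: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, kernel_size, padding=pad)
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, kernel_size, padding=pad)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # x: (B, C_in, H, W) current-frame features; h: (B, C_hid, H, W) previous state.
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde
```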