Lane Detection Model Based on Spatio-Temporal Network With Double
Convolutional Gated Recurrent Units
- URL: http://arxiv.org/abs/2008.03922v2
- Date: Sat, 27 Feb 2021 11:59:41 GMT
- Title: Lane Detection Model Based on Spatio-Temporal Network With Double
Convolutional Gated Recurrent Units
- Authors: Jiyong Zhang, Tao Deng, Fei Yan and Wenbo Liu
- Abstract summary: Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
- Score: 11.968518335236787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane detection is one of the indispensable and key elements of self-driving
environmental perception. Many lane detection models have been proposed,
solving lane detection under challenging conditions, including intersection
merging and splitting, curves, boundaries, occlusions and combinations of scene
types. Nevertheless, lane detection will remain an open problem for some time
to come. The ability to cope well with those challenging scenes impacts greatly
the applications of lane detection on advanced driver assistance systems
(ADASs). In this paper, a spatio-temporal network with double Convolutional
Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in
challenging scenes. The two ConvGRUs share the same structure but occupy
different locations and serve different functions in our network. One is used
to extract the most likely low-level features of lane markings; the extracted
features are concatenated with the outputs of some blocks and fed into the next
layer of the end-to-end network. The other takes several continuous frames as
its input to process the spatio-temporal driving information.
Extensive experiments on the large-scale TuSimple lane marking challenge
dataset and Unsupervised LLAMAS dataset demonstrate that the proposed model can
effectively detect lanes in the challenging driving scenes. Our model can
outperform the state-of-the-art lane detection models.
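The core building block of the abstract's architecture is the ConvGRU, a GRU cell whose dense matrix products are replaced by convolutions so the hidden state keeps a spatial layout. The paper does not publish its exact implementation, so the following is a minimal illustrative sketch of one 1D ConvGRU update step in pure Python; the kernel names (`wz`, `uz`, etc.) and the 1D simplification are assumptions for clarity, not the authors' code:

```python
import math

def conv1d(x, w):
    """'Same' zero-padded 1D convolution of sequence x with an odd-length kernel w."""
    k = len(w) // 2
    padded = [0.0] * k + list(x) + [0.0] * k
    return [sum(w[j] * padded[i + j] for j in range(len(w)))
            for i in range(len(x))]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def convgru_step(h, x, wz, uz, wr, ur, wh, uh):
    """One ConvGRU update. Gates follow the standard GRU equations,
    with convolutions in place of dense matrix products:
        z  = sigma(Wz * x + Uz * h)         (update gate)
        r  = sigma(Wr * x + Ur * h)         (reset gate)
        h~ = tanh(Wh * x + Uh * (r . h))    (candidate state)
        h' = (1 - z) . h + z . h~
    """
    z = [sigmoid(a + b) for a, b in zip(conv1d(x, wz), conv1d(h, uz))]
    r = [sigmoid(a + b) for a, b in zip(conv1d(x, wr), conv1d(h, ur))]
    rh = [ri * hi for ri, hi in zip(r, h)]
    h_cand = [math.tanh(a + b) for a, b in zip(conv1d(x, wh), conv1d(rh, uh))]
    return [(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, h_cand)]
```

In a spatio-temporal network of the kind described, such a cell would be applied once per video frame, carrying the hidden state `h` forward so lane features from earlier frames inform the current detection.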
Related papers
- Monocular Lane Detection Based on Deep Learning: A Survey [51.19079381823076]
Lane detection plays an important role in autonomous driving perception systems.
As deep learning algorithms gain popularity, monocular lane detection methods based on deep learning have demonstrated superior performance.
This paper presents a comprehensive overview of existing methods, encompassing both the increasingly mature 2D lane detection approaches and the developing 3D lane detection works.
arXiv Detail & Related papers (2024-11-25T12:09:43Z) - Attention-based U-Net Method for Autonomous Lane Detection [0.5461938536945723]
Two deep learning-based lane recognition methods are proposed in this study.
The first method employs the Feature Pyramid Network (FPN) model, delivering an impressive 87.59% accuracy in detecting road lanes.
The second method, which incorporates attention layers into the U-Net model, significantly boosts the performance of semantic segmentation tasks.
arXiv Detail & Related papers (2024-11-16T22:20:11Z) - OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping [84.65114565766596]
We present OpenLane-V2, the first dataset on topology reasoning for traffic scene structure.
OpenLane-V2 consists of 2,000 annotated road scenes that describe traffic elements and their correlation to the lanes.
We evaluate various state-of-the-art methods, and present their quantitative and qualitative results on OpenLane-V2 to indicate future avenues for investigating topology reasoning in traffic scenes.
arXiv Detail & Related papers (2023-04-20T16:31:22Z) - Graph-based Topology Reasoning for Driving Scenes [102.35885039110057]
We present TopoNet, the first end-to-end framework capable of abstracting traffic knowledge beyond conventional perception tasks.
We evaluate TopoNet on the challenging scene understanding benchmark, OpenLane-V2.
arXiv Detail & Related papers (2023-04-11T15:23:29Z) - Blind-Spot Collision Detection System for Commercial Vehicles Using
Multi Deep CNN Architecture [0.17499351967216337]
Two convolutional neural networks (CNNs) based on high-level feature descriptors are proposed to detect blind-spot collisions for heavy vehicles.
A fusion approach is proposed to integrate two pre-trained networks for extracting high level features for blind-spot vehicle detection.
The fusion of features significantly improves the performance of faster R-CNN and outperforms the existing state-of-the-art methods.
arXiv Detail & Related papers (2022-08-17T11:10:37Z) - RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to establish new state-of-the-art on four major benchmarks including TuSimple, CULane, CurveLanes and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z) - Laneformer: Object-aware Row-Column Transformers for Lane Detection [96.62919884511287]
Laneformer is a transformer-based architecture tailored for lane detection in autonomous driving.
Inspired by recent advances of the transformer encoder-decoder architecture in various vision tasks, we move forward to design a new end-to-end Laneformer architecture.
arXiv Detail & Related papers (2022-03-18T10:14:35Z) - Lane Detection with Versatile AtrousFormer and Local Semantic Guidance [92.83267435275802]
Lane detection is one of the core functions in autonomous driving.
Most existing methods tend to resort to CNN-based techniques.
We propose Atrous Transformer (AtrousFormer) to solve the problem.
arXiv Detail & Related papers (2022-03-08T13:25:35Z) - Lane detection in complex scenes based on end-to-end neural network [10.955885950313103]
Lane detection is a key problem in delineating drivable areas for unmanned driving.
We propose an end-to-end network for lane detection in a variety of complex scenes.
Our network was tested on the CULane database and its F1-measure with IOU threshold of 0.5 can reach 71.9%.
arXiv Detail & Related papers (2020-10-26T08:46:35Z) - Heatmap-based Vanishing Point boosts Lane Detection [3.8170259685864165]
We propose a new multi-task fusion network architecture for high-precision lane detection.
The proposed fusion strategy was tested using the public CULane dataset.
The experimental results suggest that the lane detection accuracy of our method outperforms those of state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2020-07-30T17:17:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.