PolyLaneNet: Lane Estimation via Deep Polynomial Regression
- URL: http://arxiv.org/abs/2004.10924v2
- Date: Tue, 14 Jul 2020 17:02:54 GMT
- Title: PolyLaneNet: Lane Estimation via Deep Polynomial Regression
- Authors: Lucas Tabelini, Rodrigo Berriel, Thiago M. Paixão, Claudine Badue,
Alberto F. De Souza and Thiago Oliveira-Santos
- Abstract summary: We present a novel method for lane detection that uses an image from a forward-looking camera mounted in the vehicle.
The proposed method is shown to be competitive with existing state-of-the-art methods in the TuSimple dataset.
We provide source code and trained models that allow others to replicate all the results shown in this paper.
- Score: 9.574421369309949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main factors that contributed to the large advances in autonomous
driving is the advent of deep learning. For safer self-driving vehicles, one of
the problems that has yet to be solved completely is lane detection. Since
methods for this task have to work in real-time (30+ FPS), they not only have
to be effective (i.e., have high accuracy) but they also have to be efficient
(i.e., fast). In this work, we present a novel method for lane detection that
uses as input an image from a forward-looking camera mounted in the vehicle and
outputs polynomials representing each lane marking in the image, via deep
polynomial regression. The proposed method is shown to be competitive with
existing state-of-the-art methods in the TuSimple dataset while maintaining its
efficiency (115 FPS). Additionally, extensive qualitative results on two
additional public datasets are presented, alongside limitations in the
evaluation metrics used by recent works for lane detection. Finally, we provide
source code and trained models that allow others to replicate all the results
shown in this paper, which is surprisingly rare in state-of-the-art lane
detection methods. The full source code and pretrained models are available at
https://github.com/lucastabelini/PolyLaneNet.
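
The key idea in the abstract, representing each lane marking as a polynomial in image coordinates rather than as a set of segmentation pixels, can be sketched in plain Python. The degree, coefficient values, sampling range, and function names below are illustrative assumptions, not the paper's actual interface:

```python
def poly_eval(coeffs, y):
    """Evaluate a polynomial at y using Horner's method.

    coeffs are ordered highest degree first, so [a, b, c, d]
    means x(y) = a*y**3 + b*y**2 + c*y + d.
    """
    x = 0.0
    for c in coeffs:
        x = x * y + c
    return x


def lane_points(coeffs, y_start, y_end, num_points=20):
    """Sample (x, y) points along one predicted lane marking.

    y_start and y_end play the role of vertical offsets that bound
    where the marking is visible in the image: the polynomial maps
    a vertical position y to a horizontal position x.
    """
    step = (y_end - y_start) / (num_points - 1)
    pts = []
    for i in range(num_points):
        y = y_start + i * step
        pts.append((poly_eval(coeffs, y), y))
    return pts


# Hypothetical regressed coefficients for one lane (third-degree
# polynomial in pixel coordinates), plus assumed visibility bounds.
coeffs = [1e-6, -2e-3, 1.2, 80.0]
points = lane_points(coeffs, y_start=250.0, y_end=710.0)
```

Benchmarks such as TuSimple evaluate lanes as x positions at fixed y rows, so a polynomial representation like this can be scored by sampling it at the ground-truth rows and comparing the resulting x values.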
Related papers
- Building Lane-Level Maps from Aerial Images [9.185929396989083]
We introduce for the first time a large-scale aerial image dataset built for lane detection.
We develop a baseline deep learning lane detection method from aerial images, called AerialLaneNet.
Our approach achieves significant improvement compared with the state-of-the-art methods on our new dataset.
arXiv Detail & Related papers (2023-12-20T21:58:45Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Decoupling the Curve Modeling and Pavement Regression for Lane Detection [67.22629246312283]
Curve-based lane representation is a popular approach in many lane detection methods.
We propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression.
arXiv Detail & Related papers (2023-09-19T11:24:14Z)
- Prior Based Online Lane Graph Extraction from Single Onboard Camera Image [133.68032636906133]
We tackle online estimation of the lane graph from a single onboard camera image.
The prior is extracted from the dataset through a transformer-based Wasserstein Autoencoder.
The autoencoder is then used to enhance the initial lane graph estimates.
arXiv Detail & Related papers (2023-07-25T08:58:26Z)
- Online Lane Graph Extraction from Onboard Video [133.68032636906133]
We use the video stream from an onboard camera for online extraction of the surrounding lane graph.
Using video, instead of a single image, as input poses both benefits and challenges in combining information from different timesteps.
A single model of this proposed simple, yet effective, method can process any number of images, including one, to produce accurate lane graphs.
arXiv Detail & Related papers (2023-04-03T12:36:39Z)
- RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to establish a new state of the art on four major benchmarks: TuSimple, CULane, CurveLanes, and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z)
- Lane Detection with Versatile AtrousFormer and Local Semantic Guidance [92.83267435275802]
Lane detection is one of the core functions in autonomous driving.
Most existing methods tend to resort to CNN-based techniques.
We propose the Atrous Transformer (AtrousFormer) to solve the problem.
arXiv Detail & Related papers (2022-03-08T13:25:35Z)
- RONELDv2: A faster, improved lane tracking method [1.3965477771846408]
Lane detection is an integral part of control systems in autonomous vehicles and lane departure warning systems.
This paper proposes an improved, lighter-weight lane detection method, RONELDv2.
Experiments using the proposed improvements show a consistent increase in lane detection accuracy across different datasets and deep learning models.
arXiv Detail & Related papers (2022-02-26T13:12:09Z)
- RONELD: Robust Neural Network Output Enhancement for Active Lane Detection [1.3965477771846408]
Recent state-of-the-art lane detection algorithms utilize convolutional neural networks (CNNs) to train deep learning models.
We present a real-time robust neural network output enhancement for active lane detection (RONELD).
Experimental results demonstrate an up to two-fold increase in accuracy using RONELD.
arXiv Detail & Related papers (2020-10-19T14:22:47Z)
- SUPER: A Novel Lane Detection System [26.417172945374364]
We propose a real-time lane detection system, called the Scene Understanding Physics-Enhanced Real-time (SUPER) algorithm.
We train the proposed system using heterogeneous data from Cityscapes, Vistas, and Apollo, and evaluate its performance on four completely separate datasets.
Preliminary test results show promising real-time lane-detection performance compared with Mobileye.
arXiv Detail & Related papers (2020-05-14T21:40:39Z)
- Map-Enhanced Ego-Lane Detection in the Missing Feature Scenarios [26.016292792373815]
This paper exploits prior knowledge contained in digital maps, which has a strong capability to enhance the performance of detection algorithms.
In this way, only a few lane features are needed to eliminate the position error between the road shape and the real lane.
Experiments show that the proposed method can be applied to various scenarios and can run in real time at a frequency of 20 Hz.
arXiv Detail & Related papers (2020-04-02T16:06:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.