Vision-Based Robust Lane Detection and Tracking under Different
Challenging Environmental Conditions
- URL: http://arxiv.org/abs/2210.10233v3
- Date: Thu, 15 Jun 2023 03:35:28 GMT
- Title: Vision-Based Robust Lane Detection and Tracking under Different
Challenging Environmental Conditions
- Authors: Samia Sultana, Boshir Ahmed, Manoranjan Paul, Muhammad Rafiqul Islam
and Shamim Ahmad
- Abstract summary: Lane marking detection is fundamental to advanced driving assistance systems.
Here, we propose a robust lane detection and tracking method with three key technologies.
Experimental results show that the average detection rate is 97.55%, and the average processing time is 22.33 msec/frame.
- Score: 8.312192184427762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lane marking detection is fundamental to advanced driving assistance
systems. However, lane detection is highly challenging when the visibility of
road lane markings is low due to challenging real-life environments and adverse
weather. Most lane detection methods suffer from four types of challenges:
(i) light effects, i.e., shadow, glare, reflection, etc.; (ii) obscured
visibility of eroded, blurred, colored, and cracked lane markings caused by
natural disasters and adverse weather; (iii) lane marking occlusion by
objects from the surroundings (wipers, vehicles, etc.); and (iv) the presence
of confusing lane-like lines inside the lane view, e.g., guardrails, pavement
markings, road dividers, etc. Here, we propose a robust lane detection and
tracking method with three key technologies. First, we introduce a
comprehensive intensity threshold range (CITR) to improve the performance of
the Canny operator in detecting low-intensity lane edges. Second, we propose a
two-step lane verification technique, the angle-based geometric constraint
(AGC) and the length-based geometric constraint (LGC) applied after the Hough
Transform, to verify the characteristics of lane markings and to prevent
incorrect lane detection. Finally, we propose a novel lane tracking technique
that defines a range of horizontal lane positions (RHLP) along the x axis,
updated with respect to the lane position of the previous frame. It can keep
track of the lane position when either the left or right lane marking, or both,
are partially or fully invisible. To evaluate the performance of the proposed
method, we used the DSDLDE [1] and SLD [2] datasets with 1080x1920 and 480x720
resolutions at 24 and 25 frames/sec, respectively. Experimental results show
that the average detection rate is 97.55% and the average processing time is
22.33 msec/frame, which outperforms state-of-the-art methods.
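As a rough illustration of how the first two components fit together, the sketch below (Python with OpenCV) applies Canny edge detection with a widened low/high threshold pair standing in for the CITR, runs a probabilistic Hough Transform, and then filters the resulting segments with an angle-based (AGC) and a length-based (LGC) check. The threshold values, the angle window, and the helper name detect_lane_segments are illustrative assumptions, not the parameters reported in the paper.

```python
# Hedged sketch of the detection stage: Canny with an assumed CITR-style
# threshold pair, probabilistic Hough Transform, then AGC/LGC verification.
import cv2
import numpy as np

CITR_LOW, CITR_HIGH = 30, 120       # assumed intensity thresholds for faint lane edges
AGC_MIN_DEG, AGC_MAX_DEG = 20, 70   # assumed valid slope window for lane markings
LGC_MIN_LEN = 40                    # assumed minimum segment length in pixels


def detect_lane_segments(gray_roi):
    """Return Hough segments that pass the angle (AGC) and length (LGC) checks."""
    edges = cv2.Canny(gray_roi, CITR_LOW, CITR_HIGH)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=LGC_MIN_LEN, maxLineGap=20)
    segments = []
    if lines is None:
        return segments
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        length = np.hypot(x2 - x1, y2 - y1)
        # AGC: reject near-horizontal clutter (pavement marks, guardrail shadows).
        # LGC: reject short fragments caused by cracks or erosion.
        if AGC_MIN_DEG <= angle <= AGC_MAX_DEG and length >= LGC_MIN_LEN:
            segments.append((x1, y1, x2, y2))
    return segments
```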
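The tracking step can be pictured in the same spirit: keep a per-lane horizontal position, accept only segments whose bottom endpoint falls inside a window (the RHLP) around that position, and fall back to the previous estimate when the marking is occluded or invisible. Again, the window size RHLP_MARGIN and the function track_with_rhlp are assumptions for illustration only.

```python
# Hedged sketch of RHLP-style tracking for one lane marking (e.g., the left lane).
import numpy as np

RHLP_MARGIN = 60  # assumed half-width of the horizontal search range, in pixels


def bottom_x(segment):
    """x coordinate of the endpoint closest to the vehicle (largest y)."""
    x1, y1, x2, y2 = segment
    return x1 if y1 > y2 else x2


def track_with_rhlp(segments, prev_x, image_width):
    """Update the lane's horizontal position using only segments inside the RHLP."""
    if prev_x is None:
        candidates = segments          # first frame: no prior position yet
    else:
        lo, hi = prev_x - RHLP_MARGIN, prev_x + RHLP_MARGIN
        candidates = [s for s in segments if lo <= bottom_x(s) <= hi]
    if not candidates:
        return prev_x                  # marking occluded or invisible: keep prior estimate
    new_x = int(np.mean([bottom_x(s) for s in candidates]))
    return int(np.clip(new_x, 0, image_width - 1))
```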
Related papers
- Sketch and Refine: Towards Fast and Accurate Lane Detection [69.63287721343907]
Lane detection is a challenging task due to the complexity of real-world scenarios.
Existing approaches, whether proposal-based or keypoint-based, struggle to describe lanes both effectively and efficiently.
We present a "Sketch-and-Refine" paradigm that utilizes the merits of both keypoint-based and proposal-based methods.
Experiments show that our SRLane can run at a fast speed (i.e., 278 FPS) while yielding an F1 score of 78.9%.
arXiv Detail & Related papers (2024-01-26T09:28:14Z) - Decoupling the Curve Modeling and Pavement Regression for Lane Detection [67.22629246312283]
Curve-based lane representation is a popular approach in many lane detection methods.
We propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression.
arXiv Detail & Related papers (2023-09-19T11:24:14Z) - Prior Based Online Lane Graph Extraction from Single Onboard Camera
Image [133.68032636906133]
We tackle online estimation of the lane graph from a single onboard camera image.
The prior is extracted from the dataset through a transformer based Wasserstein Autoencoder.
The autoencoder is then used to enhance the initial lane graph estimates.
arXiv Detail & Related papers (2023-07-25T08:58:26Z) - An Efficient Transformer for Simultaneous Learning of BEV and Lane
Representations in 3D Lane Detection [55.281369497158515]
We propose an efficient transformer for 3D lane detection.
Different from the vanilla transformer, our model contains a cross-attention mechanism to simultaneously learn lane and BEV representations.
Our method obtains 2D and 3D lane predictions by applying the lane features to the image-view and BEV features, respectively.
arXiv Detail & Related papers (2023-06-08T04:18:31Z) - RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to establish new state-of-the-art on four major benchmarks including TuSimple, CULane, CurveLanes and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z) - RONELDv2: A faster, improved lane tracking method [1.3965477771846408]
Lane detection is an integral part of control systems in autonomous vehicles and lane departure warning systems.
This paper proposes an improved, lighter-weight lane detection method, RONELDv2.
Experiments using the proposed improvements show a consistent increase in lane detection accuracy results across different datasets and deep learning models.
arXiv Detail & Related papers (2022-02-26T13:12:09Z) - Preprocessing Methods of Lane Detection and Tracking for Autonomous
Driving [0.0]
Real-time lane detection and tracking (LDT) is one of the most consequential components of autonomous driving.
In this paper, we survey preprocessing methods for detecting lane markings and tracking lane boundaries in real time, focusing on vision-based systems.
arXiv Detail & Related papers (2021-04-10T13:03:52Z) - Lane Detection Model Based on Spatio-Temporal Network With Double
Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z) - Road Curb Detection and Localization with Monocular Forward-view Vehicle
Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z) - Real-Time Lane ID Estimation Using Recurrent Neural Networks With Dual
Convention [0.0]
We propose a vision-only (i.e. monocular camera) solution to the problem based on a dual left-right convention.
We achieve more than 95% accuracy on a challenging test set with extreme conditions and different routes.
arXiv Detail & Related papers (2020-01-14T10:52:30Z)