Preprocessing Methods of Lane Detection and Tracking for Autonomous
Driving
- URL: http://arxiv.org/abs/2104.04755v1
- Date: Sat, 10 Apr 2021 13:03:52 GMT
- Title: Preprocessing Methods of Lane Detection and Tracking for Autonomous
Driving
- Authors: Akram Heidarizadeh
- Abstract summary: Real-time lane detection and tracking (LDT) is one of the most consequential components for performing these tasks.
In this paper, we survey preprocessing methods for detecting lane markings as well as tracking lane boundaries in real time, focusing on vision-based systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few years, research on advanced driver assistance systems
(ADASs) has been carried out and deployed in intelligent vehicles. Systems
that have been developed can perform different tasks, such as lane keeping
assistance (LKA), lane departure warning (LDW), lane change warning (LCW) and
adaptive cruise control (ACC). Real-time lane detection and tracking (LDT) is
one of the most consequential components for performing the above tasks. Images
extracted from the video contain noise and other unwanted factors, such as
variation in lighting and shadows from nearby objects, which require robust
preprocessing methods for lane marking detection and tracking.
Preprocessing is critical for the subsequent steps and real time performance
because its main function is to remove the irrelevant image parts and enhance
the features of interest. In this paper, we survey preprocessing methods for
detecting lane markings as well as tracking lane boundaries in real time,
focusing on vision-based systems.
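The survey itself does not include code, but as a rough illustration of the kind of vision-based preprocessing it covers, the sketch below (a minimal Python/OpenCV example; the thresholds, kernel size, and region-of-interest vertices are assumptions, not values from the paper) converts a frame to grayscale, smooths it, extracts edges, and masks a road-facing region so that irrelevant image parts are removed and lane-marking features are enhanced.

# Minimal sketch of a typical vision-based preprocessing pipeline for lane
# detection (illustrative only; parameters are assumptions, not taken from
# the surveyed paper).
import cv2
import numpy as np

def preprocess_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Return an edge map restricted to a road-facing region of interest."""
    # 1. Grayscale conversion discards color information that edge-based
    #    lane-marking detectors do not need.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # 2. Gaussian smoothing suppresses sensor noise and fine texture
    #    (a 5x5 kernel is a common default, not a prescribed value).
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # 3. Canny edge detection enhances the features of interest: strong
    #    intensity transitions such as painted lane markings.
    edges = cv2.Canny(blurred, 50, 150)

    # 4. Region-of-interest masking removes irrelevant image parts
    #    (sky, roadside objects) with a trapezoid over the road area.
    h, w = edges.shape
    roi = np.array([[(0, h), (w // 2 - 50, int(0.6 * h)),
                     (w // 2 + 50, int(0.6 * h)), (w, h)]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    return cv2.bitwise_and(edges, mask)

if __name__ == "__main__":
    # Example usage on a single video frame (file path is a placeholder).
    frame = cv2.imread("frame.jpg")
    if frame is not None:
        cv2.imwrite("edges_roi.jpg", preprocess_frame(frame))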
Related papers
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping [84.65114565766596]
We present OpenLane-V2, the first dataset on topology reasoning for traffic scene structure.
OpenLane-V2 consists of 2,000 annotated road scenes that describe traffic elements and their correlation to the lanes.
We evaluate various state-of-the-art methods, and present their quantitative and qualitative results on OpenLane-V2 to indicate future avenues for investigating topology reasoning in traffic scenes.
arXiv Detail & Related papers (2023-04-20T16:31:22Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Multi Lane Detection [12.684545950979187]
Lane detection is a basic module in autonomous driving.
Our work is based on CNN backbone DLA-34, along with Affinity Fields.
We investigate novel decoding methods to achieve a more efficient lane detection algorithm.
arXiv Detail & Related papers (2022-12-22T08:20:08Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
The nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Vision-Based Robust Lane Detection and Tracking under Different
Challenging Environmental Conditions [8.312192184427762]
Lane marking detection is fundamental for advanced driver assistance systems.
Here, we propose a robust lane detection and tracking method with three key technologies.
Experimental results show that the average detection rate is 97.55%, and the average processing time is 22.33 msec/frame.
arXiv Detail & Related papers (2022-10-19T01:25:21Z)
- RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to establish new state-of-the-art on four major benchmarks including TuSimple, CULane, CurveLanes and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z)
- RONELDv2: A faster, improved lane tracking method [1.3965477771846408]
Lane detection is an integral part of control systems in autonomous vehicles and lane departure warning systems.
This paper proposes an improved, lighter-weight lane detection method, RONELDv2.
Experiments using the proposed improvements show a consistent increase in lane detection accuracy results across different datasets and deep learning models.
arXiv Detail & Related papers (2022-02-26T13:12:09Z)
- LDNet: End-to-End Lane Marking Detection Approach Using a Dynamic Vision
Sensor [0.0]
This paper explores the novel application of lane marking detection using an event camera.
The spatial resolution of the encoded features is retained by a dense atrous spatial pyramid pooling block.
The efficacy of the proposed work is evaluated using the DVS dataset for lane extraction.
arXiv Detail & Related papers (2020-09-17T02:15:41Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double
Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle
Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.