Lane Detection System for Driver Assistance in Vehicles
- URL: http://arxiv.org/abs/2410.04046v1
- Date: Sat, 5 Oct 2024 05:53:29 GMT
- Title: Lane Detection System for Driver Assistance in Vehicles
- Authors: Kauan Divino Pouso Mariano, Fernanda de Castro Fernandes, Luan Gabriel Silva Oliveira, Lyan Eduardo Sakuno Rodrigues, Matheus Andrade Brandão
- Abstract summary: This work presents the development of a lane detection system aimed at assisting the driving of conventional and autonomous vehicles.
The system was implemented using traditional computer vision techniques, focusing on robustness and efficiency to operate in real-time.
It is concluded that, despite its limitations, the traditional computer vision approach shows significant potential for application in driver assistance systems and autonomous navigation.
- Score: 36.136619420474766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents the development of a lane detection system aimed at assisting the driving of conventional and autonomous vehicles. The system was implemented using traditional computer vision techniques, focusing on robustness and efficiency to operate in real-time, even under adverse conditions such as worn-out lanes and weather variations. The methodology employs an image processing pipeline that includes camera calibration, distortion correction, perspective transformation, and binary image generation. Lane detection is performed using sliding window techniques and segmentation based on gradients and color channels, enabling the precise identification of lanes in various road scenarios. The results indicate that the system can effectively detect and track lanes, performing well under different lighting conditions and road surfaces. However, challenges were identified in extreme situations, such as intense shadows and sharp curves. It is concluded that, despite its limitations, the traditional computer vision approach shows significant potential for application in driver assistance systems and autonomous navigation, with room for future improvements.
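The pipeline described in the abstract follows the classic OpenCV lane-finding recipe. Below is a minimal sketch of those stages, assuming an already-calibrated camera model; the function names, thresholds, and warp points are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the lane-detection stages named in the abstract:
# distortion correction, gradient/color thresholding, perspective
# transformation, and sliding-window lane search.
# All thresholds and parameters below are illustrative assumptions.
import cv2
import numpy as np

def undistort(img, camera_matrix, dist_coeffs):
    """Correct lens distortion using a previously calibrated camera model."""
    return cv2.undistort(img, camera_matrix, dist_coeffs)

def binary_threshold(img, sobel_thresh=(30, 100), s_thresh=(170, 255)):
    """Combine a Sobel-x gradient threshold and an HLS S-channel threshold."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
    sobelx = np.uint8(255 * sobelx / (sobelx.max() + 1e-6))
    s_channel = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]
    binary = np.zeros_like(gray, dtype=np.uint8)
    binary[((sobelx >= sobel_thresh[0]) & (sobelx <= sobel_thresh[1])) |
           ((s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1]))] = 1
    return binary

def warp_to_birds_eye(binary, src, dst):
    """Warp to a top-down view; src/dst are float32 arrays of 4 points each."""
    h, w = binary.shape[:2]
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(binary, M, (w, h))

def sliding_window_fit(warped, nwindows=9, margin=100, minpix=50):
    """Locate lane pixels with sliding windows and fit a quadratic per lane.

    Assumes both lane lines produce nonzero pixels in the warped binary image.
    """
    # Histogram of the lower half gives the starting x positions of both lanes.
    histogram = np.sum(warped[warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_current = np.argmax(histogram[:midpoint])
    rightx_current = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = warped.nonzero()
    window_height = warped.shape[0] // nwindows
    left_inds, right_inds = [], []

    for window in range(nwindows):
        y_low = warped.shape[0] - (window + 1) * window_height
        y_high = warped.shape[0] - window * window_height
        for current, inds in ((leftx_current, left_inds), (rightx_current, right_inds)):
            good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                    (nonzerox >= current - margin) & (nonzerox < current + margin)).nonzero()[0]
            inds.append(good)
        # Re-center the next window on the mean x of the pixels just found.
        if len(left_inds[-1]) > minpix:
            leftx_current = int(np.mean(nonzerox[left_inds[-1]]))
        if len(right_inds[-1]) > minpix:
            rightx_current = int(np.mean(nonzerox[right_inds[-1]]))

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    # Fit x as a quadratic function of y for each lane line.
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```

A typical frame would flow through undistort -> binary_threshold -> warp_to_birds_eye -> sliding_window_fit, with the src/dst warp points chosen once from a straight-road reference image.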
Related papers
- Monocular Lane Detection Based on Deep Learning: A Survey [51.19079381823076]
Lane detection plays an important role in autonomous driving perception systems.
As deep learning algorithms gain popularity, monocular lane detection methods based on deep learning have demonstrated superior performance.
This paper presents a comprehensive overview of existing methods, encompassing both the increasingly mature 2D lane detection approaches and the developing 3D lane detection works.
arXiv Detail & Related papers (2024-11-25T12:09:43Z)
- How to deal with glare for improved perception of Autonomous Vehicles [0.0]
Vision sensors are versatile and can capture a wide range of visual cues, such as color, texture, shape, and depth.
However, vision-based environment perception systems can be easily affected by glare in the presence of a bright light source.
arXiv Detail & Related papers (2024-04-17T02:05:05Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Online Camera-to-ground Calibration for Autonomous Driving [26.357898919134833]
We propose an online monocular camera-to-ground calibration solution that does not utilize any specific targets while driving.
We provide metrics to quantify calibration performance and stopping criteria for reporting/broadcasting satisfactory calibration results.
arXiv Detail & Related papers (2023-03-30T04:01:48Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
The non-objective nature of driving experience makes it difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Preprocessing Methods of Lane Detection and Tracking for Autonomous Driving [0.0]
Real-time lane detection and tracking (LDT) is one of the most consequential components of autonomous driving.
In this paper, we survey preprocessing methods for detecting lane markings and tracking lane boundaries in real time, focusing on vision-based systems.
arXiv Detail & Related papers (2021-04-10T13:03:52Z)
- Traffic Lane Detection using FCN [0.0]
Lane detection is a crucial technology that enables self-driving cars to properly position themselves in multi-lane urban driving environments.
In this project, we designed an Encoder-Decoder Fully Convolutional Network for lane detection (a minimal sketch of such an encoder-decoder appears after this list).
This model was applied to a real-world large scale dataset and achieved a level of accuracy that outperformed our baseline model.
arXiv Detail & Related papers (2020-04-19T22:25:12Z)
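The last entry above only names its architecture class. As a hedged illustration, a minimal encoder-decoder fully convolutional network for binary lane segmentation could be sketched in PyTorch as follows; the layer sizes are assumptions and are not taken from that paper.

```python
# Minimal encoder-decoder FCN for binary lane segmentation (PyTorch sketch).
# Layer widths and strides are illustrative assumptions.
import torch
import torch.nn as nn

class LaneFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample spatially while growing channel depth.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions upsample back to input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # per-pixel lane logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: one 256x512 RGB frame in, one lane-probability map of the same size out.
model = LaneFCN()
logits = model(torch.randn(1, 3, 256, 512))
probs = torch.sigmoid(logits)  # shape (1, 1, 256, 512)
```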