ENet-21: An Optimized light CNN Structure for Lane Detection
- URL: http://arxiv.org/abs/2403.19782v1
- Date: Thu, 28 Mar 2024 19:07:26 GMT
- Title: ENet-21: An Optimized light CNN Structure for Lane Detection
- Authors: Seyed Rasoul Hosseini, Mohammad Teshnehlab
- Abstract summary: This study develops an optimal structure for the lane detection problem.
It offers a promising solution for driver assistance features in modern vehicles.
Our method uses a less complex CNN architecture than existing methods.
- Score: 0.8977807139044119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane detection for autonomous vehicles is an important concept, yet it remains a challenging problem for driver assistance systems in modern vehicles. The emergence of deep learning has led to significant progress in self-driving cars. Conventional deep learning-based methods treat lane detection as a binary segmentation task and determine whether a pixel belongs to a line. These methods rely on the assumption of a fixed number of lanes, which does not always hold. This study aims to develop an optimal structure for the lane detection problem, offering a promising solution for driver assistance features in modern vehicles: a machine learning method consisting of binary segmentation and Affinity Fields that can manage varying numbers of lanes and lane-change scenarios. In this approach, a Convolutional Neural Network (CNN) is selected as the feature extractor, and the final output is obtained by clustering the semantic segmentation and Affinity Field outputs. Our method uses a less complex CNN architecture than existing methods.
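The decoding step the abstract describes, grouping binary-segmentation pixels into lane instances with affinity fields guiding the association, can be sketched roughly as follows. This is a minimal NumPy sketch, not the paper's implementation: the row-wise clustering and the proximity-based cross-row linking are simplified stand-ins for the learned horizontal/vertical affinity fields, and all function names and thresholds are illustrative assumptions.

```python
import numpy as np

def cluster_row(cols, max_gap=2):
    """Split sorted foreground column indices of one image row into
    contiguous groups (candidate lane cross-sections); return one
    center column per group."""
    groups, current = [], [cols[0]]
    for c in cols[1:]:
        if c - current[-1] <= max_gap:
            current.append(c)
        else:
            groups.append(current)
            current = [c]
    groups.append(current)
    return [int(np.mean(g)) for g in groups]

def decode_lanes(seg, max_dx=5):
    """Greedy bottom-up decoding of a binary segmentation mask into
    lane instances. Cross-row association by horizontal proximity
    stands in for the learned vertical affinity field."""
    h, _ = seg.shape
    lanes = []  # each lane is a list of (row, col) points
    for r in range(h - 1, -1, -1):            # scan rows bottom-up
        cols = np.flatnonzero(seg[r])
        if cols.size == 0:
            continue
        for c in cluster_row(cols):
            # attach to the nearest lane extended in the previous row,
            # otherwise start a new lane instance
            best = None
            for lane in lanes:
                last_r, last_c = lane[-1]
                if last_r == r + 1 and abs(last_c - c) <= max_dx:
                    if best is None or abs(last_c - c) < abs(best[-1][1] - c):
                        best = lane
            if best is not None:
                best.append((r, c))
            else:
                lanes.append([(r, c)])
    return lanes
```

Because lanes are grown point by point rather than assigned to a fixed set of output channels, the number of recovered instances adapts to the scene, which is the property the abstract highlights.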
Related papers
- LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving [60.55208681215818]
We introduce LaneSegNet, the first end-to-end mapping network generating lane segments to obtain a complete representation of the road structure.
Our algorithm features two key modifications. One is a lane attention module to capture pivotal region details within the long-range feature space.
On the OpenLane-V2 dataset, LaneSegNet outperforms previous counterparts by a substantial gain across three tasks.
arXiv Detail & Related papers (2023-12-26T16:22:10Z)
- Graph-based Topology Reasoning for Driving Scenes [102.35885039110057]
We present TopoNet, the first end-to-end framework capable of abstracting traffic knowledge beyond conventional perception tasks.
We evaluate TopoNet on the challenging scene understanding benchmark, OpenLane-V2.
arXiv Detail & Related papers (2023-04-11T15:23:29Z)
- Multi Lane Detection [12.684545950979187]
Lane detection is a basic module in autonomous driving.
Our work is based on CNN backbone DLA-34, along with Affinity Fields.
We investigate novel decoding methods to achieve more efficient lane detection algorithm.
arXiv Detail & Related papers (2022-12-22T08:20:08Z)
- RCLane: Relay Chain Prediction for Lane Detection [76.62424079494285]
We present a new method for lane detection based on relay chain prediction.
Our strategy allows us to establish new state-of-the-art on four major benchmarks including TuSimple, CULane, CurveLanes and LLAMAS.
arXiv Detail & Related papers (2022-07-19T16:48:39Z)
- Laneformer: Object-aware Row-Column Transformers for Lane Detection [96.62919884511287]
Laneformer is a transformer-based architecture tailored for lane detection in autonomous driving.
Inspired by recent advances of the transformer encoder-decoder architecture in various vision tasks, we move forwards to design a new end-to-end Laneformer architecture.
arXiv Detail & Related papers (2022-03-18T10:14:35Z)
- Lane Detection with Versatile AtrousFormer and Local Semantic Guidance [92.83267435275802]
Lane detection is one of the core functions in autonomous driving.
Most existing methods tend to resort to CNN-based techniques.
We propose Atrous Transformer (AtrousFormer) to solve the problem.
arXiv Detail & Related papers (2022-03-08T13:25:35Z)
- Driving Style Representation in Convolutional Recurrent Neural Network Model of Driver Identification [8.007800530105191]
We present a deep-neural-network architecture, we term D-CRNN, for building high-fidelity representations for driving style.
Using CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using RNN to encode driving style.
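The CNN-then-RNN pipeline this summary describes can be sketched in a few lines: a 1-D convolution extracts local semantic patterns from a kinematic trajectory, and a recurrent pass over those patterns encodes their temporal dependencies into a fixed-size style vector. This is an illustrative NumPy sketch under assumed shapes; the plain tanh RNN, the random weights, and the feature dimensions are stand-ins for the D-CRNN details, which the summary does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution of a (T, d_in) sequence with a
    (k, d_in, d_out) kernel, followed by ReLU -> (T-k+1, d_out)."""
    k, d_in, d_out = w.shape
    T = x.shape[0]
    out = np.empty((T - k + 1, d_out))
    for t in range(T - k + 1):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def rnn_last_state(x, w_xh, w_hh):
    """Plain tanh RNN over a (T, d) feature sequence; the final hidden
    state serves as the fixed-size driving-style representation."""
    h = np.zeros(w_hh.shape[0])
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_xh + h @ w_hh)
    return h

# Toy trajectory: 20 timesteps of 4 kinematic features (e.g. speed, accel).
traj = rng.normal(size=(20, 4))
feats = conv1d(traj, rng.normal(size=(3, 4, 8)) * 0.1)    # CNN: local patterns
style = rnn_last_state(feats,                              # RNN: temporal deps
                       rng.normal(size=(8, 16)) * 0.1,
                       rng.normal(size=(16, 16)) * 0.1)
```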
arXiv Detail & Related papers (2021-02-11T04:33:43Z)
- Lane detection in complex scenes based on end-to-end neural network [10.955885950313103]
Lane detection is a key problem to solve the division of derivable areas in unmanned driving.
We propose an end-to-end network for lane detection in a variety of complex scenes.
Our network was tested on the CULane database; its F1-measure at an IoU threshold of 0.5 reaches 71.9%.
arXiv Detail & Related papers (2020-10-26T08:46:35Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
- CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point Blending [102.98909328368481]
CurveLane-NAS is a novel lane-sensitive architecture search framework.
It captures both long-ranged coherent and accurate short-range curve information.
It unifies both architecture search and post-processing on curve lane predictions via point blending.
arXiv Detail & Related papers (2020-07-23T17:23:26Z)
- Multi-lane Detection Using Instance Segmentation and Attentive Voting [0.0]
We propose a novel solution to multi-lane detection, which outperforms state of the art methods in terms of both accuracy and speed.
We obtain a lane segmentation accuracy of 99.87% running at 54.53 fps on average.
arXiv Detail & Related papers (2020-01-01T16:48:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.