Traffic Lane Detection using FCN
- URL: http://arxiv.org/abs/2004.08977v1
- Date: Sun, 19 Apr 2020 22:25:12 GMT
- Title: Traffic Lane Detection using FCN
- Authors: Shengchang Zhang, Ahmed El Koubia, Khaled Abdul Karim Mohammed
- Abstract summary: Lane detection is a crucial technology that enables self-driving cars to properly position themselves in multi-lane urban driving environments.
In this project, we designed an Encoder-Decoder Fully Convolutional Network for lane detection.
This model was applied to a real-world large scale dataset and achieved a level of accuracy that outperformed our baseline model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic lane detection is a crucial technology that enables self-driving
cars to properly position themselves in multi-lane urban driving
environments. However, detecting diverse road markings in various weather
conditions is a challenging task for conventional image processing or computer
vision techniques. In recent years, the application of Deep Learning and Neural
Networks in this area has proven to be very effective. In this project, we
designed an Encoder-Decoder Fully Convolutional Network for lane detection.
This model was applied to a real-world large scale dataset and achieved a level
of accuracy that outperformed our baseline model.
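The abstract does not give the network's exact layer configuration, so the following is a minimal sketch of an encoder-decoder fully convolutional network for pixel-wise lane segmentation, written in PyTorch. The channel widths, depth, and single-channel lane-mask head are illustrative assumptions, not the authors' published design.

```python
# Minimal encoder-decoder fully convolutional network for lane segmentation.
# Layer sizes and the single-channel lane-mask head are illustrative choices;
# the paper's exact architecture is not specified in the abstract.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class LaneFCN(nn.Module):
    def __init__(self, in_channels=3, base=32):
        super().__init__()
        # Encoder: downsample with max pooling while widening channels.
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Decoder: transposed convolutions restore the input resolution.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 2, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base, base)
        # 1x1 convolution produces a per-pixel lane logit.
        self.head = nn.Conv2d(base, 1, kernel_size=1)

    def forward(self, x):
        x = self.pool(self.enc1(x))
        x = self.pool(self.enc2(x))
        x = self.enc3(x)
        x = self.dec2(self.up2(x))
        x = self.dec1(self.up1(x))
        return self.head(x)  # logits; apply sigmoid for a lane probability map


if __name__ == "__main__":
    model = LaneFCN()
    dummy = torch.randn(1, 3, 256, 512)  # batch of one RGB road image
    print(model(dummy).shape)            # torch.Size([1, 1, 256, 512])
```

In this encoder-decoder layout, pooling shrinks the spatial resolution while convolutions widen the feature channels, and transposed convolutions bring the feature map back to the input resolution so every pixel receives a lane/non-lane score.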
Related papers
- Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
arXiv Detail & Related papers (2024-08-03T02:57:37Z) - RainSD: Rain Style Diversification Module for Image Synthesis
Enhancement using Feature-Level Style Distribution [5.500457283114346]
This paper presents a synthetic road dataset with sensor blockage generated from the real road dataset BDD100K.
Using this dataset, the degradation of diverse multi-task networks for autonomous driving has been thoroughly evaluated and analyzed.
The tendency of performance degradation in deep-neural-network-based perception systems for autonomous vehicles has been analyzed in depth.
arXiv Detail & Related papers (2023-12-31T11:30:42Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z) - Driving Style Representation in Convolutional Recurrent Neural Network
Model of Driver Identification [8.007800530105191]
We present a deep-neural-network architecture, which we term D-CRNN, for building high-fidelity representations of driving style.
Using CNN, we capture semantic patterns of driver behavior from trajectories.
We then find temporal dependencies between these semantic patterns using RNN to encode driving style.
arXiv Detail & Related papers (2021-02-11T04:33:43Z) - Convolutional Recurrent Network for Road Boundary Extraction [99.55522995570063]
We tackle the problem of drivable road boundary extraction from LiDAR and camera imagery.
We design a structured model where a fully convolutional network obtains deep features encoding the location and direction of road boundaries.
We showcase the effectiveness of our method on a large North American city where we obtain perfect topology of road boundaries 99.3% of the time.
arXiv Detail & Related papers (2020-12-21T18:59:12Z) - Deep traffic light detection by overlaying synthetic context on
arbitrary natural images [49.592798832978296]
We propose a method to generate artificial traffic-related training data for deep traffic light detectors.
This data is generated using basic non-realistic computer graphics to blend fake traffic scenes on top of arbitrary image backgrounds.
It also tackles the intrinsic data imbalance problem in traffic light datasets, caused mainly by the low amount of samples of the yellow state.
arXiv Detail & Related papers (2020-11-07T19:57:22Z) - Lane Detection Model Based on Spatio-Temporal Network With Double
Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes; a minimal ConvGRU sketch appears after this list.
Our model can outperform the state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z) - Multi-lane Detection Using Instance Segmentation and Attentive Voting [0.0]
We propose a novel solution to multi-lane detection, which outperforms state of the art methods in terms of both accuracy and speed.
We are able to obtain a lane segmentation accuracy of 99.87% running at 54.53 fps (average).
arXiv Detail & Related papers (2020-01-01T16:48:42Z)
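For the spatio-temporal lane detection entry above, the core building block is a convolutional GRU that carries lane features across consecutive frames. The cell below is a minimal, self-contained PyTorch sketch of that idea; the gate layout, kernel size, and hidden width are assumptions for illustration, not the cited paper's exact design.

```python
# Minimal convolutional GRU cell of the kind used in spatio-temporal lane
# detection models; gate layout and hidden size are illustrative assumptions.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Update and reset gates computed from the input and previous hidden state.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch,
                               kernel_size, padding=padding)
        # Candidate hidden state.
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch,
                              kernel_size, padding=padding)
        self.hidden_ch = hidden_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hidden_ch, x.size(2), x.size(3))
        z, r = torch.chunk(torch.sigmoid(self.gates(torch.cat([x, h], dim=1))), 2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde  # blend old state with candidate


if __name__ == "__main__":
    cell = ConvGRUCell(in_ch=16, hidden_ch=16)
    frames = torch.randn(5, 1, 16, 64, 128)  # a short sequence of feature maps
    h = None
    for x in frames:  # fuse lane features across consecutive frames
        h = cell(x, h)
    print(h.shape)    # torch.Size([1, 16, 64, 128])
```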
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.