Multi-lane Detection Using Instance Segmentation and Attentive Voting
- URL: http://arxiv.org/abs/2001.00236v1
- Date: Wed, 1 Jan 2020 16:48:42 GMT
- Title: Multi-lane Detection Using Instance Segmentation and Attentive Voting
- Authors: Donghoon Chang (1), Vinjohn Chirakkal (2), Shubham Goswami (3),
Munawar Hasan (1), Taekwon Jung (2), Jinkeon Kang (1,3), Seok-Cheol Kee (4),
Dongkyu Lee (5), Ajit Pratap Singh (1) ((1) Department of Computer Science,
IIIT-Delhi, India, (2) Springcloud Inc., Korea, (3) Center for Information
Security Technologies (CIST), Korea University, Korea, (4) Smart Car Research
Center, Chungbuk National University, Korea, (5) Department of Smart Car
Engineering, Chungbuk National University, Korea)
- Abstract summary: We propose a novel solution to multi-lane detection that outperforms state-of-the-art methods in both accuracy and speed.
We obtain a lane segmentation accuracy of 99.87% running at 54.53 fps (average).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Autonomous driving is becoming one of the leading industrial research
areas, and many automobile companies are developing semi- to fully-autonomous
driving solutions. Among these solutions, lane detection is one of the vital
driver-assist features that plays a crucial role in the decision-making process
of an autonomous vehicle. A variety of solutions have been proposed to detect
lanes on the road, ranging from hand-crafted features to state-of-the-art
end-to-end trainable deep learning architectures. Most of these architectures
are trained in a traffic-constrained environment. In this paper, we propose a
novel solution to multi-lane detection that outperforms state-of-the-art
methods in both accuracy and speed. To achieve this, we also offer a dataset
with a more intuitive labeling scheme than other benchmark datasets. Using our
approach, we obtain a lane segmentation accuracy of 99.87% running at
54.53 fps (average).
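The paper itself provides no code here; as a rough illustration of the instance-segmentation half of such a pipeline, the sketch below greedily clusters per-pixel embeddings into separate lane instances. The embedding dimensionality, distance threshold, and greedy clustering scheme are illustrative assumptions, not details from the paper (which additionally uses an attentive voting step).

```python
import numpy as np

def cluster_lane_instances(embeddings, mask, threshold=0.5):
    """Greedily cluster lane pixels into instances by embedding distance.

    embeddings: (H, W, D) per-pixel embedding map (assumed output of a CNN)
    mask:       (H, W) boolean lane/background segmentation mask
    Returns an (H, W) int array: 0 = background, 1..K = lane instance ids.
    """
    labels = np.zeros(mask.shape, dtype=int)
    centers = []   # running mean embedding per discovered instance
    counts = []
    ys, xs = np.nonzero(mask)          # lane pixels in row-major order
    for y, x in zip(ys, xs):
        e = embeddings[y, x]
        # assign to the nearest existing instance if it is close enough
        best, best_d = None, threshold
        for k, c in enumerate(centers):
            d = np.linalg.norm(e - c)
            if d < best_d:
                best, best_d = k, d
        if best is None:               # too far from all centers: new lane
            centers.append(e.astype(float).copy())
            counts.append(1)
            best = len(centers) - 1
        else:                          # update the running mean center
            counts[best] += 1
            centers[best] += (e - centers[best]) / counts[best]
        labels[y, x] = best + 1
    return labels
```

In practice each recovered instance would then be summarized (e.g. by fitting a curve through its pixels) to produce the final lane boundaries.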
Related papers
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- ENet-21: An Optimized light CNN Structure for Lane Detection [1.4542411354617986]
This study develops an optimal structure for the lane detection problem.
It offers a promising solution for driver assistance features in modern vehicles.
Experiments on the TuSimple dataset support the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-03-28T19:07:26Z)
- Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an approach to apply end-to-end open-set (any environment/scene) autonomous driving that is capable of providing driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z)
- End-to-end Autonomous Driving: Challenges and Frontiers [45.391430626264764]
We provide a comprehensive analysis of more than 270 papers, covering the motivation, roadmap, methodology, challenges, and future trends in end-to-end autonomous driving.
We delve into several critical challenges, including multi-modality, interpretability, causal confusion, robustness, and world models, amongst others.
We discuss current advancements in foundation models and visual pre-training, as well as how to incorporate these techniques within the end-to-end driving framework.
arXiv Detail & Related papers (2023-06-29T14:17:24Z)
- Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving [1.2749527861829049]
In this paper, we provide a penalty-based imitation learning approach to integrate multiple modalities of information.
We observe a remarkable increase in the driving score by more than 12% when compared to the state-of-the-art (SOTA) model, InterFuser.
Our model achieves this performance enhancement while achieving a 7-fold increase in inference speed and reducing the model size by approximately 30%.
arXiv Detail & Related papers (2023-03-21T14:29:52Z)
- Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z)
- Multi Lane Detection [12.684545950979187]
Lane detection is a basic module in autonomous driving.
Our work is based on CNN backbone DLA-34, along with Affinity Fields.
We investigate novel decoding methods to achieve more efficient lane detection algorithm.
arXiv Detail & Related papers (2022-12-22T08:20:08Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
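As a rough illustration of a continuous-action policy of this kind (not the authors' network, whose architecture is not given here), a minimal two-layer head that maps an observation vector to bounded acceleration and steering commands might look like the sketch below; the layer sizes and tanh bounding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_policy(obs_dim, hidden=32):
    """Tiny two-layer policy head; in DRL these weights would be learned."""
    return {
        "W1": rng.normal(0.0, 0.1, (obs_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, 2)),  # 2 outputs: accel, steer
        "b2": np.zeros(2),
    }

def act(policy, obs):
    """Map one observation to (acceleration, steering), each in [-1, 1]."""
    h = np.tanh(obs @ policy["W1"] + policy["b1"])
    out = np.tanh(h @ policy["W2"] + policy["b2"])  # tanh bounds the actions
    accel, steer = out
    return accel, steer
```

A training loop would repeatedly call `act` on simulator observations and update the weights from the resulting reward signal.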
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- A Survey of End-to-End Driving: Architectures and Training Methods [0.9449650062296824]
We take a deeper look on the so called end-to-end approaches for autonomous driving, where the entire driving pipeline is replaced with a single neural network.
We review the learning methods, input and output modalities, network architectures and evaluation schemes in end-to-end driving literature.
We conclude the review with an architecture that combines the most promising elements of the end-to-end autonomous driving systems.
arXiv Detail & Related papers (2020-03-13T17:42:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.