Row-wise LiDAR Lane Detection Network with Lane Correlation Refinement
- URL: http://arxiv.org/abs/2210.08745v1
- Date: Mon, 17 Oct 2022 04:47:08 GMT
- Title: Row-wise LiDAR Lane Detection Network with Lane Correlation Refinement
- Authors: Dong-Hee Paek, Kevin Tirta Wijaya, Seung-Hyun Kong
- Abstract summary: We propose a novel two-stage LiDAR lane detection network with a row-wise detection approach.
The first-stage network produces lane proposals through a global feature correlator backbone and a row-wise detection head.
The proposed network advances the state of the art in F1-score with 30% fewer GFLOPs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane detection is one of the most important functions for autonomous
driving. In recent years, deep learning-based lane detection networks with RGB
camera images have shown promising performance. However, camera-based methods
are inherently vulnerable to adverse lighting conditions such as poor or
dazzling lighting. Unlike cameras, LiDAR sensors are robust to lighting
conditions. In this work, we propose a novel two-stage LiDAR lane detection
network with a row-wise detection approach. The first-stage network produces
lane proposals through a global feature correlator backbone and a row-wise
detection head. Meanwhile, the second-stage network refines the feature map of
the first-stage network via an attention-based mechanism between the local
features around the lane proposals, and outputs a set of new lane proposals.
Experimental results on the K-Lane dataset show that the proposed network
advances the state of the art in F1-score with 30% fewer GFLOPs. In addition,
the second-stage network is found to be especially robust to lane occlusions,
demonstrating the robustness of the proposed network for driving in crowded
environments.
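The row-wise formulation described above treats each row of the bird's-eye-view (BEV) feature map as a classification over columns, gated by a per-row lane-existence confidence. Below is a minimal decoding sketch of that idea; it is not the authors' code, and all names, shapes, and thresholds are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def row_wise_decode(col_logits, exist_logits, conf_thresh=0.5):
    """Decode one lane from a row-wise detection head (hypothetical layout).

    col_logits:   [num_rows][num_cols] column-classification logits
    exist_logits: [num_rows] lane-existence logits
    Returns (row, col) points for rows where the lane is predicted to exist.
    """
    points = []
    for r, (row, e) in enumerate(zip(col_logits, exist_logits)):
        if sigmoid(e) > conf_thresh:              # row-wise existence gate
            probs = softmax(row)                  # distribution over columns
            points.append((r, max(range(len(probs)), key=probs.__getitem__)))
    return points

# Toy example: 4 BEV rows, 8 columns, one lane drifting right by one column per row.
lane_logits = [
    [0, 0, 9, 0, 0, 0, 0, 0],
    [0, 0, 0, 9, 0, 0, 0, 0],
    [0, 0, 0, 0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0, 9, 0, 0],
]
exist = [5.0, 5.0, 5.0, 5.0]                      # high existence confidence per row
print(row_wise_decode(lane_logits, exist))        # → [(0, 2), (1, 3), (2, 4), (3, 5)]
```

One advantage of this formulation is that occluded rows can simply fall below the existence threshold instead of forcing a spurious column prediction, which is consistent with the occlusion robustness the abstract reports.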
Related papers
- Monocular Lane Detection Based on Deep Learning: A Survey [51.19079381823076]
Lane detection plays an important role in autonomous driving perception systems.
As deep learning algorithms gain popularity, monocular lane detection methods based on deep learning have demonstrated superior performance.
This paper presents a comprehensive overview of existing methods, encompassing both the increasingly mature 2D lane detection approaches and the developing 3D lane detection works.
arXiv Detail & Related papers (2024-11-25T12:09:43Z)
- FENet: Focusing Enhanced Network for Lane Detection [0.0]
This research introduces networks augmented with Focusing Sampling, a Partial Field of View Evaluation, an enhanced FPN architecture, and a Directional IoU Loss.
Experiments show that the Focusing Sampling strategy emphasizes vital distant details, unlike uniform sampling approaches.
Future directions include collecting on-road data and integrating complementary dual frameworks to drive further breakthroughs guided by human perception principles.
arXiv Detail & Related papers (2023-12-28T17:52:09Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture [0.17499351967216337]
Two convolutional neural networks (CNNs) based on high-level feature descriptors are proposed to detect blind-spot collisions for heavy vehicles.
A fusion approach integrates the two pre-trained networks to extract high-level features for blind-spot vehicle detection.
The feature fusion significantly improves the performance of Faster R-CNN and outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2022-08-17T11:10:37Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the problem of fusing infrared and visible images, which appear markedly different, for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse them in a common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation of the joint fusion-and-detection problem, which is then unrolled into a target-aware Dual Adversarial Learning (TarDAL) network for fusion coupled with a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- LIF-Seg: LiDAR and Camera Image Fusion for 3D LiDAR Semantic Segmentation [78.74202673902303]
We propose a coarse-to-fine LiDAR and camera fusion-based network (termed LIF-Seg) for LiDAR segmentation.
The proposed method fully utilizes the contextual information of images and introduces a simple but effective early-fusion strategy.
The cooperation of these two components enables effective camera-LiDAR fusion.
arXiv Detail & Related papers (2021-08-17T08:53:11Z)
- Pseudo-LiDAR Based Road Detection [5.9106199000537645]
We propose a novel road detection approach with RGB being the only input during inference.
We exploit pseudo-LiDAR using depth estimation, and propose a feature fusion network where RGB and learned depth information are fused.
The proposed method achieves state-of-the-art performance on two challenging benchmarks, KITTI and R2D.
arXiv Detail & Related papers (2021-07-28T11:21:42Z)
- LDNet: End-to-End Lane Marking Detection Approach Using a Dynamic Vision Sensor [0.0]
This paper explores the novel application of lane marking detection using an event camera.
The spatial resolution of the encoded features is retained by a dense atrous spatial pyramid pooling block.
The efficacy of the proposed work is evaluated using the DVS dataset for lane extraction.
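The dense atrous spatial pyramid pooling block mentioned above runs parallel dilated (atrous) convolutions at several rates and concatenates the results, enlarging the receptive field without downsampling. A 1-D, pure-Python sketch of that building block, under illustrative assumptions (not the LDNet implementation):

```python
def dilated_conv1d(signal, kernel, rate):
    """'Same'-padded 1-D convolution whose kernel taps are spaced `rate` apart."""
    k = len(kernel)
    span = (k - 1) * rate                     # receptive field minus one
    pad = span // 2
    padded = [0.0] * pad + list(signal) + [0.0] * (span - pad)
    return [sum(kernel[j] * padded[i + j * rate] for j in range(k))
            for i in range(len(signal))]

def aspp_1d(signal, kernel, rates=(1, 2, 4)):
    """Parallel dilated branches over the same input, one output per rate;
    an ASPP block would concatenate these channel-wise."""
    return [dilated_conv1d(signal, kernel, r) for r in rates]

impulse = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]     # unit-impulse input
branches = aspp_1d(impulse, [1.0, 1.0, 1.0])       # box kernel per branch
print(branches[0])   # rate 1: [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0] (contiguous taps)
print(branches[1])   # rate 2: [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0] (taps 2 apart)
```

The impulse responses show why dilation retains spatial resolution: each branch's output has the same length as its input, while larger rates spread the same three taps over a wider window.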
arXiv Detail & Related papers (2020-09-17T02:15:41Z)
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model outperforms state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.