Pseudo-LiDAR Based Road Detection
- URL: http://arxiv.org/abs/2107.13279v1
- Date: Wed, 28 Jul 2021 11:21:42 GMT
- Title: Pseudo-LiDAR Based Road Detection
- Authors: Libo Sun, Haokui Zhang and Wei Yin
- Abstract summary: We propose a novel road detection approach with RGB being the only input during inference.
We exploit pseudo-LiDAR using depth estimation, and propose a feature fusion network where RGB and learned depth information are fused.
The proposed method achieves state-of-the-art performance on two challenging benchmarks, KITTI and R2D.
- Score: 5.9106199000537645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Road detection is a critically important task for self-driving cars. By
employing LiDAR data, recent works have significantly improved the accuracy of
road detection. Relying on LiDAR sensors limits the wide application of those
methods when only cameras are available. In this paper, we propose a novel road
detection approach with RGB being the only input during inference.
Specifically, we exploit pseudo-LiDAR using depth estimation, and propose a
feature fusion network where RGB and learned depth information are fused for
improved road detection. To further optimize the network structure and improve
its efficiency, we search for the structure of the feature fusion module using
NAS techniques. Finally, since generating pseudo-LiDAR from RGB via depth
estimation introduces extra computational costs and relies on depth estimation
networks, we design a modality distillation strategy that frees our network
from these extra costs and dependencies during inference. The proposed
method achieves state-of-the-art performance on two challenging benchmarks,
KITTI and R2D.
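The pseudo-LiDAR idea above can be illustrated with a minimal sketch: given a per-pixel depth map from a monocular depth network, each pixel is back-projected into a 3D point using the pinhole camera model. The function name and intrinsics below are illustrative, not taken from the paper.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H, W) into an (N, 3) point cloud
    using the pinhole camera model. Intrinsics fx, fy, cx, cy are in pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # lateral offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero/negative) depths

# Toy example: a flat 2x2 depth map with made-up intrinsics
pts = depth_to_pseudo_lidar(np.full((2, 2), 10.0), fx=700.0, fy=700.0, cx=0.5, cy=0.5)
```

The resulting point cloud can then be fed to LiDAR-style feature extractors, which is the core of the pseudo-LiDAR representation; the paper's fusion network and distillation strategy operate on top of such features.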
Related papers
- Enhanced Automotive Object Detection via RGB-D Fusion in a DiffusionDet Framework [0.0]
Vision-based autonomous driving requires reliable and efficient object detection.
This work proposes a DiffusionDet-based framework that fuses data from a monocular camera and a depth sensor to provide RGB and depth (RGB-D) input.
By integrating the textural and color features from RGB images with the spatial depth information from the LiDAR sensors, the proposed framework employs a feature fusion that substantially enhances object detection of automotive targets.
arXiv Detail & Related papers (2024-06-05T10:24:00Z) - TEDNet: Twin Encoder Decoder Neural Network for 2D Camera and LiDAR Road Detection [2.8038082486377114]
A novel Convolutional Neural Network model is proposed for the accurate estimation of the roadway surface.
Our model is based on the use of a Twin Encoder-Decoder Neural Network (TEDNet) for independent camera and LiDAR feature extraction.
Bird's Eye View projections of the camera and LiDAR data are used in this model to perform semantic segmentation on whether each pixel belongs to the road surface.
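The Bird's Eye View projection mentioned above can be sketched as a simple rasterization of a point cloud into a top-down occupancy grid; the ranges and cell size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Rasterize an (N, 3) point cloud into a binary Bird's Eye View
    occupancy grid; x is forward, y is lateral, height (z) is discarded."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)  # clip to grid bounds
    grid[ix[keep], iy[keep]] = 1
    return grid

# One point inside the range, one outside; only the first lands in the grid
bev = points_to_bev(np.array([[10.0, 0.0, -1.5], [100.0, 0.0, -1.5]]))
```

Grids like this (often with extra per-cell channels such as height or intensity) are a common input format for BEV semantic segmentation of the road surface.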
arXiv Detail & Related papers (2024-05-14T08:45:34Z) - Row-wise LiDAR Lane Detection Network with Lane Correlation Refinement [1.6832237384792461]
We propose a novel two-stage LiDAR lane detection network with a row-wise detection approach.
The first-stage network produces lane proposals through a global feature correlator backbone and a row-wise detection head.
The proposed network advances the state-of-the-art in terms of F1-score with 30% less GFLOPs.
arXiv Detail & Related papers (2022-10-17T04:47:08Z) - NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for
Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z) - Fast Road Segmentation via Uncertainty-aware Symmetric Network [15.05244258071472]
Prior methods cannot achieve both high inference speed and high accuracy.
The different properties of RGB and depth data are not well exploited, limiting the reliability of the predicted road.
We propose an uncertainty-aware symmetric network (USNet) to achieve a trade-off between speed and accuracy by fully fusing RGB and depth data.
arXiv Detail & Related papers (2022-03-09T06:11:29Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - Sparse Auxiliary Networks for Unified Monocular Depth Prediction and
Completion [56.85837052421469]
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
In this paper, we study the problem of predicting dense depth from a single RGB image with optional sparse measurements from low-cost active depth sensors.
We introduce Sparse Networks (SANs), a new module enabling monodepth networks to perform both the tasks of depth prediction and completion.
arXiv Detail & Related papers (2021-03-30T21:22:26Z) - MobileSal: Extremely Efficient RGB-D Salient Object Detection [62.04876251927581]
This paper introduces a novel network, MobileSal, which focuses on efficient RGB-D salient object detection (SOD).
We propose an implicit depth restoration (IDR) technique to strengthen the feature representation capability of mobile networks for RGB-D SOD.
With IDR and CPR incorporated, MobileSal performs favorably against state-of-the-art methods on seven challenging RGB-D SOD datasets.
arXiv Detail & Related papers (2020-12-24T04:36:42Z) - Depth Completion via Inductive Fusion of Planar LIDAR and Monocular
Camera [27.978780155504467]
We introduce an inductive late-fusion block which better fuses different sensor modalities inspired by a probability model.
This block uses the dense context features to guide the depth prediction based on demonstrations by sparse depth features.
Our method shows promising results compared to previous approaches on both the benchmark datasets and simulated dataset.
arXiv Detail & Related papers (2020-09-03T18:39:57Z) - Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z) - Drone-based RGB-Infrared Cross-Modality Vehicle Detection via
Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.