Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth
Estimation
- URL: http://arxiv.org/abs/2103.12964v1
- Date: Wed, 24 Mar 2021 03:24:46 GMT
- Title: Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth
Estimation
- Authors: Jaesung Choe, Kyungdon Joo, Tooba Imtiaz, In So Kweon
- Abstract summary: We propose a geometry-aware stereo-LiDAR fusion network for long-range depth estimation.
We exploit sparse and accurate point clouds as a cue for guiding correspondences of stereo images in a unified 3D volume space.
Our network achieves state-of-the-art performance on the KITTI and the Virtual-KITTI datasets.
- Score: 81.08111209632501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stereo-LiDAR fusion is a promising task because it exploits two
complementary types of 3D perception: dense 3D information from stereo
cameras and highly accurate sparse point clouds from LiDAR. However, because
the two sensors differ in modality and structure, aligning their data is the
key to successful fusion. To this end, we propose a geometry-aware
stereo-LiDAR fusion network for long-range depth estimation, called volumetric
propagation network. The key idea of our network is to exploit sparse and
accurate point clouds as a cue for guiding correspondences of stereo images in
a unified 3D volume space. Unlike existing fusion strategies, we directly embed
point clouds into the volume, which enables us to propagate valid information
into nearby voxels in the volume, and to reduce the uncertainty of
correspondences. This allows us to fuse the two input modalities seamlessly
and regress a long-range depth map. Our fusion is further enhanced
by a newly proposed feature extraction layer for point clouds guided by images:
FusionConv. FusionConv extracts point cloud features that consider both
semantic (2D image domain) and geometric (3D domain) relations and aid fusion
at the volume. Our network achieves state-of-the-art performance on the KITTI
and the Virtual-KITTI datasets among recent stereo-LiDAR fusion methods.
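As a rough illustration of the volume embedding and propagation steps, here is a minimal PyTorch sketch (not the authors' implementation; the function names, shapes, Gaussian soft-embedding, and fixed pooling kernel are all assumptions):

```python
# Minimal sketch: embed sparse, accurate LiDAR depths directly into a
# plane-sweep volume so valid measurements can guide stereo matching in
# nearby voxels. Shapes and the propagation kernel are illustrative.
import torch
import torch.nn.functional as F

def embed_lidar_in_volume(lidar_depth, depth_bins, sigma=1.0):
    """lidar_depth: (B, H, W) sparse depth map, 0 where no LiDAR return.
    depth_bins: (D,) depth hypothesis planes of the cost volume.
    Returns a (B, D, H, W) confidence volume peaked at the LiDAR depth."""
    valid = (lidar_depth > 0).float()                               # (B, H, W)
    # Distance of every depth hypothesis to the measured depth.
    diff = depth_bins.view(1, -1, 1, 1) - lidar_depth.unsqueeze(1)  # (B, D, H, W)
    # Soft one-hot along the depth axis; zero where there is no return.
    return torch.exp(-0.5 * (diff / sigma) ** 2) * valid.unsqueeze(1)

def propagate(conf, kernel_size=3):
    """Spread valid evidence into neighboring voxels with 3D average
    pooling; a learned 3D convolution would play this role in a network."""
    return F.avg_pool3d(conf.unsqueeze(1), kernel_size, stride=1,
                        padding=kernel_size // 2).squeeze(1)

# Toy usage: one 8x16 image, 32 depth hypotheses between 2 m and 80 m.
depth_bins = torch.linspace(2.0, 80.0, 32)
lidar = torch.zeros(1, 8, 16)
lidar[0, 4, 7] = 25.0                           # a single LiDAR return
volume = propagate(embed_lidar_in_volume(lidar, depth_bins))
print(volume.shape)                             # torch.Size([1, 32, 8, 16])
```

In the actual network, learned 3D convolutions would take the place of the fixed average pooling, so the propagation pattern is itself trained end to end.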
Related papers
- FFPA-Net: Efficient Feature Fusion with Projection Awareness for 3D Object Detection [19.419030878019974]
Unstructured 3D point clouds are filled into the 2D plane, and 3D point cloud features are extracted faster using projection-aware convolution layers.
The corresponding indexes between the different sensor signals are established in advance during data preprocessing.
Two new plug-and-play fusion modules, LiCamFuse and BiLiCamFuse, are proposed.
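The pre-established correspondence indexes mentioned above can be sketched as follows (a pinhole projection with assumed names; not FFPA-Net's code, and the LiCamFuse/BiLiCamFuse modules are not reproduced):

```python
import numpy as np

def precompute_indices(points, K, H, W):
    """Project 3D points (N, 3) in the camera frame with intrinsics K (3, 3);
    return each point's flat pixel index, or -1 if it falls off the image."""
    z = points[:, 2]
    uv = points @ K.T                                   # (N, 3) homogeneous pixels
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    return np.where(ok, v * W + u, -1)

def gather_pixel_features(feat, idx):
    """feat: (C, H, W) image features; idx: (N,) precomputed indices.
    Returns (N, C) per-point image features (zeros for off-image points)."""
    C = feat.shape[0]
    flat = feat.reshape(C, -1)                          # (C, H*W)
    out = np.zeros((idx.shape[0], C), dtype=feat.dtype)
    valid = idx >= 0
    out[valid] = flat[:, idx[valid]].T
    return out

K = np.array([[720., 0., 64.], [0., 720., 32.], [0., 0., 1.]])
pts = np.array([[0.5, 0.2, 10.0], [0.0, 0.0, -1.0]])   # 2nd point is behind the camera
idx = precompute_indices(pts, K, H=64, W=128)          # -> [valid index, -1]
feats = gather_pixel_features(np.random.rand(16, 64, 128), idx)
```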
arXiv Detail & Related papers (2022-09-15T16:13:19Z)
- DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection [83.18142309597984]
Lidars and cameras are critical sensors that provide complementary information for 3D detection in autonomous driving.
We develop a family of generic multi-modal 3D detection models named DeepFusion, which is more accurate than previous methods.
arXiv Detail & Related papers (2022-03-15T18:46:06Z)
- VPFNet: Improving 3D Object Detection with Virtual Point based LiDAR and Stereo Data Fusion [62.24001258298076]
VPFNet is a new architecture that cleverly aligns and aggregates the point cloud and image data at the 'virtual' points.
Our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking 1st since May 21st, 2021.
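A minimal sketch of aggregating both modalities at virtual points (shapes, names, and the k-NN aggregation are assumptions, not the paper's code):

```python
import torch

def aggregate_at_virtual_points(vpts, lidar_xyz, lidar_feat, img_feat, k=3):
    """vpts: (M, 3) virtual points; lidar_xyz: (N, 3) LiDAR points with
    features lidar_feat (N, C); img_feat: (M, C) image features already
    sampled at the virtual points' projections. Returns (M, 2C)."""
    dist = torch.cdist(vpts, lidar_xyz)            # (M, N) pairwise distances
    knn = dist.topk(k, largest=False).indices      # (M, k) nearest LiDAR points
    geo = lidar_feat[knn].mean(dim=1)              # (M, C) aggregated geometry
    return torch.cat([geo, img_feat], dim=-1)      # fuse both modalities

fused = aggregate_at_virtual_points(torch.rand(5, 3), torch.rand(100, 3),
                                    torch.rand(100, 32), torch.rand(5, 32))
```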
arXiv Detail & Related papers (2021-11-29T08:51:20Z)
- Frustum Fusion: Pseudo-LiDAR and LiDAR Fusion for 3D Detection [0.0]
We propose a novel data fusion algorithm to combine accurate point clouds with dense but less accurate point clouds obtained from stereo pairs.
We train multiple 3D object detection methods and show that our fusion strategy consistently improves the performance of detectors.
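The paper's frustum-based association is not reproduced here; the following only sketches the standard pseudo-LiDAR conversion the summary builds on, with a naive union of the two clouds as a placeholder:

```python
import numpy as np

def pseudo_lidar_from_disparity(disp, K, baseline):
    """Convert an (H, W) stereo disparity map (pixels) into an (H*W, 3)
    point cloud using pinhole intrinsics K and the stereo baseline (m)."""
    H, W = disp.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    z = fx * baseline / np.clip(disp, 1e-3, None)   # depth from disparity
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

K = np.array([[720., 0., 64.], [0., 720., 32.], [0., 0., 1.]])
pseudo = pseudo_lidar_from_disparity(np.full((64, 128), 8.0), K, baseline=0.54)
lidar = np.random.rand(1000, 3) * 50.0              # stand-in LiDAR points
fused = np.concatenate([lidar, pseudo])             # naive union of both clouds
```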
arXiv Detail & Related papers (2021-11-08T19:29:59Z)
- MBDF-Net: Multi-Branch Deep Fusion Network for 3D Object Detection [17.295359521427073]
We propose a Multi-Branch Deep Fusion Network (MBDF-Net) for 3D object detection.
In the first stage, our multi-branch feature extraction network utilizes Adaptive Attention Fusion modules to produce cross-modal fusion features from single-modal semantic features.
In the second stage, we use a region-of-interest (RoI)-pooled fusion module to generate enhanced local features for refinement.
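A hypothetical stand-in for the Adaptive Attention Fusion idea, gating two single-modal features into one cross-modal feature (the module name and gating design are assumptions):

```python
import torch
import torch.nn as nn

class AdaptiveAttentionFusion(nn.Module):
    """Predict per-channel gates from the concatenated single-modal
    features and blend the two modalities accordingly."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, feat_img, feat_pts):            # both (B, C)
        w = self.gate(torch.cat([feat_img, feat_pts], dim=-1))
        return w * feat_img + (1.0 - w) * feat_pts    # cross-modal fusion feature

fused = AdaptiveAttentionFusion(64)(torch.randn(2, 64), torch.randn(2, 64))
```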
arXiv Detail & Related papers (2021-08-29T15:40:15Z)
- Similarity-Aware Fusion Network for 3D Semantic Segmentation [87.51314162700315]
We propose a similarity-aware fusion network (SAFNet) to adaptively fuse 2D images and 3D point clouds for 3D semantic segmentation.
We employ a late fusion strategy: we first learn the geometric and contextual similarities between the input point clouds and those back-projected from 2D pixels.
We show that SAFNet significantly outperforms existing state-of-the-art fusion-based approaches across various levels of data integrity.
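An illustrative sketch of similarity-weighted late fusion (not SAFNet's actual formulation; the exponential similarity and additive blend are assumptions):

```python
import torch

def similarity_weighted_fusion(pred_3d, pred_2d, pts_in, pts_bp, tau=0.5):
    """pred_3d, pred_2d: (N, K) per-point scores from the 3D branch and the
    2D branch; pts_in, pts_bp: (N, 3) input points and their back-projected
    counterparts. Low geometric similarity down-weights the 2D branch."""
    sim = torch.exp(-((pts_in - pts_bp) ** 2).sum(dim=-1) / tau)  # (N,) in (0, 1]
    return pred_3d + sim.unsqueeze(-1) * pred_2d

out = similarity_weighted_fusion(torch.rand(10, 5), torch.rand(10, 5),
                                 torch.rand(10, 3), torch.rand(10, 3))
```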
arXiv Detail & Related papers (2021-07-04T09:28:18Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
- Stereo RGB and Deeper LIDAR Based Network for 3D Object Detection [40.34710686994996]
3D object detection has become an emerging task in autonomous driving scenarios.
Previous works process 3D point clouds using either projection-based or voxel-based models.
We propose the Stereo RGB and Deeper LIDAR framework which can utilize semantic and spatial information simultaneously.
arXiv Detail & Related papers (2020-06-09T11:19:24Z)
- RoutedFusion: Learning Real-time Depth Map Fusion [73.0378509030908]
We present a novel real-time capable machine learning-based method for depth map fusion.
We propose a neural network that predicts non-linear updates to better account for typical fusion errors.
Our network is composed of a 2D depth routing network and a 3D depth fusion network which efficiently handle sensor-specific noise and outliers.
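A toy stand-in for the learned, non-linear update idea (the MLP and its inputs are assumptions; RoutedFusion's routing network is not reproduced):

```python
import torch
import torch.nn as nn

class LearnedFusionUpdate(nn.Module):
    """Instead of a running TSDF average, a small MLP predicts a non-linear
    per-voxel update from the old TSDF value, its accumulated weight, and
    the new observation."""
    def __init__(self, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, tsdf_old, weight_old, tsdf_obs):      # each (V,)
        x = torch.stack([tsdf_old, weight_old, tsdf_obs], dim=-1)
        return tsdf_old + self.mlp(x).squeeze(-1)           # updated TSDF (V,)

updated = LearnedFusionUpdate()(torch.rand(100), torch.ones(100), torch.rand(100))
```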
arXiv Detail & Related papers (2020-01-13T16:46:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.