Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision
- URL: http://arxiv.org/abs/2006.11481v1
- Date: Sat, 20 Jun 2020 03:11:04 GMT
- Title: Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision
- Authors: Haojie Liu, Kang Liao, Chunyu Lin, Yao Zhao and Yulan Guo
- Abstract summary: We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
- Score: 68.35777836993212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pseudo-LiDAR point cloud interpolation is a novel and challenging task in the
field of autonomous driving, which aims to address the frequency mismatching
problem between camera and LiDAR. Previous works represent the 3D spatial
motion relationship induced by a coarse 2D optical flow, and the quality of
interpolated point clouds only depends on the supervision of depth maps. As a
result, the generated point clouds suffer from inferior global distributions
and local appearances. To solve the above problems, we propose a Pseudo-LiDAR point cloud interpolation network to generate temporally and spatially
high-quality point cloud sequences. By exploiting the scene flow between point
clouds, the proposed network is able to learn a more accurate representation of
the 3D spatial motion relationship. For a more comprehensive perception of the point cloud distribution, we design a novel reconstruction loss function that employs the Chamfer distance to supervise the generation of
Pseudo-LiDAR point clouds in 3D space. In addition, we introduce a multi-modal
deep aggregation module to facilitate the efficient fusion of texture and depth
features. Benefiting from the improved motion representation, training loss function, and model structure, our approach achieves significant improvements on
the Pseudo-LiDAR point cloud interpolation task. The experimental results
evaluated on the KITTI dataset demonstrate the state-of-the-art performance of the
proposed network, quantitatively and qualitatively.
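Two of the ideas above lend themselves to a compact illustration: a Pseudo-LiDAR cloud is obtained by back-projecting a depth map into 3D with the camera intrinsics, and the proposed reconstruction loss supervises the generated points directly in 3D space with the Chamfer distance. The paper's own implementation is not reproduced here; the NumPy sketch below is a minimal illustration under an assumed pinhole-camera model, and all function names, intrinsics (fx, fy, cx, cy), and array sizes are placeholders for this example only.
```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into an (N, 3) Pseudo-LiDAR point cloud
    using a pinhole-camera model (assumed intrinsics)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx                        # camera-frame X
    y = (v - cy) * depth / fy                        # camera-frame Y
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # keep pixels with valid depth

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean squared nearest-neighbour distance, accumulated in both directions."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Toy usage: supervise an interpolated cloud against a ground-truth cloud in 3D.
gt_depth = np.random.uniform(5.0, 50.0, size=(32, 48))
gt_cloud = depth_to_pseudo_lidar(gt_depth, fx=450.0, fy=450.0, cx=24.0, cy=16.0)
pred_cloud = gt_cloud + 0.05 * np.random.randn(*gt_cloud.shape)
print("chamfer loss:", chamfer_distance(pred_cloud, gt_cloud))
```
Supervising in 3D like this, rather than on the depth map alone, is what the abstract contrasts with earlier depth-only losses.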
Related papers
- FASTC: A Fast Attentional Framework for Semantic Traversability Classification Using Point Cloud [7.711666704468952]
We address the problem of traversability assessment using point clouds.
We propose a pillar feature extraction module that utilizes PointNet to capture features from point clouds organized in vertical volumes.
We then propose a new temporal attention module to fuse multi-frame information, which can properly handle the varying density problem of LiDAR point clouds.
arXiv Detail & Related papers (2024-06-24T12:01:55Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity and even 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding Alignment [58.8330387551499]
We formulate the problem as the estimation of point-wise trajectories (i.e., smooth curves); a minimal sketch of this trajectory-based warping intuition follows after this list.
We propose IDEA-Net, an end-to-end deep learning framework, which disentangles the problem under the assistance of the explicitly learned temporal consistency.
We demonstrate the effectiveness of our method on various point cloud sequences and observe a large improvement over state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-03-22T10:14:08Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have been separately conducted to densify, denoise, and complete inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for the initial densification and denoising, and 2) a refinement via transformers converting the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- Spherical Interpolated Convolutional Network with Distance-Feature Density for 3D Semantic Segmentation of Point Clouds [24.85151376535356]
A spherical interpolated convolution operator is proposed to replace the traditional grid-shaped 3D convolution operator.
The proposed method achieves good performance on the ScanNet dataset and Paris-Lille-3D dataset.
arXiv Detail & Related papers (2020-11-27T15:35:12Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
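As a rough intuition for the interpolation task discussed above (the main paper's scene-flow formulation and IDEA-Net's point-wise trajectories), an intermediate frame can be synthesized by moving each point along its estimated 3D motion. The sketch below assumes a linear trajectory and a per-point scene flow that is simply given; it is not any of the listed authors' actual methods, and all names are illustrative.
```python
import numpy as np

def interpolate_cloud(points_t0, scene_flow, t):
    """Warp a point cloud from frame t=0 toward frame t=1 along per-point motion.

    points_t0  : (N, 3) points at the earlier frame
    scene_flow : (N, 3) estimated 3D displacement of each point from t=0 to t=1
    t          : interpolation instant in [0, 1]
    Returns the (N, 3) intermediate cloud under a linear-trajectory assumption.
    """
    return points_t0 + t * scene_flow

# Toy usage: synthesize a frame halfway between two LiDAR/camera timestamps.
cloud_t0 = np.random.rand(2048, 3) * 50.0
flow = np.tile(np.array([0.4, 0.0, 1.2]), (2048, 1))  # e.g. roughly forward motion
cloud_mid = interpolate_cloud(cloud_t0, flow, t=0.5)
print(cloud_mid.shape)  # (2048, 3)
```
Real systems estimate the motion with a learned network and fuse texture and depth features, as the papers above describe; the linear warp is only the geometric core of the idea.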