Temporal Point Cloud Completion with Pose Disturbance
- URL: http://arxiv.org/abs/2202.03084v1
- Date: Mon, 7 Feb 2022 11:41:12 GMT
- Title: Temporal Point Cloud Completion with Pose Disturbance
- Authors: Jieqi Shi, Lingyun Xu, Peiliang Li, Xiaozhi Chen and Shaojie Shen
- Abstract summary: We provide complete point clouds from sparse input with pose disturbance by limited translation and rotation.
We also use temporal information to enhance the completion model, refining the output with a sequence of inputs.
Our framework is the first to utilize temporal information and ensure temporal consistency with limited transformation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds collected by real-world sensors are always unaligned and sparse,
which makes it hard to reconstruct the complete shape of an object from a single
frame of data. In this work, we provide complete point clouds from sparse input
with pose disturbance, i.e., limited translation and rotation. We also use
temporal information to enhance the completion model, refining the output with a
sequence of inputs. With the help of gated recurrent units (GRUs) and attention
mechanisms as temporal units, we propose a point cloud completion framework that
accepts a sequence of unaligned and sparse inputs and outputs consistent and
aligned point clouds. Our network operates in an online manner and produces a
refined point cloud for each frame, which allows it to be integrated into any
SLAM or reconstruction pipeline. To our knowledge, our framework is the first to
utilize temporal information and ensure temporal consistency under limited
transformations. Through experiments on ShapeNet and KITTI, we show that our
framework is effective on both synthetic and real-world datasets.
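To give a concrete picture of what "GRUs and attention mechanisms as temporal units" can look like in an online completion loop, the following PyTorch sketch is a minimal, hypothetical implementation: a PointNet-style per-frame encoder, a GRU cell that carries state across frames, dot-product attention over the history of hidden states, and an MLP decoder that emits a fixed-size completed cloud for every incoming frame. None of the module names, feature sizes, or the fusion scheme come from the paper; they are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code) of an online temporal completion loop.
# All modules, sizes, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Per-frame global feature via a shared MLP + max pooling (PointNet-style)."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) sparse, possibly unaligned partial cloud
        return self.mlp(points).max(dim=1).values  # (B, feat_dim)


class TemporalCompletionNet(nn.Module):
    """Online completion: refine the output frame by frame with a GRU + attention."""
    def __init__(self, feat_dim: int = 256, out_points: int = 2048):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim)
        self.gru = nn.GRUCell(feat_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, out_points * 3),
        )
        self.out_points = out_points

    def forward(self, frames: list[torch.Tensor]) -> list[torch.Tensor]:
        # frames: list of (B, N_t, 3) partial clouds, processed in order (online)
        B = frames[0].shape[0]
        h = frames[0].new_zeros(B, self.gru.hidden_size)
        history, outputs = [], []
        for pts in frames:
            feat = self.encoder(pts)           # per-frame global feature
            h = self.gru(feat, h)              # temporal state update
            history.append(h)
            mem = torch.stack(history, dim=1)  # (B, t, feat_dim)
            # attend from the current state over all past states
            fused, _ = self.attn(h.unsqueeze(1), mem, mem)
            pred = self.decoder(fused.squeeze(1))
            outputs.append(pred.view(B, self.out_points, 3))
        return outputs


if __name__ == "__main__":
    net = TemporalCompletionNet()
    seq = [torch.randn(2, 512, 3) for _ in range(4)]  # toy 4-frame sequence
    completed = net(seq)
    print(completed[-1].shape)  # torch.Size([2, 2048, 3])
```

A real pipeline along the lines of the abstract would additionally handle the pose disturbance (e.g., by estimating or canonicalizing the limited translation and rotation) and train with standard completion losses such as Chamfer distance; the sketch above omits both.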
Related papers
- GPN: Generative Point-based NeRF (2024-04-12)
  We propose using Generative Point-based NeRF (GPN) to reconstruct and repair a partial cloud.
  The repaired point cloud can achieve multi-view consistency with the captured images at high spatial resolution.
- Zero-shot Point Cloud Completion Via 2D Priors (2024-04-10)
  3D point cloud completion is designed to recover complete shapes from partially observed point clouds.
  We propose a zero-shot framework aimed at completing partially observed point clouds across any unseen categories.
- P2C: Self-Supervised Point Cloud Completion from Single Partial Clouds (2023-07-27)
  Point cloud completion aims to recover the complete shape based on a partial observation.
  Existing methods require either complete point clouds or multiple partial observations of the same object for learning.
  We present Partial2Complete, the first self-supervised framework that completes point cloud objects.
- Learning a Structured Latent Space for Unsupervised Point Cloud Completion (2022-03-29)
  We propose a novel framework that learns a unified and structured latent space encoding both partial and complete point clouds.
  Our proposed method consistently outperforms state-of-the-art unsupervised methods on both synthetic ShapeNet and real-world KITTI, ScanNet, and Matterport3D datasets.
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers (2021-08-19)
  We present a new method that reformulates point cloud completion as a set-to-set translation problem.
  We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
  Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
- Cascaded Refinement Network for Point Cloud Completion with Self-supervision (2020-10-17)
  We introduce a two-branch network for shape completion.
  The first branch is a cascaded shape completion sub-network to synthesize complete objects.
  The second branch is an auto-encoder to reconstruct the original partial input.
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision (2020-06-20)
  We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
  By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
- Point Cloud Completion by Skip-attention Network with Hierarchical Folding (2020-05-08)
  We propose Skip-Attention Network (SA-Net) for 3D point cloud completion.
  First, we propose a skip-attention mechanism to effectively exploit the local structure details of incomplete point clouds.
  Second, to fully utilize the selected geometric information encoded by the skip-attention mechanism at different resolutions, we propose a novel structure-preserving decoder.