Unsupervised Learning of Global Registration of Temporal Sequence of
Point Clouds
- URL: http://arxiv.org/abs/2006.12378v1
- Date: Wed, 17 Jun 2020 06:00:36 GMT
- Title: Unsupervised Learning of Global Registration of Temporal Sequence of
Point Clouds
- Authors: Lingjing Wang, Yi Shi, Xiang Li, Yi Fang
- Abstract summary: Global registration of point clouds aims to find an optimal alignment of a sequence of 2D or 3D point sets.
We present a novel method that takes advantage of current deep learning techniques for unsupervised learning of global registration from a temporal sequence of point clouds.
- Score: 16.019588704177288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Global registration of point clouds aims to find an optimal alignment of a
sequence of 2D or 3D point sets. In this paper, we present a novel method that
takes advantage of current deep learning techniques for unsupervised learning
of global registration from a temporal sequence of point clouds. Our key
novelty is that we introduce a deep Spatio-Temporal REPresentation (STREP)
feature, which describes the geometric essence of both temporal and spatial
relationships of the sequence of point clouds acquired with sensors in an
unknown environment. In contrast to the previous practice that treats each time
step (pair-wise registration) individually, our unsupervised model starts with
optimizing a sequence of latent STREP features, which is then decoded into a
temporally and spatially continuous sequence of geometric transformations to
globally align multiple point clouds. We have evaluated our proposed approach
over both simulated 2D and real 3D datasets and the experimental results
demonstrate that our method outperforms competing techniques by taking
temporal information into account during deep feature learning.
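To make the core idea concrete, the following is a minimal, hypothetical sketch (not the authors' code): a sequence of latent features is jointly optimized with a decoder that maps each latent to a per-frame rigid transform, so that the transformed frames align globally. The 2D setting, frame count, latent size, decoder shape, chamfer loss, and smoothness weight are all illustrative assumptions.

```python
import torch

T, N = 8, 256                                   # frames, points per frame
clouds = [torch.randn(N, 2) for _ in range(T)]  # stand-in 2D input sequence

z = torch.zeros(T, 16, requires_grad=True)      # latent sequence (STREP-like)
decoder = torch.nn.Sequential(                  # latent -> (angle, tx, ty)
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))

def transform(points, params):
    # apply a 2D rigid transform parameterized by (rotation angle, translation)
    theta, t = params[0], params[1:]
    R = torch.stack([torch.stack([theta.cos(), -theta.sin()]),
                     torch.stack([theta.sin(),  theta.cos()])])
    return points @ R.T + t

def chamfer(a, b):
    # symmetric chamfer distance as a simple alignment loss
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

opt = torch.optim.Adam([z] + list(decoder.parameters()), lr=1e-2)
for _ in range(200):
    params = decoder(z)                         # (T, 3) transform parameters
    aligned = [transform(c, p) for c, p in zip(clouds, params)]
    loss = sum(chamfer(aligned[t], aligned[t + 1]) for t in range(T - 1))
    loss = loss + 0.1 * (z[1:] - z[:-1]).pow(2).mean()  # temporal smoothness
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The explicit smoothness term above is only a crude stand-in for the temporal coupling that the learned STREP feature is described as capturing.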
Related papers
- Bridging Domain Gap of Point Cloud Representations via Self-Supervised Geometric Augmentation [15.881442863961531]
We introduce a novel scheme for inducing geometric invariance of point cloud representations across domains.
On one hand, a novel pretext task of predicting the translation distances of augmented samples is proposed to alleviate the centroid shift of point clouds.
On the other hand, we pioneer the integration of relational self-supervised learning on geometrically augmented point clouds.
arXiv Detail & Related papers (2024-09-11T02:39:19Z)
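As a rough illustration of the pretext task mentioned in the entry above, the hypothetical sketch below shifts a point cloud by a random offset and regresses the shift magnitude; the encoder shape, pooling, and loss are assumptions, not the paper's architecture.

```python
import torch

def augment(points):
    # shift the whole cloud by a random global translation
    offset = torch.randn(3)
    return points + offset, offset.norm()       # shifted cloud, target distance

encoder = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 64))
head = torch.nn.Linear(64, 1)

points = torch.randn(1024, 3)                   # stand-in point cloud
shifted, target = augment(points)
feat = encoder(shifted).max(dim=0).values       # max-pool to a global feature
pred = head(feat).squeeze()
loss = torch.nn.functional.mse_loss(pred, target)
```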
- Dynamic 3D Point Cloud Sequences as 2D Videos [81.46246338686478]
3D point cloud sequences serve as one of the most common and practical representation modalities of real-world environments.
We propose a novel generic representation called Structured Point Cloud Videos (SPCVs).
SPCVs re-organize a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points.
arXiv Detail & Related papers (2024-03-02T08:18:57Z)
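The SPCV idea from the entry above can be pictured with a small sketch: arrange each frame's N = H x W points on a 2D grid so the sequence becomes a (T, 3, H, W) video whose pixel values are 3D coordinates. The naive sort-based ordering below is only a stand-in for the learned, spatially smooth and temporally consistent mapping.

```python
import torch

T, H, W = 4, 16, 16
frames = [torch.randn(H * W, 3) for _ in range(T)]  # stand-in sequence

def to_image(points):
    # crude stand-in ordering: sort points by x (and y as a tiebreaker)
    order = torch.argsort(points[:, 0] * 1e3 + points[:, 1])
    return points[order].reshape(H, W, 3).permute(2, 0, 1)

video = torch.stack([to_image(p) for p in frames])  # (T, 3, H, W)
# ordinary 2D video machinery (e.g., 2D CNNs) can now operate on `video`
```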
- PSTNet: Point Spatio-Temporal Convolution on Point Cloud Sequences [51.53563462897779]
We propose a point spatio-temporal (PST) convolution to achieve informative representations of point cloud sequences.
PST convolution first disentangles space and time in point cloud sequences; then a spatial convolution captures the local structure of points in 3D space, and a temporal convolution models the dynamics of the spatial regions along the time dimension.
We incorporate the proposed PST convolution into a deep network, namely PSTNet, to extract features of point cloud sequences in a hierarchical manner.
arXiv Detail & Related papers (2022-05-27T02:14:43Z)
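The space/time factorization behind PST convolution can be sketched as follows: a spatial step aggregates each point's k nearest neighbors within a frame, then a 1D temporal convolution runs over the per-point features across frames. Assuming tracked point correspondences across frames is our simplification; the actual PST convolution operates on unordered points.

```python
import torch

T, N, k, C = 8, 512, 16, 32
seq = torch.randn(T, N, 3)                  # assumes points tracked over T frames
spatial = torch.nn.Linear(3, C)             # stand-in spatial feature map
temporal = torch.nn.Conv1d(C, C, kernel_size=3, padding=1)

feats = []
for t in range(T):
    d = torch.cdist(seq[t], seq[t])         # intra-frame pairwise distances
    idx = d.topk(k, largest=False).indices  # k nearest neighbors per point
    neigh = seq[t][idx]                     # (N, k, 3) local structure
    feats.append(spatial(neigh).max(dim=1).values)  # (N, C) spatial feature
x = torch.stack(feats, dim=2)               # (N, C, T) feature sequence
out = temporal(x)                           # temporal conv along the frame axis
```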
- IDEA-Net: Dynamic 3D Point Cloud Interpolation via Deep Embedding Alignment [58.8330387551499]
We formulate the problem as the estimation of point-wise trajectories (i.e., smooth curves).
We propose IDEA-Net, an end-to-end deep learning framework, which disentangles the problem with the assistance of explicitly learned temporal consistency.
We demonstrate the effectiveness of our method on various point cloud sequences and observe large improvement over state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-03-22T10:14:08Z)
- Anchor-Based Spatial-Temporal Attention Convolutional Networks for Dynamic 3D Point Cloud Sequences [20.697745449159097]
Anchor-based Spatial-Temporal Attention Convolution operation (ASTAConv) is proposed in this paper to process dynamic 3D point cloud sequences.
The proposed convolution operation builds a regular receptive field around each point by placing several virtual anchors in its vicinity.
The proposed method makes better use of the structured information within the local region and learns spatial-temporal embedding features from dynamic 3D point cloud sequences.
arXiv Detail & Related papers (2020-12-20T07:35:37Z)
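A hypothetical sketch of the anchor mechanism described in the entry above: fixed virtual anchor offsets give every point a regular receptive field, so features gathered at the anchors can be mixed with ordinary weights. The anchor layout, nearest-point gathering, and linear mixing are illustrative assumptions, not the paper's exact operation.

```python
import torch

points = torch.randn(512, 3)                   # stand-in point positions
feats = torch.randn(512, 32)                   # stand-in point features
anchors = torch.tensor([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0],
                        [0.0, 0.1, 0.0], [0.0, -0.1, 0.0],
                        [0.0, 0.0, 0.1], [0.0, 0.0, -0.1]])  # 6 virtual anchors
mix = torch.nn.Linear(6 * 32, 64)              # weights over the regular slots

sites = points[:, None, :] + anchors           # (512, 6, 3) anchor sites
d = torch.cdist(sites.reshape(-1, 3), points)  # anchor-to-point distances
nearest = d.argmin(dim=1).reshape(512, 6)      # nearest point per anchor
out = mix(feats[nearest].reshape(512, -1))     # regular 'convolution' output
```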
- CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations [72.4716073597902]
We propose a method to learn object-centric Canonical Spatiotemporal Point Cloud Representations of dynamically moving or evolving objects.
We demonstrate the effectiveness of our method on several applications including shape reconstruction, camera pose estimation, continuous spatiotemporal sequence reconstruction, and correspondence estimation.
arXiv Detail & Related papers (2020-08-06T17:58:48Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
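For intuition, interpolation via scene flow (as in the entry above) reduces to a toy computation: given per-point motion between two frames, an intermediate frame at time s in (0, 1) is synthesized by moving points partway along the flow. The linear-motion assumption is ours; the paper learns the flow and adds spatial supervision.

```python
import torch

frame0 = torch.randn(2048, 3)           # stand-in point cloud at time 0
flow = torch.randn(2048, 3) * 0.05      # stand-in scene flow: frame0 -> frame1
s = 0.5                                 # interpolation time in (0, 1)
frame_mid = frame0 + s * flow           # temporally upsampled intermediate frame
```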
- Learning multiview 3D point cloud registration [74.39499501822682]
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm.
Our approach outperforms the state-of-the-art by a significant margin, while being end-to-end trainable and computationally less costly.
arXiv Detail & Related papers (2020-01-15T03:42:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.