Learning 3D Dense Correspondence via Canonical Point Autoencoder
- URL: http://arxiv.org/abs/2107.04867v1
- Date: Sat, 10 Jul 2021 15:54:48 GMT
- Title: Learning 3D Dense Correspondence via Canonical Point Autoencoder
- Authors: An-Chieh Cheng, Xueting Li, Min Sun, Ming-Hsuan Yang, Sifei Liu
- Abstract summary: We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category.
The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, and (b) decoding the primitive back to the original input instance shape.
- Score: 108.20735652143787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a canonical point autoencoder (CPAE) that predicts dense
correspondences between 3D shapes of the same category. The autoencoder
performs two key functions: (a) encoding an arbitrarily ordered point cloud to
a canonical primitive, e.g., a sphere, and (b) decoding the primitive back to
the original input instance shape. Placed at the bottleneck, this primitive
plays a key role: it maps all unordered point clouds onto the canonical
surface so that they can be reconstructed in an ordered fashion. Once trained,
points from different shape instances that map to the same locations on the
primitive surface are deemed a corresponding pair. Our method
does not require any form of annotation or self-supervised part segmentation
network and can handle unaligned input point clouds. Experimental results on 3D
semantic keypoint transfer and part segmentation transfer show that our model
performs favorably against state-of-the-art correspondence learning methods.
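A minimal sketch may help make the pipeline concrete. The code below is an illustrative PyTorch-style rendering of the idea, not the authors' released implementation: the per-point MLPs, hidden sizes, and max-pooled shape code are all assumptions. The encoder projects each input point onto the unit sphere (the canonical primitive), the decoder maps sphere points back to the instance shape, and correspondences between two shapes are read off by matching canonical locations.

```python
# Illustrative CPAE-style sketch (assumed architecture, not the paper's code).
import torch
import torch.nn as nn

class CPAESketch(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Encoder: per-point MLP mapping each 3D point toward the canonical sphere.
        self.encoder = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        # Decoder: maps a sphere location plus a global shape code back to the instance.
        self.decoder = nn.Sequential(
            nn.Linear(3 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        self.point_feat = nn.Linear(3, hidden)

    def to_canonical(self, pts):                      # pts: (B, N, 3), arbitrary order
        raw = self.encoder(pts)
        return raw / (raw.norm(dim=-1, keepdim=True) + 1e-8)  # project onto unit sphere

    def forward(self, pts):
        canon = self.to_canonical(pts)                          # (B, N, 3) on the sphere
        code = self.point_feat(pts).max(dim=1).values           # permutation-invariant code
        code = code.unsqueeze(1).expand(-1, pts.shape[1], -1)   # broadcast to every point
        recon = self.decoder(torch.cat([canon, code], dim=-1))  # reconstruct the instance
        return canon, recon

def correspond(model, pts_a, pts_b):
    """Pair each point of shape A with the point of B whose canonical location is nearest."""
    ca, cb = model.to_canonical(pts_a), model.to_canonical(pts_b)
    return torch.cdist(ca, cb).argmin(dim=-1)        # (B, N) indices into shape B
```

During training the reconstruction would be driven toward the input with a loss such as Chamfer distance, so that all instances of a category share one parameterization of the sphere.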
Related papers
- Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform [62.27337227010514]
We introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform, dubbed RIST.
RIST learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations.
RIST demonstrates state-of-the-art performance on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs.
arXiv Detail & Related papers (2024-04-17T08:09:25Z)
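The evaluation setting mentioned above, arbitrarily rotated point cloud pairs, is easy to reproduce; the sketch below only sets up such a pair (it does not implement RIST) and assumes numpy and scipy are available.

```python
# Building an arbitrarily rotated point cloud pair for a rotation-invariance test
# (evaluation setup only; the RIST model itself is not sketched here).
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
source = rng.normal(size=(1024, 3))          # stand-in source point cloud
target = rng.normal(size=(1024, 3))          # stand-in target point cloud

rots = Rotation.random(2, random_state=0)    # two uniformly random SO(3) rotations
rotated_source = source @ rots[0].as_matrix().T
rotated_target = target @ rots[1].as_matrix().T
# A rotation-invariant correspondence learner should predict the same matches
# for (source, target) as for (rotated_source, rotated_target).
```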
- Zero-Shot 3D Shape Correspondence [67.18775201037732]
We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
arXiv Detail & Related papers (2023-06-05T21:14:23Z)
- Learning Implicit Functions for Dense 3D Shape Correspondence of Generic Objects [21.93671761497348]
A novel implicit function produces a probabilistic embedding to represent each 3D point in a part embedding space.
We implement dense correspondence through an inverse function mapping from the part embedding vector to a corresponding 3D point.
Our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape.
arXiv Detail & Related papers (2022-12-29T11:57:47Z)
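A hedged sketch of that mechanism follows; the embedding dimension, network shapes, and the confidence heuristic are assumptions, and the paper's conditioning of the inverse function on the target shape is omitted for brevity. A forward function embeds each 3D point into a part-embedding space, an inverse function maps embeddings back to 3D locations, and confidence falls off with the distance to the nearest actual target point.

```python
# Illustrative correspondence via a part-embedding space and an inverse function
# (dimensions, architectures, and the confidence heuristic are assumptions).
import torch
import torch.nn as nn

EMB = 16  # assumed part-embedding dimensionality

embed = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, EMB))    # point -> embedding
inverse = nn.Sequential(nn.Linear(EMB, 64), nn.ReLU(), nn.Linear(64, 3))  # embedding -> point

def transfer(points_src, points_tgt):                 # (N, 3), (M, 3)
    """Map each source point to a target location, with a simple confidence score."""
    z = embed(points_src)                             # per-point part embeddings
    mapped = inverse(z)                               # predicted target locations
    # If a prediction lands far from every actual target point, there is likely
    # no valid correspondence on the target shape.
    dist = torch.cdist(mapped, points_tgt).min(dim=-1).values
    return mapped, torch.exp(-dist)                   # locations and confidences
```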
- SE(3)-Equivariant Attention Networks for Shape Reconstruction in Function Space [50.14426188851305]
We propose the first SE(3)-equivariant coordinate-based network for learning occupancy fields from point clouds.
In contrast to previous shape reconstruction methods that align the input to a regular grid, we operate directly on the irregular, unoriented point cloud.
We show that our method outperforms previous SO(3)-equivariant methods, as well as non-equivariant methods trained on SO(3)-augmented datasets.
arXiv Detail & Related papers (2022-04-05T17:59:15Z)
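For readers unfamiliar with occupancy fields, the generic coordinate-based pattern is sketched below; this is deliberately the plain, non-equivariant version (the paper's SE(3)-equivariant attention architecture is not reproduced, and the toy point encoder and layer sizes are assumptions).

```python
# Generic coordinate-based occupancy field (plain, non-equivariant sketch).
import torch
import torch.nn as nn

class OccupancyField(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.point_enc = nn.Sequential(nn.Linear(3, latent), nn.ReLU())  # toy point encoder
        self.field = nn.Sequential(
            nn.Linear(3 + latent, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, cloud, queries):               # cloud: (N, 3), queries: (Q, 3)
        z = self.point_enc(cloud).max(dim=0).values  # permutation-invariant shape code
        z = z.expand(queries.shape[0], -1)           # attach the code to every query
        logits = self.field(torch.cat([queries, z], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)     # occupancy probability per query
```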
- ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes [55.689763519293464]
ConDor is a self-supervised method that learns to canonicalize the 3D orientation and position for full and partial 3D point clouds.
During inference, our method takes an unseen full or partial 3D point cloud at an arbitrary pose and outputs an equivariant canonical pose.
arXiv Detail & Related papers (2022-01-19T18:57:21Z)
- Implicit Autoencoder for Point-Cloud Self-Supervised Representation Learning [39.521374237630766]
Point clouds, the most popular and accessible 3D representation, consist of discrete samples of the underlying continuous 3D surface.
This discretization process introduces sampling variations on the 3D shape, making it challenging to develop transferable knowledge of the true 3D geometry.
In the standard autoencoding paradigm, the encoder is compelled to encode not only the 3D geometry but also information on the specific discrete sampling of the 3D shape into the latent code.
This is because the point cloud reconstructed by the decoder is considered unacceptable unless there is a perfect mapping between the original and the reconstructed point clouds.
arXiv Detail & Related papers (2022-01-03T18:05:52Z)
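The "perfect mapping" pressure comes from the reconstruction loss used in the standard paradigm; a common choice is the symmetric Chamfer distance, sketched below as background (the loss itself is standard; its use here as an illustration of the argument is our addition).

```python
# Symmetric Chamfer distance, the usual point-cloud reconstruction loss.
# Minimizing it pushes the decoder to reproduce the exact input sampling,
# which is the pressure the implicit autoencoder aims to remove.
import torch

def chamfer(a, b):                       # a: (N, 3), b: (M, 3)
    d = torch.cdist(a, b)                # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```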
- Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence [21.93671761497348]
The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner.
Our novel implicit function produces a part embedding vector for each 3D point.
We implement dense correspondence through an inverse function mapping from the part embedding to a corresponding 3D point.
arXiv Detail & Related papers (2020-10-23T11:52:06Z)
- PointGMM: a Neural GMM Network for Point Clouds [83.9404865744028]
Point clouds are a popular representation for 3D shapes, but they encode a particular sampling without accounting for shape priors or non-local information.
We present PointGMM, a neural network that learns to generate hierarchical Gaussian mixture models (hGMMs) that are characteristic of the shape class.
We show that, as a generative model, PointGMM learns a meaningful latent space which enables generating consistent interpolations between existing shapes.
arXiv Detail & Related papers (2020-03-30T10:34:59Z)
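As background for the hGMM representation, a flat Gaussian mixture over a point cloud can be fit and resampled in a few lines with scikit-learn; this only illustrates the representation (PointGMM itself predicts hierarchical mixtures with a neural network, which is not shown).

```python
# Fitting and resampling a flat Gaussian mixture over a point cloud with
# scikit-learn, as a toy stand-in for the hierarchical GMMs PointGMM generates.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cloud = rng.normal(size=(2048, 3))                  # stand-in point cloud

gmm = GaussianMixture(n_components=16, covariance_type="full").fit(cloud)
resampled, _ = gmm.sample(2048)                     # draw a fresh sampling of the shape
# Unlike a raw point list, the mixture parameters give a compact description
# of the underlying density that does not commit to a particular sampling.
```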