Learning Implicit Functions for Topology-Varying Dense 3D Shape
Correspondence
- URL: http://arxiv.org/abs/2010.12320v2
- Date: Mon, 26 Oct 2020 01:22:55 GMT
- Authors: Feng Liu and Xiaoming Liu
- Abstract summary: The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner.
Our novel implicit function produces a part embedding vector for each 3D point.
We implement dense correspondence through an inverse function mapping from the part embedding to a corresponded 3D point.
- Score: 21.93671761497348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this paper is to learn dense 3D shape correspondence for
topology-varying objects in an unsupervised manner. Conventional implicit
functions estimate the occupancy of a 3D point given a shape latent code.
Instead, our novel implicit function produces a part embedding vector for each
3D point, which is assumed to be similar to its densely corresponded point in
another 3D shape of the same object category. Furthermore, we implement dense
correspondence through an inverse function mapping from the part embedding to a
corresponded 3D point. Both functions are jointly learned with several
effective loss functions to realize our assumption, together with the encoder
generating the shape latent code. During inference, if a user selects an
arbitrary point on the source shape, our algorithm can automatically generate a
confidence score indicating whether there is a correspondence on the target
shape, as well as the corresponding semantic point if there is one. Such a
mechanism inherently benefits man-made objects with different part
constitutions. The effectiveness of our approach is demonstrated through
unsupervised 3D semantic correspondence and shape segmentation.
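The forward/inverse mechanism described in the abstract can be sketched as follows. Everything here is an illustrative stand-in, not the paper's implementation: the networks are tiny random-weight MLPs rather than the learned encoder and implicit decoders, the dimensions are arbitrary, and the confidence score is a simple cycle-consistency proxy in embedding space rather than the paper's actual scoring mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(weights, x):
    """Apply a small MLP: ReLU hidden layers, linear output."""
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init(sizes):
    """Random (untrained) weights; placeholders for the learned networks."""
    return [(rng.standard_normal((a, b)) * 0.5, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

D_CODE, D_EMB = 8, 4                           # arbitrary illustrative sizes
f_weights = init([3 + D_CODE, 32, D_EMB])      # part-embedding function f
g_weights = init([D_EMB + D_CODE, 32, 3])      # inverse function g

def part_embedding(p, z):
    """f(p, z): map a 3D point on the shape coded by z to a part embedding."""
    return mlp(f_weights, np.concatenate([p, z]))

def inverse_point(e, z):
    """g(e, z): map a part embedding back to a 3D point on the shape z."""
    return mlp(g_weights, np.concatenate([e, z]))

def correspond(p_src, z_src, z_tgt):
    """Transfer a user-selected source point to the target shape, with a
    confidence score (here: cycle consistency of the re-embedded point)."""
    e = part_embedding(p_src, z_src)
    p_tgt = inverse_point(e, z_tgt)            # candidate correspondence
    e_back = part_embedding(p_tgt, z_tgt)      # re-embed the mapped point
    conf = float(np.exp(-np.linalg.norm(e - e_back)))  # in (0, 1]
    return p_tgt, conf

z_a, z_b = rng.standard_normal(D_CODE), rng.standard_normal(D_CODE)
p, c = correspond(np.array([0.1, -0.2, 0.3]), z_a, z_b)
```

A low confidence would flag that the source point has no counterpart on the target shape, which is the behavior the abstract describes for objects with different part constitutions.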
Related papers
- Learning Implicit Functions for Dense 3D Shape Correspondence of Generic Objects [21.93671761497348]
A novel implicit function produces a probabilistic embedding to represent each 3D point in a part embedding space.
We implement dense correspondence through an inverse function mapping from the part embedding vector to a corresponded 3D point.
Our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape.
arXiv Detail & Related papers (2022-12-29T11:57:47Z)
- Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z)
- Meta-Learning 3D Shape Segmentation Functions [16.119694625781992]
We introduce an auxiliary deep neural network as a meta-learner which takes as input a 3D shape and predicts the prior over the respective 3D segmentation function space.
We show in experiments that our meta-learning approach, denoted as Meta-3DSeg, leads to improvements on unsupervised 3D shape segmentation.
arXiv Detail & Related papers (2021-10-08T01:50:54Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Learning 3D Dense Correspondence via Canonical Point Autoencoder [108.20735652143787]
We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category.
The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, and (b) decoding the primitive back to the original input instance shape.
arXiv Detail & Related papers (2021-07-10T15:54:48Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- SeqXY2SeqZ: Structure Learning for 3D Shapes by Sequentially Predicting 1D Occupancy Segments From 2D Coordinates [61.04823927283092]
We propose to represent 3D shapes using 2D functions, where the output of the function at each 2D location is a sequence of line segments inside the shape.
We implement this approach using a Seq2Seq model with attention, called SeqXY2SeqZ, which learns the mapping from a sequence of 2D coordinates along two arbitrary axes to a sequence of 1D locations along the third axis.
Our experiments show that SeqXY2SeqZ outperforms the state-of-the-art methods under widely used benchmarks.
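The target representation this entry describes, runs of occupied voxels along one axis, can be sketched with a simple run-length encoding; the function name and the toy column below are illustrative, not from the paper.

```python
import numpy as np

def occupancy_segments(column):
    """Encode a boolean occupancy column along one axis into (start, end)
    index pairs of occupied runs: the 1D segments that, per the abstract,
    the model predicts from a 2D (x, y) coordinate."""
    segs, start = [], None
    for i, occ in enumerate(column):
        if occ and start is None:
            start = i                       # run begins
        elif not occ and start is not None:
            segs.append((start, i - 1))     # run ends
            start = None
    if start is not None:
        segs.append((start, len(column) - 1))
    return segs

# Toy occupancy column along the z-axis at one (x, y) location.
col = np.array([0, 1, 1, 0, 0, 1, 0], dtype=bool)
```

For this column the occupied runs are indices 1-2 and 5-5, so the shape at that (x, y) location is summarized by just two segment pairs instead of seven voxel occupancies.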
arXiv Detail & Related papers (2020-03-12T00:24:36Z)
- Unsupervised Learning of Intrinsic Structural Representation Points [50.92621061405056]
Learning structures of 3D shapes is a fundamental problem in the field of computer graphics and geometry processing.
We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points.
arXiv Detail & Related papers (2020-03-03T17:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.