Learning Implicit Functions for Dense 3D Shape Correspondence of Generic Objects
- URL: http://arxiv.org/abs/2212.14276v1
- Date: Thu, 29 Dec 2022 11:57:47 GMT
- Title: Learning Implicit Functions for Dense 3D Shape Correspondence of Generic Objects
- Authors: Feng Liu and Xiaoming Liu
- Abstract summary: A novel implicit function produces a probabilistic embedding to represent each 3D point in a part embedding space.
We implement dense correspondence through an inverse function mapping from the part embedding vector to a corresponding 3D point.
Our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape.
- Score: 21.93671761497348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The objective of this paper is to learn dense 3D shape correspondence for
topology-varying generic objects in an unsupervised manner. Conventional
implicit functions estimate the occupancy of a 3D point given a shape latent
code. Instead, our novel implicit function produces a probabilistic embedding
to represent each 3D point in a part embedding space. Assuming the
corresponding points are similar in the embedding space, we implement dense
correspondence through an inverse function mapping from the part embedding
vector to a corresponding 3D point. Both functions are jointly learned with
several effective and uncertainty-aware loss functions to realize our
assumption, together with the encoder generating the shape latent code. During
inference, if a user selects an arbitrary point on the source shape, our
algorithm can automatically generate a confidence score indicating whether
there is a correspondence on the target shape, as well as the corresponding
semantic point if there is one. Such a mechanism inherently benefits man-made
objects with different part constitutions. The effectiveness of our approach is
demonstrated through unsupervised 3D semantic correspondence and shape
segmentation.
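The forward/inverse structure described in the abstract can be sketched as follows. This is a toy illustration, not the paper's architecture: random linear maps stand in for the learned networks, all dimension sizes and function names are assumptions, and the exponential surface-distance score is only a simple proxy for the paper's uncertainty-aware confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's actual network sizes differ.
D_SHAPE, D_EMB = 16, 8

# Random stand-ins for the two learned networks: the forward implicit
# function f maps (3D point, shape code) -> part embedding, and the
# inverse function g maps (part embedding, shape code) -> 3D point.
W_f = rng.standard_normal((3 + D_SHAPE, D_EMB)) * 0.1
W_g = rng.standard_normal((D_EMB + D_SHAPE, 3)) * 0.1

def f(points, z):
    """Forward implicit function: per-point part embedding."""
    zs = np.broadcast_to(z, (points.shape[0], D_SHAPE))
    return np.tanh(np.concatenate([points, zs], axis=1) @ W_f)

def g(embeddings, z):
    """Inverse function: part embedding -> 3D point on the shape."""
    zs = np.broadcast_to(z, (embeddings.shape[0], D_SHAPE))
    return np.concatenate([embeddings, zs], axis=1) @ W_g

def correspond(p_src, z_src, z_tgt, surface_tgt, tau=0.5):
    """Map a source point to the target shape and score confidence.

    The embedding of the source point is decoded with the *target*
    shape code; confidence is high when the decoded point lands near
    the target surface (a toy proxy for the paper's learned score).
    """
    e = f(p_src[None, :], z_src)           # embedding of the query point
    p_pred = g(e, z_tgt)[0]                # candidate correspondence
    d = np.linalg.norm(surface_tgt - p_pred, axis=1).min()
    confidence = float(np.exp(-d / tau))   # in (0, 1]; near 1 = on-surface
    return p_pred, confidence

# Toy usage with random shape codes and a random target point set.
z_src, z_tgt = rng.standard_normal(D_SHAPE), rng.standard_normal(D_SHAPE)
surface_tgt = rng.standard_normal((256, 3))
p_pred, conf = correspond(np.array([0.1, 0.2, 0.3]), z_src, z_tgt, surface_tgt)
print(p_pred.shape, 0.0 < conf <= 1.0)
```

Decoding the source point's embedding with a different shape's latent code is what makes the correspondence semantic rather than geometric; a low confidence then naturally flags parts that the target shape lacks.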
Related papers
- Neural Correspondence Field for Object Pose Estimation [67.96767010122633]
We propose a method for estimating the 6DoF pose of a rigid object with an available 3D model from a single RGB image.
Unlike classical correspondence-based methods which predict 3D object coordinates at pixels of the input image, the proposed method predicts 3D object coordinates at 3D query points sampled in the camera frustum.
arXiv Detail & Related papers (2022-07-30T01:48:23Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Learning 3D Dense Correspondence via Canonical Point Autoencoder [108.20735652143787]
We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category.
The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, and (b) decoding the primitive back to the original input instance shape.
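The encode/decode cycle can be caricatured with the unit sphere as the canonical primitive. The linear encoder below is a hypothetical stand-in for CPAE's learned point network, used only to show how shared canonical coordinates yield correspondences that are invariant to point ordering.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for CPAE's encoder: a fixed linear map followed by
# projection onto the unit sphere (the canonical primitive). A real
# CPAE learns this mapping; the matrix here is purely illustrative.
W_enc = np.eye(3) + 0.1 * rng.standard_normal((3, 3))

def encode(points):
    """Map an arbitrarily ordered point cloud onto the unit sphere."""
    h = points @ W_enc
    return h / np.linalg.norm(h, axis=1, keepdims=True)

def correspond(points_a, points_b):
    """Dense correspondence via the shared canonical primitive: each
    point of shape A matches the point of B whose canonical
    coordinate is nearest."""
    ca, cb = encode(points_a), encode(points_b)
    d = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)
    return d.argmin(axis=1)

# Sanity check: a shape corresponds to itself point-for-point, no
# matter how the second copy's points are ordered.
pts = rng.standard_normal((64, 3))
perm = rng.permutation(64)
idx = correspond(pts, pts[perm])
print(np.array_equal(perm[idx], np.arange(64)))
```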
arXiv Detail & Related papers (2021-07-10T15:54:48Z)
- VIN: Voxel-based Implicit Network for Joint 3D Object Detection and Segmentation for Lidars [12.343333815270402]
A unified neural network structure is presented for joint 3D object detection and point cloud segmentation.
We leverage rich supervision from both detection and segmentation labels rather than using just one of them.
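One common way to leverage both label sources at once is a weighted multi-task loss. The sketch below combines a hypothetical L1 box-regression term with point-wise segmentation cross-entropy; the loss choices and the weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def joint_loss(box_pred, box_target, seg_logits, seg_labels, w_seg=0.5):
    """Toy multi-task objective using both supervision signals: an L1
    box-regression term plus point-wise segmentation cross-entropy.
    Both terms and the weight w_seg are illustrative stand-ins."""
    l_det = np.abs(box_pred - box_target).mean()
    z = seg_logits - seg_logits.max(axis=1, keepdims=True)  # stable softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    l_seg = -logp[np.arange(len(seg_labels)), seg_labels].mean()
    return l_det + w_seg * l_seg

# Toy batch: 4 predicted boxes (7 params each) and 100 labelled points.
rng = np.random.default_rng(2)
loss = joint_loss(
    box_pred=rng.standard_normal((4, 7)),
    box_target=rng.standard_normal((4, 7)),
    seg_logits=rng.standard_normal((100, 3)),
    seg_labels=rng.integers(0, 3, size=100),
)
print(loss > 0.0)
```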
arXiv Detail & Related papers (2021-07-07T02:16:20Z)
- HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection [39.64891219500416]
3D object detection methods exploit either voxel-based or point-based features to represent 3D objects in a scene.
We introduce in this paper a novel single-stage 3D detection method having the merit of both voxel-based and point-based features.
arXiv Detail & Related papers (2021-04-02T06:34:49Z)
- Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence [21.93671761497348]
The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner.
Our novel implicit function produces a part embedding vector for each 3D point.
We implement dense correspondence through an inverse function mapping from the part embedding to a corresponding 3D point.
arXiv Detail & Related papers (2020-10-23T11:52:06Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Object-Centric Multi-View Aggregation [86.94544275235454]
We present an approach for aggregating a sparse set of views of an object in order to compute a semi-implicit 3D representation in the form of a volumetric feature grid.
Key to our approach is an object-centric canonical 3D coordinate system into which views can be lifted, without explicit camera pose estimation.
We show that computing a symmetry-aware mapping from pixels to the canonical coordinate system allows us to better propagate information to unseen regions.
arXiv Detail & Related papers (2020-07-20T17:38:31Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.