Correspondence Learning via Linearly-invariant Embedding
- URL: http://arxiv.org/abs/2010.13136v1
- Date: Sun, 25 Oct 2020 15:31:53 GMT
- Title: Correspondence Learning via Linearly-invariant Embedding
- Authors: Riccardo Marin, Marie-Julie Rakotosaona, Simone Melzi, Maks Ovsjanikov
- Abstract summary: We show that learning the basis from data can both improve robustness and lead to better accuracy in challenging settings.
We demonstrate that our approach achieves state-of-the-art results in challenging non-rigid 3D point cloud correspondence applications.
- Score: 40.07515336866026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a fully differentiable pipeline for estimating
accurate dense correspondences between 3D point clouds. The proposed pipeline
is an extension and a generalization of the functional maps framework. However,
instead of using the Laplace-Beltrami eigenfunctions as done in virtually all
previous works in this domain, we demonstrate that learning the basis from data
can both improve robustness and lead to better accuracy in challenging
settings. We interpret the basis as a learned embedding into a higher
dimensional space. Following the functional map paradigm, the optimal
transformation in this embedding space must be linear, and we propose a separate
architecture aimed at estimating the transformation by learning optimal
descriptor functions. This leads to the first end-to-end trainable functional
map-based correspondence approach in which both the basis and the descriptors
are learned from data. Interestingly, we also observe that learning a
\emph{canonical} embedding leads to worse results, suggesting that leaving an
extra linear degree of freedom to the embedding network gives it more
robustness, thereby also shedding light onto the success of previous methods.
Finally, we demonstrate that our approach achieves state-of-the-art results in
challenging non-rigid 3D point cloud correspondence applications.
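To make the functional-map step described above concrete, the following is a minimal numpy sketch: the linear map C is estimated from learned bases and descriptors by a least-squares solve, and then converted into point-to-point correspondences by nearest-neighbor search in the embedding space. The names phi_x, phi_y (learned bases) and feat_x, feat_y (learned descriptors), as well as the use of a plain pseudo-inverse, are illustrative assumptions rather than the authors' actual implementation.
```python
# Minimal sketch of the functional-map correspondence step (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def estimate_functional_map(phi_x, phi_y, feat_x, feat_y):
    """Solve for the k x k linear map C aligning descriptors expressed in the learned bases.

    phi_x: (n_x, k) learned embedding/basis of shape X
    phi_y: (n_y, k) learned embedding/basis of shape Y
    feat_x: (n_x, d), feat_y: (n_y, d) learned descriptor functions
    """
    # Project descriptors onto each basis (the pseudo-inverse plays the role of Phi^+).
    a = np.linalg.pinv(phi_x) @ feat_x    # (k, d) descriptor coefficients on X
    b = np.linalg.pinv(phi_y) @ feat_y    # (k, d) descriptor coefficients on Y
    # Least-squares solve of C a = b, written as a^T C^T = b^T.
    c_t, *_ = np.linalg.lstsq(a.T, b.T, rcond=None)
    return c_t.T                          # (k, k) functional map from X to Y

def pointwise_correspondence(phi_x, phi_y, c):
    """Convert the functional map into a point-to-point map Y -> X by nearest neighbors."""
    tree = cKDTree(phi_x)                 # rows of phi_x are the points of X embedded in R^k
    _, idx = tree.query(phi_y @ c)        # align Y's embedding with C, then match rows
    return idx                            # idx[i] = index of the point of X matched to point i of Y
```
Because the map in the embedding space is constrained to be linear, it can be recovered in closed form by a least-squares solve, which keeps this step differentiable with respect to both the learned embeddings and the learned descriptors.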
Related papers
- Memory-Scalable and Simplified Functional Map Learning [32.088809326158554]
We introduce a novel memory-scalable and efficient functional map learning pipeline.
By leveraging the structure of functional maps, our pipeline achieves identical results without ever storing the pointwise map in memory.
Unlike many functional map learning methods, which use this algorithm only as a post-processing step, ours can easily be used at train time; a generic sketch of the memory-saving idea follows this entry.
arXiv Detail & Related papers (2024-03-30T12:01:04Z)
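As a generic illustration of the memory argument in the entry above (not the paper's actual pipeline), the sketch below alternates between a pointwise map stored only as an index array and the small k x k functional map, so the dense n x n pointwise-map matrix is never materialized. Function and variable names are hypothetical.
```python
# Illustrative only: the pointwise map is kept as an index array (O(n) memory),
# and its action on the basis is a row gather instead of a dense matrix product.
import numpy as np
from scipy.spatial import cKDTree

def refine_without_dense_pointwise_map(phi_x, phi_y, c, n_iters=3):
    """Alternate between an implicit pointwise map Y -> X and the k x k map C."""
    tree = cKDTree(phi_x)                  # X's embedding does not change across iterations
    pinv_phi_y = np.linalg.pinv(phi_y)
    for _ in range(n_iters):
        # Nearest-neighbor assignment in the embedding: idx[i] is the point of X
        # matched to point i of Y. This replaces the dense n_y x n_x matrix P.
        _, idx = tree.query(phi_y @ c)
        # C = Phi_Y^+ (P Phi_X); the product P Phi_X is just a row gather of Phi_X.
        c = pinv_phi_y @ phi_x[idx]
    return c
```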
- IPoD: Implicit Field Learning with Point Diffusion for Generalizable 3D Object Reconstruction from Single RGB-D Images [50.4538089115248]
Generalizable 3D object reconstruction from single-view RGB-D images remains a challenging task.
We propose a novel approach, IPoD, which harmonizes implicit field learning with point diffusion.
Experiments conducted on the CO3D-v2 dataset affirm the superiority of IPoD, achieving 7.8% improvement in F-score and 28.6% in Chamfer distance over existing methods.
arXiv Detail & Related papers (2024-03-30T07:17:37Z)
- Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability in local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- Neural Jacobian Fields: Learning Intrinsic Mappings of Arbitrary Meshes [38.157373733083894]
This paper introduces a framework designed to accurately predict piecewise linear mappings of arbitrary meshes via a neural network.
The framework is based on reducing the neural aspect to a prediction of a matrix for a single point, conditioned on a global shape descriptor.
By operating in the intrinsic gradient domain of each individual mesh, the framework can predict highly accurate mappings.
arXiv Detail & Related papers (2022-05-05T19:51:13Z)
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048]
We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
Compared with prior Lipschitz regularized networks, ours is computationally fast and can be implemented in four lines of code; a minimal sketch of such a regularizer follows this entry.
arXiv Detail & Related papers (2022-02-16T21:24:54Z)
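To illustrate how light-weight such a penalty can be, here is a minimal PyTorch sketch of a generic Lipschitz-bound regularizer (the product of per-layer weight-matrix norms). It captures the general idea only and is not claimed to be the cited paper's exact formulation; `lambda_lip` and `model` are hypothetical names.
```python
import torch
import torch.nn as nn

def lipschitz_bound(mlp: nn.Sequential) -> torch.Tensor:
    """Upper bound on the Lipschitz constant of an MLP with 1-Lipschitz activations:
    the product of per-layer infinity-norm (max absolute row sum) operator norms."""
    norms = [layer.weight.abs().sum(dim=1).max()
             for layer in mlp if isinstance(layer, nn.Linear)]
    return torch.stack(norms).prod()

# Usage: add the bound as a soft penalty next to the task loss, e.g.
#   loss = task_loss + lambda_lip * lipschitz_bound(model)
```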
- Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves state-of-the-art performance in registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z)
- Learning Canonical Embedding for Non-rigid Shape Matching [36.85782408336389]
This paper provides a novel framework that learns canonical embeddings for non-rigid shape matching.
Our framework is trained end-to-end and thus avoids instabilities and constraints associated with the commonly-used Laplace-Beltrami basis.
arXiv Detail & Related papers (2021-10-06T18:09:13Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
- Deep Geometric Functional Maps: Robust Feature Learning for Shape Correspondence [31.840880075039944]
We present a novel learning-based approach for computing correspondences between non-rigid 3D shapes.
Key to our method is a feature-extraction network that learns directly from raw shape geometry.
arXiv Detail & Related papers (2020-03-31T15:20:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.