Unsupervised Deep Multi-Shape Matching
- URL: http://arxiv.org/abs/2207.09610v1
- Date: Wed, 20 Jul 2022 01:22:08 GMT
- Title: Unsupervised Deep Multi-Shape Matching
- Authors: Dongliang Cao, Florian Bernard
- Abstract summary: 3D shape matching is a long-standing problem in computer vision and computer graphics.
We present a novel approach for deep multi-shape matching that ensures cycle-consistent multi-matchings.
Our method achieves state-of-the-art results on several challenging benchmark datasets.
- Score: 15.050801537501462
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D shape matching is a long-standing problem in computer vision and computer
graphics. While deep neural networks have been shown to achieve state-of-the-art
results in shape matching, existing learning-based approaches are limited in
the context of multi-shape matching: either (i) they focus on matching pairs of
shapes only, and thus suffer from cycle-inconsistent multi-matchings, or (ii)
they require an explicit template shape to address the matching of a collection
of shapes. In this paper, we present a novel approach for deep multi-shape
matching that ensures cycle-consistent multi-matchings while not depending on
an explicit template shape. To this end, we utilise a shape-to-universe
multi-matching representation that we combine with powerful functional map
regularisation, so that our multi-shape matching neural network can be trained
in a fully unsupervised manner. Since the functional map regularisation is only
used during training, no functional maps need to be computed when
predicting correspondences, thereby allowing for fast inference. We demonstrate
that our method achieves state-of-the-art results on several challenging
benchmark datasets, and, most remarkably, that our unsupervised method even
outperforms recent supervised methods.
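To make the cycle-consistency argument concrete, below is a minimal NumPy sketch (not the authors' implementation; all names are illustrative, and hard permutation matrices stand in for the network's predicted shape-to-universe matchings). Because every pairwise correspondence is obtained by composing maps through a shared universe, any cycle of compositions collapses to the direct map.
```python
# Minimal sketch (not the authors' code): cycle consistency via shape-to-universe maps.
import numpy as np

def random_shape_to_universe(n: int, rng: np.random.Generator) -> np.ndarray:
    """Random permutation matrix assigning each shape vertex to a distinct
    universe vertex; a stand-in for the matchings a network would predict."""
    pi = np.zeros((n, n))
    pi[np.arange(n), rng.permutation(n)] = 1.0
    return pi

def pairwise_matching(pi_i: np.ndarray, pi_j: np.ndarray) -> np.ndarray:
    """Pairwise correspondence P_ij obtained by composing shape i -> universe
    with universe -> shape j."""
    return pi_i @ pi_j.T

rng = np.random.default_rng(0)
n_vertices = 100
pis = [random_shape_to_universe(n_vertices, rng) for _ in range(3)]

p_01 = pairwise_matching(pis[0], pis[1])
p_12 = pairwise_matching(pis[1], pis[2])
p_02 = pairwise_matching(pis[0], pis[2])

# Composing 0 -> 1 -> 2 equals the direct map 0 -> 2, so the cycle is consistent.
print(np.allclose(p_01 @ p_12, p_02))  # True
```
In the paper, these shape-to-universe matchings are predicted by a neural network and are constrained by functional map regularisation only during training; no functional maps are needed for the composition above at inference time.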
Related papers
- Beyond Complete Shapes: A Quantitative Evaluation of 3D Shape Matching Algorithms [41.95394677818476]
Finding correspondences between 3D shapes is an important problem in computer vision, graphics and beyond.
We provide a generic and flexible framework for the procedural generation of challenging partial shape matching scenarios.
We manually create cross-dataset correspondences between seven existing (complete geometry) shape matching datasets, leading to a total of 2543 shapes.
arXiv Detail & Related papers (2024-11-05T21:08:19Z) - Spectral Meets Spatial: Harmonising 3D Shape Matching and Interpolation [50.376243444909136]
We present a unified framework to predict both point-wise correspondences and shape interpolation between 3D shapes.
We combine the deep functional map framework with classical surface deformation models to map shapes in both spectral and spatial domains.
arXiv Detail & Related papers (2024-02-29T07:26:23Z) - Geometrically Consistent Partial Shape Matching [50.29468769172704]
Finding correspondences between 3D shapes is a crucial problem in computer vision and graphics.
An often neglected but essential property of matchings is geometric consistency.
We propose a novel integer linear programming partial shape matching formulation.
arXiv Detail & Related papers (2023-09-10T12:21:42Z) - Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z) - Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching [15.050801537501462]
We introduce a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach allows us to obtain intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds.
We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets.
arXiv Detail & Related papers (2023-03-20T09:47:02Z) - NCP: Neural Correspondence Prior for Effective Unsupervised Shape
Matching [31.61255365182462]
We present Neural Correspondence Prior (NCP), a new paradigm for computing correspondences between 3D shapes.
Our approach is fully unsupervised and can lead to high-quality correspondences even in challenging cases.
We show that NCP is data-efficient, fast, and achieves state-of-the-art results on many tasks.
arXiv Detail & Related papers (2023-01-14T07:22:18Z) - G-MSM: Unsupervised Multi-Shape Matching with Graph-based Affinity
Priors [52.646396621449]
G-MSM is a novel unsupervised learning approach for non-rigid shape correspondence.
We construct an affinity graph on a given set of training shapes in a self-supervised manner.
We demonstrate state-of-the-art performance on several recent shape correspondence benchmarks.
arXiv Detail & Related papers (2022-12-06T12:09:24Z) - Multiway Non-rigid Point Cloud Registration via Learned Functional Map
Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves state-of-the-art performance in registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z) - Isometric Multi-Shape Matching [50.86135294068138]
Finding correspondences between shapes is a fundamental problem in computer vision and graphics.
While isometries are often studied in shape correspondence problems, they have not been considered explicitly in the multi-matching setting.
We present a suitable optimisation algorithm for solving our formulation and provide a convergence and complexity analysis.
arXiv Detail & Related papers (2020-12-04T15:58:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.