DiffuMatch: Category-Agnostic Spectral Diffusion Priors for Robust Non-rigid Shape Matching
- URL: http://arxiv.org/abs/2507.23715v1
- Date: Thu, 31 Jul 2025 16:44:54 GMT
- Title: DiffuMatch: Category-Agnostic Spectral Diffusion Priors for Robust Non-rigid Shape Matching
- Authors: Emery Pierson, Lei Li, Angela Dai, Maks Ovsjanikov
- Abstract summary: We show that both in-network regularization and functional map training can be replaced with data-driven methods. We first train a generative model of functional maps in the spectral domain using score-based generative modeling. We then exploit the resulting model to promote the structural properties of ground truth functional maps on new shape collections.
- Score: 53.39693288324375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep functional maps have recently emerged as a powerful tool for solving non-rigid shape correspondence tasks. Methods that use this approach combine the power and flexibility of the functional map framework with data-driven learning for improved accuracy and generality. However, most existing methods in this area restrict the learning aspect only to the feature functions and still rely on axiomatic modeling for formulating the training loss or for functional map regularization inside the networks. This limits both the accuracy and the applicability of the resulting approaches, restricting them to scenarios where the assumptions of the axiomatic models hold. In this work, we show, for the first time, that both in-network regularization and functional map training can be replaced with data-driven methods. For this, we first train a generative model of functional maps in the spectral domain using score-based generative modeling, built from a large collection of high-quality maps. We then exploit the resulting model to promote the structural properties of ground truth functional maps on new shape collections. Remarkably, we demonstrate that the learned models are category-agnostic, and can fully replace commonly used strategies such as enforcing Laplacian commutativity or orthogonality of functional maps. Our key technical contribution is a novel distillation strategy from diffusion models in the spectral domain. Experiments demonstrate that our learned regularization leads to better results than axiomatic approaches for zero-shot non-rigid shape matching. Our code is available at: https://github.com/daidedou/diffumatch/
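To make the distillation idea concrete, the following is a minimal, hypothetical sketch (not the authors' released implementation): a frozen diffusion model trained on k x k functional maps supplies a score-distillation-style gradient that plays the role of the usual axiomatic penalties. The interfaces `score_net(C_t, t)` (a pretrained noise-prediction network) and `alphas_cumprod` (a DDPM noise schedule) are assumptions made for illustration.

```python
# Hypothetical sketch (assumed interfaces, not the authors' code): a frozen
# diffusion model over k x k functional maps acts as a learned regularizer,
# replacing axiomatic penalties such as orthogonality or Laplacian commutativity.
import torch

def axiomatic_penalties(C, evals1, evals2):
    """Classical structural terms that the learned prior aims to replace:
    orthogonality ||C^T C - I||^2 and commutativity ||C diag(l1) - diag(l2) C||^2.
    evals1, evals2: 1-D tensors of Laplace-Beltrami eigenvalues."""
    k = C.shape[0]
    ortho = ((C.T @ C - torch.eye(k, device=C.device)) ** 2).sum()
    comm = ((C * evals1[None, :] - evals2[:, None] * C) ** 2).sum()
    return ortho, comm

def diffusion_prior_loss(C, score_net, alphas_cumprod):
    """Score-distillation-style surrogate loss: its gradient w.r.t. C pulls the
    map toward the learned distribution of ground-truth functional maps.
    `score_net` and `alphas_cumprod` are assumed to come from a pretrained
    DDPM over k x k functional maps in the spectral domain."""
    t = torch.randint(0, alphas_cumprod.numel(), (1,), device=C.device)
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(C)
    C_t = a_bar.sqrt() * C + (1.0 - a_bar).sqrt() * noise        # forward diffusion
    with torch.no_grad():
        eps_pred = score_net(C_t.unsqueeze(0), t).squeeze(0)     # predicted noise
    grad = eps_pred - noise                                      # SDS-style direction
    # Surrogate term whose gradient w.r.t. C is exactly `grad` (which is detached).
    return (grad.detach() * C).sum()
```

In a zero-shot matching loop, one would estimate C from learned descriptors and back-propagate `diffusion_prior_loss` in place of the two axiomatic terms; all names and shapes above are illustrative assumptions rather than the paper's actual API.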
Related papers
- Memory-Scalable and Simplified Functional Map Learning [32.088809326158554]
We introduce a novel memory-scalable and efficient functional map learning pipeline.
By leveraging the structure of functional maps, we achieve identical results without ever storing the pointwise map in memory (a generic illustration follows this entry).
Unlike many functional map learning methods, which use this algorithm only as a post-processing step, ours can easily be used at train time.
arXiv Detail & Related papers (2024-03-30T12:01:04Z)
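The memory-scalability claim above rests on a general property of the functional map representation: the map is a small k x k matrix in a reduced spectral basis, so it can be computed from learned descriptors and only optionally decoded into a vertex-to-vertex map. The sketch below is a generic, hedged illustration of these two standard steps (textbook deep-functional-maps computations, not that paper's specific memory-scalable algorithm).

```python
# Generic illustration (not the paper's algorithm): the functional map lives in
# a reduced k x k spectral basis, so training can operate on C alone, without
# the dense n x n pointwise correspondence matrix.
import torch

def functional_map(evecs1, evecs2, feat1, feat2, lam=1e-3):
    """Closed-form least-squares map C : shape1 -> shape2.

    evecs_i: (n_i, k) Laplace-Beltrami eigenvectors (assumed orthonormal),
    feat_i:  (n_i, d) learned pointwise descriptors.
    Solves min_C ||C @ A - B||^2 with A, B the spectral descriptor coefficients.
    """
    A = evecs1.T @ feat1                                  # (k, d)
    B = evecs2.T @ feat2                                  # (k, d)
    k = A.shape[0]
    AAt = A @ A.T + lam * torch.eye(k, dtype=A.dtype, device=A.device)
    return torch.linalg.solve(AAt, A @ B.T).T             # (k, k), differentiable

def pointwise_from_fmap(C, evecs1, evecs2):
    """Optional decoding at evaluation time: for each vertex of shape2, the
    index of its nearest neighbor on shape1 in the aligned spectral embedding."""
    emb1 = evecs1 @ C.T                                   # (n1, k)
    return torch.cdist(evecs2, emb1).argmin(dim=1)        # (n2,)
```

Note that the decoding step still forms an (n2, n1) distance matrix; the point of the spectral representation is that training can operate on C alone and defer (or chunk) that decoding.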
- Revisiting Map Relations for Unsupervised Non-Rigid Shape Matching [18.957179015912402]
We propose a novel unsupervised learning approach for non-rigid 3D shape matching.
We show that our method substantially outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-17T17:28:03Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Unsupervised Learning of Robust Spectral Shape Matching [12.740151710302397]
We propose a novel learning-based approach for robust 3D shape matching.
Our method builds upon deep functional maps and can be trained in a fully unsupervised manner.
arXiv Detail & Related papers (2023-04-27T02:12:47Z)
- Understanding and Improving Features Learned in Deep Functional Maps [31.61255365182462]
We show that features learned within deep functional map approaches can be used as point-wise descriptors across different shapes.
We propose effective modifications to the standard deep functional map pipeline, which promote structural properties of learned features.
arXiv Detail & Related papers (2023-03-29T08:32:16Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Neural Eigenfunctions Are Structured Representation Learners [93.53445940137618]
This paper introduces a structured, adaptive-length deep representation called Neural Eigenmap.
We show that, when the kernel is derived from positive relations in a data augmentation setup, applying NeuralEF results in an objective function that resembles those of popular self-supervised learning methods.
We demonstrate the use of such representations as adaptive-length codes in image retrieval systems.
arXiv Detail & Related papers (2022-10-23T07:17:55Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for training domain generalizable person re-identification (DG ReID) models.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
- Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification [114.56752624945142]
We argue that the most popular random sampling method, the well-known PK sampler, is neither informative nor efficient for deep metric learning.
We propose an efficient mini-batch sampling method called Graph Sampling (GS) for large-scale metric learning.
arXiv Detail & Related papers (2021-04-04T06:44:15Z)
- Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances [55.64521598173897]
This paper trains a small-scale model that can be used repeatedly to build heat maps for the traveling salesman problem (TSP).
Heat maps are fed into a reinforcement learning approach (Monte Carlo tree search) to guide the search for high-quality solutions.
Experimental results show that this new approach clearly outperforms existing machine-learning-based TSP algorithms.
arXiv Detail & Related papers (2020-12-19T11:06:30Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- Deep Geometric Functional Maps: Robust Feature Learning for Shape Correspondence [31.840880075039944]
We present a novel learning-based approach for computing correspondences between non-rigid 3D shapes.
Key to our method is a feature-extraction network that learns directly from raw shape geometry.
arXiv Detail & Related papers (2020-03-31T15:20:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.