NFR: Neural Feature-Guided Non-Rigid Shape Registration
- URL: http://arxiv.org/abs/2505.22445v1
- Date: Wed, 28 May 2025 15:08:49 GMT
- Title: NFR: Neural Feature-Guided Non-Rigid Shape Registration
- Authors: Puhua Jiang, Zhangquan Chen, Mingze Sun, Ruqi Huang
- Abstract summary: Our key insight is to incorporate neural features learned by deep learning-based shape matching networks into an iterative, geometric shape registration pipeline. Our pipeline achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching and partial shape matching.
- Score: 1.5677990844097902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel learning-based framework for 3D shape registration, which overcomes the challenges of significant non-rigid deformation and partiality among input shapes and, remarkably, requires no correspondence annotation during training. Our key insight is to incorporate neural features learned by deep learning-based shape matching networks into an iterative, geometric shape registration pipeline. The advantage of our approach is two-fold: on one hand, neural features provide more accurate and semantically meaningful correspondence estimation than spatial features (e.g., coordinates), which is critical in the presence of large non-rigid deformations; on the other hand, the correspondences are dynamically updated according to the intermediate registrations and filtered by a consistency prior, which prominently robustifies the overall pipeline. Empirical results show that, with as few as dozens of training shapes of limited variability, our pipeline not only achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching and partial shape matching across varying settings, but also delivers high-quality correspondences between unseen challenging shape pairs that undergo both significant extrinsic and intrinsic deformations, in which case neither traditional registration methods nor intrinsic methods work.
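The abstract's iterative, feature-guided loop can be caricatured as follows. This is a minimal sketch, not the authors' implementation: `feat_fn` stands in for the learned matching network (the identity over raw coordinates in the demo below), and the mutual-nearest-neighbor filter is one simple instance of the consistency prior used to reject bad correspondences.

```python
import itertools
import numpy as np

def nn_correspondences(feat_a, feat_b):
    """For each row of feat_a, the index of the nearest row of feat_b."""
    d = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=-1)
    return d.argmin(axis=1)

def mutual_filter(a2b, b2a):
    """Keep only mutually-nearest pairs (a simple consistency check)."""
    keep = b2a[a2b] == np.arange(len(a2b))
    return np.nonzero(keep)[0], a2b[keep]

def register(src, tgt, feat_fn, iters=10, step=0.5):
    """Iteratively move source points toward their feature-matched targets."""
    cur = src.astype(float).copy()
    for _ in range(iters):
        s2t = nn_correspondences(feat_fn(cur), feat_fn(tgt))
        t2s = nn_correspondences(feat_fn(tgt), feat_fn(cur))
        idx_s, idx_t = mutual_filter(s2t, t2s)
        if len(idx_s) == 0:
            break  # no consistent correspondences left
        # Correspondences are re-estimated from the intermediate registration
        # on every pass, as in the pipeline described above.
        cur[idx_s] += step * (tgt[idx_t] - cur[idx_s])
    return cur

# Demo: cube corners shifted by a small translation.
src = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
tgt = src + 0.05
registered = register(src, tgt, feat_fn=lambda x: x)
```

With a real learned feature extractor in place of the identity, the correspondence step stays reliable under large non-rigid deformations, where matching on raw coordinates breaks down.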
Related papers
- DV-Matcher: Deformation-based Non-Rigid Point Cloud Matching Guided by Pre-trained Visual Features [1.3030624795284795]
We present DV-Matcher, a learning-based framework for estimating dense correspondences between non-rigidly deformable point clouds. Experimental results show that our method achieves state-of-the-art results in matching non-rigid point clouds in both near-isometric and heterogeneous shape collections.
arXiv Detail & Related papers (2024-08-16T07:02:19Z)
- Non-Rigid Shape Registration via Deep Functional Maps Prior [1.9249120068573227]
We propose a learning-based framework for non-rigid shape registration without correspondence supervision.
We deform source mesh towards the target point cloud, guided by correspondences induced by high-dimensional embeddings.
Our pipeline achieves state-of-the-art results on several benchmarks of non-rigid point cloud matching.
arXiv Detail & Related papers (2023-11-08T06:52:57Z)
- Implicit field supervision for robust non-rigid shape matching [29.7672368261038]
Establishing a correspondence between two non-rigidly deforming shapes is one of the most fundamental problems in visual computing.
We introduce an approach based on the auto-decoder framework that learns a continuous shape-wise deformation field over a fixed template.
Our method is remarkably robust in the presence of strong artefacts and can be generalised to arbitrary shape categories.
arXiv Detail & Related papers (2022-03-15T07:22:52Z)
- Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves a state-of-the-art performance in registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z)
- Unsupervised Dense Deformation Embedding Network for Template-Free Shape Correspondence [18.48814403488283]
Current deep learning based methods require the supervision of dense annotations to learn per-point translations.
We develop a new Unsupervised Deformation Embedding Network (i.e., UD2E-Net), which learns to predict deformations between non-rigid shapes from dense local features.
Our UD2E-Net outperforms state-of-the-art unsupervised methods by 24% on Faust Inter challenge and even supervised methods by 13% on Faust Intra challenge.
arXiv Detail & Related papers (2021-08-26T07:07:19Z)
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
- Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration [67.69257782645789]
We propose piecewise transformation fields that learn 3D translation vectors to map any query point in posed space to its corresponding position in rest-pose space.
We show that fitting parametric models with poses by our network results in much better registration quality, especially for extreme poses.
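The piecewise idea can be sketched with a hard nearest-part assignment. This is a toy illustration under stated assumptions: `part_centers` and `part_translations` are hypothetical stand-ins for what the network would predict, and the actual transformation fields blend learned per-part transforms rather than applying a single translation per part.

```python
import numpy as np

def piecewise_translation(query, part_centers, part_translations):
    """Map posed-space points toward rest pose via their nearest part's translation."""
    d = np.linalg.norm(query[:, None, :] - part_centers[None, :, :], axis=-1)
    part = d.argmin(axis=1)  # hard assignment to the closest part
    return query + part_translations[part]

# Two "parts" far apart, each carrying its own translation.
centers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
shifts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
queries = np.array([[0.1, 0.0, 0.0], [9.9, 0.0, 0.0]])
rest = piecewise_translation(queries, centers, shifts)
```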
arXiv Detail & Related papers (2021-04-16T15:16:09Z)
- Hamiltonian Dynamics for Real-World Shape Interpolation [66.47407593823208]
We revisit the classical problem of 3D shape interpolation and propose a novel, physically plausible approach based on Hamiltonian dynamics.
Our method yields exactly volume-preserving intermediate shapes, avoids self-intersections, and is scalable to high-resolution scans.
arXiv Detail & Related papers (2020-04-10T18:38:52Z)
- Deep Semantic Matching with Foreground Detection and Cycle-Consistency [103.22976097225457]
We address weakly supervised semantic matching based on a deep network.
We explicitly estimate the foreground regions to suppress the effect of background clutter.
We develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent.
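A cycle-consistency penalty of this kind can be illustrated on 2D affine transforms. This is a toy sketch, not the paper's actual loss: `T_ab` and `T_ba` are hypothetical predicted transformations, and the loss measures how far composing the forward and backward maps is from the identity.

```python
import numpy as np

def apply_affine(T, pts):
    """Apply a 3x3 homogeneous 2D affine transform to (N, 2) points."""
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return (h @ T.T)[:, :2]

def cycle_consistency_loss(T_ab, T_ba, pts):
    """Mean distance between each point and its image under the A->B->A round trip."""
    round_trip = apply_affine(T_ba, apply_affine(T_ab, pts))
    return np.mean(np.linalg.norm(round_trip - pts, axis=1))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
T_ab = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])       # shift by (1, 2)
T_ba_good = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])  # exact inverse
T_ba_bad = np.array([[1.0, 0.0, -1.1], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])   # inconsistent
```

A perfect inverse pair gives zero loss; geometrically inconsistent predictions are penalized in proportion to the residual displacement.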
arXiv Detail & Related papers (2020-03-31T22:38:09Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
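The idea of replacing raw Cartesian coordinates with rotation-invariant inputs can be sketched with two purely distance-based per-point features. The feature choice here (distance to the centroid, distance to the nearest neighbor) is an illustrative assumption, not the representation proposed in the paper.

```python
import numpy as np

def rotation_invariant_features(pts):
    """Per-point features built only from distances, hence invariant to rotation."""
    centroid = pts.mean(axis=0)
    d_centroid = np.linalg.norm(pts - centroid, axis=1)
    pairwise = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(pairwise, np.inf)  # ignore self-distances
    d_nearest = pairwise.min(axis=1)
    return np.stack([d_centroid, d_nearest], axis=1)

# Rotating the cloud leaves the features unchanged.
rng = np.random.default_rng(0)
pts = rng.random((16, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
feats = rotation_invariant_features(pts)
feats_rot = rotation_invariant_features(pts @ R.T)
```

Because distances are preserved by any rigid rotation, a network fed such features need not learn rotation invariance from data augmentation.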
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.