FUSE: A Flow-based Mapping Between Shapes
- URL: http://arxiv.org/abs/2511.13431v1
- Date: Mon, 17 Nov 2025 14:42:45 GMT
- Title: FUSE: A Flow-based Mapping Between Shapes
- Authors: Lorenzo Olearo, Giulio Viganò, Daniele Baieri, Filippo Maggioli, Simone Melzi
- Abstract summary: We introduce a novel neural representation for maps between 3D shapes based on flow-matching models. By encoding the shapes with a pointwise task-tailored embedding, this construction provides an invertible and modality-agnostic representation of maps between shapes. Our framework shows promising results in other tasks, including UV mapping and registration of raw point cloud scans of human bodies.
- Score: 8.666168571342284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel neural representation for maps between 3D shapes based on flow-matching models, which is computationally efficient and supports cross-representation shape matching without large-scale training or data-driven procedures. 3D shapes are represented as the probability distribution induced by a continuous and invertible flow mapping from a fixed anchor distribution. Given a source and a target shape, composing the inverse flow (source to anchor) with the forward flow (anchor to target) continuously maps points between the two surfaces. By encoding the shapes with a pointwise task-tailored embedding, this construction provides an invertible and modality-agnostic representation of maps between shapes across point clouds, meshes, signed distance fields (SDFs), and volumetric data. The resulting representation consistently achieves high coverage and accuracy across diverse benchmarks and challenging settings in shape matching. Beyond shape matching, our framework shows promising results in other tasks, including UV mapping and registration of raw point cloud scans of human bodies.
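The composition described in the abstract can be sketched with a toy example. Here each learned flow-matching model is replaced by a hypothetical invertible affine map from the anchor space to a shape's space; the function names and the affine stand-ins are illustrative assumptions, not the paper's actual architecture, but they show the inverse-then-forward composition and its invertibility.

```python
import numpy as np

def make_affine_flow(scale, shift):
    """Toy invertible flow: x = scale * z + shift (anchor z -> shape x).

    Stands in for the paper's flow-matching model; returns the forward
    map and its exact inverse."""
    scale = np.asarray(scale, dtype=float)
    shift = np.asarray(shift, dtype=float)
    forward = lambda z: scale * z + shift       # anchor -> shape
    inverse = lambda x: (x - shift) / scale     # shape -> anchor
    return forward, inverse

def map_between_shapes(x_src, src_inverse, tgt_forward):
    """Compose the inverse source flow with the forward target flow."""
    z = src_inverse(x_src)   # pull the point back to the anchor space
    return tgt_forward(z)    # push it forward onto the target shape

# Two toy "shapes": the anchor space scaled/translated two different ways.
src_fwd, src_inv = make_affine_flow(scale=2.0, shift=[1.0, 0.0, 0.0])
tgt_fwd, tgt_inv = make_affine_flow(scale=0.5, shift=[0.0, 3.0, 0.0])

x = np.array([3.0, 2.0, -2.0])                    # point on the source shape
y = map_between_shapes(x, src_inv, tgt_fwd)       # mapped to the target shape
x_back = map_between_shapes(y, tgt_inv, src_fwd)  # invertibility check
```

Because both legs of the composition are invertible, mapping a point to the target shape and back recovers the original point exactly, which is the property that makes the representation an invertible map between shapes.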
Related papers
- Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform [62.27337227010514]
We introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform, dubbed RIST.
RIST learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations.
RIST demonstrates state-of-the-art performances on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs.
arXiv Detail & Related papers (2024-04-17T08:09:25Z) - Spectral Meets Spatial: Harmonising 3D Shape Matching and Interpolation [50.376243444909136]
We present a unified framework to predict both point-wise correspondences and shape interpolation between 3D shapes.
We combine the deep functional map framework with classical surface deformation models to map shapes in both spectral and spatial domains.
arXiv Detail & Related papers (2024-02-29T07:26:23Z) - Deformation-Guided Unsupervised Non-Rigid Shape Matching [7.327850781641328]
We present an unsupervised data-driven approach for non-rigid shape matching.
Our approach is particularly robust when matching digitized shapes using 3D scanners.
arXiv Detail & Related papers (2023-11-27T09:55:55Z) - Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach generates semantic surface-to-surface maps without requiring manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z) - Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching [15.050801537501462]
We introduce a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach obtains intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds.
We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets.
arXiv Detail & Related papers (2023-03-20T09:47:02Z) - Contour Context: Abstract Structural Distribution for 3D LiDAR Loop Detection and Metric Pose Estimation [31.968749056155467]
This paper proposes a simple, effective, and efficient topological loop closure detection pipeline with accurate 3-DoF metric pose estimation.
We interpret the Cartesian bird's eye view (BEV) image projected from 3D LiDAR points as a layered distribution of structures.
A retrieval key is designed to accelerate the search of a database indexed by layered KD-trees.
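The retrieval-key idea above can be illustrated with a minimal sketch, assuming each database scan is summarized by a fixed-length descriptor indexed in a KD-tree; the random 10-D vectors below are hypothetical stand-ins for the paper's layered BEV statistics.

```python
import numpy as np
from scipy.spatial import cKDTree

# One hypothetical retrieval key per stored scan (random stand-ins for
# the paper's layered BEV descriptors).
rng = np.random.default_rng(0)
database_keys = rng.normal(size=(500, 10))
tree = cKDTree(database_keys)  # index for fast nearest-neighbour search

# A revisit produces a slightly perturbed key for scan 42; querying the
# tree yields top-k loop-closure candidates for downstream pose checks.
query_key = database_keys[42] + 0.01 * rng.normal(size=10)
dists, idx = tree.query(query_key, k=5)
```

The KD-tree keeps candidate retrieval cheap, so only the handful of returned scans need the more expensive metric pose verification step.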
arXiv Detail & Related papers (2023-02-13T07:18:24Z) - SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works employ a global feature extracted from the sketch to directly predict 3D coordinates, but they usually lose fine details and are thus not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z) - KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions.
In each of those planes, point properties like normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation with an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
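The plane-wise aggregation step described above can be sketched as follows. This is a minimal sketch under assumed conventions (a single xy-plane, mean absolute point-to-plane distance as the per-cell property); the function name and grid layout are illustrative, not KAPLAN's actual implementation.

```python
import numpy as np

def aggregate_to_grid(points, grid_size=8, extent=1.0):
    """Bin 3D points into a 2D grid over the xy-plane, storing the mean
    point-to-plane distance (|z|) per cell, ready for a 2D conv encoder."""
    grid = np.zeros((grid_size, grid_size))
    counts = np.zeros((grid_size, grid_size))
    # Map x, y in [-extent, extent) to integer cell indices.
    ij = np.floor((points[:, :2] + extent) / (2 * extent) * grid_size).astype(int)
    ij = np.clip(ij, 0, grid_size - 1)
    for (i, j), z in zip(ij, np.abs(points[:, 2])):
        grid[i, j] += z
        counts[i, j] += 1
    mask = counts > 0
    grid[mask] /= counts[mask]  # average within each occupied cell
    return grid

pts = np.array([[0.1, 0.1, 0.5], [0.1, 0.1, 0.3], [-0.9, -0.9, 0.2]])
feat = aggregate_to_grid(pts, grid_size=4)
```

The resulting 2D grid plays the role of a small image, which is what lets an efficient 2D convolutional encoder abstract local 3D shape information.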
arXiv Detail & Related papers (2020-07-31T21:56:08Z) - Instant recovery of shape from spectrum via latent space connections [33.83258865005668]
We introduce the first learning-based method for recovering shapes from Laplacian spectra.
Built on an auto-encoder, our model takes the form of a cycle-consistent module that maps latent vectors to sequences of eigenvalues.
Our data-driven approach replaces the need for ad-hoc regularizers required by prior methods, while providing more accurate results at a fraction of the computational cost.
arXiv Detail & Related papers (2020-03-14T00:48:34Z) - Learning 3D Human Shape and Pose from Dense Body Parts [117.46290013548533]
We propose a Decompose-and-aggregate Network (DaNet) to learn 3D human shape and pose from dense correspondences of body parts.
Messages from local streams are aggregated to enhance the robust prediction of the rotation-based poses.
Our method is validated on both indoor and real-world datasets including Human3.6M, UP3D, COCO, and 3DPW.
arXiv Detail & Related papers (2019-12-31T15:09:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.