Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching
- URL: http://arxiv.org/abs/2303.10971v1
- Date: Mon, 20 Mar 2023 09:47:02 GMT
- Title: Self-Supervised Learning for Multimodal Non-Rigid 3D Shape Matching
- Authors: Dongliang Cao, Florian Bernard
- Abstract summary: We introduce a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach makes it possible to obtain intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds.
We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets.
- Score: 15.050801537501462
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The matching of 3D shapes has been extensively studied for shapes represented
as surface meshes, as well as for shapes represented as point clouds. While
point clouds are a common representation of raw real-world 3D data (e.g. from
laser scanners), meshes encode rich and expressive topological information, but
their creation typically requires some form of (often manual) curation. In
turn, methods that purely rely on point clouds are unable to meet the matching
quality of mesh-based methods that utilise the additional topological
structure. In this work we close this gap by introducing a self-supervised
multimodal learning strategy that combines mesh-based functional map
regularisation with a contrastive loss that couples mesh and point cloud data.
Our shape matching approach makes it possible to obtain intramodal correspondences for
triangle meshes, complete point clouds, and partially observed point clouds, as
well as correspondences across these data modalities. We demonstrate that our
method achieves state-of-the-art results on several challenging benchmark
datasets even in comparison to recent supervised methods, and that our method
reaches previously unseen cross-dataset generalisation ability.
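The abstract's two technical ingredients, a contrastive loss that couples mesh and point-cloud features and a mesh-based functional-map term, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the InfoNCE-style formulation, and the plain least-squares functional-map solve (using a pseudo-inverse in place of a mass-weighted spectral projection) are all assumptions for illustration.

```python
import numpy as np

def info_nce_loss(mesh_feat, pc_feat, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss coupling per-point mesh
    features with point-cloud features; row i of both (n, d) arrays is
    assumed to describe the same surface point in the two modalities."""
    m = mesh_feat / np.linalg.norm(mesh_feat, axis=1, keepdims=True)
    p = pc_feat / np.linalg.norm(pc_feat, axis=1, keepdims=True)
    logits = m @ p.T / temperature            # (n, n) cosine similarities
    diag = np.arange(len(m))                  # positives on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_softmax = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_softmax[diag, diag].mean()

    # average over both matching directions (mesh->cloud and cloud->mesh)
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def functional_map(basis_x, basis_y, desc_x, desc_y):
    """Least-squares functional map C with C @ a_x ~= a_y, where a_* are
    descriptor coefficients in each shape's reduced (e.g. Laplacian)
    eigenbasis. Simplified: coefficients are obtained via pseudo-inverse
    rather than a mass-matrix-weighted projection."""
    a_x = np.linalg.pinv(basis_x) @ desc_x    # (k, q) spectral coefficients
    a_y = np.linalg.pinv(basis_y) @ desc_y
    C_t, *_ = np.linalg.lstsq(a_x.T, a_y.T, rcond=None)
    return C_t.T                              # (k, k) functional map
```

In a training loop both pieces would be computed on learned features and backpropagated through; the functional-map literature additionally regularises C (e.g. near-orthogonality, Laplacian commutativity), which is omitted here.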
Related papers
- RealDiff: Real-world 3D Shape Completion using Self-Supervised Diffusion Models [15.209079637302905]
We propose a self-supervised framework, namely RealDiff, that formulates point cloud completion as a conditional generation problem directly on real-world measurements.
Specifically, RealDiff simulates a diffusion process at the missing object parts while conditioning the generation on the partial input to address the multimodal nature of the task.
Experimental results show that our method consistently outperforms state-of-the-art methods in real-world point cloud completion.
arXiv Detail & Related papers (2024-09-16T11:18:57Z) - Unsupervised Non-Rigid Point Cloud Matching through Large Vision Models [1.3030624795284795]
We propose a learning-based framework for non-rigid point cloud matching.
The key insight is to incorporate semantic features derived from large vision models (LVMs).
Our framework effectively leverages the structural information contained in the semantic features to address ambiguities arising from self-similarities among local geometries.
arXiv Detail & Related papers (2024-08-16T07:02:19Z) - Unsupervised Representation Learning for Diverse Deformable Shape Collections [30.271818994854353]
We introduce a novel learning-based method for encoding and manipulating 3D surface meshes.
Our method is specifically designed to create an interpretable embedding space for deformable shape collections.
arXiv Detail & Related papers (2023-10-27T13:45:30Z) - Flow-based GAN for 3D Point Cloud Generation from a Single Image [16.04710129379503]
We introduce a hybrid explicit-implicit generative modeling scheme, which inherits the flow-based explicit generative models for sampling point clouds with arbitrary resolutions.
We evaluate on the large-scale synthetic dataset ShapeNet, with the experimental results demonstrating the superior performance of the proposed method.
arXiv Detail & Related papers (2022-10-08T17:58:20Z) - Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z) - DFC: Deep Feature Consistency for Robust Point Cloud Registration [0.4724825031148411]
We present a novel learning-based alignment network for complex registration scenes.
We validate our approach on the 3DMatch dataset and the KITTI odometry dataset.
arXiv Detail & Related papers (2021-11-15T08:27:21Z) - Concentric Spherical GNN for 3D Representation Learning [53.45704095146161]
We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps.
Our hierarchical architecture is based on alternatively learning to incorporate both intra-sphere and inter-sphere information.
We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
arXiv Detail & Related papers (2021-03-18T19:05:04Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - Weakly-supervised 3D Shape Completion in the Wild [91.04095516680438]
We address the problem of learning complete 3D shapes from unaligned, real-world partial point clouds.
We propose a weakly-supervised method to estimate both 3D canonical shape and 6-DoF pose for alignment, given multiple partial observations.
Experiments on both synthetic and real data show that it is feasible and promising to learn 3D shape completion through large-scale data without shape and pose supervision.
arXiv Detail & Related papers (2020-08-20T17:53:42Z) - Shape-Oriented Convolution Neural Network for Point Cloud Analysis [59.405388577930616]
The point cloud is a principal data structure for encoding 3D geometric information.
A shape-oriented message-passing scheme dubbed ShapeConv is proposed to focus on representation learning of the underlying shape formed by each local neighbourhood of points.
arXiv Detail & Related papers (2020-04-20T16:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.