Zero-Shot 3D Shape Correspondence
- URL: http://arxiv.org/abs/2306.03253v2
- Date: Wed, 27 Sep 2023 10:33:22 GMT
- Title: Zero-Shot 3D Shape Correspondence
- Authors: Ahmed Abdelreheem, Abdelrahman Eldesokey, Maks Ovsjanikov, Peter Wonka
- Abstract summary: We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
- Score: 67.18775201037732
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a novel zero-shot approach to computing correspondences between 3D
shapes. Existing approaches mainly focus on isometric and near-isometric shape
pairs (e.g., human vs. human), but less attention has been given to strongly
non-isometric and inter-class shape matching (e.g., human vs. cow). To this
end, we introduce a fully automatic method that exploits the exceptional
reasoning capabilities of recent foundation models in language and vision to
tackle difficult shape correspondence problems. Our approach comprises multiple
stages. First, we classify the 3D shapes in a zero-shot manner by feeding
rendered shape views to a language-vision model (e.g., BLIP2) to generate a
list of class proposals per shape. These proposals are unified into a single
class per shape by employing the reasoning capabilities of ChatGPT. Second, we
attempt to segment the two shapes in a zero-shot manner, but in contrast to the
co-segmentation problem, we do not require a mutual set of semantic regions.
Instead, we propose to exploit the in-context learning capabilities of ChatGPT
to generate two different sets of semantic regions for each shape and a
semantic mapping between them. This enables our approach to match strongly
non-isometric shapes with significant differences in geometric structure.
Finally, we employ the generated semantic mapping to produce coarse
correspondences that can further be refined by the functional maps framework to
produce dense point-to-point maps. Our approach, despite its simplicity,
produces highly plausible results in a zero-shot manner, especially between
strongly non-isometric shapes. Project webpage:
https://samir55.github.io/3dshapematch/.
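Read as pseudocode, the abstract's pipeline reduces to three stages. The Python below is a minimal sketch of that control flow, not the authors' implementation: `render_views`, `caption_with_blip2`, and `ask_chatgpt` are hypothetical stand-ins for the renderer and the foundation-model queries, and the functional-map refinement is only noted in a comment.

```python
# Minimal sketch of the three-stage zero-shot matching pipeline described above.
# All helper functions are hypothetical stand-ins; only the control flow
# mirrors the paper's description.
from typing import Dict, List


def render_views(mesh_path: str, n_views: int = 8) -> List[object]:
    """Hypothetical: render `n_views` images of the mesh from viewpoints on a sphere."""
    raise NotImplementedError


def caption_with_blip2(view: object) -> str:
    """Hypothetical: ask a language-vision model (e.g., BLIP2) what object the view shows."""
    raise NotImplementedError


def ask_chatgpt(prompt: str) -> str:
    """Hypothetical: one round-trip to a chat LLM (e.g., ChatGPT)."""
    raise NotImplementedError


def classify_shape(mesh_path: str) -> str:
    """Stage 1: per-view class proposals, unified into a single class by the LLM."""
    proposals = [caption_with_blip2(v) for v in render_views(mesh_path)]
    return ask_chatgpt(
        f"These class proposals all describe one 3D shape: {proposals}. "
        "Reply with the single most plausible class name."
    )


def semantic_region_mapping(class_a: str, class_b: str) -> Dict[str, str]:
    """Stage 2: two (possibly different) region sets plus a mapping between them.

    Unlike co-segmentation, the two shapes need not share one set of regions.
    """
    answer = ask_chatgpt(
        f"List the main semantic regions of a {class_a} and of a {class_b}, "
        "then map corresponding regions, one 'regionA -> regionB' per line."
    )
    mapping = {}
    for line in answer.splitlines():
        if "->" in line:
            src, dst = (s.strip() for s in line.split("->", 1))
            mapping[src] = dst
    return mapping


def match_shapes(mesh_a: str, mesh_b: str) -> Dict[str, str]:
    """Stage 3 (coarse part): classify both shapes, then derive the region-level map.

    In the paper, the region-level matches yield coarse point correspondences
    that the functional maps framework refines into a dense point-to-point
    map; that refinement is not sketched here.
    """
    return semantic_region_mapping(classify_shape(mesh_a), classify_shape(mesh_b))
```

The region-level mapping returned here plays the role of the paper's coarse correspondences; densifying it into a point-to-point map is delegated to an off-the-shelf functional maps implementation.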
Related papers
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
For robust 3D tracking, we propose a synthetic target representation: dense and complete point clouds, obtained via shape completion, that depict the target shape precisely.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- Geometrically-driven Aggregation for Zero-shot 3D Point Cloud Understanding [11.416392706435415]
Zero-shot 3D point cloud understanding can be achieved via 2D Vision-Language Models (VLMs).
Existing strategies directly map Vision-Language Models from 2D pixels of rendered or captured views to 3D points, overlooking the inherent and expressible point cloud geometric structure.
We introduce the first training-free aggregation technique that leverages the point cloud's 3D geometric structure to improve the quality of the transferred Vision-Language Models.
arXiv Detail & Related papers (2023-12-04T12:30:07Z)
- Unsupervised Representation Learning for Diverse Deformable Shape Collections [30.271818994854353]
We introduce a novel learning-based method for encoding and manipulating 3D surface meshes.
Our method is specifically designed to create an interpretable embedding space for deformable shape collections.
arXiv Detail & Related papers (2023-10-27T13:45:30Z)
- Geometrically Consistent Partial Shape Matching [50.29468769172704]
Finding correspondences between 3D shapes is a crucial problem in computer vision and graphics.
An often neglected but essential property of shape matching is geometric consistency.
We propose a novel integer linear programming partial shape matching formulation.
arXiv Detail & Related papers (2023-09-10T12:21:42Z)
- Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps without requiring manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z)
- Topologically-Aware Deformation Fields for Single-View 3D Reconstruction [30.738926104317514]
We present a new framework for learning 3D object shapes and dense cross-object 3D correspondences from just an unaligned category-specific image collection.
The 3D shapes are generated implicitly as deformations to a category-specific signed distance field.
Our approach, dubbed TARS, achieves state-of-the-art reconstruction fidelity on several datasets.
arXiv Detail & Related papers (2022-05-12T17:59:59Z)
- Learning 3D Dense Correspondence via Canonical Point Autoencoder [108.20735652143787]
We propose a canonical point autoencoder (CPAE) that predicts dense correspondences between 3D shapes of the same category.
The autoencoder performs two key functions: (a) encoding an arbitrarily ordered point cloud to a canonical primitive, and (b) decoding the primitive back to the original input instance shape.
arXiv Detail & Related papers (2021-07-10T15:54:48Z)
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.