Unsupervised Dense Deformation Embedding Network for Template-Free Shape
Correspondence
- URL: http://arxiv.org/abs/2108.11609v1
- Date: Thu, 26 Aug 2021 07:07:19 GMT
- Title: Unsupervised Dense Deformation Embedding Network for Template-Free Shape
Correspondence
- Authors: Ronghan Chen, Yang Cong, Jiahua Dong
- Abstract summary: Current deep learning based methods require the supervision of dense annotations to learn per-point translations.
We develop a new Unsupervised Dense Deformation Embedding Network (UD^2E-Net), which learns to predict deformations between non-rigid shapes from dense local features.
Our UD2E-Net outperforms state-of-the-art unsupervised methods by 24% on Faust Inter challenge and even supervised methods by 13% on Faust Intra challenge.
- Score: 18.48814403488283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shape correspondence from 3D deformation learning has recently attracted
growing academic interest. Nevertheless, current deep learning based methods
require dense annotations as supervision to learn per-point translations,
which severely over-parameterize the deformation process. Moreover, they fail to
capture the local geometric details of the original shape through global feature
embeddings. To address these challenges, we develop a new Unsupervised Dense
Deformation Embedding Network (UD^2E-Net), which learns to predict deformations
between non-rigid shapes from dense local features. Since it is non-trivial to
match deformation-variant local features for deformation prediction, we develop
an Extrinsic-Intrinsic Autoencoder that first encodes extrinsic geometric features
from the source shape into intrinsic coordinates in a shared canonical shape, from
which the decoder then synthesizes the corresponding target features. Moreover, a
bounded maximum mean discrepancy loss is developed to mitigate the distribution
divergence between the synthesized and original features. To learn natural
deformations without dense supervision, we introduce a coarse parameterized
deformation graph, for which a novel trace-and-propagation algorithm is
proposed to improve both the quality and efficiency of the deformation. Our
UD^2E-Net outperforms state-of-the-art unsupervised methods by 24% on the Faust
Inter challenge and even supervised methods by 13% on the Faust Intra challenge.
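The abstract names a bounded maximum mean discrepancy (MMD) loss for aligning the synthesized and original feature distributions. The NumPy sketch below shows a standard RBF-kernel MMD with a simple clamp standing in for the bounded formulation; the kernel choice, bandwidth, and clamping scheme are assumptions, since the abstract does not specify them.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between feature sets of shape (N, D) and (M, D)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def bounded_mmd(x, y, sigma=1.0, bound=1.0):
    """Squared MMD between feature sets x and y, clipped to [0, bound].
    The clipping is only a stand-in for the paper's bounded variant."""
    mmd2 = (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())
    return float(np.clip(mmd2, 0.0, bound))

synthesized = np.random.randn(128, 64)  # e.g., decoder-synthesized target features
original = np.random.randn(128, 64)     # e.g., features extracted from the target shape
print(bounded_mmd(synthesized, original))
```

In practice such a distribution-matching term would be minimized alongside the reconstruction losses so that synthesized target features become statistically indistinguishable from real ones.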
Related papers
- Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform [62.27337227010514]
We introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform, dubbed RIST.
RIST learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations.
RIST demonstrates state-of-the-art performance on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs.
arXiv Detail & Related papers (2024-04-17T08:09:25Z)
- TextDeformer: Geometry Manipulation using Text Guidance [37.02412892926677]
We present a technique for producing a deformation of an input triangle mesh guided solely by a text prompt.
Our framework relies on differentiable rendering to connect geometry to powerful pre-trained image encoders, such as CLIP and DINO.
To overcome this limitation, we represent the mesh deformation through Jacobians, which update the deformation in a global, smooth manner.
arXiv Detail & Related papers (2023-04-26T07:38:41Z)
- Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- 3D Unsupervised Region-Aware Registration Transformer [13.137287695912633]
Learning robust point cloud registration models with deep neural networks has emerged as a powerful paradigm.
We propose a new 3D region partition module that divides the input shape into different regions using a self-supervised 3D shape reconstruction loss.
Our experiments show that our 3D-URRT achieves superior registration performance across various benchmark datasets.
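The summary above mentions a self-supervised 3D shape reconstruction loss for the region partition module. A Chamfer distance, sketched below in NumPy, is a common choice for such point-cloud reconstruction losses; the exact loss used in the paper is not given in the abstract, so this is only an illustrative assumption.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3):
    average nearest-neighbour squared distance in both directions."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

reconstructed = np.random.rand(256, 3)  # hypothetical output of a reconstruction decoder
target = np.random.rand(256, 3)         # the input shape it should reproduce
print(chamfer_distance(reconstructed, target))
```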
arXiv Detail & Related papers (2021-10-07T15:06:52Z)
- NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes two 3D shapes as input.
In a single pass, NeuroMorph produces a smooth interpolation and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
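The summary describes finding canonical correspondences of a deformed point by iterative root finding on a forward skinning map. The toy NumPy sketch below inverts a two-bone linear blend skinning map with Newton iterations; the bone setup, Gaussian skinning weights, and single-root search are illustrative assumptions (the paper itself searches for multiple roots rather than one).

```python
import numpy as np

# Hypothetical two-bone rig: per-bone rotation R and translation t, with soft
# skinning weights defined by distance to the bone centers in canonical space.
BONE_CENTERS = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
BONE_ROTS = [np.eye(3), np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])]
BONE_TRANS = [np.zeros(3), np.array([0.1, 0.2, 0.0])]

def weights(x_c, sigma=0.5):
    """Soft skinning weights from distance to the bone centers (toy choice)."""
    d2 = np.sum((BONE_CENTERS - x_c) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()

def lbs(x_c):
    """Forward linear blend skinning of a canonical point x_c."""
    w = weights(x_c)
    return sum(w[i] * (BONE_ROTS[i] @ x_c + BONE_TRANS[i]) for i in range(len(w)))

def canonical_correspondence(x_d, iters=50, eps=1e-10):
    """Iteratively solve lbs(x_c) = x_d via Newton steps, starting from x_d."""
    x_c = x_d.copy()
    for _ in range(iters):
        r = lbs(x_c) - x_d  # residual of the forward skinning map
        if np.linalg.norm(r) < eps:
            break
        # Central finite-difference Jacobian of the forward map (3x3).
        J = np.stack([(lbs(x_c + h) - lbs(x_c - h)) / 2e-4
                      for h in 1e-4 * np.eye(3)], axis=1)
        x_c = x_c - np.linalg.solve(J, r)  # Newton step toward a root
    return x_c

x_d = lbs(np.array([0.3, 0.1, 0.0]))    # deform a known canonical point
print(canonical_correspondence(x_d))     # should recover roughly [0.3, 0.1, 0.0]
```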
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- CorrNet3D: Unsupervised End-to-end Learning of Dense Correspondence for 3D Point Clouds [48.22275177437932]
This paper addresses the problem of computing dense correspondence between 3D shapes in the form of point clouds.
We propose CorrNet3D, the first unsupervised and end-to-end deep learning-based framework for this task.
arXiv Detail & Related papers (2020-12-31T14:55:51Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
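The summary describes replacing Cartesian coordinates with low-level, purely rotation-invariant inputs. The NumPy sketch below builds a toy per-point descriptor from neighbour distances and angles measured against the direction to the shape centroid; this specific construction (k-nearest neighbours, centroid reference) is an illustrative assumption, not the representation proposed in the paper.

```python
import numpy as np

def rotation_invariant_features(points, k=8):
    """Toy per-point rotation-invariant descriptor: distances and angle cosines
    to the k nearest neighbours, measured relative to the shape centroid."""
    centroid = points.mean(axis=0)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # pairwise distances
    knn = np.argsort(d, axis=1)[:, 1:k + 1]  # nearest neighbours, skipping self at index 0
    feats = []
    for i, nbrs in enumerate(knn):
        v_c = centroid - points[i]            # direction toward the centroid
        v_n = points[nbrs] - points[i]        # vectors to the neighbours
        dist = np.linalg.norm(v_n, axis=1)    # invariant: neighbour distances
        cos = v_n @ v_c / (dist * np.linalg.norm(v_c) + 1e-9)  # invariant: angle cosines
        feats.append(np.concatenate([dist, cos]))
    return np.stack(feats)                    # (N, 2k), unchanged under rotation

pts = np.random.rand(32, 3)
R, _ = np.linalg.qr(np.random.randn(3, 3))    # random orthogonal matrix (rotation or reflection)
f1 = rotation_invariant_features(pts)
f2 = rotation_invariant_features(pts @ R.T)
print(np.allclose(f1, f2))                     # True: the features do not change under rotation
```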
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.