MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point
Contrastive Learning
- URL: http://arxiv.org/abs/2304.13819v2
- Date: Wed, 11 Oct 2023 18:39:59 GMT
- Title: MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point
Contrastive Learning
- Authors: Jiaze Sun, Zhixiang Chen, Tae-Kyun Kim
- Abstract summary: 3D pose transfer is a challenging generation task that aims to transfer the pose of a source geometry onto a target geometry with the target identity preserved.
Current pose transfer methods allow end-to-end correspondence learning but require the desired final output as ground truth for supervision.
We present a novel self-supervised framework for 3D pose transfer which can be trained in unsupervised, semi-supervised, or fully supervised settings.
- Score: 32.97354536302333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D pose transfer is a challenging generation task that aims to transfer the
pose of a source geometry onto a target geometry with the target identity
preserved. Many prior methods require keypoint annotations to find
correspondence between the source and target. Current pose transfer methods
allow end-to-end correspondence learning but require the desired final output
as ground truth for supervision. Unsupervised methods have been proposed for
graph convolutional models but they require ground truth correspondence between
the source and target inputs. We present a novel self-supervised framework for
3D pose transfer which can be trained in unsupervised, semi-supervised, or
fully supervised settings without any correspondence labels. We introduce two
contrastive learning constraints in the latent space: a mesh-level loss for
disentangling global patterns including pose and identity, and a point-level
loss for discriminating local semantics. We demonstrate quantitatively and
qualitatively that our method achieves state-of-the-art results in supervised
3D pose transfer, with comparable results in unsupervised and semi-supervised
settings. Our method is also generalisable to unseen human and animal data with
complex topologies.
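The two constraints the abstract describes (a mesh-level loss disentangling global pose/identity and a point-level loss discriminating local semantics) are both instances of contrastive learning in latent space. As a rough illustration only, not the authors' actual code, the generic InfoNCE-style form underlying such losses can be sketched as follows; all function and variable names here are hypothetical:

```python
# Hedged sketch: a generic InfoNCE-style contrastive loss of the kind
# that could serve as either a mesh-level (global latent) or
# point-level (per-point latent) constraint. Illustrative only.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    """Pull `anchor` towards `positive`, push it away from `negatives`.

    anchor:    (D,)   latent of the generated mesh or point
    positive:  (D,)   latent sharing the attribute (e.g. same pose)
    negatives: (N, D) latents to be discriminated against
    """
    a = F.normalize(anchor, dim=0)
    pos = F.normalize(positive, dim=0)
    negs = F.normalize(negatives, dim=1)
    # Similarity logits: positive pair first, then all negatives.
    logits = torch.cat([(a * pos).sum().view(1), negs @ a]) / temperature
    # Cross-entropy against class 0, i.e. the positive pair.
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))
```

Applied at the mesh level, anchor/positive would be global latents of meshes sharing pose (or identity); at the point level, per-point features of semantically corresponding points.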
Related papers
- Weakly-supervised 3D Pose Transfer with Keypoints [57.66991032263699]
Main challenges of 3D pose transfer are: 1) Lack of paired training data with different characters performing the same pose; 2) Disentangling pose and shape information from the target mesh; 3) Difficulty in applying to meshes with different topologies.
We propose a novel weakly-supervised keypoint-based framework to overcome these difficulties.
arXiv Detail & Related papers (2023-07-25T12:40:24Z)
- Unsupervised 3D Pose Transfer with Cross Consistency and Dual Reconstruction [50.94171353583328]
The goal of 3D pose transfer is to transfer the pose from the source mesh to the target mesh while preserving the identity information.
Deep learning-based methods have improved the efficiency and performance of 3D pose transfer.
We present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer.
arXiv Detail & Related papers (2022-11-18T15:09:56Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- Non-Local Latent Relation Distillation for Self-Adaptive 3D Human Pose Estimation [63.199549837604444]
3D human pose estimation approaches leverage different forms of strong (2D/3D pose) or weak (multi-view or depth) paired supervision.
We cast 3D pose learning as a self-supervised adaptation problem that aims to transfer the task knowledge from a labeled source domain to a completely unpaired target.
We evaluate different self-adaptation settings and demonstrate state-of-the-art 3D human pose estimation performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-05T03:52:57Z)
- Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery [70.66865453410958]
Articulation-centric 2D/3D pose supervision forms the core training objective in most existing 3D human pose estimation techniques.
We propose a novel framework that relies only on silhouette supervision to adapt a source-trained model-based regressor.
We develop a series of convolution-friendly spatial transformations in order to disentangle a topological-skeleton representation from the raw silhouette.
arXiv Detail & Related papers (2022-04-04T06:58:15Z)
- Unsupervised Geodesic-preserved Generative Adversarial Networks for Unconstrained 3D Pose Transfer [84.04540436494011]
We present an unsupervised approach to pose transfer between arbitrary given 3D meshes.
Specifically, a novel Intrinsic-Extrinsic Preserved Generative Adversarial Network (IEP-GAN) is presented for both intrinsic (i.e., shape) and extrinsic (i.e., pose) information preservation.
Our proposed model produces better results and is substantially more efficient compared to recent state-of-the-art methods.
arXiv Detail & Related papers (2021-08-17T09:08:21Z)
- MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation [4.202461384355329]
We propose MonoRUn, a novel 3D object detection framework that learns dense correspondences and geometry in a self-supervised manner.
Our proposed approach outperforms current state-of-the-art methods on KITTI benchmark.
arXiv Detail & Related papers (2021-03-23T15:03:08Z)