Weakly-supervised 3D Pose Transfer with Keypoints
- URL: http://arxiv.org/abs/2307.13459v2
- Date: Thu, 17 Aug 2023 06:02:11 GMT
- Title: Weakly-supervised 3D Pose Transfer with Keypoints
- Authors: Jinnan Chen, Chen Li, Gim Hee Lee
- Abstract summary: The main challenges of 3D pose transfer are: 1) Lack of paired training data with different characters performing the same pose; 2) Disentangling pose and shape information from the target mesh; 3) Difficulty in applying to meshes with different topologies.
We propose a novel weakly-supervised keypoint-based framework to overcome these difficulties.
- Score: 57.66991032263699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main challenges of 3D pose transfer are: 1) Lack of paired training data
with different characters performing the same pose; 2) Disentangling pose and
shape information from the target mesh; 3) Difficulty in applying to meshes
with different topologies. We thus propose a novel weakly-supervised
keypoint-based framework to overcome these difficulties. Specifically, we use a
topology-agnostic keypoint detector with inverse kinematics to compute
transformations between the source and target meshes. Our method requires
supervision only on the keypoints, can be applied to meshes with different
topologies, and is shape-invariant with respect to the target, which allows it to
extract pose-only information from the target meshes without transferring shape
information. We further design a cycle reconstruction scheme to perform
self-supervised pose transfer without the need for a ground-truth deformed mesh
that has the same pose as the target and the same shape as the source. We
evaluate our approach on benchmark human and animal datasets, where we achieve
superior performance compared to the state-of-the-art unsupervised approaches
and even performance comparable to that of fully supervised approaches. We also test
on the more challenging Mixamo dataset to verify our approach's ability to handle
meshes with different topologies and complex clothing. Cross-dataset
evaluation further shows the strong generalization ability of our approach.
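To make the two key ingredients described above concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' code): a topology-agnostic keypoint detector that predicts keypoints as convex combinations of the input vertices, and a cycle-reconstruction loss that provides self-supervision without a ground-truth deformed mesh. The `transfer_net` interface, module names, and all sizes are assumptions for illustration; in the paper, the deformation is driven by inverse-kinematics transformations computed between source and target keypoints, and the keypoints are the only supervised quantity.

```python
# Illustrative sketch only; module names, shapes, and the transfer_net
# interface are assumptions, not the authors' released implementation.
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    """Toy topology-agnostic detector: predicts K keypoints as convex
    combinations of the input vertices, so it is independent of vertex
    count and connectivity."""
    def __init__(self, num_keypoints=24, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_keypoints),
        )

    def forward(self, verts):                                 # verts: (B, V, 3)
        weights = self.mlp(verts).softmax(dim=1)              # (B, V, K), sums to 1 over V
        return torch.einsum("bvk,bvc->bkc", weights, verts)   # keypoints: (B, K, 3)


def cycle_reconstruction_loss(transfer_net, detector, src_verts, tgt_verts):
    """Self-supervised cycle: deform the source to the target pose, then
    deform the result back to the source pose and compare with the input.
    transfer_net(identity_verts, driving_keypoints) is an assumed interface
    that returns the identity mesh re-posed according to the keypoints."""
    kp_src = detector(src_verts)
    kp_tgt = detector(tgt_verts)
    warped = transfer_net(src_verts, kp_tgt)   # source identity, target pose
    cycled = transfer_net(warped, kp_src)      # transfer the original pose back
    return nn.functional.l1_loss(cycled, src_verts)
```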
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- SkelFormer: Markerless 3D Pose and Shape Estimation using Skeletal Transformers [57.46911575980854]
We introduce SkelFormer, a novel markerless motion capture pipeline for multi-view human pose and shape estimation.
Our method first uses off-the-shelf 2D keypoint estimators, pre-trained on large-scale in-the-wild data, to obtain 3D joint positions.
Next, we design a regression-based inverse-kinematic skeletal transformer that maps the joint positions to pose and shape representations from heavily noisy observations.
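As a rough, hypothetical sketch of the second stage of this pipeline (the layer sizes, per-joint 6D rotation output, and shape-vector dimension are all assumptions, not SkelFormer's actual architecture), a plain transformer encoder can regress pose and shape parameters from noisy 3D joint positions:

```python
# Illustrative joints-to-(pose, shape) regressor; not the published model.
import torch
import torch.nn as nn

class SkeletalRegressor(nn.Module):
    def __init__(self, d_model=256, num_shape_params=10):
        super().__init__()
        self.embed = nn.Linear(3, d_model)                      # one token per joint
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.pose_head = nn.Linear(d_model, 6)                  # per-joint 6D rotation
        self.shape_head = nn.Linear(d_model, num_shape_params)  # global shape vector

    def forward(self, joints):                                  # joints: (B, J, 3)
        tokens = self.encoder(self.embed(joints))               # (B, J, d_model)
        pose = self.pose_head(tokens)                           # (B, J, 6)
        shape = self.shape_head(tokens.mean(dim=1))             # (B, num_shape_params)
        return pose, shape
```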
arXiv Detail & Related papers (2024-04-19T04:51:18Z)
- Diffusion-Driven Self-Supervised Learning for Shape Reconstruction and Pose Estimation [26.982199143972835]
We introduce a diffusion-driven self-supervised network for multi-object shape reconstruction and categorical pose estimation.
Our method significantly outperforms state-of-the-art self-supervised category-level baselines and even surpasses some fully-supervised instance-level and category-level methods.
arXiv Detail & Related papers (2024-03-19T13:43:27Z)
- MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning [32.97354536302333]
3D pose transfer is a challenging generation task that aims to transfer the pose of a source geometry onto a target geometry with the target identity preserved.
Current pose transfer methods allow end-to-end correspondence learning but require the desired final output as ground truth for supervision.
We present a novel self-supervised framework for 3D pose transfer which can be trained in unsupervised, semi-supervised, or fully supervised settings.
arXiv Detail & Related papers (2023-04-26T20:42:40Z)
- Unsupervised 3D Pose Transfer with Cross Consistency and Dual Reconstruction [50.94171353583328]
The goal of 3D pose transfer is to transfer the pose from the source mesh to the target mesh while preserving the identity information.
Deep learning-based methods have improved the efficiency and performance of 3D pose transfer.
We present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer.
arXiv Detail & Related papers (2022-11-18T15:09:56Z)
- Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to use the powerful self-attention mechanism to perceive the geometric inconsistencies between the given meshes.
We propose a novel geometry-contrastive Transformer that efficiently perceives global geometric inconsistencies in 3D structure.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z)
- DFC: Deep Feature Consistency for Robust Point Cloud Registration [0.4724825031148411]
We present a novel learning-based alignment network for complex alignment scenes.
We validate our approach on the 3DMatch dataset and the KITTI odometry dataset.
arXiv Detail & Related papers (2021-11-15T08:27:21Z)
- Benchmarking Unsupervised Object Representations for Video Sequences [111.81492107649889]
We compare the perceptual abilities of four object-centric approaches: ViMON, OP3, TBA and SCALOR.
Our results suggest that the architectures with unconstrained latent representations learn more powerful representations in terms of object detection, segmentation and tracking.
Our benchmark may provide fruitful guidance towards learning more robust object-centric video representations.
arXiv Detail & Related papers (2020-06-12T09:37:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.