AnyTop: Character Animation Diffusion with Any Topology
- URL: http://arxiv.org/abs/2502.17327v1
- Date: Mon, 24 Feb 2025 17:00:36 GMT
- Title: AnyTop: Character Animation Diffusion with Any Topology
- Authors: Inbar Gat, Sigal Raab, Guy Tevet, Yuval Reshef, Amit H. Bermano, Daniel Cohen-Or
- Abstract summary: We introduce AnyTop, a diffusion model that generates motions for diverse characters with distinct motion dynamics. Our work features a transformer-based denoising network, tailored for arbitrary skeleton learning. Our evaluation demonstrates that AnyTop generalizes well, even with as few as three training examples per topology, and can produce motions for unseen skeletons as well.
- Score: 54.07731933876742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating motion for arbitrary skeletons is a longstanding challenge in computer graphics, remaining largely unexplored due to the scarcity of diverse datasets and the irregular nature of the data. In this work, we introduce AnyTop, a diffusion model that generates motions for diverse characters with distinct motion dynamics, using only their skeletal structure as input. Our work features a transformer-based denoising network, tailored for arbitrary skeleton learning, integrating topology information into the traditional attention mechanism. Additionally, by incorporating textual joint descriptions into the latent feature representation, AnyTop learns semantic correspondences between joints across diverse skeletons. Our evaluation demonstrates that AnyTop generalizes well, even with as few as three training examples per topology, and can produce motions for unseen skeletons as well. Furthermore, our model's latent space is highly informative, enabling downstream tasks such as joint correspondence, temporal segmentation and motion editing. Our webpage, https://anytop2025.github.io/Anytop-page, includes links to videos and code.
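The abstract names two mechanisms: topology information injected into the attention layers, and textual joint descriptions embedded into the per-joint features. The released code is linked from the project webpage; purely as an illustration, here is a minimal PyTorch sketch of one plausible way to bias joint-wise attention by skeletal graph distance. The class name, the additive-bias design, and the `graph_dist` input are assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopologyAwareAttention(nn.Module):
    """Joint-wise self-attention with a learned additive bias keyed on
    skeletal graph distance (illustrative sketch, not the released code)."""
    def __init__(self, dim, num_heads=4, max_dist=8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # one learned scalar per (clamped graph distance, head)
        self.dist_bias = nn.Embedding(max_dist + 1, num_heads)
        self.max_dist = max_dist

    def forward(self, x, graph_dist):
        # x: (B, J, dim) joint features; graph_dist: (J, J) integer hop counts
        B, J, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (B, J, self.num_heads, self.head_dim)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        logits = q @ k.transpose(-2, -1) / self.head_dim ** 0.5   # (B, H, J, J)
        bias = self.dist_bias(graph_dist.clamp(max=self.max_dist))  # (J, J, H)
        logits = logits + bias.permute(2, 0, 1)   # broadcast over the batch
        out = (F.softmax(logits, dim=-1) @ v).transpose(1, 2).reshape(B, J, -1)
        return self.proj(out)
```

Under the same reading, the textual joint descriptions would be encoded by a text encoder and added to the per-joint features `x` before attention, so that semantically corresponding joints get similar embeddings across different skeletons.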
Related papers
- How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects [37.10752536568922]
Motion synthesis for diverse object categories holds great potential for 3D content creation.
We present a method to generate high-fidelity motions from textual descriptions for diverse and even unseen objects.
Experiments show that the generated motions remain high-fidelity even for object categories unseen during training.
arXiv Detail & Related papers (2025-03-06T09:39:09Z) - Motif Guided Graph Transformer with Combinatorial Skeleton Prototype Learning for Skeleton-Based Person Re-Identification [60.939250172443586]
Person re-identification (re-ID) via 3D skeleton data is a challenging task with significant value in many scenarios. Existing skeleton-based methods typically assume virtual motion relations between all joints, and adopt average joint or sequence representations for learning. This paper presents a generic Motif guided graph transformer with Combinatorial skeleton prototype learning (MoCos). MoCos exploits structure-specific and gait-related body relations as well as features of skeleton graphs to learn effective skeleton representations for person re-ID.
arXiv Detail & Related papers (2024-12-12T08:13:29Z) - UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons [16.52004713662265]
We present a novel diffusion model-based speech-driven gesture synthesis approach, trained on multiple gesture datasets with different skeletons.
We then capture the correlation between speech and gestures based on a diffusion model architecture using cross-local attention and self-attention (a sketch of the cross-local idea follows this entry).
Experiments show that UnifiedGesture outperforms recent approaches on speech-driven gesture generation in terms of CCA, FGD, and human-likeness.
arXiv Detail & Related papers (2023-09-13T16:07:25Z) - MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z) - Hierarchical Neural Implicit Pose Network for Animation and Motion
Retargeting [66.69067601079706]
HIPNet is a neural implicit pose network trained on multiple subjects across many poses.
We employ a hierarchical skeleton-based representation to learn a signed distance function on a canonical unposed space (see the sketch after this entry).
We achieve state-of-the-art results on various single-subject and multi-subject benchmarks.
arXiv Detail & Related papers (2021-12-02T03:25:46Z) - A Hierarchy-Aware Pose Representation for Deep Character Animation [2.47343886645587]
- A Hierarchy-Aware Pose Representation for Deep Character Animation [2.47343886645587]
We present a robust pose representation for motion modeling, suitable for deep character animation.
Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and positional information (a worked example follows this entry).
We show that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations.
arXiv Detail & Related papers (2021-11-27T14:33:24Z) - Skeleton-Contrastive 3D Action Representation Learning [35.06361753065124]
- Skeleton-Contrastive 3D Action Representation Learning [35.06361753065124]
This paper strives for self-supervised learning of a feature space suitable for skeleton-based action recognition (a generic contrastive-loss sketch follows this entry).
Our approach achieves state-of-the-art performance for self-supervised learning from skeleton data on the challenging PKU and NTU datasets.
arXiv Detail & Related papers (2021-08-08T14:44:59Z) - Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions (see the sketch after this entry).
arXiv Detail & Related papers (2021-05-06T05:58:13Z) - Skeleton-Aware Networks for Deep Motion Retargeting [83.65593033474384]
- Skeleton-Aware Networks for Deep Motion Retargeting [83.65593033474384]
We introduce a novel deep learning framework for data-driven motion retargeting between skeletons.
Our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
arXiv Detail & Related papers (2020-05-12T12:51:40Z)