PALUM: Part-based Attention Learning for Unified Motion Retargeting
- URL: http://arxiv.org/abs/2601.07272v1
- Date: Mon, 12 Jan 2026 07:29:44 GMT
- Title: PALUM: Part-based Attention Learning for Unified Motion Retargeting
- Authors: Siqi Liu, Maoyu Wang, Bo Dai, Cewu Lu
- Abstract summary: Retargeting motion between characters with different skeleton structures is a fundamental challenge in computer animation. We present a novel approach that learns common motion representations across diverse skeleton topologies. Experiments demonstrate superior performance in handling diverse skeletal structures while maintaining motion realism and semantic fidelity.
- Score: 53.17113525688095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retargeting motion between characters with different skeleton structures is a fundamental challenge in computer animation. When source and target characters have vastly different bone arrangements, maintaining the original motion's semantics and quality becomes increasingly difficult. We present PALUM, a novel approach that learns common motion representations across diverse skeleton topologies by partitioning joints into semantic body parts and applying attention mechanisms to capture spatio-temporal relationships. Our method transfers motion to target skeletons by leveraging these skeleton-agnostic representations alongside target-specific structural information. To ensure robust learning and preserve motion fidelity, we introduce a cycle consistency mechanism that maintains semantic coherence throughout the retargeting process. Extensive experiments demonstrate superior performance in handling diverse skeletal structures while maintaining motion realism and semantic fidelity, even when generalizing to previously unseen skeleton-motion combinations. We will make our implementation publicly available to support future research.
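The abstract's two main ingredients, pooling joints into semantic body parts with attention applied across them, and a cycle-consistency check on the retargeting, can be sketched in plain Python. The part grouping, feature sizes, and the toy retarget maps below are illustrative assumptions, not the authors' implementation.

```python
import math

# Hypothetical sketch of the two ideas in the PALUM abstract: (1) pooling
# joint features into semantic body parts and running attention over the
# part tokens, and (2) a cycle-consistency error for retargeting A -> B -> A.
# Part names, joint indices, and the toy retarget maps are assumptions.

PARTS = {"torso": [0, 1], "left_arm": [2, 3], "right_arm": [4, 5]}

def pool_parts(joint_feats, parts=PARTS):
    """Average joint feature vectors within each semantic body part."""
    pooled = []
    for joints in parts.values():
        dim = len(joint_feats[0])
        mean = [sum(joint_feats[j][d] for j in joints) / len(joints)
                for d in range(dim)]
        pooled.append(mean)
    return pooled  # one token per part

def attention(queries, keys, values):
    """Single-head scaled dot-product attention over part tokens."""
    dim = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[d] for w, v in zip(weights, values))
                    for d in range(dim)])
    return out

def cycle_error(motion, fwd, bwd):
    """L1 reconstruction error after retargeting forward and back."""
    recon = bwd(fwd(motion))
    return sum(abs(a - b) for row_a, row_b in zip(motion, recon)
               for a, b in zip(row_a, row_b))

# Toy usage: 6 joints with 2-D features; the "retarget" maps are a
# stand-in pair of invertible transforms, so the cycle error is zero.
joints = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
          [0.5, 0.5], [0.2, 0.8], [0.9, 0.1]]
tokens = pool_parts(joints)
fused = attention(tokens, tokens, tokens)   # part-to-part self-attention
scale = lambda m: [[2 * x for x in row] for row in m]
unscale = lambda m: [[x / 2 for x in row] for row in m]
print(len(fused), round(cycle_error(tokens, scale, unscale), 6))  # → 3 0.0
```

In the actual method the forward and backward maps would be learned retargeting networks, and the cycle term would be one loss among several; the sketch only shows the shape of the computation.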
Related papers
- Beyond Global Alignment: Fine-Grained Motion-Language Retrieval via Pyramidal Shapley-Taylor Learning [56.6025512458557]
Motion-language retrieval aims to bridge the semantic gap between natural language and human motion. Existing approaches predominantly focus on aligning entire motion sequences with global textual representations. We propose a novel Pyramidal Shapley-Taylor (PST) learning framework for fine-grained motion-language retrieval.
arXiv Detail & Related papers (2026-01-29T16:00:12Z) - AnyTop: Character Animation Diffusion with Any Topology [54.07731933876742]
We introduce AnyTop, a diffusion model that generates motions for diverse characters with distinct motion dynamics. Our work features a transformer-based denoising network, tailored for arbitrary skeleton learning. Our evaluation demonstrates that AnyTop generalizes well, even with as few as three training examples per topology, and can produce motions for unseen skeletons as well.
arXiv Detail & Related papers (2025-02-24T17:00:36Z) - Motif Guided Graph Transformer with Combinatorial Skeleton Prototype Learning for Skeleton-Based Person Re-Identification [60.939250172443586]
Person re-identification (re-ID) via 3D skeleton data is a challenging task with significant value in many scenarios. Existing skeleton-based methods typically assume virtual motion relations between all joints, and adopt average joint or sequence representations for learning. This paper presents a generic Motif guided graph transformer with Combinatorial skeleton prototype learning (MoCos). MoCos exploits structure-specific and gait-related body relations as well as features of skeleton graphs to learn effective skeleton representations for person re-ID.
arXiv Detail & Related papers (2024-12-12T08:13:29Z) - Neuron: Learning Context-Aware Evolving Representations for Zero-Shot Skeleton Action Recognition [64.56321246196859]
We propose a novel dyNamically Evolving dUal skeleton-semantic syneRgistic framework. We first construct the spatial-temporal evolving micro-prototypes and integrate dynamic context-aware side information. We introduce the spatial compression and temporal memory mechanisms to guide the growth of spatial-temporal micro-prototypes.
arXiv Detail & Related papers (2024-11-18T05:16:11Z) - Neural Marionette: Unsupervised Learning of Motion Skeleton and Latent Dynamics from Volumetric Video [5.456297943378056]
We present Neural Marionette, an unsupervised approach that discovers the skeletal structure from a dynamic sequence.
We demonstrate that the discovered skeleton is comparable even to hand-labeled ground truth in representing a 4D motion sequence.
arXiv Detail & Related papers (2022-02-17T02:44:16Z) - A Hierarchy-Aware Pose Representation for Deep Character Animation [2.47343886645587]
We present a robust pose representation for motion modeling, suitable for deep character animation.
Our representation is based on dual quaternions, mathematical abstractions with well-defined operations that simultaneously encode rotational and translational information.
We show that our representation overcomes common motion artifacts, and assess its performance compared to other popular representations.
arXiv Detail & Related papers (2021-11-27T14:33:24Z) - Skeleton-Aware Networks for Deep Motion Retargeting [83.65593033474384]
We introduce a novel deep learning framework for data-driven motion retargeting between skeletons.
Our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
arXiv Detail & Related papers (2020-05-12T12:51:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.