Towards Synthesized and Editable Motion In-Betweening Through Part-Wise Phase Representation
- URL: http://arxiv.org/abs/2503.08180v2
- Date: Thu, 13 Mar 2025 03:18:41 GMT
- Title: Towards Synthesized and Editable Motion In-Betweening Through Part-Wise Phase Representation
- Authors: Minyue Dai, Jingbo Wang, Ke Fan, Bin Ji, Haoyu Zhao, Junting Dong, Bo Dai
- Abstract summary: Styled motion in-betweening is crucial for computer animation and gaming. We propose a novel framework that models motion styles at the body-part level. Our approach enables more nuanced and expressive animations.
- Score: 20.697417033585577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Styled motion in-betweening is crucial for computer animation and gaming. However, existing methods typically encode motion styles by modeling whole-body motions, often overlooking the representation of individual body parts. This limitation reduces the flexibility of infilled motion, particularly in adjusting the motion styles of specific limbs independently. To overcome this challenge, we propose a novel framework that models motion styles at the body-part level, enhancing both the diversity and controllability of infilled motions. Our approach enables more nuanced and expressive animations by allowing precise modifications to individual limb motions while maintaining overall motion coherence. Leveraging phase-related insights, our framework employs periodic autoencoders to automatically extract the phase of each body part, capturing distinctive local style features. Additionally, we effectively decouple the motion source from synthesis control by integrating motion manifold learning and conditional generation techniques from both image and motion domains. This allows the motion source to generate high-quality motions across various styles, with extracted motion and style features readily available for controlled synthesis in subsequent tasks. Comprehensive evaluations demonstrate that our method achieves superior speed, robust generalization, and effective generation of extended motion sequences.
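The part-wise phase extraction can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch periodic-autoencoder head for a single body part, following a DeepPhase-style recipe (FFT statistics for amplitude and frequency, a learned 2D projection whose angle gives the phase); the layer sizes and names are our own illustration, not the authors' code:

```python
import torch
import torch.nn as nn

class PartPhaseEncoder(nn.Module):
    """Sketch of a periodic-autoencoder head for ONE body part.

    x: (batch, channels, frames) -- joint features of that part over a
    temporal window. Returns per-latent-channel amplitude, frequency,
    offset, and phase.
    """

    def __init__(self, channels, latent_dim, frames, fps):
        super().__init__()
        self.fps = fps
        self.enc = nn.Conv1d(channels, latent_dim, kernel_size=25, padding=12)
        self.to_xy = nn.Linear(frames, 2)  # per-channel 2D point for atan2

    def forward(self, x):
        z = self.enc(x)                                       # (B, L, T) latent curves
        T = z.shape[-1]
        spec = torch.fft.rfft(z, dim=2)
        power = (spec.real ** 2 + spec.imag ** 2)[:, :, 1:]   # drop the DC bin
        freqs = torch.fft.rfftfreq(T, d=1.0 / self.fps)[1:].to(z.device)
        amp = 2.0 / T * torch.sqrt(power.sum(dim=2))          # amplitude
        freq = (power * freqs).sum(dim=2) / power.sum(dim=2)  # mean frequency
        offset = z.mean(dim=2)                                # DC offset
        xy = self.to_xy(z)                                    # (B, L, 2)
        phase = torch.atan2(xy[..., 1], xy[..., 0])           # in (-pi, pi]
        return amp, freq, offset, phase

# One encoder per body part yields independently editable local phases:
enc = PartPhaseEncoder(channels=36, latent_dim=4, frames=61, fps=30)
amp, freq, offset, phase = enc(torch.randn(2, 36, 61))
```

With one such encoder per body part, the phase of, say, the left arm can be manipulated without disturbing the other parts.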
Related papers
- Dynamic Motion Blending for Versatile Motion Editing [43.10279926787476]
We introduce MotionCutMix, an online data augmentation technique that generates training triplets by blending body-part motions based on input text (sketched below).
We present MotionReFit, an auto-regressive diffusion model with a motion coordinator.
Our method handles both spatial and temporal motion edits directly from high-level human instructions.
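A rough picture of the body-part blending behind such augmentation (the joint partition and function names below are hypothetical, not the authors' implementation): splice selected body-part channels from a donor clip into a base clip to form new training samples.

```python
import numpy as np

# Hypothetical partition of a 16-joint skeleton into named parts.
BODY_PARTS = {
    "torso":     [0, 1, 2, 3],
    "left_arm":  [4, 5, 6],
    "right_arm": [7, 8, 9],
    "left_leg":  [10, 11, 12],
    "right_leg": [13, 14, 15],
}

def blend_body_parts(base, donor, parts_to_swap):
    """Copy the listed body parts from `donor` into `base`.

    base, donor: arrays of shape (frames, joints, dims) on the same skeleton.
    Returns a new motion mixing the two clips at the part level.
    """
    out = base.copy()
    for part in parts_to_swap:
        out[:, BODY_PARTS[part], :] = donor[:, BODY_PARTS[part], :]
    return out

# A training triplet would pair (base, edited) with the text instruction
# that describes the swap:
base, donor = np.zeros((120, 16, 3)), np.ones((120, 16, 3))
edited = blend_body_parts(base, donor, ["left_arm", "right_arm"])
```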
arXiv Detail & Related papers (2025-03-26T17:07:24Z)
- Motion Prompting: Controlling Video Generation with Motion Trajectories [57.049252242807874]
We train a video generation model conditioned on sparse or dense video trajectories.
We translate high-level user requests into detailed, semi-dense motion prompts.
We demonstrate our approach through various applications, including camera and object motion control, "interacting" with an image, motion transfer, and image editing.
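One simple way to picture trajectory conditioning (our simplification, not the paper's encoder) is to rasterize sparse point tracks into a spatio-temporal displacement volume that the generator can attend to:

```python
import numpy as np

def tracks_to_conditioning(tracks, T, H, W):
    """Rasterize sparse point tracks into a (T, H, W, 2) conditioning volume.

    tracks: (num_tracks, T, 2) array of (x, y) pixel positions. Each visible
    point stamps its frame-to-frame displacement (dx, dy) onto the grid.
    """
    cond = np.zeros((T, H, W, 2), dtype=np.float32)
    for track in tracks:
        for t in range(1, T):
            x, y = track[t]
            if 0 <= int(x) < W and 0 <= int(y) < H:
                cond[t, int(y), int(x)] = track[t] - track[t - 1]
    return cond

# Five random-walk tracks over 16 frames on a 64x64 grid:
tracks = np.cumsum(np.random.randn(5, 16, 2), axis=1) * 2 + 32
cond = tracks_to_conditioning(tracks, T=16, H=64, W=64)
```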
arXiv Detail & Related papers (2024-12-03T18:59:56Z)
- Real-time Diverse Motion In-betweening with Space-time Control [4.910937238451485]
In this work, we present a data-driven framework for generating diverse in-betweening motions for kinematic characters.
We demonstrate that our in-betweening approach can synthesize both locomotion and unstructured motions, enabling rich, versatile, and high-quality animation generation.
arXiv Detail & Related papers (2024-09-30T22:45:53Z)
- Infinite Motion: Extended Motion Generation via Long Text Instructions [51.61117351997808]
"Infinite Motion" is a novel approach that leverages long text to extended motion generation.
Key innovation of our model is its ability to accept arbitrary lengths of text as input.
We incorporate the timestamp design for text which allows precise editing of local segments within the generated sequences.
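The timestamp design can be approximated by binding each text instruction to a frame range, so a local edit replaces only the segment it covers; a minimal sketch (hypothetical data layout, not the paper's interface):

```python
from dataclasses import dataclass

@dataclass
class TextSegment:
    start_frame: int
    end_frame: int   # exclusive
    text: str

def per_frame_prompts(segments, total_frames, default=""):
    """Expand timestamped text segments into one prompt per frame, so
    editing one segment leaves the rest of the sequence untouched."""
    prompts = [default] * total_frames
    for seg in segments:
        for f in range(seg.start_frame, min(seg.end_frame, total_frames)):
            prompts[f] = seg.text
    return prompts

script = [TextSegment(0, 90, "walk forward slowly"),
          TextSegment(90, 150, "turn left and wave")]
frame_prompts = per_frame_prompts(script, total_frames=150)
```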
arXiv Detail & Related papers (2024-07-11T12:33:56Z)
- MotionClone: Training-Free Motion Cloning for Controllable Video Generation [41.621147782128396]
MotionClone is a training-free framework that enables motion cloning from reference videos for versatile motion-controlled video generation.
MotionClone exhibits proficiency in both global camera motion and local object motion, with notable superiority in terms of motion fidelity, textual alignment, and temporal consistency.
arXiv Detail & Related papers (2024-06-08T03:44:25Z)
- FreeMotion: A Unified Framework for Number-free Text-to-Motion Synthesis [65.85686550683806]
This paper reconsiders motion generation and proposes to unify single- and multi-person motion through a conditional motion distribution.
Within our framework, existing single-person spatial control methods can be seamlessly integrated, achieving precise control of multi-person motion.
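One way to read such a unifying conditional distribution (our notation, not necessarily the paper's exact formulation) is as an autoregressive factorization over persons:

```latex
p(M_1, \dots, M_N \mid c) \;=\; \prod_{i=1}^{N} p\!\left(M_i \mid M_{<i},\, c\right)
```

where M_i is the i-th person's motion, c is the condition (e.g. text), and N = 1 recovers ordinary single-person synthesis.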
arXiv Detail & Related papers (2024-05-24T17:57:57Z)
- Spectral Motion Alignment for Video Motion Transfer using Diffusion Models [54.32923808964701]
Spectral Motion Alignment (SMA) is a framework that refines and aligns motion vectors using Fourier and wavelet transforms.
SMA learns motion patterns by incorporating frequency-domain regularization, facilitating the learning of whole-frame global motion dynamics.
Extensive experiments demonstrate SMA's efficacy in improving motion transfer while maintaining computational efficiency and compatibility across various video customization frameworks.
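A toy frequency-domain regularizer in this spirit (not the paper's exact objective, which also uses wavelets) aligns the magnitude spectra of the frame-to-frame motion vectors of two videos:

```python
import torch

def spectral_alignment_loss(pred_motion, ref_motion):
    """Align magnitude spectra of motion vectors along the time axis.

    pred_motion, ref_motion: (T, C, H, W) tensors, e.g. latent frame
    differences of the generated and reference videos.
    """
    pred_spec = torch.fft.rfft(pred_motion, dim=0).abs()
    ref_spec = torch.fft.rfft(ref_motion, dim=0).abs()
    return torch.mean(torch.abs(pred_spec - ref_spec))

pred, ref = torch.randn(16, 4, 8, 8), torch.randn(16, 4, 8, 8)
loss = spectral_alignment_loss(pred, ref)  # added to the training objective
```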
arXiv Detail & Related papers (2024-03-22T14:47:18Z) - MotionCrafter: One-Shot Motion Customization of Diffusion Models [66.44642854791807]
We introduce MotionCrafter, a one-shot instance-guided motion customization method.
MotionCrafter employs a parallel spatial-temporal architecture that injects the reference motion into the temporal component of the base model.
During training, a frozen base model provides appearance normalization, effectively separating appearance from motion.
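The frozen-base / trainable-temporal split can be sketched as follows (module names are placeholders, not MotionCrafter's API): the appearance path stays fixed while only the temporal branch is fit to the reference clip.

```python
import torch
import torch.nn as nn

def split_trainable(base_model: nn.Module, temporal_branch: nn.Module):
    """Freeze the appearance (base) path and train only the motion
    (temporal) path on the single reference video."""
    for p in base_model.parameters():
        p.requires_grad_(False)   # appearance: frozen normalizer
    for p in temporal_branch.parameters():
        p.requires_grad_(True)    # motion: absorbs the reference clip
    return list(temporal_branch.parameters())

# Usage: opt = torch.optim.AdamW(split_trainable(base, temporal), lr=1e-5)
```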
arXiv Detail & Related papers (2023-12-08T16:31:04Z)
- Motion In-Betweening with Phase Manifolds [29.673541655825332]
This paper introduces a novel data-driven motion in-betweening system that reaches target character poses by making use of phase variables learned by a Periodic Autoencoder.
Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights.
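A common recipe for such phase-conditioned mixture-of-experts layers (a generic sketch, not the paper's exact network): a gating MLP maps phase features to blend weights that mix the experts' weight matrices.

```python
import torch
import torch.nn as nn

class PhaseGatedMoE(nn.Module):
    """Linear layer whose weights are a phase-dependent blend of experts."""

    def __init__(self, num_experts, in_dim, out_dim, phase_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(phase_dim, 32), nn.ELU(),
                                  nn.Linear(32, num_experts))
        self.w = nn.Parameter(torch.randn(num_experts, out_dim, in_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(num_experts, out_dim))

    def forward(self, x, phase):
        alpha = torch.softmax(self.gate(phase), dim=-1)   # (B, E) blend weights
        W = torch.einsum("be,eoi->boi", alpha, self.w)    # blended weight matrix
        b = torch.einsum("be,eo->bo", alpha, self.b)
        return torch.einsum("boi,bi->bo", W, x) + b

layer = PhaseGatedMoE(num_experts=8, in_dim=128, out_dim=128, phase_dim=10)
y = layer(torch.randn(4, 128), torch.randn(4, 10))
```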
arXiv Detail & Related papers (2023-08-24T12:56:39Z)
- Motion Puzzle: Arbitrary Motion Style Transfer by Body Part [6.206196935093063]
Motion Puzzle is a novel motion style transfer network that advances the state-of-the-art in several important respects.
Our framework extracts style features from multiple style motions for different body parts and transfers them locally to the target body parts.
It can capture styles exhibited by dynamic movements, such as flapping and staggering, significantly better than previous work.
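Per-part transfer can be pictured with instance-normalization statistics matching (a crude stand-in for the paper's learned transfer module): only the selected parts receive the style's feature statistics.

```python
import torch

def adain_per_part(content, style, parts):
    """Match style feature statistics on selected body parts only.

    content, style: dicts mapping part name -> (B, C, T) feature tensors.
    parts: set of part names that receive the style.
    """
    out = {}
    for name, c in content.items():
        if name in parts:
            s = style[name]
            c_mu, c_sd = c.mean(dim=2, keepdim=True), c.std(dim=2, keepdim=True) + 1e-6
            s_mu, s_sd = s.mean(dim=2, keepdim=True), s.std(dim=2, keepdim=True)
            out[name] = s_sd * (c - c_mu) / c_sd + s_mu   # stylized part
        else:
            out[name] = c                                  # untouched part
    return out

content = {"left_arm": torch.randn(1, 64, 120), "torso": torch.randn(1, 64, 120)}
style = {"left_arm": torch.randn(1, 64, 120), "torso": torch.randn(1, 64, 120)}
stylized = adain_per_part(content, style, parts={"left_arm"})
```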
arXiv Detail & Related papers (2022-02-10T19:56:46Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
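The style term is the discriminator-derived reward from the AMP paper's least-squares formulation, r = max(0, 1 - 0.25 (D(s, s') - 1)^2), where D is trained to output 1 on dataset transitions and -1 on policy transitions; the state encoding and toy discriminator below are illustrative.

```python
import torch
import torch.nn as nn

def amp_style_reward(discriminator, s, s_next):
    """AMP-style reward on a state transition (s, s')."""
    d = discriminator(torch.cat([s, s_next], dim=-1))
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

# Toy discriminator over 32-dim state features:
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
s, s_next = torch.randn(8, 32), torch.randn(8, 32)
r_style = amp_style_reward(D, s, s_next)
# Total reward mixes task and style: r = w_task*r_task + w_style*r_style.
```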
arXiv Detail & Related papers (2021-04-05T22:43:14Z)