FrankenMotion: Part-level Human Motion Generation and Composition
- URL: http://arxiv.org/abs/2601.10909v1
- Date: Thu, 15 Jan 2026 23:50:07 GMT
- Title: FrankenMotion: Part-level Human Motion Generation and Composition
- Authors: Chuqiao Li, Xianghui Xie, Yong Cao, Andreas Geiger, Gerard Pons-Moll
- Abstract summary: We construct a high-quality motion dataset with atomic, temporally-aware part-level text annotations. Our dataset captures asynchronous and semantically distinct part movements at fine temporal resolution. Based on this dataset, we introduce a diffusion-based part-aware motion generation framework, namely FrankenMotion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion generation from text prompts has made remarkable progress in recent years. However, existing methods primarily rely on either sequence-level or action-level descriptions due to the absence of fine-grained, part-level motion annotations. This limits their controllability over individual body parts. In this work, we construct a high-quality motion dataset with atomic, temporally-aware part-level text annotations, leveraging the reasoning capabilities of large language models (LLMs). Unlike prior datasets that either provide synchronized part captions with fixed time segments or rely solely on global sequence labels, our dataset captures asynchronous and semantically distinct part movements at fine temporal resolution. Based on this dataset, we introduce a diffusion-based part-aware motion generation framework, namely FrankenMotion, where each body part is guided by its own temporally-structured textual prompt. This is, to our knowledge, the first work to provide atomic, temporally-aware part-level motion annotations together with a model that allows motion generation with both spatial (body part) and temporal (atomic action) control. Experiments demonstrate that FrankenMotion outperforms all previous baseline models adapted and retrained for our setting, and our model can compose motions unseen during training. Our code and dataset will be publicly available upon publication.
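To make the part-level, temporally-aware annotation format concrete, here is a minimal Python sketch; the part taxonomy, field names, and frame intervals are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical part-level, temporally-aware annotation for one sequence.
# Part names, keys, and frame intervals are illustrative only; note that
# intervals are asynchronous across parts, matching the abstract's claim.
sequence_annotation = {
    "left_arm": [
        {"start": 0,  "end": 45, "text": "raises the left arm overhead"},
        {"start": 45, "end": 90, "text": "lowers the left arm to the side"},
    ],
    "right_leg": [
        {"start": 20, "end": 90, "text": "steps forward with the right leg"},
    ],
    "torso": [
        {"start": 0,  "end": 90, "text": "leans slightly forward"},
    ],
}

def active_prompts(annotation, frame):
    """Return the atomic prompt each body part should follow at `frame`,
    or an empty string when a part has no annotated action there."""
    return {
        part: next((seg["text"] for seg in segments
                    if seg["start"] <= frame < seg["end"]), "")
        for part, segments in annotation.items()
    }

print(active_prompts(sequence_annotation, frame=30))
```

A per-part diffusion conditioner could then look up `active_prompts` at each frame, giving the spatial (body part) and temporal (atomic action) control the abstract describes.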
Related papers
- Dense Motion Captioning
We introduce Dense Motion Captioning, a novel task that aims to temporally localize and caption actions within 3D human motion sequences. We present CompMo, the first large-scale dataset featuring richly annotated, complex motion sequences with precise temporal boundaries. We also present DEMO, a model that integrates a large language model with a simple motion adapter, trained to generate dense, temporally grounded captions.
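For intuition about how temporally grounded captions are typically scored, segment overlap is the standard ingredient; the sketch below shows temporal-IoU matching, an assumption about the evaluation rather than the paper's stated protocol.

```python
def temporal_iou(a, b):
    """Intersection-over-union of two (start, end) frame segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

# A predicted caption counts as temporally localized if it overlaps a
# ground-truth segment above a threshold (0.5 is a common choice).
prediction = (10, 50)
ground_truth_segments = [(0, 40), (60, 90)]
print(any(temporal_iou(prediction, g) >= 0.5 for g in ground_truth_segments))
```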
arXiv Detail & Related papers (2025-11-07T15:55:10Z)
- SynMotion: Semantic-Visual Adaptation for Motion Customized Video Generation
SynMotion is a motion-customized video generation model that jointly leverages semantic guidance and visual adaptation. At the semantic level, we introduce a dual-embedding semantic comprehension mechanism which disentangles subject and motion representations. At the visual level, we integrate efficient motion adapters into a pre-trained video generation model to enhance motion fidelity and temporal coherence.
arXiv Detail & Related papers (2025-06-30T10:09:32Z)
- FTMoMamba: Motion Generation with Frequency and Text State Space Models
We propose a novel diffusion-based FTMoMamba framework equipped with a Frequency State Space Model (FreqSSM) and a Text State Space Model (TextSSM).
To learn fine-grained representation, FreqSSM decomposes sequences into low-frequency and high-frequency components.
To ensure the consistency between text and motion, TextSSM encodes text features at the sentence level.
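As a rough illustration of splitting a motion sequence into low- and high-frequency components, here is an FFT-based sketch; FreqSSM's actual decomposition may well differ (e.g., learned or state-space filters), so treat the cutoff and method as assumptions.

```python
import torch

def split_frequencies(motion, cutoff=4):
    """Split a motion sequence of shape (T, D) into a low-frequency part
    (slow poses/trajectories) and a high-frequency residual (fine detail)
    via an FFT along time. Illustrative only; not FreqSSM's exact scheme."""
    spectrum = torch.fft.rfft(motion, dim=0)       # (T//2 + 1, D), complex
    low_spectrum = spectrum.clone()
    low_spectrum[cutoff:] = 0                      # zero out fast components
    low = torch.fft.irfft(low_spectrum, n=motion.shape[0], dim=0)
    high = motion - low                            # residual fine-grained detail
    return low, high

motion = torch.randn(60, 66)                       # 60 frames, 22 joints * xyz
low, high = split_frequencies(motion)
print(low.shape, high.shape)                       # both (60, 66)
```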
arXiv Detail & Related papers (2024-11-26T15:48:12Z)
- Chronologically Accurate Retrieval for Temporal Grounding of Motion-Language Models
We propose Chronologically Accurate Retrieval to evaluate the chronological understanding of motion-language models.
We decompose textual descriptions into events, and prepare negative text samples by shuffling the order of events in compound action descriptions.
We then design a simple task for motion-language models to retrieve the more likely text from the ground truth and its chronologically shuffled version.
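The negative-construction step is easy to picture; the sketch below enumerates chronologically shuffled variants of an ordered event list, in the spirit of the described procedure (the ", then " joining template is an assumption).

```python
from itertools import permutations

def shuffled_negatives(events, k=3):
    """Return up to k chronologically shuffled variants of an ordered
    event list, serving as hard negatives for retrieval. The ', then '
    joining template is illustrative, not the paper's exact wording."""
    variants = [list(p) for p in permutations(events) if list(p) != events]
    return [", then ".join(v) for v in variants[:k]]

events = ["sits down", "crosses the legs", "stands up"]
print(", then ".join(events))      # ground-truth chronology
print(shuffled_negatives(events))  # order-violating negatives
```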
arXiv Detail & Related papers (2024-07-22T06:25:21Z)
- Infinite Motion: Extended Motion Generation via Long Text Instructions
"Infinite Motion" is a novel approach that leverages long text to extended motion generation.
Key innovation of our model is its ability to accept arbitrary lengths of text as input.
We incorporate the timestamp design for text which allows precise editing of local segments within the generated sequences.
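One plausible reading of the timestamp design is a time-indexed instruction script in which individual windows can be rewritten without touching the rest; the format below is a made-up illustration, not the paper's actual interface.

```python
# Hypothetical timestamped text script; the tuple layout is illustrative.
script = [
    (0.0, 4.0, "walk forward slowly"),
    (4.0, 7.5, "turn left and wave"),
    (7.5, 12.0, "sit down on a chair"),
]

def edit_segment(script, start, end, new_text):
    """Rewrite the instruction for the window [start, end), leaving the
    rest of the script (and hence the rest of the motion) untouched."""
    return [(s, e, new_text if (s, e) == (start, end) else text)
            for s, e, text in script]

print(edit_segment(script, 4.0, 7.5, "turn right and clap"))
```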
arXiv Detail & Related papers (2024-07-11T12:33:56Z)
- Text-controlled Motion Mamba: Text-Instructed Temporal Grounding of Human Motion
We introduce the novel task of text-based human motion grounding (THMG). TM-Mamba is a unified model that integrates temporal global context, language query control, and spatial graph topology with only linear memory cost. For evaluation, we introduce BABEL-Grounding, the first text-motion dataset that provides detailed textual descriptions of human actions along with their corresponding temporal segments.
arXiv Detail & Related papers (2024-04-17T13:33:09Z)
- Generating Human Interaction Motions in Scenes with Text Control
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- Seamless Human Motion Composition with Blended Positional Encodings
We introduce FlowMDM, the first diffusion-based model that generates seamless Human Motion Compositions (HMC) without postprocessing or redundant denoising steps.
We achieve state-of-the-art results in terms of accuracy, realism, and smoothness on the Babel and HumanML3D datasets.
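One schematic way to read "blended positional encodings" is an interpolation between absolute and relative position signals over the denoising schedule; the sketch below encodes that reading and should not be taken as FlowMDM's exact formulation.

```python
import torch

def blended_positions(T, step, total_steps, d=64):
    """Blend an absolute and a (crude) relative positional signal across
    denoising steps: early noisy steps lean on absolute positions (global
    structure), late steps on relative ones (smooth transitions between
    composed segments). A schematic reading, not FlowMDM's formulation."""
    alpha = step / max(total_steps - 1, 1)          # 0 at start, 1 at the end
    freqs = 10000.0 ** (-torch.arange(d, dtype=torch.float32) / d)
    pos = torch.arange(T, dtype=torch.float32)
    absolute = torch.sin(pos[:, None] * freqs)      # classic sinusoidal PE
    centered = pos - pos.mean()                     # position relative to center
    relative = torch.sin(centered[:, None] * freqs)
    return (1 - alpha) * absolute + alpha * relative

print(blended_positions(T=120, step=0, total_steps=50).shape)  # (120, 64)
```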
arXiv Detail & Related papers (2024-02-23T18:59:40Z)
- SemanticBoost: Elevating Motion Generation with Augmented Textual Cues
Our framework comprises a Semantic Enhancement module and a Context-Attuned Motion Denoiser (CAMD).
The CAMD approach provides an all-encompassing solution for generating high-quality, semantically consistent motion sequences.
Our experimental results demonstrate that SemanticBoost, as a diffusion-based method, outperforms auto-regressive-based techniques.
arXiv Detail & Related papers (2023-10-31T09:58:11Z)
- Text-to-Motion Retrieval: Towards Joint Understanding of Human Motion Data and Natural Language
We propose a novel text-to-motion retrieval task, which aims at retrieving relevant motions based on a specified natural language description.
Inspired by the recent progress in text-to-image/video matching, we experiment with two widely-adopted metric-learning loss functions.
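As one example of a widely-adopted metric-learning objective for this setting, here is a symmetric InfoNCE sketch over paired motion/text embeddings; the paper's exact losses and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(motion_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings: each motion
    should rank its own caption first, and vice versa."""
    m = F.normalize(motion_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = m @ t.T / temperature                 # (B, B) cosine similarities
    targets = torch.arange(len(m))                 # matched pairs on the diagonal
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```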
arXiv Detail & Related papers (2023-05-25T08:32:41Z)