LLaMo: Scaling Pretrained Language Models for Unified Motion Understanding and Generation with Continuous Autoregressive Tokens
- URL: http://arxiv.org/abs/2602.12370v1
- Date: Thu, 12 Feb 2026 20:02:21 GMT
- Title: LLaMo: Scaling Pretrained Language Models for Unified Motion Understanding and Generation with Continuous Autoregressive Tokens
- Authors: Zekun Li, Sizhe An, Chengcheng Tang, Chuan Guo, Ivan Shugurov, Linguang Zhang, Amy Zhao, Srinath Sridhar, Lingling Tao, Abhay Mittal,
- Abstract summary: We propose LLaMo, a framework that extends pretrained large language models through a modality-specific Mixture-of-Transformers architecture. We encode human motion into a causal continuous latent space and maintain the next-token prediction paradigm in the decoder-only backbone. Our experiments demonstrate that LLaMo achieves high-fidelity text-to-motion generation and motion-to-text captioning in general settings.
- Score: 19.167250154665812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in large models has led to significant advances in unified multimodal generation and understanding. However, the development of models that unify motion-language generation and understanding remains largely underexplored. Existing approaches often fine-tune large language models (LLMs) on paired motion-text data, which can result in catastrophic forgetting of linguistic capabilities due to the limited scale of available text-motion pairs. Furthermore, prior methods typically convert motion into discrete representations via quantization to integrate with language models, introducing substantial jitter artifacts from discrete tokenization. To address these challenges, we propose LLaMo, a unified framework that extends pretrained LLMs through a modality-specific Mixture-of-Transformers (MoT) architecture. This design inherently preserves the language understanding of the base model while enabling scalable multimodal adaptation. We encode human motion into a causal continuous latent space and maintain the next-token prediction paradigm in the decoder-only backbone through a lightweight flow-matching head, allowing for streaming motion generation in real-time (>30 FPS). Leveraging the comprehensive language understanding of pretrained LLMs and large-scale motion-text pretraining, our experiments demonstrate that LLaMo achieves high-fidelity text-to-motion generation and motion-to-text captioning in general settings, especially zero-shot motion generation, marking a significant step towards a general unified motion-language large model.
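The abstract's core mechanism (a decoder hidden state conditioning a lightweight flow-matching head that produces a continuous motion token) can be sketched in miniature. This is a toy illustration under assumed names and shapes, not LLaMo's actual implementation: the velocity network here is a fixed random linear map standing in for a small learned MLP, and the dimensions are invented.

```python
import random

random.seed(0)
HIDDEN, LATENT, STEPS = 16, 8, 10

# Stand-in weights for the velocity network v(x, tau, h); in the real
# system this would be a small learned flow-matching head.
W = [[random.gauss(0, 0.1) for _ in range(LATENT)]
     for _ in range(LATENT + 1 + HIDDEN)]

def velocity(x, tau, h):
    """Predict the flow velocity at interpolation time tau in [0, 1]."""
    inp = x + [tau] + h
    return [sum(inp[i] * W[i][j] for i in range(len(inp)))
            for j in range(LATENT)]

def sample_motion_token(h, steps=STEPS):
    """Euler-integrate from Gaussian noise to a continuous motion latent."""
    x = [random.gauss(0, 1) for _ in range(LATENT)]   # x_0 ~ N(0, I)
    dt = 1.0 / steps
    for s in range(steps):
        v = velocity(x, s * dt, h)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x                                          # motion token x_1

h = [random.gauss(0, 1) for _ in range(HIDDEN)]       # decoder hidden state
token = sample_motion_token(h)
print(len(token))  # prints 8
```

In the described setup the sampled latent would be fed back as the next autoregressive input, which is what allows next-token prediction over continuous tokens and, with a cheap enough head, streaming generation.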
Related papers
- DiMo: Discrete Diffusion Modeling for Motion Generation and Understanding [25.254783224309488]
We present DiMo, a discrete diffusion-style framework, which extends masked modeling to text-motion understanding and generation. Unlike GPT-style autoregressive approaches that tokenize motion and decode sequentially, DiMo performs iterative masked token refinement. Experiments on HumanML3D and KIT-ML show strong motion quality and competitive bidirectional understanding.
arXiv Detail & Related papers (2026-02-04T04:01:02Z) - Growing Visual Generative Capacity for Pre-Trained MLLMs [60.826355079902505]
Bridge is a pure autoregressive unified MLLM that augments pre-trained visual understanding models with generative ability. We propose a semantic-to-pixel discrete representation that integrates compact semantic tokens with fine-grained pixel tokens.
arXiv Detail & Related papers (2025-10-02T00:40:02Z) - Learning Primitive Embodied World Models: Towards Scalable Robotic Learning [50.32986780156215]
We propose a novel paradigm for world modeling: Primitive Embodied World Models (PEWM). By restricting video generation to fixed short horizons, our approach enables fine-grained alignment between linguistic concepts and visual representations of robotic actions. Our framework bridges the gap between fine-grained physical interaction and high-level reasoning, paving the way toward scalable, interpretable, and general-purpose embodied intelligence.
arXiv Detail & Related papers (2025-08-28T14:31:48Z) - MG-MotionLLM: A Unified Framework for Motion Comprehension and Generation across Multiple Granularities [36.42160163142448]
We pioneer MG-MotionLLM, a unified motion-language model for multi-granular motion comprehension and generation. We introduce a comprehensive multi-granularity training scheme by incorporating a set of novel auxiliary tasks. Our MG-MotionLLM achieves superior performance on classical text-to-motion and motion-to-text tasks.
arXiv Detail & Related papers (2025-04-03T10:53:41Z) - SynerGen-VL: Towards Synergistic Image Understanding and Generation with Vision Experts and Token Folding [66.74446220401296]
We propose SynerGen-VL, a simple yet powerful encoder-free MLLM capable of both image understanding and generation. We introduce the token folding mechanism and the vision-expert-based progressive alignment pretraining strategy, which effectively support high-resolution image understanding. Our code and models shall be released.
arXiv Detail & Related papers (2024-12-12T18:59:26Z) - MoTe: Learning Motion-Text Diffusion Model for Multiple Generation Tasks [30.333659816277823]
We present MoTe, a unified multi-modal model that can handle diverse tasks by learning the marginal, conditional, and joint distributions of motion and text simultaneously. MoTe is composed of three components: Motion Encoder-Decoder (MED), Text Encoder-Decoder (TED), and Motion-Text Diffusion Model (MTDM).
arXiv Detail & Related papers (2024-11-29T15:48:24Z) - VersatileMotion: A Unified Framework for Motion Synthesis and Comprehension [26.172040706657235]
We introduce VersatileMotion, a unified motion LLM that combines a novel motion tokenizer, integrating VQ-VAE with flow matching, and an autoregressive transformer backbone. VersatileMotion is the first method to handle single-agent and multi-agent motions in a single framework, achieving state-of-the-art performance on seven of these tasks.
arXiv Detail & Related papers (2024-11-26T11:28:01Z) - Human Motion Instruction Tuning [37.3026760535819]
This paper presents LLaMo, a framework for human motion instruction tuning. LLaMo retains motion in its native form for instruction tuning. By processing both video and motion data alongside textual inputs, LLaMo enables a flexible, human-centric analysis.
arXiv Detail & Related papers (2024-11-25T14:38:43Z) - MotionGPT-2: A General-Purpose Motion-Language Model for Motion Generation and Understanding [76.30210465222218]
MotionGPT-2 is a unified Large Motion-Language Model (LMLM).
It supports multimodal control conditions through pre-trained Large Language Models (LLMs).
It is highly adaptable to the challenging 3D holistic motion generation task.
arXiv Detail & Related papers (2024-10-29T05:25:34Z) - Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs [67.59291068131438]
Motion-Agent is a conversational framework designed for general human motion generation, editing, and understanding.
Motion-Agent employs an open-source pre-trained language model to develop a generative agent, MotionLLM, that bridges the gap between motion and text.
arXiv Detail & Related papers (2024-05-27T09:57:51Z) - DiverseMotion: Towards Diverse Human Motion Generation via Discrete Diffusion [70.33381660741861]
We present DiverseMotion, a new approach for synthesizing high-quality human motions conditioned on textual descriptions.
We show that DiverseMotion achieves state-of-the-art motion quality and competitive motion diversity.
arXiv Detail & Related papers (2023-09-04T05:43:48Z)