Temporal Consistency-Aware Text-to-Motion Generation
- URL: http://arxiv.org/abs/2602.18057v1
- Date: Fri, 20 Feb 2026 08:17:01 GMT
- Title: Temporal Consistency-Aware Text-to-Motion Generation
- Authors: Hongsong Wang, Wenjing Yan, Qiuxia Lai, Xin Geng
- Abstract summary: We propose TCA-T2M, a framework for temporal consistency-aware T2M generation. Our approach introduces a temporal consistency-aware spatial VQ-VAE for cross-sequence temporal alignment. Experiments on HumanML3D and KIT-ML benchmarks demonstrate that TCA-T2M achieves state-of-the-art performance.
- Score: 41.71400323450202
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-to-Motion (T2M) generation aims to synthesize realistic human motion sequences from natural language descriptions. While two-stage frameworks leveraging discrete motion representations have advanced T2M research, they often neglect cross-sequence temporal consistency, i.e., the shared temporal structures present across different instances of the same action. This leads to semantic misalignments and physically implausible motions. To address this limitation, we propose TCA-T2M, a framework for temporal consistency-aware T2M generation. Our approach introduces a temporal consistency-aware spatial VQ-VAE (TCaS-VQ-VAE) for cross-sequence temporal alignment, coupled with a masked motion transformer for text-conditioned motion generation. Additionally, a kinematic constraint block mitigates discretization artifacts to ensure physical plausibility. Experiments on HumanML3D and KIT-ML benchmarks demonstrate that TCA-T2M achieves state-of-the-art performance, highlighting the importance of temporal consistency in robust and coherent T2M generation.
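As a rough illustration of the discrete motion representation such two-stage frameworks rely on, the sketch below shows the core VQ-VAE quantization step (nearest-codebook lookup with a straight-through estimator) in PyTorch. It is a minimal sketch only: the codebook size, latent dimension, and commitment weight are assumed values, and it is not the authors' TCaS-VQ-VAE.

```python
# Minimal VQ bottleneck sketch (illustrative; not the paper's TCaS-VQ-VAE).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, codebook_size: int = 512, dim: int = 256, beta: float = 0.25):
        super().__init__()
        # Assumed hyperparameters; the abstract does not specify them.
        self.codebook = nn.Embedding(codebook_size, dim)
        self.codebook.weight.data.uniform_(-1.0 / codebook_size, 1.0 / codebook_size)
        self.beta = beta  # commitment-loss weight

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, frames, dim) continuous motion latents from an encoder.
        flat = z_e.reshape(-1, z_e.shape[-1])
        # Squared Euclidean distance from each latent to every codebook entry.
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2.0 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))
        codes = dists.argmin(dim=1)                 # discrete motion tokens
        z_q = self.codebook(codes).view_as(z_e)
        # Codebook and commitment losses; straight-through estimator for gradients.
        vq_loss = (F.mse_loss(z_q, z_e.detach())
                   + self.beta * F.mse_loss(z_e, z_q.detach()))
        z_q = z_e + (z_q - z_e).detach()
        return z_q, codes.view(z_e.shape[:-1]), vq_loss
```

A masked motion transformer is then trained to predict masked token indices conditioned on the text embedding; the cross-sequence temporal alignment the paper proposes would operate on these token sequences across different instances of the same action.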
Related papers
- T2M Mamba: Motion Periodicity-Saliency Coupling Approach for Stable Text-Driven Motion Generation [3.6564162676635363]
Text-to-motion generation has attracted increasing attention in fields such as avatar animation and humanoid robot interaction. Existing models treat motion periodicity and saliency as independent factors, overlooking their coupling and causing generation drift in long sequences. We propose T2M Mamba to address these limitations by introducing a Periodicity-Saliency-Aware Mamba.
arXiv Detail & Related papers (2026-02-01T17:42:53Z)
- TaCo: Capturing Spatio-Temporal Semantic Consistency in Remote Sensing Change Detection [54.22717266034045]
TaCo is a semantic-consistency network that models temporal semantic transitions. We show that TaCo consistently achieves state-of-the-art performance on remote sensing change detection tasks. This design yields substantial gains without any additional computational overhead during inference.
arXiv Detail & Related papers (2025-11-25T13:44:29Z)
- Bidirectional Feature-aligned Motion Transformation for Efficient Dynamic Point Cloud Compression [97.66080040613726]
We propose a Bidirectional Feature-aligned Motion Transformation (Bi-FMT) framework that implicitly models motion in the feature space. Bi-FMT aligns features across both past and future frames to produce temporally consistent latent representations. We show that Bi-FMT surpasses D-DPCC and AdaDPCC in both compression efficiency and runtime.
arXiv Detail & Related papers (2025-09-18T03:51:06Z)
- Identity-Preserving Text-to-Video Generation Guided by Simple yet Effective Spatial-Temporal Decoupled Representations [131.33758144860988]
Identity-preserving text-to-video (IPT2V) generation aims to create high-fidelity videos with consistent human identity. Current end-to-end frameworks suffer from a critical spatial-temporal trade-off. We propose a simple yet effective spatial-temporal decoupled framework that decomposes representations into spatial features for layouts and temporal features for motion dynamics.
arXiv Detail & Related papers (2025-07-07T06:54:44Z)
- MotionGPT3: Human Motion as a Second Modality [28.616340011811843]
MotionGPT3 is a bimodal motion-language model for both understanding and generation. A dual-stream Transformer with shared attention preserves modality-specific routes while enabling controlled, bidirectional information flow. Experiments show that MotionGPT3 achieves 2x faster convergence in training loss and up to 4x faster convergence in validation.
arXiv Detail & Related papers (2025-06-30T17:42:22Z)
- TAMMs: Temporal-Aware Multimodal Model for Satellite Image Change Understanding and Forecasting [22.01157165112828]
We introduce TAMMs, the first unified framework designed to jointly perform TCD and FSIF within a single MLLM-diffusion architecture. TAMMs introduces two key innovations: Temporal Adaptation Modules (TAM) that enhance the frozen MLLM's ability to comprehend long-range dynamics, and a Semantic-Fused Control Injection (SFCI) mechanism that translates this change understanding into fine-grained generative control. Extensive experiments demonstrate that TAMMs significantly outperforms state-of-the-art specialist baselines on both tasks.
arXiv Detail & Related papers (2025-06-23T17:26:16Z)
- STaR: Seamless Spatial-Temporal Aware Motion Retargeting with Penetration and Consistency Constraints [12.307413108334657]
We propose STaR, a novel sequence-to-sequence model for spatial-temporal motion retargeting. STaR consists of two modules: (1) a spatial module that incorporates dense shape representation and a novel limb penetration constraint to ensure geometric plausibility while preserving motion semantics, and (2) a temporal module that utilizes a temporal transformer and a temporal consistency constraint to predict the entire motion sequence at once while enforcing multi-level trajectory smoothness (a minimal sketch of such a smoothness penalty appears after this list).
arXiv Detail & Related papers (2025-04-09T00:37:08Z)
- ReCoM: Realistic Co-Speech Motion Generation with Recurrent Embedded Transformer [58.49950218437718]
We present ReCoM, an efficient framework for generating high-fidelity and generalizable human body motions synchronized with speech. The core innovation lies in the Recurrent Embedded Transformer (RET), which integrates Dynamic Embedding Regularization (DER) into a Vision Transformer (ViT) core architecture. To enhance model robustness, we incorporate the proposed DER strategy, which equips the model with dual capabilities of noise resistance and cross-domain generalization.
arXiv Detail & Related papers (2025-03-27T16:39:40Z)
- Enhancing Low-Cost Video Editing with Lightweight Adaptors and Temporal-Aware Inversion [28.38307253613529]
We propose a framework that integrates temporal-spatial and semantic consistency with bilateral DDIM inversion (a schematic inversion step appears after this list). Our method significantly improves perceptual quality, text-image alignment, and temporal coherence, as demonstrated on the MSR-VTT dataset.
arXiv Detail & Related papers (2025-01-08T16:41:31Z)
- Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation [49.298187741014345]
Current methods intertwine spatial content and temporal dynamics, increasing the complexity of text-to-video (T2V) generation.
We propose HiGen, a diffusion model-based method that improves performance by decoupling the spatial and temporal factors of videos from two perspectives.
arXiv Detail & Related papers (2023-12-07T17:59:07Z)
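Two mechanisms recur across the entries above and are standard enough to sketch. First, temporal consistency constraints of the kind STaR enforces (and that TCA-T2M's kinematic constraint block is in the spirit of) are commonly implemented as finite-difference smoothness penalties on predicted joint trajectories. A minimal sketch; the tensor shapes are assumptions:

```python
# Generic trajectory-smoothness penalty (illustrative; not any paper's exact loss).
import torch

def smoothness_loss(motion: torch.Tensor) -> torch.Tensor:
    # motion: (batch, frames, joints, 3) predicted joint positions.
    vel = motion[:, 1:] - motion[:, :-1]   # first difference ~ velocity
    acc = vel[:, 1:] - vel[:, :-1]         # second difference ~ acceleration
    # Penalizing acceleration suppresses frame-to-frame jitter;
    # the velocity term additionally discourages abrupt drift.
    return vel.pow(2).mean() + acc.pow(2).mean()
```

Second, the low-cost video-editing entry builds on DDIM inversion, the standard procedure that runs the deterministic DDIM update in reverse to recover a real sample's latent noise. A schematic single step, where `eps_model` (a noise-prediction network) and the 1-D tensor `alphas_cumprod` are assumed inputs:

```python
# Schematic DDIM inversion step (standard eta=0 formulation; names are illustrative).
def ddim_inversion_step(x_t, t, t_next, eps_model, alphas_cumprod):
    a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]  # cumulative alphas
    eps = eps_model(x_t, t)                                   # predicted noise at t
    # Implied clean sample, then re-noise toward the noisier timestep t_next
    # along the deterministic DDIM trajectory.
    x0_pred = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_next.sqrt() * x0_pred + (1.0 - a_next).sqrt() * eps
```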