Improving Human Motion Plausibility with Body Momentum
- URL: http://arxiv.org/abs/2509.09496v1
- Date: Thu, 11 Sep 2025 14:34:32 GMT
- Title: Improving Human Motion Plausibility with Body Momentum
- Authors: Ha Linh Nguyen, Tze Ho Elden Tse, Angela Yao
- Abstract summary: Many studies decompose human motion into local motion in a frame attached to the root joint and global motion of the root joint in the world frame. We propose using whole-body linear and angular momentum as a constraint to link local motion with global movement. We introduce a new loss term that enforces consistency between the generated momentum profiles and those observed in ground-truth data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many studies decompose human motion into local motion in a frame attached to the root joint and global motion of the root joint in the world frame, treating them separately. However, these two components are not independent. Global movement arises from interactions with the environment, which are, in turn, driven by changes in the body configuration. Motion models often fail to precisely capture this physical coupling between local and global dynamics, while deriving global trajectories from joint torques and external forces is computationally expensive and complex. To address these challenges, we propose using whole-body linear and angular momentum as a constraint to link local motion with global movement. Since momentum reflects the aggregate effect of joint-level dynamics on the body's movement through space, it provides a physically grounded way to relate local joint behavior to global displacement. Building on this insight, we introduce a new loss term that enforces consistency between the generated momentum profiles and those observed in ground-truth data. Incorporating our loss reduces foot sliding and jitter, improves balance, and preserves the accuracy of the recovered motion. Code and data are available at the project page https://hlinhn.github.io/momentum_bmvc.
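The momentum-consistency idea in the abstract can be sketched numerically: approximate whole-body linear and angular momentum from joint trajectories, then penalize the L2 gap between generated and ground-truth momentum profiles. The per-joint mass assignment and the L2 form below are illustrative assumptions, not the authors' implementation (see the project page for their code).

```python
import numpy as np

def body_momentum(joint_pos, joint_masses, dt):
    """Approximate whole-body linear and angular momentum from joint
    trajectories via finite differences.
    joint_pos: (T, J, 3) joint positions over T frames.
    joint_masses: (J,) per-joint mass assignment (an assumption; the
    paper may use a different body mass model).
    dt: frame interval in seconds."""
    vel = np.diff(joint_pos, axis=0) / dt            # (T-1, J, 3) joint velocities
    masses = joint_masses[:, None]                   # (J, 1) for broadcasting
    # Linear momentum: sum over joints of m_j * v_j
    lin = (masses * vel).sum(axis=1)                 # (T-1, 3)
    # Angular momentum about the center of mass: sum of r_j x (m_j * v_j)
    com = (masses * joint_pos).sum(axis=1) / joint_masses.sum()  # (T, 3)
    r = joint_pos[:-1] - com[:-1, None]              # (T-1, J, 3) offsets from CoM
    ang = np.cross(r, masses * vel).sum(axis=1)      # (T-1, 3)
    return lin, ang

def momentum_loss(pred_pos, gt_pos, joint_masses, dt):
    """L2 consistency between generated and ground-truth momentum
    profiles -- a sketch of the loss described in the abstract."""
    lin_p, ang_p = body_momentum(pred_pos, joint_masses, dt)
    lin_g, ang_g = body_momentum(gt_pos, joint_masses, dt)
    return np.mean((lin_p - lin_g) ** 2) + np.mean((ang_p - ang_g) ** 2)
```

Because momentum aggregates joint-level dynamics into a single 3-vector pair per frame, this loss couples local pose changes to global displacement without requiring joint torques or explicit contact forces.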
Related papers
- CLOT: Closed-Loop Global Motion Tracking for Whole-Body Humanoid Teleoperation [54.7399209456857]
We present CLOT, a real-time whole-body humanoid teleoperation system that achieves closed-loop global motion tracking. CLOT synchronizes operator and robot poses in a closed loop, enabling drift-free human-to-humanoid mimicry over long time horizons. We propose a data-driven randomization strategy that decouples observation trajectories from reward evaluation, enabling smooth and stable global corrections.
arXiv Detail & Related papers (2026-02-13T12:03:13Z)
- MoReact: Generating Reactive Motion from Textual Descriptions [57.642436102978245]
MoReact is a diffusion-based method designed to disentangle the generation of global trajectories and local motions sequentially. Our experiments, utilizing data adapted from a two-person motion dataset, demonstrate the efficacy of our approach.
arXiv Detail & Related papers (2025-09-28T14:31:41Z)
- CoDA: Coordinated Diffusion Noise Optimization for Whole-Body Manipulation of Articulated Objects [14.230098033626744]
Whole-body manipulation of articulated objects is a critical yet challenging task with broad applications in virtual humans and robotics. We propose a novel coordinated diffusion noise optimization framework to achieve realistic whole-body motion. We conduct extensive experiments demonstrating that our method outperforms existing approaches in motion quality and physical plausibility.
arXiv Detail & Related papers (2025-05-27T17:11:50Z)
- Diffgrasp: Whole-Body Grasping Synthesis Guided by Object Motion Using a Diffusion Model [25.00532805042292]
We propose a simple yet effective framework that jointly models the relationship between the body, hands, and the given object motion sequences. We introduce novel contact-aware losses and incorporate a data-driven, carefully designed guidance. Experimental results demonstrate that our approach outperforms the state-of-the-art method and generates plausible whole-body motion sequences.
arXiv Detail & Related papers (2024-12-30T02:21:43Z)
- KinMo: Kinematic-aware Human Motion Understanding and Generation [6.962697597686156]
Current human motion synthesis frameworks rely on global action descriptions. A single coarse description, such as "run", fails to capture details such as variations in speed, limb positioning, and kinematic dynamics. We introduce KinMo, a unified framework built on a hierarchical describable motion representation.
arXiv Detail & Related papers (2024-11-23T06:50:11Z)
- Local Action-Guided Motion Diffusion Model for Text-to-Motion Generation [52.87672306545577]
Existing motion generation methods primarily focus on the direct synthesis of global motions.
We propose the local action-guided motion diffusion model, which facilitates global motion generation by utilizing local actions as fine-grained control signals.
Our method provides flexibility in seamlessly combining various local actions and continuous guiding weight adjustment.
arXiv Detail & Related papers (2024-07-15T08:35:00Z)
- GraMMaR: Ground-aware Motion Model for 3D Human Motion Reconstruction [61.833152949826946]
We propose a novel Ground-aware Motion Model for 3D Human Motion Reconstruction, named GraMMaR.
GraMMaR learns the distribution of transitions in both pose and interaction between every joint and ground plane at each time step of a motion sequence.
It is trained to explicitly promote consistency between the motion and distance change towards the ground.
arXiv Detail & Related papers (2023-06-29T07:22:20Z)
- Global-local Motion Transformer for Unsupervised Skeleton-based Action Learning [23.051184131833292]
We propose a new transformer model for the task of unsupervised learning of skeleton motion sequences.
The proposed model successfully learns local dynamics of the joints and captures global context from the motion sequences.
arXiv Detail & Related papers (2022-07-13T10:18:07Z)
- GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras [99.07219478953982]
We present an approach for 3D global human mesh recovery from monocular videos recorded with dynamic cameras.
We first propose a deep generative motion infiller, which autoregressively infills the body motions of occluded humans based on visible motions.
In contrast to prior work, our approach reconstructs human meshes in consistent global coordinates even with dynamic cameras.
arXiv Detail & Related papers (2021-12-02T18:59:54Z)
- An Attractor-Guided Neural Networks for Skeleton-Based Human Motion Prediction [0.4568777157687961]
Joint modeling is a crucial component in human motion prediction.
We learn a medium, called the balance attractor (BA), from temporal features to characterize the global motion features.
Through the BA, all joints are related synchronously, and thus the global coordination of all joints can be better learned.
arXiv Detail & Related papers (2021-05-20T12:51:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.