DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics
- URL: http://arxiv.org/abs/2309.13742v1
- Date: Sun, 24 Sep 2023 20:25:59 GMT
- Title: DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics
- Authors: Yifeng Jiang, Jungdam Won, Yuting Ye, C. Karen Liu
- Abstract summary: We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.
- Score: 21.00283279991885
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Synthesizing realistic human movements, dynamically responsive to the
environment, is a long-standing objective in character animation, with
applications in computer vision, sports, and healthcare, for motion prediction
and data augmentation. Recent kinematics-based generative motion models offer
impressive scalability in modeling extensive motion data, albeit without an
interface to reason about and interact with physics. While
simulator-in-the-loop learning approaches enable highly physically realistic
behaviors, the challenges in training often affect scalability and adoption. We
introduce DROP, a novel framework for modeling Dynamics Responses of humans
using generative mOtion prior and Projective dynamics. DROP can be viewed as a
highly stable, minimalist physics-based human simulator that interfaces with a
kinematics-based generative motion prior. Utilizing projective dynamics, DROP
allows flexible and simple integration of the learned motion prior as one of
the projective energies, seamlessly incorporating control provided by the
motion prior with Newtonian dynamics. Serving as a model-agnostic plug-in, DROP
enables us to fully leverage recent advances in generative motion models for
physics-based motion synthesis. We conduct extensive evaluations of our model
across different motion tasks and various physical perturbations, demonstrating
the scalability and diversity of responses.
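The projective-dynamics formulation is what makes the motion prior easy to plug in: every energy is a quadratic of the form (w_i/2)||S_i x - p_i||^2, so the prior's predicted pose can be supplied as one more projection target p_i and absorbed into the same local-global solve as the Newtonian terms. The sketch below illustrates that idea on a toy particle-spring system; the function names, energy weights, and the stand-in prior target are illustrative assumptions inferred from the abstract, not the paper's actual character model or implementation.

```python
# Illustrative sketch only: a toy particle-spring system stepped with
# projective dynamics, where a kinematic motion prior contributes one more
# quadratic energy pulling the state toward its predicted pose. Weights,
# shapes, and the prior itself (here just an array of target positions)
# are hypothetical; DROP's actual human model and prior are more involved.
import numpy as np

def pd_step(x, v, masses, edges, rest_lengths, prior_target,
            h=1.0 / 60.0, w_edge=1e4, w_prior=1e2,
            gravity=(0.0, -9.81, 0.0), iters=10):
    """One implicit-Euler step solved with local-global iterations."""
    n = x.shape[0]
    M = np.repeat(masses, 3)                      # lumped diagonal mass matrix
    f_ext = np.tile(np.asarray(gravity), n) * M   # gravity only, for brevity
    x0 = x.reshape(-1)
    inertia = x0 + h * v.reshape(-1) + (h * h) * f_ext / M

    # Constant global matrix: M/h^2 + sum_i w_i * S_i^T S_i
    # (spring energies act on x_i - x_j, the prior energy acts on x directly).
    A = np.diag(M / (h * h)) + w_prior * np.eye(3 * n)
    for (i, j) in edges:
        for k in range(3):
            A[3*i + k, 3*i + k] += w_edge
            A[3*j + k, 3*j + k] += w_edge
            A[3*i + k, 3*j + k] -= w_edge
            A[3*j + k, 3*i + k] -= w_edge

    x_new = x0.copy()
    prior_flat = prior_target.reshape(-1)
    for _ in range(iters):
        b = (M / (h * h)) * inertia
        # Local step: project every spring onto its rest length.
        for (i, j), L in zip(edges, rest_lengths):
            d = x_new[3*i:3*i+3] - x_new[3*j:3*j+3]
            p = d * (L / (np.linalg.norm(d) + 1e-9))
            b[3*i:3*i+3] += w_edge * p
            b[3*j:3*j+3] -= w_edge * p
        # The motion prior as a projective energy: its predicted pose is the
        # projection target, so learned control and Newtonian dynamics share
        # one solve.
        b += w_prior * prior_flat
        # Global step: one linear solve with a constant, prefactorable matrix.
        x_new = np.linalg.solve(A, b)

    v_new = (x_new - x0) / h
    return x_new.reshape(n, 3), v_new.reshape(n, 3)

# Toy usage: two particles joined by a spring, with a stand-in prior target.
x = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
v = np.zeros_like(x)
target = x + 0.05                                  # placeholder for a learned prior
x, v = pd_step(x, v, masses=np.array([1.0, 1.0]),
               edges=[(0, 1)], rest_lengths=np.array([1.0]),
               prior_target=target)
```

Because the global matrix is constant across frames, it can be prefactored once, which is a standard property of projective dynamics and plausibly underlies the stability and simplicity the abstract emphasizes.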
Related papers
- Pre-Trained Video Generative Models as World Simulators [59.546627730477454]
We propose Dynamic World Simulation (DWS) to transform pre-trained video generative models into controllable world simulators.
To achieve precise alignment between conditioned actions and generated visual changes, we introduce a lightweight, universal action-conditioned module.
Experiments demonstrate that DWS can be applied to both diffusion and autoregressive transformer models.
arXiv Detail & Related papers (2025-02-10T14:49:09Z)
- FlexMotion: Lightweight, Physics-Aware, and Controllable Human Motion Generation [12.60677181866807]
We propose a novel framework that leverages a computationally lightweight diffusion model operating in the latent space.
FlexMotion employs a multimodal pre-trained Transformer encoder-decoder, integrating joint locations, contact forces, joint actuations and muscle activations.
We evaluate FlexMotion on extended datasets and demonstrate its superior performance in terms of realism, physical plausibility, and controllability.
arXiv Detail & Related papers (2025-01-28T08:02:21Z)
- InterDyn: Controllable Interactive Dynamics with Video Diffusion Models [50.38647583839384]
We propose InterDyn, a framework that generates videos of interactive dynamics given an initial frame and a control signal encoding the motion of a driving object or actor.
Our key insight is that large video foundation models can act as both neural renderers and implicit physics simulators by learning interactive dynamics from large-scale video data.
arXiv Detail & Related papers (2024-12-16T13:57:02Z)
- Morph: A Motion-free Physics Optimization Framework for Human Motion Generation [25.51726849102517]
Our framework achieves state-of-the-art motion generation quality while drastically improving physical plausibility.
Experiments on text-to-motion and music-to-dance generation tasks confirm these state-of-the-art results.
arXiv Detail & Related papers (2024-11-22T14:09:56Z)
- Optimal-state Dynamics Estimation for Physics-based Human Motion Capture from Videos [6.093379844890164]
We propose a novel method to selectively incorporate the physics models with the kinematics observations in an online setting.
A recurrent neural network is introduced to realize a Kalman filter that attentively balances the kinematics input and simulated motion.
The proposed approach excels in the physics-based human pose estimation task and demonstrates the physical plausibility of the predictive dynamics.
arXiv Detail & Related papers (2024-10-10T10:24:59Z)
- PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation [62.53760963292465]
PhysDreamer is a physics-based approach that endows static 3D objects with interactive dynamics.
We present our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study.
arXiv Detail & Related papers (2024-04-19T17:41:05Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured human-scene interaction (HSI) dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- Physics-based Human Motion Estimation and Synthesis from Videos [0.0]
We propose a framework for training generative models of physically plausible human motion directly from monocular RGB videos.
At the core of our method is a novel optimization formulation that corrects imperfect image-based pose estimations.
Results show that our physically-corrected motions significantly outperform prior work on pose estimation.
arXiv Detail & Related papers (2021-09-21T01:57:54Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state of the art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.