Conditional Motion In-betweening
- URL: http://arxiv.org/abs/2202.04307v1
- Date: Wed, 9 Feb 2022 06:47:56 GMT
- Title: Conditional Motion In-betweening
- Authors: Jihoon Kim, Taehyun Byun, Seungyoun Shin, Jungdam Won, Sungjoon Choi
- Abstract summary: Motion in-betweening (MIB) is a process of generating intermediate skeletal movement between the given start and target poses.
We focus on a method that can handle pose- or semantic-conditioned MIB tasks using a unified model.
We also present a motion augmentation method to improve the quality of pose-conditioned motion generation.
- Score: 19.470778961694453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motion in-betweening (MIB) is a process of generating intermediate skeletal
movement between the given start and target poses while preserving the
naturalness of the motion, such as periodic footstep motion while walking.
Although state-of-the-art MIB methods are capable of producing plausible
motions given sparse key-poses, they often lack the controllability to generate
motions satisfying the semantic contexts required in practical applications. We
focus on a method that can handle both pose- and semantic-conditioned MIB tasks
with a unified model. We also present a motion augmentation method that improves
the quality of pose-conditioned motion generation by defining a distribution
over smooth trajectories. Our proposed method outperforms the existing
state-of-the-art MIB method in pose prediction error while providing
additional controllability.
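The augmentation idea sketched in the abstract (a distribution over smooth trajectories) can be illustrated with a short, hedged example: draw a few random control points, interpolate them with a cubic spline into a smooth perturbation pinned to zero at both ends, and add it to a base trajectory between the start and target. This is an illustrative reconstruction, not the paper's exact formulation; `n_ctrl` and `scale` are hypothetical parameters.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sample_smooth_perturbation(n_frames, n_ctrl=5, scale=0.05, dim=3, rng=None):
    """Draw one sample from a distribution over smooth trajectories:
    Gaussian control points, cubic-spline interpolated, pinned to zero
    at both ends so the given start and target poses are preserved."""
    rng = np.random.default_rng(rng)
    t_ctrl = np.linspace(0.0, 1.0, n_ctrl)
    ctrl = rng.normal(0.0, scale, size=(n_ctrl, dim))
    ctrl[0] = ctrl[-1] = 0.0                       # keep the endpoints fixed
    spline = CubicSpline(t_ctrl, ctrl, axis=0)
    t = np.linspace(0.0, 1.0, n_frames)
    return spline(t)                               # (n_frames, dim) smooth offsets

# Augment a straight-line root trajectory between a start and target position.
start, target = np.zeros(3), np.array([2.0, 0.0, 1.0])
base = np.linspace(start, target, 60)              # 60-frame linear trajectory
augmented = base + sample_smooth_perturbation(60)
```

Pinning the perturbation to zero at the endpoints keeps the augmented sample consistent with the pose-conditioning constraints while diversifying the in-between motion.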
Related papers
- Motion Matters: Motion-guided Modulation Network for Skeleton-based Micro-Action Recognition [26.997350207742034]
Micro-Actions (MAs) are an important form of non-verbal communication in social interactions. Existing methods in Micro-Action Recognition often overlook the inherent subtle changes in MAs. We present a novel Motion-guided Modulation Network (MMN) that implicitly captures and modulates subtle motion cues.
arXiv Detail & Related papers (2025-07-29T16:27:10Z) - GENMO: A GENeralist Model for Human MOtion [64.16188966024542]
We present GENMO, a unified Generalist Model for Human Motion that bridges motion estimation and generation in a single framework. Our key insight is to reformulate motion estimation as constrained motion generation, where the output motion must precisely satisfy observed conditioning signals. Our novel architecture handles variable-length motions and mixed multimodal conditions (text, audio, video) at different time intervals, offering flexible control.
arXiv Detail & Related papers (2025-05-02T17:59:55Z) - Motion-Aware Generative Frame Interpolation [23.380470636851022]
Flow-based frame interpolation methods ensure motion stability through estimated intermediate flow but often introduce severe artifacts in complex motion regions.
Recent generative approaches, boosted by large-scale pre-trained video generation models, show promise in handling intricate scenes.
We propose Motion-aware Generative frame interpolation (MoG), which synergizes intermediate flow guidance with generative capacities to enhance fidelity.
arXiv Detail & Related papers (2025-01-07T11:03:43Z) - A Plug-and-Play Physical Motion Restoration Approach for In-the-Wild High-Difficulty Motions [56.709280823844374]
We introduce a mask-based motion correction module (MCM) that leverages motion context and video mask to repair flawed motions.
We also propose a physics-based motion transfer module (PTM), which employs a pretrain-and-adapt approach for motion imitation.
Our approach is designed as a plug-and-play module to physically refine the video motion capture results, including high-difficulty in-the-wild motions.
arXiv Detail & Related papers (2024-12-23T08:26:00Z) - MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms [12.621553130655945]
We develop a versatile set of simple yet effective motion editing methods via manipulating attention maps.
Our method enjoys strong generation and editing ability with good explainability.
arXiv Detail & Related papers (2024-10-24T17:59:45Z) - Generalizable Implicit Motion Modeling for Video Frame Interpolation [51.966062283735596]
Motion is critical in flow-based Video Frame Interpolation (VFI).
We introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI.
Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion.
arXiv Detail & Related papers (2024-07-11T17:13:15Z) - Flexible Motion In-betweening with Diffusion Models [16.295323675781184]
We investigate the potential of diffusion models in generating diverse human motions guided by keyframes.
Unlike previous in-betweening methods, we propose a simple unified model (CondMDI) capable of generating precise and diverse motions.
We evaluate the performance of CondMDI on the text-conditioned HumanML3D dataset.
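For context, a common baseline for keyframe-guided diffusion in-betweening is replacement-based imputation: run an ordinary reverse-diffusion step, then overwrite the observed frames with a copy of the keyframes noised to the matching level. CondMDI argues for learned conditioning instead, so the sketch below illustrates the baseline idea only; `denoise_step` and `q_sample` are hypothetical stand-ins for a reverse step and the forward noising process.

```python
import torch

@torch.no_grad()
def imputation_step(denoise_step, q_sample, x_t, t, keyframes, mask):
    """One replacement-based reverse-diffusion step for in-betweening.
    mask is 1 on observed (keyframe) frames and 0 on frames to generate."""
    x_prev = denoise_step(x_t, t)           # ordinary reverse step (assumed API)
    x_obs = q_sample(keyframes, t - 1)      # keyframes diffused to level t-1
    return mask * x_obs + (1.0 - mask) * x_prev
```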
arXiv Detail & Related papers (2024-05-17T23:55:51Z) - MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model [29.93359157128045]
This work introduces MotionLCM, extending controllable motion generation to a real-time level.
We first propose the motion latent consistency model (MotionLCM) for motion generation, building upon the latent diffusion model.
By adopting one-step (or few-step) inference, we further improve the runtime efficiency of the motion latent diffusion model for motion generation.
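The one-step or few-step inference mentioned above follows the generic consistency-model sampler: jump from pure noise to a clean estimate in one call, then optionally re-noise to a smaller level and denoise again. This is a generic sketch rather than MotionLCM's exact latent-space pipeline; `f_theta` is an assumed consistency function mapping a noisy sample and its noise level to a clean estimate.

```python
import torch

@torch.no_grad()
def consistency_sample(f_theta, shape, sigmas=(80.0, 5.0, 0.5), device="cpu"):
    """Few-step consistency sampling: one denoising jump from pure noise,
    then alternate re-noising to a smaller sigma with another jump."""
    x = torch.randn(shape, device=device) * sigmas[0]
    x0 = f_theta(x, sigmas[0])                     # one-step result
    for sigma in sigmas[1:]:
        x = x0 + sigma * torch.randn_like(x0)      # re-noise the estimate
        x0 = f_theta(x, sigma)                     # denoise again
    return x0
```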
arXiv Detail & Related papers (2024-04-30T17:59:47Z) - Motion-aware Latent Diffusion Models for Video Frame Interpolation [51.78737270917301]
Motion estimation between neighboring frames plays a crucial role in avoiding motion ambiguity.
We propose a novel diffusion framework, motion-aware latent diffusion models (MADiff).
Our method achieves state-of-the-art performance, significantly outperforming existing approaches.
arXiv Detail & Related papers (2024-04-21T05:09:56Z) - Motion Flow Matching for Human Motion Synthesis and Editing [75.13665467944314]
We propose Motion Flow Matching, a novel generative model for human motion generation featuring efficient sampling and effectiveness in motion editing applications.
Our method reduces the sampling complexity from a thousand steps in previous diffusion models to just ten steps, while achieving comparable performance in text-to-motion and action-to-motion generation benchmarks.
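The ten-step claim corresponds to integrating a learned velocity field with a coarse ODE solver. A minimal Euler sketch follows, under the assumption of a trained network `velocity_net(x, t)` that predicts dx/dt; this is illustrative of flow matching in general, not the paper's specific architecture.

```python
import torch

@torch.no_grad()
def flow_matching_sample(velocity_net, shape, n_steps=10, device="cpu"):
    """Sample by Euler-integrating dx/dt = v_theta(x, t) from noise (t=0)
    toward data (t=1) in a handful of steps."""
    x = torch.randn(shape, device=device)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + velocity_net(x, t) * dt            # one Euler step
    return x
```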
arXiv Detail & Related papers (2023-12-14T12:57:35Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - Guided Motion Diffusion for Controllable Human Motion Synthesis [18.660523853430497]
We propose Guided Motion Diffusion (GMD), a method that incorporates spatial constraints into the motion generation process.
Specifically, we propose an effective feature projection scheme that manipulates motion representation to enhance the coherency between spatial information and local poses.
Our experiments justify the development of GMD, which achieves a significant improvement over state-of-the-art methods in text-based motion generation.
arXiv Detail & Related papers (2023-05-21T21:54:31Z) - Data-Driven Stochastic Motion Evaluation and Optimization with Image by Spatially-Aligned Temporal Encoding [8.104557130048407]
This paper proposes a probabilistic motion prediction method for long motions. The motion is predicted so that it accomplishes a task from the initial state observed in the given image.
Our method seamlessly integrates the image and motion data into the image feature domain by spatially-aligned temporal encoding.
The effectiveness of the proposed method is demonstrated through a variety of experiments in comparison with similar SOTA methods.
arXiv Detail & Related papers (2023-02-10T04:06:00Z) - MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis [73.52948992990191]
MoFusion is a new denoising-diffusion-based framework for high-quality conditional human motion synthesis.
We present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework.
We demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature.
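A typical kinematic loss of the kind MoFusion describes is a skeleton-consistency (bone-length) term that penalizes limb stretching in generated motion. The sketch below is a generic example of such a loss, not necessarily the paper's exact formulation; `bones` is an assumed list of (parent, child) joint indices.

```python
import torch
import torch.nn.functional as F

def bone_length_loss(pred, gt, bones):
    """Penalize predicted bone lengths that deviate from ground truth.
    pred, gt: (batch, frames, joints, 3) joint positions."""
    def lengths(x):
        return torch.stack(
            [(x[..., c, :] - x[..., p, :]).norm(dim=-1) for p, c in bones],
            dim=-1,
        )
    return F.l1_loss(lengths(pred), lengths(gt))
```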
arXiv Detail & Related papers (2022-12-08T18:59:48Z) - Executing your Commands via Motion Diffusion in Latent Space [51.64652463205012]
We propose a Motion Latent-based Diffusion model (MLD) to produce vivid motion sequences conforming to the given conditional inputs.
Our MLD achieves significant improvements over the state-of-the-art methods across extensive human motion generation tasks.
arXiv Detail & Related papers (2022-12-08T03:07:00Z)