PaCMO: Partner Dependent Human Motion Generation in Dyadic Human Activity using Neural Operators
- URL: http://arxiv.org/abs/2211.16210v1
- Date: Fri, 25 Nov 2022 22:20:11 GMT
- Title: PaCMO: Partner Dependent Human Motion Generation in Dyadic Human Activity using Neural Operators
- Authors: Md Ashiqur Rahman, Jasorsi Ghosh, Hrishikesh Viswanath, Kamyar Azizzadenesheli, Aniket Bera
- Abstract summary: We propose a neural operator-based generative model which learns the distribution of human motion conditioned on the partner's motion in function spaces.
Our model can handle long unlabeled action sequences at arbitrary time resolution.
We test PaCMO on NTU RGB+D and DuetDance datasets and our model produces realistic results.
- Score: 20.45590914720127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of generating 3D human motions in dyadic activities.
In contrast to concurrent works, which mainly focus on generating the
motion of a single actor from a textual description, we generate the motion
of one of the actors from the motion of the other participating actor in the
action. This is a particularly challenging, under-explored problem that
requires learning intricate relationships between the motion of two actors
participating in an action and also identifying the action from the motion of
one actor. To address these, we propose partner conditioned motion operator
(PaCMO), a neural operator-based generative model which learns the distribution
of human motion conditioned on the partner's motion in function spaces through
adversarial training. Our model can handle long unlabeled action sequences at
arbitrary time resolution. We also introduce the "Functional Fréchet Inception
Distance" ($F^2ID$) metric for capturing the similarity between real and generated
data in function spaces. We test PaCMO on NTU RGB+D and DuetDance datasets and
our model produces realistic results evidenced by the $F^2ID$ score and the
conducted user study.
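The abstract names the $F^2ID$ metric without giving its form. Below is a minimal sketch, assuming it follows the standard Fréchet Inception Distance recipe (a Fréchet distance between Gaussians fit to feature embeddings) applied to features extracted from function-valued motion samples; the feature arrays and names are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of a Frechet-style distance between two feature sets, as in
# FID. How PaCMO extracts function-space features for F^2ID is not specified
# in this listing, so `real_feats`/`gen_feats` are assumed to be
# (n_samples, feat_dim) arrays from some motion feature network.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Squared Frechet distance between Gaussians fit to two feature sets."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # Matrix square root of the covariance product; discard the tiny
    # imaginary parts introduced by numerical error.
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

The function-space aspect would enter through how `real_feats` and `gen_feats` are computed from continuous-time motions, which this listing does not specify.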
Related papers
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that desired joint-pair distances for human interactions can be generated using an off-the-shelf Large Language Model.
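As a rough illustration of the joint-pair distance control described in this summary, here is a minimal sketch of a penalty comparing synthesized joint-pair distances against desired ones; the function and argument names are hypothetical, and the paper's actual control mechanism may differ.

```python
# Hedged sketch: penalize deviation between the synthesized distances of
# selected joint pairs and desired target distances. `motion`, `pairs`, and
# `targets` are illustrative names, not InterControl's API.
import numpy as np

def joint_pair_distance_loss(motion: np.ndarray,
                             pairs: list[tuple[int, int]],
                             targets: np.ndarray) -> float:
    """motion: (T, J, 3) joint positions; targets: (T, len(pairs)) distances."""
    losses = []
    for k, (i, j) in enumerate(pairs):
        dist = np.linalg.norm(motion[:, i] - motion[:, j], axis=-1)  # (T,)
        losses.append((dist - targets[:, k]) ** 2)
    return float(np.mean(losses))
```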
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis [21.650091018774972]
We create a neural interaction field attached to a specific object, which outputs the distance to the valid interaction manifold given a human pose as input.
This interaction field guides the sampling of an object-conditioned human motion diffusion model.
We synthesize realistic motions for sitting and lifting with several objects, outperforming alternative approaches in terms of motion quality and successful action completion.
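A minimal sketch of how a learned distance field could guide sampling as this summary describes, in the style of gradient guidance: descend the field's output so an intermediate pose moves toward the valid interaction manifold. `distance_field`, `guidance_scale`, and the update rule below are assumptions; NIFTY's exact guidance step is not given in this listing.

```python
# Hedged sketch of field-guided sampling: nudge a pose from a denoising step
# downhill on a learned distance-to-interaction-manifold field.
import torch

def guide_pose(pose: torch.Tensor,
               distance_field: torch.nn.Module,
               guidance_scale: float = 0.1) -> torch.Tensor:
    """pose: (J*3,) flattened joint positions from an intermediate step."""
    pose = pose.detach().requires_grad_(True)
    # Reduce to a scalar in case the field returns a 1-element tensor.
    dist = distance_field(pose).sum()
    grad, = torch.autograd.grad(dist, pose)
    return (pose - guidance_scale * grad).detach()
```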
arXiv Detail & Related papers (2023-07-14T17:59:38Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Executing your Commands via Motion Diffusion in Latent Space [51.64652463205012]
We propose a Motion Latent-based Diffusion model (MLD) to produce vivid motion sequences conforming to the given conditional inputs.
Our MLD achieves significant improvements over state-of-the-art methods across a wide range of human motion generation tasks.
arXiv Detail & Related papers (2022-12-08T03:07:00Z)
- GAN-based Reactive Motion Synthesis with Class-aware Discriminators for Human-human Interaction [14.023527193608144]
We propose a semi-supervised GAN system that synthesizes the reactive motion of a character given the active motion from another character.
The high quality of the synthesized motion demonstrates the effective design of our generator, and the discriminability of the synthesis demonstrates the strength of our discriminator.
arXiv Detail & Related papers (2021-10-01T13:13:07Z)
- Scene-aware Generative Network for Human Motion Synthesis [125.21079898942347]
We propose a new framework that takes the interaction between the scene and the human motion into account.
Considering the uncertainty of human motion, we formulate this task as a generative task.
We derive a GAN-based learning approach, with discriminators to enforce the compatibility between the human motion and the contextual scene.
arXiv Detail & Related papers (2021-05-31T09:05:50Z)
- Action2Motion: Conditioned Generation of 3D Human Motions [28.031644518303075]
We aim to generate plausible human motion sequences in 3D.
Each sampled sequence faithfully resembles natural human body articulation dynamics.
A new 3D human motion dataset, HumanAct12, is also constructed.
arXiv Detail & Related papers (2020-07-30T05:29:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.