Interaction Mix and Match: Synthesizing Close Interaction using
Conditional Hierarchical GAN with Multi-Hot Class Embedding
- URL: http://arxiv.org/abs/2208.00774v2
- Date: Thu, 4 Aug 2022 12:54:29 GMT
- Title: Interaction Mix and Match: Synthesizing Close Interaction using
Conditional Hierarchical GAN with Multi-Hot Class Embedding
- Authors: Aman Goel, Qianhui Men, Edmond S. L. Ho
- Abstract summary: We propose a novel way to create realistic human reactive motions by mixing and matching different types of close interactions.
Experiments are conducted on both noisy (depth-based) and high-quality (MoCap-based) interaction datasets.
- Score: 4.864897201841002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing multi-character interactions is a challenging task due to the
complex and varied interactions between the characters. In particular, precise
spatiotemporal alignment between characters is required in generating close
interactions such as dancing and fighting. Existing work in generating
multi-character interactions focuses on generating a single type of reactive
motion for a given sequence which results in a lack of variety of the resultant
motions. In this paper, we propose a novel way to create realistic human
reactive motions which are not presented in the given dataset by mixing and
matching different types of close interactions. We propose a Conditional
Hierarchical Generative Adversarial Network with Multi-Hot Class Embedding to
generate the Mix and Match reactive motions of the follower from a given motion
sequence of the leader. Experiments are conducted on both noisy (depth-based)
and high-quality (MoCap-based) interaction datasets. The quantitative and
qualitative results show that our approach outperforms the state-of-the-art
methods on the given datasets. We also provide an augmented dataset with
realistic reactive motions to stimulate future research in this area. The code
is available at https://github.com/Aman-Goel1/IMM
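
The "multi-hot class embedding" named in the abstract suggests conditioning the generator on several interaction types at once rather than a single one-hot label. The sketch below is a hypothetical illustration of that idea, not the paper's implementation; the class count, embedding width, and normalization are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: instead of a one-hot interaction label, several
# interaction types (e.g. "dance" and "fight") are active at once, and
# their embedding vectors are combined into one conditioning vector.

NUM_CLASSES = 4  # assumed number of interaction types
EMBED_DIM = 8    # assumed embedding width

rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((NUM_CLASSES, EMBED_DIM))

def multi_hot_embedding(active_classes):
    """Combine the embeddings of all active interaction classes."""
    multi_hot = np.zeros(NUM_CLASSES)
    multi_hot[active_classes] = 1.0
    # Weighted sum of the embedding rows, normalized by the number of
    # active classes so mixing two labels stays on a comparable scale.
    return multi_hot @ embedding_table / max(len(active_classes), 1)

# "Mix and match": condition on both class 0 and class 2 at once.
cond = multi_hot_embedding([0, 2])
print(cond.shape)  # (8,)
```

The averaging keeps a mixed condition on the same scale as a single-class one, so the generator sees comparable inputs whether one or several interaction types are active.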
Related papers
- Versatile Motion Language Models for Multi-Turn Interactive Agents [28.736843383405603]
We introduce Versatile Interactive Motion language model, which integrates both language and motion modalities.
We evaluate the versatility of our method across motion-related tasks, motion to text, text to motion, reaction generation, motion editing, and reasoning about motion sequences.
arXiv Detail & Related papers (2024-10-08T02:23:53Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising-diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
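The joint-pair distance idea summarized above can be illustrated with a minimal sketch: penalize the synthesized motion whenever a chosen pair of joints drifts from a target distance. This is a hypothetical illustration with a plain L2 penalty; the actual InterControl formulation may differ, and the array shapes are assumptions.

```python
import numpy as np

def joint_distance_penalty(joints_a, joints_b, pair, target):
    """Penalty on the distance between one joint of each character.

    joints_a, joints_b: (T, J, 3) joint positions over T frames.
    pair: (index_in_a, index_in_b) joint indices to constrain.
    target: desired distance, in the same units as the positions.
    """
    ia, ib = pair
    # Per-frame Euclidean distance between the two constrained joints.
    dist = np.linalg.norm(joints_a[:, ia] - joints_b[:, ib], axis=-1)
    # Mean squared deviation from the target distance.
    return float(np.mean((dist - target) ** 2))

# Toy example: two static characters one unit apart, target distance 1.0.
a = np.zeros((5, 2, 3))
b = np.zeros((5, 2, 3))
b[:, :, 0] = 1.0
print(joint_distance_penalty(a, b, (0, 0), 1.0))  # 0.0
```

A penalty of this shape is differentiable in the joint positions, which is what makes it usable as a guidance or training signal for a motion generator.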
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- Interaction Transformer for Human Reaction Generation [61.22481606720487]
We propose a novel interaction Transformer (InterFormer) consisting of a Transformer network with both temporal and spatial attentions.
Our method is general and can be used to generate more complex and long-term interactions.
arXiv Detail & Related papers (2022-07-04T19:30:41Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- Dynamic Representation Learning with Temporal Point Processes for Higher-Order Interaction Forecasting [8.680676599607123]
This paper proposes a temporal point process model for hyperedge prediction to address these problems.
To the best of our knowledge, this is the first work to use temporal point processes to forecast hyperedges in dynamic networks.
arXiv Detail & Related papers (2021-12-19T14:24:37Z)
- GAN-based Reactive Motion Synthesis with Class-aware Discriminators for Human-human Interaction [14.023527193608144]
We propose a semi-supervised GAN system that synthesizes the reactive motion of a character given the active motion from another character.
The high quality of the synthetic motion demonstrates the effective design of our generator, and the discriminability of the synthesis also demonstrates the strength of our discriminator.
arXiv Detail & Related papers (2021-10-01T13:13:07Z)
- Unlimited Neighborhood Interaction for Heterogeneous Trajectory Prediction [97.40338982628094]
We propose a simple yet effective Unlimited Neighborhood Interaction Network (UNIN) which predicts trajectories of heterogeneous agents in multiple categories.
Specifically, the proposed unlimited neighborhood interaction module generates the fused features of all agents involved in an interaction simultaneously.
A hierarchical graph attention module is proposed to obtain category-to-category interaction and agent-to-agent interaction.
arXiv Detail & Related papers (2021-07-31T13:36:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.