GAN-based Reactive Motion Synthesis with Class-aware Discriminators for
Human-human Interaction
- URL: http://arxiv.org/abs/2110.00380v1
- Date: Fri, 1 Oct 2021 13:13:07 GMT
- Title: GAN-based Reactive Motion Synthesis with Class-aware Discriminators for
Human-human Interaction
- Authors: Qianhui Men, Hubert P. H. Shum, Edmond S. L. Ho, Howard Leung
- Abstract summary: We propose a semi-supervised GAN system that synthesizes the reactive motion of a character given the active motion from another character.
The high quality of the synthetic motion demonstrates the effective design of our generator, and the discriminability of the synthesis also demonstrates the strength of our discriminator.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating realistic characters that can react to the movements of a
user or another character can greatly benefit computer graphics, games and
virtual reality. However, synthesizing such reactive motions in human-human interactions
is a challenging task due to the many different ways two humans can interact.
While there have been a number of successful studies on adapting generative
adversarial networks (GANs) to synthesize single-human actions, very few have
modelled human-human interactions. In this paper, we propose a
semi-supervised GAN system that synthesizes the reactive motion of a character
given the active motion from another character. Our key insights are two-fold.
First, to effectively encode the complicated spatial-temporal information of a
human motion, we empower the generator with a part-based long short-term memory
(LSTM) module, such that the temporal movement of different limbs can be
effectively modelled. We further include an attention module such that the
temporal significance of the interaction can be learned, which enhances the
temporal alignment of the active-reactive motion pair. Second, as the reactive
motion of different types of interactions can be significantly different, we
introduce a discriminator that not only judges whether the generated movement is
realistic, but also predicts the class label of the interaction. This
allows such labels to supervise the training of the generator. We
experiment with the SBU and the HHOI datasets. The high quality of the
synthetic motion demonstrates the effective design of our generator, and the
discriminability of the synthesis also demonstrates the strength of our
discriminator.
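The class-aware discriminator described above pairs a realism judgement with an interaction-class prediction. A minimal NumPy sketch of one way such a combined objective can look (AC-GAN-style; the function names, toy shapes, and equal weighting of the two terms are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def class_aware_discriminator_loss(real_logits, fake_logits, class_logits, labels):
    """Discriminator objective combining a realism term with an
    auxiliary interaction-class term (AC-GAN-style sketch)."""
    # Adversarial term: real samples pushed toward 1, fakes toward 0.
    adv = -np.mean(np.log(sigmoid(real_logits)) +
                   np.log(1.0 - sigmoid(fake_logits)))
    # Class term: cross-entropy of the predicted interaction label,
    # so the class label can also supervise the generator.
    probs = softmax(class_logits)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
    return adv + ce

# A confident, correct discriminator yields a loss near zero:
loss = class_aware_discriminator_loss(
    real_logits=np.array([8.0]), fake_logits=np.array([-8.0]),
    class_logits=np.array([[9.0, 0.0, 0.0]]), labels=np.array([0]))
```

The key design point is that a single network carries two heads: a real/fake score and a distribution over interaction classes, so the class gradient flows back into the generator during adversarial training.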
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to the function generation modules is our novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation and motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- ReGenNet: Towards Human Action-Reaction Synthesis [87.57721371471536]
We analyze the asymmetric, dynamic, synchronous, and detailed nature of human-human interactions.
We propose the first multi-setting human action-reaction benchmark to generate human reactions conditioned on given human actions.
arXiv Detail & Related papers (2024-03-18T15:33:06Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis [21.650091018774972]
We create a neural interaction field attached to a specific object, which outputs the distance to the valid interaction manifold given a human pose as input.
This interaction field guides the sampling of an object-conditioned human motion diffusion model.
We synthesize realistic motions for sitting and lifting with several objects, outperforming alternative approaches in terms of motion quality and successful action completion.
arXiv Detail & Related papers (2023-07-14T17:59:38Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- PaCMO: Partner Dependent Human Motion Generation in Dyadic Human Activity using Neural Operators [20.45590914720127]
We propose a neural operator-based generative model which learns the distribution of human motion conditioned on the partner's motion in function spaces.
Our model can handle long unlabeled action sequences at arbitrary time resolution.
We test PaCMO on NTU RGB+D and DuetDance datasets and our model produces realistic results.
arXiv Detail & Related papers (2022-11-25T22:20:11Z)
- Interaction Transformer for Human Reaction Generation [61.22481606720487]
We propose a novel interaction Transformer (InterFormer) consisting of a Transformer network with both temporal and spatial attentions.
Our method is general and can be used to generate more complex and long-term interactions.
arXiv Detail & Related papers (2022-07-04T19:30:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.