Generative Adversarial Graph Convolutional Networks for Human Action
Synthesis
- URL: http://arxiv.org/abs/2110.11191v3
- Date: Mon, 25 Oct 2021 07:25:28 GMT
- Title: Generative Adversarial Graph Convolutional Networks for Human Action
Synthesis
- Authors: Bruno Degardin, João Neves, Vasco Lopes, João Brito, Ehsan
Yaghoubi and Hugo Proença
- Abstract summary: We propose Kinetic-GAN, a novel architecture to synthesise the kinetics of the human body.
The proposed adversarial architecture can condition up to 120 different actions over local and global body movements.
Our experiments were carried out on three well-known datasets.
- Score: 3.0664963196464448
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Synthesising the spatial and temporal dynamics of the human body skeleton
remains a challenging task, not only in terms of the quality of the generated
shapes, but also of their diversity, particularly to synthesise realistic body
movements of a specific action (action conditioning). In this paper, we propose
Kinetic-GAN, a novel architecture that leverages the benefits of Generative
Adversarial Networks and Graph Convolutional Networks to synthesise the
kinetics of the human body. The proposed adversarial architecture can condition
up to 120 different actions over local and global body movements while
improving sample quality and diversity through latent space disentanglement and
stochastic variations. Our experiments were carried out on three well-known
datasets, where Kinetic-GAN notably surpasses state-of-the-art methods in
terms of distribution quality metrics while synthesising more than an order of
magnitude more distinct actions. Our
code and models are publicly available at
https://github.com/DegardinBruno/Kinetic-GAN.
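The abstract describes a generator that couples a GAN with graph convolutions and conditions generation on one of up to 120 action labels. As a rough illustration only, the PyTorch sketch below shows one way such a conditional graph-convolutional generator could be wired; the class names, layer sizes, and the uniform adjacency matrix are assumptions made for this sketch, not the authors' implementation (see the linked repository for the real code).

# Minimal sketch (PyTorch): a latent code and an action label are mapped to a
# skeleton sequence of shape (channels=3, frames, joints). All names, sizes and
# the adjacency matrix are illustrative assumptions, not the Kinetic-GAN code.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution: mix joint features along a fixed skeleton graph."""
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)          # (V, V) adjacency matrix
        self.linear = nn.Linear(in_ch, out_ch)

    def forward(self, x):                             # x: (N, T, V, C)
        x = torch.einsum("ntvc,vw->ntwc", x, self.A)  # propagate along the graph
        return self.linear(x)

class ConditionalSkeletonGenerator(nn.Module):
    def __init__(self, latent_dim=128, n_actions=120, n_joints=25, n_frames=64):
        super().__init__()
        self.n_joints, self.n_frames = n_joints, n_frames
        self.label_emb = nn.Embedding(n_actions, latent_dim)  # action conditioning
        # Identity plus uniform neighbour mixing stands in for a real
        # (normalised) skeleton adjacency in this sketch.
        A = torch.eye(n_joints) + torch.full((n_joints, n_joints), 1.0 / n_joints)
        self.fc = nn.Linear(2 * latent_dim, n_frames * n_joints * 64)
        self.gcn1 = GraphConv(64, 32, A)
        self.gcn2 = GraphConv(32, 3, A)                # 3 output channels: x, y, z

    def forward(self, z, labels):                      # z: (N, latent_dim)
        h = torch.cat([z, self.label_emb(labels)], dim=1)
        h = self.fc(h).view(-1, self.n_frames, self.n_joints, 64)
        h = torch.relu(self.gcn1(h))
        return self.gcn2(h).permute(0, 3, 1, 2)        # (N, 3, T, V)

# Usage: sample one sequence conditioned on (hypothetical) action class 7.
gen = ConditionalSkeletonGenerator()
seq = gen(torch.randn(1, 128), torch.tensor([7]))
print(seq.shape)  # torch.Size([1, 3, 64, 25])

Concatenating the label embedding with the latent code is only one common conditioning choice; the paper additionally relies on latent space disentanglement and stochastic variations, which this sketch does not attempt to reproduce.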
Related papers
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation [23.435588151215594]
We propose a novel cascaded diffusion-based generative framework for text-driven human motion synthesis.
The framework exploits a strategy named GradUally Enriching SyntheSis, abbreviated as GUESS.
We show that GUESS outperforms existing state-of-the-art methods by large margins in terms of accuracy, realism, and diversity.
arXiv Detail & Related papers (2024-01-04T08:48:21Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion-based model that synthesizes the full-body motion of one person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset of two-person interactions containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Discovering mesoscopic descriptions of collective movement with neural stochastic modelling [4.7163839266526315]
Collective motion at small to medium group sizes (~10-1000 individuals, also called the 'meso' scale) can show nontrivial features due to order.
Here, we use a physics-inspired, network-based approach to characterize the group dynamics of interacting individuals with neural stochastic modelling.
We apply this technique to both synthetic and real-world datasets, and identify the deterministic and stochastic aspects of the dynamics using drift and diffusion fields.
arXiv Detail & Related papers (2023-03-17T11:49:17Z)
- Unifying Human Motion Synthesis and Style Transfer with Denoising Diffusion Probabilistic Models [9.789705536694665]
Generating realistic motions for digital humans is a core but challenging part of computer animation and games.
We propose a denoising diffusion model solution for styled motion synthesis.
We design a multi-task diffusion-model architecture that strategically generates aspects of human motion for local guidance.
arXiv Detail & Related papers (2022-12-16T15:15:34Z)
- SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
Its four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
arXiv Detail & Related papers (2022-09-02T16:45:46Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis [117.15586710830489]
We focus on the problem of synthesizing diverse scene-aware human motions under the guidance of target action sequences.
Based on this factorized scheme, a hierarchical framework is proposed, with each sub-module responsible for modeling one aspect.
Experiment results show that the proposed framework remarkably outperforms previous methods in terms of diversity and naturalness.
arXiv Detail & Related papers (2022-05-25T18:20:01Z)