Learning to dance: A graph convolutional adversarial network to generate
realistic dance motions from audio
- URL: http://arxiv.org/abs/2011.12999v2
- Date: Mon, 30 Nov 2020 17:59:15 GMT
- Title: Learning to dance: A graph convolutional adversarial network to generate
realistic dance motions from audio
- Authors: João P. Ferreira, Thiago M. Coutinho, Thiago L. Gomes, José F.
Neto, Rafael Azevedo, Renato Martins, Erickson R. Nascimento
- Abstract summary: Learning to move naturally from music, i.e., to dance, is one of the more complex motions humans often perform effortlessly.
In this paper, we design a novel method based on graph convolutional networks to tackle the problem of automatic dance generation from audio information.
Our method uses an adversarial learning scheme conditioned on the input music audios to create natural motions preserving the key movements of different music styles.
- Score: 7.612064511889756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing human motion through learning techniques is becoming an
increasingly popular approach to alleviating the requirement of new data
capture to produce animations. Learning to move naturally from music, i.e., to
dance, is one of the more complex motions humans often perform effortlessly.
Each dance movement is unique, yet such movements maintain the core
characteristics of the dance style. Most approaches addressing this problem
with classical convolutional and recursive neural models suffer from training and
variability issues due to the non-Euclidean geometry of the motion manifold
structure. In this paper, we design a novel method based on graph convolutional
networks to tackle the problem of automatic dance generation from audio
information. Our method uses an adversarial learning scheme conditioned on the
input music audio to create natural motions preserving the key movements of
different music styles. We evaluate our method with three quantitative metrics
of generative methods and a user study. The results suggest that the proposed
GCN model outperforms the state-of-the-art dance generation method conditioned
on music in different experiments. Moreover, our graph-convolutional approach
is simpler, easier to train, and capable of generating more realistic
motion styles according to qualitative assessment and several quantitative metrics. It also
achieves visual movement perceptual quality comparable to that of real motion data.
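The abstract describes the approach only at a high level: a graph convolutional generator operating on the skeleton graph, trained adversarially against a discriminator, with both conditioned on the input music. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the toy skeleton, audio feature dimension, layer widths, and all helper names are assumptions made for this example.

```python
# Minimal sketch (not the authors' code): an audio-conditioned GCN generator
# trained adversarially to produce per-frame joint positions on a skeleton graph.
# Sizes, names, and the skeleton topology below are illustrative assumptions.
import torch
import torch.nn as nn

N_JOINTS = 8           # tiny illustrative skeleton; a real model would use the full topology
AUDIO_DIM = 128        # assumed per-frame audio feature size (e.g. mel / MFCC)
NOISE_DIM = 32
EDGES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (1, 6), (6, 7)]

def normalized_adjacency(edges, n=N_JOINTS):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A = torch.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d = A.sum(1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    return D_inv_sqrt @ A @ D_inv_sqrt

class GraphConv(nn.Module):
    """One spatial graph convolution over the skeleton joints."""
    def __init__(self, in_feats, out_feats, A_hat, act=True):
        super().__init__()
        self.register_buffer("A_hat", A_hat)
        self.lin = nn.Linear(in_feats, out_feats)
        self.act = nn.ReLU() if act else nn.Identity()

    def forward(self, x):  # x: (batch, frames, joints, feats)
        # Aggregate neighbouring joints with the normalized adjacency, then mix features.
        return self.act(self.lin(torch.einsum("ij,btjf->btif", self.A_hat, x)))

class Generator(nn.Module):
    """Maps per-frame audio features plus noise to 2D joint coordinates."""
    def __init__(self, A_hat):
        super().__init__()
        self.inp = nn.Linear(AUDIO_DIM + NOISE_DIM, 64)
        self.gcn1 = GraphConv(64, 64, A_hat)
        self.gcn2 = GraphConv(64, 2, A_hat, act=False)  # (x, y) per joint

    def forward(self, audio, z):                          # audio: (B, T, AUDIO_DIM)
        h = self.inp(torch.cat([audio, z], dim=-1))       # (B, T, 64)
        h = h.unsqueeze(2).expand(-1, -1, N_JOINTS, -1)   # broadcast to every joint
        return self.gcn2(self.gcn1(h))                    # (B, T, joints, 2)

class Discriminator(nn.Module):
    """Scores a motion clip, conditioned on its audio, as real or generated."""
    def __init__(self, A_hat):
        super().__init__()
        self.gcn = GraphConv(2, 32, A_hat)
        self.head = nn.Linear(32 * N_JOINTS + AUDIO_DIM, 1)

    def forward(self, motion, audio):
        h = self.gcn(motion).flatten(2)                          # (B, T, 32 * joints)
        return self.head(torch.cat([h, audio], dim=-1)).mean(1)  # per-clip logit

# Usage with random placeholder data: 2 clips of 120 frames each.
A_hat = normalized_adjacency(EDGES)
G, D = Generator(A_hat), Discriminator(A_hat)
audio = torch.randn(2, 120, AUDIO_DIM)
z = torch.randn(2, 120, NOISE_DIM)
fake_motion = G(audio, z)        # (2, 120, N_JOINTS, 2)
score = D(fake_motion, audio)    # (2, 1) real/fake logits
```

Under these assumptions, training would alternate standard GAN updates: the discriminator is pushed to score real (motion, audio) pairs above generated ones, and the generator is pushed to fool it, optionally with additional reconstruction or style losses. None of these specifics are taken from the paper.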
Related papers
- DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation [89.50310360658791]
We present a novel cascaded motion diffusion model, DiffDance, designed for high-resolution, long-form dance generation.
This model comprises a music-to-dance diffusion model and a sequence super-resolution diffusion model.
We demonstrate that DiffDance is capable of generating realistic dance sequences that align effectively with the input music.
arXiv Detail & Related papers (2023-08-05T16:18:57Z) - TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration [75.37311932218773]
We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities.
Our approach can generate realistic and coherent dance movements conditioned on both text and music while maintaining comparable performance with the two single modalities.
arXiv Detail & Related papers (2023-04-05T12:58:33Z) - Dance Style Transfer with Cross-modal Transformer [17.216186480300756]
CycleDance is a dance style transfer system to transform an existing motion clip in one dance style to a motion clip in another dance style.
Our method extends an existing CycleGAN architecture for modeling audio sequences and integrates multimodal transformer encoders to account for music context.
arXiv Detail & Related papers (2022-08-19T15:48:30Z) - BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z) - Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographs from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music (see the illustrative Sinkhorn sketch after this list).
arXiv Detail & Related papers (2021-12-03T09:37:26Z) - Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z) - Dance Revolution: Long-Term Dance Generation with Music via Curriculum
Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms existing state-of-the-art methods on automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-06-11T00:08:25Z) - Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, to take the style, rhythm and melody of music as the control signals.
We capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance pair dataset.
arXiv Detail & Related papers (2020-02-02T17:18:31Z)
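The MDOT-Net entry above mentions an optimal transport distance between the generated dance distribution and real data. The snippet below is a generic, illustrative sketch of how such an evaluation could be set up with entropy-regularized (Sinkhorn) optimal transport over per-clip motion features; it is not MDOT-Net's formulation, and the feature representation, regularization strength, and sample counts are all assumptions.

```python
# Illustrative sketch only (not MDOT-Net's implementation): an entropic
# optimal-transport (Sinkhorn) cost between two sets of motion feature vectors,
# e.g. generated vs. ground-truth dance clips.
import numpy as np

def sinkhorn_distance(X, Y, eps=0.05, n_iters=200):
    """Entropy-regularized OT cost between the empirical distributions of the rows of X and Y."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2  # pairwise squared costs
    C_scaled = C / C.max()                 # rescale costs so the Gibbs kernel stays numerically stable
    a = np.full(len(X), 1.0 / len(X))      # uniform weights on generated clips
    b = np.full(len(Y), 1.0 / len(Y))      # uniform weights on real clips
    K = np.exp(-C_scaled / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):               # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]        # approximate optimal transport plan
    return float((P * C).sum())            # transport cost under that plan

# Hypothetical per-clip motion descriptors (e.g. pooled joint statistics), 48-D each.
gen_feats = np.random.randn(64, 48)
real_feats = np.random.randn(80, 48)
print(sinkhorn_distance(gen_feats, real_feats))
```

A lower cost would indicate that the generated feature distribution lies closer to the real one; the Gromov-Wasserstein term mentioned in the entry, which relates the dance distribution to the input music, is not sketched here.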
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.