Music2Dance: DanceNet for Music-driven Dance Generation
- URL: http://arxiv.org/abs/2002.03761v2
- Date: Tue, 10 Mar 2020 18:32:15 GMT
- Title: Music2Dance: DanceNet for Music-driven Dance Generation
- Authors: Wenlin Zhuang, Congyi Wang, Siyu Xia, Jinxiang Chai, Yangang Wang
- Abstract summary: We propose a novel autoregressive generative model, DanceNet, which takes the style, rhythm, and melody of music as control signals.
We capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance dataset.
- Score: 11.73506542921528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing human motions from music, i.e., music to dance, is
appealing and has attracted growing research interest in recent years. It is
challenging not only because dance requires realistic and complex human
motions, but, more importantly, because the synthesized motions must be
consistent with the style, rhythm, and melody of the music. In this paper, we
propose a novel autoregressive generative model, DanceNet, which takes the
style, rhythm, and melody of music as control signals to generate 3D dance
motions with high realism and diversity. To boost the performance of the
proposed model, we capture several synchronized music-dance pairs performed by
professional dancers and build a high-quality music-dance dataset. Experiments
demonstrate that the proposed method achieves state-of-the-art results.
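The abstract gives no implementation details beyond the autoregressive, music-conditioned setup, so the following is only a minimal PyTorch sketch of that general pattern; the class name, the GRU backbone, and all feature sizes are illustrative assumptions, not the authors' DanceNet architecture.

```python
import torch
import torch.nn as nn

class AutoregressiveDanceModel(nn.Module):
    """Minimal sketch: predict the next pose from past poses plus per-frame
    music controls (style / rhythm / melody features). Hypothetical
    architecture -- not the DanceNet described in the paper."""

    def __init__(self, pose_dim=72, music_dim=35, hidden=512):
        super().__init__()
        self.rnn = nn.GRU(pose_dim + music_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, poses, music):
        # poses: (B, T, pose_dim); music: (B, T, music_dim), frame-aligned
        x = torch.cat([poses, music], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)  # predicted pose for each next frame

# Teacher-forced training step: targets are the inputs shifted by one frame.
model = AutoregressiveDanceModel()
poses = torch.randn(2, 120, 72)   # 2 clips, 120 frames, 72-D pose vectors
music = torch.randn(2, 120, 35)   # matching per-frame music control features
pred = model(poses[:, :-1], music[:, :-1])
loss = nn.functional.mse_loss(pred, poses[:, 1:])
loss.backward()
```

At inference time the model would instead feed each predicted pose back in as the next input, conditioned on the incoming music features.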
Related papers
- Lodge++: High-quality and Long Dance Generation with Vivid Choreography Patterns [48.54956784928394] (arXiv 2024-10-27)
Lodge++ is a choreography framework to generate high-quality, ultra-long, and vivid dances given the music and desired genre.
To address computational-efficiency challenges, Lodge++ adopts a two-stage strategy that produces dances from coarse to fine.
Lodge++ is validated by extensive experiments, which show that our method can rapidly generate ultra-long dances suitable for various dance genres.
- LM2D: Lyrics- and Music-Driven Dance Synthesis [28.884929875333846] (arXiv 2024-03-14)
LM2D is designed to create dance conditioned on both music and lyrics in one diffusion generation step.
We introduce the first 3D dance-motion dataset that encompasses both music and lyrics, obtained with pose estimation technologies.
The results demonstrate LM2D is able to produce realistic and diverse dance matching both lyrics and music.
- Bidirectional Autoregressive Diffusion Model for Dance Generation [26.449135437337034] (arXiv 2024-02-06)
We propose a Bidirectional Autoregressive Diffusion Model (BADM) for music-to-dance generation.
A bidirectional encoder is built to enforce that the generated dance is harmonious in both the forward and backward directions.
To make the generated dance motion smoother, a local information decoder is built for local motion enhancement.
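The summary names the bidirectional idea without details, so here is a rough, hypothetical illustration of encoding a motion sequence in both time directions with a bidirectional GRU; BADM itself is a diffusion model whose actual components are not given in this list.

```python
import torch
import torch.nn as nn

class BidirectionalMotionEncoder(nn.Module):
    """Hypothetical sketch: one recurrent pass forward in time and one
    backward, fused per frame, so each frame's feature sees both its past
    and its future. Not the actual BADM encoder."""

    def __init__(self, pose_dim=72, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden, batch_first=True, bidirectional=True)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, poses):      # poses: (B, T, pose_dim)
        h, _ = self.rnn(poses)     # (B, T, 2*hidden): forward + backward states
        return self.fuse(h)        # per-frame features aware of both directions

feats = BidirectionalMotionEncoder()(torch.randn(2, 120, 72))
print(feats.shape)  # torch.Size([2, 120, 256])
```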
- DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation [89.50310360658791] (arXiv 2023-08-05)
We present a novel cascaded motion diffusion model, DiffDance, designed for high-resolution, long-form dance generation.
This model comprises a music-to-dance diffusion model and a sequence super-resolution diffusion model.
We demonstrate that DiffDance is capable of generating realistic dance sequences that align effectively with the input music.
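As a purely schematic view of such a cascade (two stub samplers standing in for the two diffusion models; none of this is DiffDance's actual code), stage one would produce a low-frame-rate dance from music and stage two would refine a temporally upsampled version of it:

```python
import torch
import torch.nn.functional as F

def sample_stage1(music_feats, frames=30, pose_dim=72):
    """Stub for a music-to-dance diffusion sampler at low temporal
    resolution; a real model would run iterative denoising here."""
    return torch.randn(1, frames, pose_dim)  # placeholder sample

def sample_stage2(coarse, scale=4):
    """Stub for a sequence super-resolution sampler: upsample in time,
    then (in a real model) denoise the upsampled sequence."""
    up = F.interpolate(coarse.transpose(1, 2), scale_factor=scale,
                       mode="linear", align_corners=False).transpose(1, 2)
    return up  # a real model would refine `up` with a second diffusion pass

music = torch.randn(1, 30, 35)   # hypothetical per-frame music features
coarse = sample_stage1(music)    # (1, 30, 72) at low frame rate
fine = sample_stage2(coarse)     # (1, 120, 72) after 4x temporal upsampling
print(fine.shape)
```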
- FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation [33.9261932800456] (arXiv 2022-12-07)
FineDance is the largest music-dance paired dataset with the most dance genres.
To address the monotonous and unnatural hand movements found in previous methods, we propose a full-body dance generation network.
To further enhance the genre-matching and long-term stability of generated dances, we propose a Genre&Coherent aware Retrieval Module.
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418] (arXiv 2022-07-20)
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
- Music-to-Dance Generation with Optimal Transport [48.92483627635586] (arXiv 2021-12-03)
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
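As a generic illustration of the optimal transport ingredient only (not MDOT-Net's actual objective), an entropy-regularized OT cost between two sets of feature vectors can be approximated with a few Sinkhorn iterations:

```python
import numpy as np

def sinkhorn_ot_cost(x, y, eps=0.05, iters=200):
    """Entropy-regularized OT cost between two point clouds with uniform
    weights. Illustrative Sinkhorn sketch, not the paper's formulation."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-L2 costs
    C = C / C.max()                   # normalize so exp(-C/eps) stays stable
    K = np.exp(-C / eps)              # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(iters):            # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return float((P * C).sum())       # transport cost under that plan

real = np.random.randn(64, 16)  # stand-in features of real dance clips
gen = np.random.randn(64, 16)   # stand-in features of generated clips
print(sinkhorn_ot_cost(real, gen))
```

The Gromov-Wasserstein term mentioned above instead compares the internal distance structures of the two spaces rather than points directly, which is what makes it suitable for scoring cross-modal (music-to-dance) correspondence.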
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185] (arXiv 2020-08-18)
We introduce a complete system for dance motion synthesis.
A massive dance motion dataset is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884] (arXiv 2020-06-11)
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms existing state-of-the-art methods on automatic metrics and in human evaluation.
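The summary does not spell the curriculum out; one common reading of such a strategy is a scheduled-sampling-style mix of ground-truth and predicted frames whose ratio shifts as training progresses. The toy sketch below is a hypothetical illustration of that idea, not the paper's exact schedule:

```python
import random

def curriculum_inputs(gt_frames, pred_frames, epoch, total_epochs):
    """Hypothetical curriculum: early epochs feed mostly ground-truth frames
    (teacher forcing); later epochs feed mostly the model's own predictions,
    easing the train/test mismatch that causes error accumulation."""
    p_model = min(1.0, epoch / total_epochs)  # grows linearly toward 1
    return [p if random.random() < p_model else g
            for g, p in zip(gt_frames, pred_frames)]

# Toy usage: strings stand in for pose vectors.
gt = ["gt0", "gt1", "gt2", "gt3"]
pred = ["pred0", "pred1", "pred2", "pred3"]
print(curriculum_inputs(gt, pred, epoch=1, total_epochs=10))  # mostly ground truth
print(curriculum_inputs(gt, pred, epoch=9, total_epochs=10))  # mostly predictions
```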
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.