ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit
- URL: http://arxiv.org/abs/2009.07637v1
- Date: Wed, 16 Sep 2020 12:38:19 GMT
- Title: ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit
- Authors: Zijie Ye, Haozhe Wu, Jia Jia, Yaohua Bu, Wei Chen, Fanbo Meng, Yanfeng Wang
- Abstract summary: We design a two-stage music-to-dance synthesis framework, ChoreoNet, to imitate the human choreography procedure.
Our framework first devises a CAU prediction model to learn the mapping between music and CAU sequences.
We then devise a spatial-temporal inpainting model to convert the CAU sequence into continuous dance motions.
- Score: 28.877908457607678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dance and music are two highly correlated artistic forms. Synthesizing
dance motions has attracted much attention recently. Most previous works conduct
music-to-dance synthesis by directly mapping music to human skeleton keypoints.
In contrast, human choreographers design dance motions from music in a two-stage
manner: they first devise multiple choreographic action units (CAUs), each with
a series of dance motions, and then arrange the CAU sequence according to the
rhythm, melody and emotion of the music. Inspired by this, we systematically
study this two-stage choreography approach and construct a dataset that
incorporates such choreography knowledge. Based on the constructed dataset, we
design a two-stage music-to-dance synthesis framework, ChoreoNet, to imitate the
human choreography procedure. Our framework first devises a CAU prediction model
to learn the mapping between music and CAU sequences. Afterwards, we devise a
spatial-temporal inpainting model to convert the CAU sequence into continuous
dance motions. Experimental results demonstrate that the proposed ChoreoNet
outperforms baseline methods (0.622 in terms of CAU BLEU score and 1.59 in terms
of user study score).
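To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a CAU prediction model maps music features to a CAU sequence, whose motion clips are then stitched into continuous motion by a spatial-temporal inpainting model. All class names, feature dimensions, and interfaces here are illustrative assumptions; the paper does not publish this exact API.

```python
# Hypothetical sketch of the two-stage ChoreoNet pipeline; names and
# dimensions are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn

class CAUPredictionModel(nn.Module):
    """Stage 1: map per-frame music features to CAU logits."""
    def __init__(self, music_dim=128, hidden_dim=512, num_caus=160):
        super().__init__()
        self.encoder = nn.GRU(music_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_caus)

    def forward(self, music_feats):
        # music_feats: (batch, time, music_dim), e.g. rhythm/melody features
        hidden, _ = self.encoder(music_feats)
        return self.head(hidden)  # (batch, time, num_caus)

class SpatialTemporalInpaintingModel(nn.Module):
    """Stage 2: smooth transitions between concatenated CAU motion clips."""
    def __init__(self, pose_dim=72, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(pose_dim, hidden_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden_dim, pose_dim, kernel_size=5, padding=2),
        )

    def forward(self, motion):
        # motion: (batch, pose_dim, time), transition frames zero-filled
        return self.net(motion)

# Usage: predict a CAU sequence from music, look up each CAU's motion clip
# in a CAU library (stubbed here), then inpaint the clip boundaries.
music = torch.randn(1, 200, 128)
cau_sequence = CAUPredictionModel()(music).argmax(dim=-1)   # (1, 200)
rough_motion = torch.randn(1, 72, 1200)                     # stubbed clip lookup
dance = SpatialTemporalInpaintingModel()(rough_motion)      # continuous motion
```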
Related papers
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the next tokenized motion conditioned on the coordinated information of the music and the leader's and follower's movements; a minimal sketch of this autoregressive scheme follows this entry.
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
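The follower-prediction scheme summarized above is a standard GPT-style next-token objective. The sketch below illustrates it with a causal Transformer over summed music, leader, and follower embeddings; vocabulary sizes, dimensions, and the fusion-by-summation choice are assumptions for illustration, not details of Duolando itself.

```python
# Illustrative sketch of GPT-style follower-motion prediction. Token
# vocabularies, dimensions, and the fusion scheme are assumed, not taken
# from the Duolando implementation.
import torch
import torch.nn as nn

class FollowerGPT(nn.Module):
    def __init__(self, vocab=512, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.music_emb = nn.Linear(54, d_model)         # music features per frame
        self.leader_emb = nn.Embedding(vocab, d_model)  # leader motion tokens
        self.follower_emb = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, music, leader_tokens, follower_tokens):
        # Fuse the three streams by summation, then decode causally.
        x = (self.music_emb(music)
             + self.leader_emb(leader_tokens)
             + self.follower_emb(follower_tokens))
        causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.backbone(x, mask=causal)
        return self.head(h)  # logits over the next follower token per step

# Autoregressive rollout: feed predictions back in, one token at a time.
model = FollowerGPT()
music = torch.randn(1, 8, 54)
leader = torch.randint(0, 512, (1, 8))
follower = torch.zeros(1, 8, dtype=torch.long)  # start from a neutral token
with torch.no_grad():
    for t in range(7):
        logits = model(music, leader, follower)
        follower[0, t + 1] = logits[0, t].argmax()
```

Feeding each predicted token back into the input, as in the loop above, is what makes the rollout autoregressive.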
- LM2D: Lyrics- and Music-Driven Dance Synthesis [28.884929875333846]
LM2D is designed to create dance conditioned on both music and lyrics in one diffusion generation step.
We introduce the first 3D dance-motion dataset that encompasses both music and lyrics, obtained with pose estimation technologies.
The results demonstrate LM2D is able to produce realistic and diverse dance matching both lyrics and music.
arXiv Detail & Related papers (2024-03-14T13:59:04Z)
- TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration [75.37311932218773]
We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities.
Our approach can generate realistic and coherent dance movements conditioned on both text and music, while maintaining performance comparable to models driven by either single modality.
arXiv Detail & Related papers (2023-04-05T12:58:33Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes these units into a fluent dance coherent with the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
- Dual Learning Music Composition and Dance Choreography [57.55406449959893]
Music and dance have always co-existed as pillars of human activities, contributing immensely to cultural, social, and entertainment functions.
Recent research works have studied generative models for dance sequences conditioned on music.
We propose a novel extension, where we jointly model both tasks in a dual learning approach.
arXiv Detail & Related papers (2022-01-28T09:20:28Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music; a minimal sketch of the transport-distance idea follows this entry.
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
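To make the transport-distance idea concrete, here is a self-contained NumPy sketch of an entropic-regularized optimal transport (Sinkhorn) distance between two empirical feature distributions, the kind of discrepancy such evaluation relies on. The feature sets and cost choice are illustrative assumptions, not MDOT-Net's actual formulation.

```python
# Minimal Sinkhorn sketch: entropic-regularized optimal transport distance
# between two empirical distributions of motion features. This is a generic
# illustration of the transport-distance idea, not MDOT-Net's code.
import numpy as np

def sinkhorn_distance(x, y, reg=0.05, n_iters=200):
    """OT cost between point clouds x (n, d) and y (m, d) under squared
    Euclidean ground cost, with uniform marginals."""
    n, m = len(x), len(y)
    # Pairwise squared-Euclidean cost matrix, normalized to [0, 1] so the
    # Gibbs kernel below stays numerically stable.
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()
    K = np.exp(-cost / reg)                  # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                 # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]       # approximate optimal plan
    return (plan * cost).sum()               # cost in normalized units

# Toy usage: compare generated dance features against references (stubs).
rng = np.random.default_rng(0)
real = rng.normal(size=(64, 16))
fake = rng.normal(loc=0.5, size=(64, 16))
print(sinkhorn_distance(real, fake))
```

In practice one would reach for a library such as POT (Python Optimal Transport) for stabilized solvers and for the Gromov-Wasserstein variant mentioned in the summary.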