Intentional Choreography with Semi-Supervised Recurrent VAEs
- URL: http://arxiv.org/abs/2209.10010v1
- Date: Tue, 20 Sep 2022 21:38:51 GMT
- Title: Intentional Choreography with Semi-Supervised Recurrent VAEs
- Authors: Mathilde Papillon, Mariel Pettee, Nina Miolane
- Abstract summary: We present PirouNet, a semi-supervised recurrent variational autoencoder.
We show that PirouNet conditionally generates dance sequences in the style of the choreographer.
- Score: 3.867363075280544
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We summarize the model and results of PirouNet, a semi-supervised recurrent
variational autoencoder. Given a small number of dance sequences labeled with
qualitative choreographic annotations, PirouNet conditionally generates dance
sequences in the style of the choreographer.
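As a rough illustration of the architecture the abstract describes, below is a minimal sketch of a conditional recurrent VAE in PyTorch. The pose dimension, layer sizes, and label encoding are illustrative assumptions, not PirouNet's published hyperparameters.

```python
# Minimal sketch of a conditional recurrent VAE for labeled dance
# sequences. All dimensions and layer choices are illustrative
# assumptions, not PirouNet's published hyperparameters.
import torch
import torch.nn as nn

class CondRecurrentVAE(nn.Module):
    def __init__(self, pose_dim=159, label_dim=3, hidden=256, latent=32):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        # The decoder is conditioned on both the latent code and the
        # label, which is what makes generation controllable.
        self.decoder = nn.LSTM(latent + label_dim, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, pose_dim)

    def forward(self, poses, label_onehot):
        # poses: (batch, time, pose_dim); label_onehot: (batch, label_dim)
        _, (h, _) = self.encoder(poses)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Broadcast [z; label] over time and decode a pose sequence.
        cond = torch.cat([z, label_onehot], dim=-1)
        cond = cond.unsqueeze(1).repeat(1, poses.size(1), 1)
        out, _ = self.decoder(cond)
        return self.to_pose(out), mu, logvar
```

Sampling z from the prior while fixing label_onehot to a chosen annotation then yields label-conditioned generation.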
Related papers
- Lodge++: High-quality and Long Dance Generation with Vivid Choreography Patterns [48.54956784928394]
Lodge++ is a choreography framework to generate high-quality, ultra-long, and vivid dances given the music and desired genre.
To address computational-efficiency challenges, Lodge++ adopts a two-stage strategy that produces dances from coarse to fine, as sketched below.
Lodge++ is validated by extensive experiments, which show that our method can rapidly generate ultra-long dances suitable for various dance genres.
arXiv Detail & Related papers (2024-10-27T09:32:35Z)
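A schematic of the coarse-to-fine, two-stage strategy the Lodge++ summary mentions; the function names and interfaces are hypothetical placeholders, not the paper's API.

```python
# Hypothetical schematic of a two-stage, coarse-to-fine pipeline for
# ultra-long dance generation; all names and interfaces are placeholders.
import numpy as np

def generate_long_dance(music, genre, coarse_model, refine_model, chunk=256):
    # Stage 1: sketch global choreography patterns for the whole piece
    # in one cheap, low-resolution pass.
    coarse = coarse_model(music, genre)
    # Stage 2: refine chunk by chunk, so cost grows linearly with length.
    fine = [refine_model(coarse[t:t + chunk], music[t:t + chunk])
            for t in range(0, len(coarse), chunk)]
    return np.concatenate(fine, axis=0)
```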
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
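The Duolando summary above describes autoregressive prediction of the follower's tokenized motion; a minimal sketch of such a decoding loop follows, with a hypothetical three-stream model interface.

```python
# Greedy autoregressive decoding of follower motion tokens conditioned
# on music, the leader, and the follower's own history. The `gpt`
# interface is a hypothetical stand-in, not Duolando's actual API.
import torch

@torch.no_grad()
def generate_follower(gpt, music_tokens, leader_tokens, bos_token=0):
    follower = [bos_token]
    for t in range(len(leader_tokens)):
        logits = gpt(music_tokens[: t + 1],        # musical rhythm so far
                     leader_tokens[: t + 1],       # leader's movement so far
                     torch.tensor(follower))       # follower history
        follower.append(int(logits[-1].argmax()))  # next motion token
    return follower[1:]  # drop the BOS token
```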
- DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation [89.50310360658791]
We present a novel cascaded motion diffusion model, DiffDance, designed for high-resolution, long-form dance generation.
This model comprises a music-to-dance diffusion model and a sequence super-resolution diffusion model.
We demonstrate that DiffDance is capable of generating realistic dance sequences that align effectively with the input music.
arXiv Detail & Related papers (2023-08-05T16:18:57Z)
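A schematic of the two-model cascade the DiffDance summary describes: a base music-to-dance diffusion model followed by a sequence super-resolution diffusion model. The sampler interfaces are hypothetical.

```python
# Hypothetical sketch of a cascaded diffusion sampler: stage 1 generates
# low-resolution motion from music; stage 2 upsamples it with a second
# diffusion model. The `.sample` interfaces are placeholders.
def cascade_sample(base_diffusion, sr_diffusion, music_embedding):
    # Stage 1: denoise a low-resolution dance sequence from pure noise,
    # conditioned on the music embedding.
    low_res = base_diffusion.sample(cond=music_embedding)
    # Stage 2: super-resolve to full temporal resolution, conditioned on
    # both the coarse motion and the music.
    return sr_diffusion.sample(cond=(low_res, music_embedding))
```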
- PirouNet: Creating Intentional Dance with Semi-Supervised Conditional Recurrent Variational Autoencoders [3.867363075280544]
We propose "PirouNet", a semi-supervised conditional recurrent variational autoencoder with a dance labeling web application.
Thanks to the proposed semi-supervised approach, PirouNet only requires a small portion of the dataset to be labeled, typically on the order of 1%.
We extensively evaluate PirouNet's dance creations through a series of qualitative and quantitative metrics, validating its applicability as a tool for choreographers.
arXiv Detail & Related papers (2022-07-21T18:04:59Z)
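Since the PirouNet summary highlights that only around 1% of sequences need labels, here is a generic sketch of how a semi-supervised VAE objective combines an unsupervised ELBO on all data with a supervised term on the labeled subset; this follows the standard recipe, not necessarily PirouNet's exact loss.

```python
# Generic semi-supervised VAE objective: ELBO on every sequence, plus a
# label loss only where an annotation exists. A standard recipe, not
# necessarily PirouNet's exact formulation.
import torch
import torch.nn.functional as F

def semi_supervised_loss(recon, x, mu, logvar, label_logits, label=None,
                         alpha=1.0):
    rec = F.mse_loss(recon, x)                       # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = rec + kl                                  # unsupervised ELBO
    if label is not None:                            # the ~1% labeled case
        loss = loss + alpha * F.cross_entropy(label_logits, label)
    return loss
```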
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes dancing units from a choreographic memory into a fluent dance coherent with the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
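The "choreographic memory" in the Bailando summary is a learned codebook of dance units; a minimal VQ-style quantization sketch follows, with illustrative sizes that are assumptions rather than the paper's settings.

```python
# Minimal sketch of a "choreographic memory": a learned codebook of
# dance-unit embeddings, with continuous pose features snapped to their
# nearest code so a GPT can compose discrete units. Sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class ChoreographicMemory(nn.Module):
    def __init__(self, num_units=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_units, dim)

    def quantize(self, features):
        # features: (batch, time, dim) -> (batch, time) code indices.
        diffs = features.unsqueeze(-2) - self.codebook.weight  # (B,T,N,D)
        return diffs.pow(2).sum(-1).argmin(-1)
```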
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
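As a worked example of the optimal transport machinery the MDOT-Net summary invokes, here is a minimal entropy-regularized Sinkhorn solver; this is generic OT, not the paper's exact objective.

```python
# Minimal Sinkhorn iteration for an entropy-regularized optimal
# transport cost between two empirical distributions. Generic OT
# machinery for illustration, not MDOT-Net's exact formulation.
import numpy as np

def sinkhorn_cost(cost, eps=0.05, iters=200):
    # cost: (n, m) pairwise costs between generated and real samples.
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-cost / eps)                          # Gibbs kernel
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)                            # fit column marginal
        u = a / (K @ v)                              # fit row marginal
    plan = u[:, None] * K * v[None, :]               # transport plan
    return float((plan * cost).sum())
```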
- ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit [28.877908457607678]
We design a two-stage music-to-dance synthesis framework, ChoreoNet, that imitates the human choreography procedure.
Our framework first uses a choreographic action unit (CAU) prediction model to learn the mapping between music and CAU sequences.
We then devise a spatial-temporal inpainting model to convert the CAU sequence into continuous dance motions.
arXiv Detail & Related papers (2020-09-16T12:38:19Z)
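Schematically, ChoreoNet's two-stage procedure reads as below; the model interfaces are hypothetical placeholders, not the paper's API.

```python
# Hypothetical schematic of ChoreoNet's two stages: music -> CAU
# sequence, then CAU sequence -> continuous motion via inpainting.
def choreonet_synthesize(music_features, cau_model, inpaint_model):
    # Stage 1: predict a sequence of choreographic action units (CAUs),
    # reusable movement patterns, from the music.
    cau_sequence = cau_model(music_features)
    # Stage 2: spatial-temporal inpainting connects the CAUs into
    # continuous, smooth dance motion.
    return inpaint_model(cau_sequence)
```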