Dyads: Artist-Centric, AI-Generated Dance Duets
- URL: http://arxiv.org/abs/2503.03954v1
- Date: Wed, 05 Mar 2025 22:58:03 GMT
- Title: Dyads: Artist-Centric, AI-Generated Dance Duets
- Authors: Zixuan Wang, Luis Zerkowski, Ilya Vidrin, Mariel Pettee
- Abstract summary: Existing AI-generated dance methods primarily train on motion capture data from solo dance performances. This work addresses both needs of the field by proposing an AI method to model the complex interactions between pairs of dancers.
- Score: 6.67162793750123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing AI-generated dance methods primarily train on motion capture data from solo dance performances, but a critical feature of dance in nearly any genre is the interaction of two or more bodies in space. Moreover, many works at the intersection of AI and dance fail to incorporate the ideas and needs of the artists themselves into their development process, yielding models that produce far more useful insights for the AI community than for the dance community. This work addresses both needs of the field by proposing an AI method to model the complex interactions between pairs of dancers and detailing how the technical methodology can be shaped by ongoing co-creation with the artistic stakeholders who curated the movement data. Our model is a probability-and-attention-based Variational Autoencoder that generates a choreographic partner conditioned on an input dance sequence. We construct a custom loss function to enhance the smoothness and coherence of the generated choreography. Our code is open-source, and we also document strategies for other interdisciplinary research teams to facilitate collaboration and strong communication between artists and technologists.
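To make the approach concrete, here is a minimal sketch of a conditional VAE that generates a partner sequence from an input dance sequence, with a hypothetical smoothness penalty on frame-to-frame motion. Module names, dimensions, and loss weights are illustrative assumptions, not the released Dyads code.

```python
# Minimal sketch of a conditional VAE for duet partner generation.
# All module names, dimensions, and the smoothness term are illustrative
# assumptions, not the released Dyads implementation.
import torch
import torch.nn as nn

class PartnerVAE(nn.Module):
    def __init__(self, pose_dim=72, hidden=256, latent=64):
        super().__init__()
        # Encode the input dancer's sequence with self-attention.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.embed = nn.Linear(pose_dim, hidden)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        # Decode latent + condition into the partner's pose sequence.
        self.decoder = nn.GRU(hidden + latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, x):                      # x: (batch, frames, pose_dim)
        h = self.encoder(self.embed(x))        # contextualized frames
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        dec_in = torch.cat([self.embed(x), z], dim=-1)
        y, _ = self.decoder(dec_in)
        return self.out(y), mu, logvar

def loss_fn(pred, target, mu, logvar, beta=1e-3, smooth=1.0):
    recon = (pred - target).pow(2).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    # Hypothetical smoothness term: penalize frame-to-frame velocity changes.
    vel = pred[:, 1:] - pred[:, :-1]
    jerk = (vel[:, 1:] - vel[:, :-1]).pow(2).mean()
    return recon + beta * kl + smooth * jerk

pred, mu, logvar = PartnerVAE()(torch.randn(2, 120, 72))
```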
Related papers
- Invisible Strings: Revealing Latent Dancer-to-Dancer Interactions with Graph Neural Networks [6.67162793750123]
We use Graph Neural Networks to highlight and interpret the intricate connections shared by two dancers.
We demonstrate the potential for graph-based methods to construct alternate models of the collaborative dynamics of duets.
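As an illustration of the graph-based idea, here is a hedged sketch of one message-passing layer over a combined two-dancer skeleton graph; the joint count, the cross-body edge, and the layer design are assumptions, not the paper's actual architecture.

```python
# Sketch of one graph message-passing layer over a two-dancer skeleton graph.
# Joint counts, the cross-body edge, and the layer design are illustrative
# assumptions; the Invisible Strings paper's architecture may differ.
import torch
import torch.nn as nn

class DuetGNNLayer(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.msg = nn.Linear(feat_dim, feat_dim)
        self.upd = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, x, adj):
        # x: (num_joints, feat_dim); adj: (num_joints, num_joints) 0/1 matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = adj @ self.msg(x) / deg          # mean over neighbor messages
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))

# Two 17-joint skeletons stacked into one 34-node graph, plus a hypothetical
# dancer-to-dancer contact edge (e.g., right wrist to right wrist).
adj = torch.zeros(34, 34)
adj[9, 17 + 9] = adj[17 + 9, 9] = 1.0
feats = torch.randn(34, 32)
out = DuetGNNLayer()(feats, adj)
```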
arXiv Detail & Related papers (2025-03-04T20:08:31Z)
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
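A minimal sketch of the autoregressive token-prediction idea follows; the shared vocabulary layout, model sizes, and flat interleaving of music/leader/follower tokens are illustrative assumptions rather than Duolando's actual design.

```python
# Sketch of follower-motion token prediction in a Duolando-like setup:
# a decoder-only transformer autoregressively predicts the next motion
# token given music, leader, and past follower tokens. Vocabulary sizes
# and the flat token concatenation are illustrative assumptions.
import torch
import torch.nn as nn

class FollowerGPT(nn.Module):
    def __init__(self, vocab=512, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.tok = nn.Embedding(3 * vocab, d_model)   # music/leader/follower codebooks
        self.pos = nn.Embedding(2048, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab)         # next follower token logits

    def forward(self, tokens):                        # tokens: (batch, seq)
        seq = tokens.size(1)
        h = self.tok(tokens) + self.pos(torch.arange(seq, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(seq).to(tokens.device)
        return self.head(self.blocks(h, mask=mask))   # causal attention

logits = FollowerGPT()(torch.randint(0, 1536, (1, 64)))
```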
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
- LM2D: Lyrics- and Music-Driven Dance Synthesis [28.884929875333846]
LM2D is designed to create dance conditioned on both music and lyrics in one diffusion generation step.
We introduce the first 3D dance-motion dataset that encompasses both music and lyrics, obtained with pose estimation technologies.
The results demonstrate LM2D is able to produce realistic and diverse dance matching both lyrics and music.
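To make the one-step idea concrete, here is a toy sketch of a conditional network that maps noise plus music and lyric embeddings directly to a pose sequence in a single pass; the fusion scheme and dimensions are assumptions, not LM2D's implementation.

```python
# Sketch of one-step conditional generation in the spirit of LM2D: a denoiser
# maps pure noise plus music and lyric embeddings to a pose sequence in a
# single pass. The fusion scheme and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class OneStepDancer(nn.Module):
    def __init__(self, pose_dim=72, music_dim=128, lyric_dim=128, hidden=256):
        super().__init__()
        self.cond = nn.Linear(music_dim + lyric_dim, hidden)
        self.net = nn.GRU(pose_dim + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, noise, music, lyrics):
        # noise: (B, T, pose_dim); music: (B, T, music_dim); lyrics: (B, T, lyric_dim)
        c = self.cond(torch.cat([music, lyrics], dim=-1))
        h, _ = self.net(torch.cat([noise, c], dim=-1))
        return self.out(h)                     # predicted clean pose sequence

model = OneStepDancer()
poses = model(torch.randn(1, 120, 72), torch.randn(1, 120, 128), torch.randn(1, 120, 128))
```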
arXiv Detail & Related papers (2024-03-14T13:59:04Z)
- DisCo: Disentangled Control for Realistic Human Dance Generation [125.85046815185866]
We introduce DisCo, which includes a novel model architecture with disentangled control to improve the compositionality of dance synthesis.
DisCo can generate high-quality human dance images and videos with diverse appearances and flexible motions.
arXiv Detail & Related papers (2023-06-30T17:37:48Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Dual Learning Music Composition and Dance Choreography [57.55406449959893]
Music and dance have always co-existed as pillars of human activities, contributing immensely to cultural, social, and entertainment functions.
Recent research works have studied generative models for dance sequences conditioned on music.
We propose a novel extension, where we jointly model both tasks in a dual learning approach.
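Below is a toy sketch of what a dual-learning objective can look like: a music-to-dance model and a dance-to-music model trained jointly, with an added cycle-consistency term tying them together. The stand-in models and loss weights are illustrative assumptions; the paper's actual formulation will differ.

```python
# Sketch of a dual-learning objective: a music-to-dance model and a
# dance-to-music model trained jointly, with a cycle term tying them
# together. Both models and the loss weights are illustrative assumptions.
import torch
import torch.nn as nn

m2d = nn.GRU(128, 72, batch_first=True)   # music features -> poses (toy stand-in)
d2m = nn.GRU(72, 128, batch_first=True)   # poses -> music features (toy stand-in)
mse = nn.MSELoss()

music = torch.randn(4, 100, 128)
dance = torch.randn(4, 100, 72)

pred_dance, _ = m2d(music)
pred_music, _ = d2m(dance)
cycle_music, _ = d2m(pred_dance)           # music -> dance -> music round trip

loss = mse(pred_dance, dance) + mse(pred_music, music) + 0.5 * mse(cycle_music, music)
loss.backward()
```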
arXiv Detail & Related papers (2022-01-28T09:20:28Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
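For intuition, here is a small, self-contained sketch of the entropic (Sinkhorn) approximation to an optimal transport distance between batches of generated and real poses; the squared-Euclidean cost and fixed iteration count are illustrative choices, and the paper's exact optimal transport and Gromov-Wasserstein formulations may differ.

```python
# Sketch of an entropic optimal-transport (Sinkhorn) distance between a batch
# of generated poses and real poses. The cost function (squared Euclidean)
# and iteration count are illustrative choices, not MDOT-Net's exact setup.
import torch

def sinkhorn_distance(x, y, eps=0.1, iters=100):
    # x: (n, d) generated samples; y: (m, d) real samples.
    cost = torch.cdist(x, y) ** 2                       # pairwise squared distances
    k = torch.exp(-cost / eps)
    a = torch.full((x.size(0),), 1.0 / x.size(0))       # uniform source weights
    b = torch.full((y.size(0),), 1.0 / y.size(0))       # uniform target weights
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):                              # Sinkhorn fixed-point updates
        u = a / (k @ v)
        v = b / (k.t() @ u)
    plan = u[:, None] * k * v[None, :]                  # entropic transport plan
    return (plan * cost).sum()

d = sinkhorn_distance(torch.randn(64, 72), torch.randn(64, 72))
```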
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
- Transflower: probabilistic autoregressive dance generation with multimodal attention [31.308435764603658]
We present a novel probabilistic autoregressive architecture that models the distribution over future poses with a normalizing flow conditioned on previous poses as well as music context.
Second, we introduce the currently largest 3D dance-motion dataset, obtained with a variety of motion-capture technologies, and including both professional and casual dancers.
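As a sketch of the normalizing-flow building block, here is one conditional affine coupling layer over a single pose vector, conditioned on a context embedding standing in for past poses and music; the sizes and the tanh stabilization are assumptions, not Transflower's code.

```python
# Sketch of one conditional affine coupling layer, the building block of a
# normalizing flow over the next pose, conditioned on context (past poses +
# music) as in Transflower-style models. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, pose_dim=72, ctx_dim=256, hidden=256):
        super().__init__()
        self.half = pose_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (pose_dim - self.half)),
        )

    def forward(self, x, ctx):
        # Transform the second half of x conditioned on the first half + context.
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                        # keep scales numerically stable
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                  # contribution to the log-likelihood
        return torch.cat([x1, y2], dim=-1), log_det

layer = ConditionalCoupling()
y, log_det = layer(torch.randn(8, 72), torch.randn(8, 256))
```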
arXiv Detail & Related papers (2021-06-25T20:14:28Z)
- ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit [28.877908457607678]
We design a two-stage music-to-dance synthesis framework, ChoreoNet, to imitate the human choreography procedure.
Our framework first devises a CAU prediction model to learn the mapping between music and CAU sequences.
We then devise a spatial-temporal inpainting model to convert the CAU sequence into continuous dance motions.
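Here is a skeleton of that two-stage pipeline with toy stand-ins for both stages; the CAU vocabulary size and model internals are illustrative assumptions, not ChoreoNet's actual networks.

```python
# Skeleton of a ChoreoNet-style two-stage pipeline: stage 1 maps music
# features to a sequence of choreographic action unit (CAU) ids; stage 2
# converts the CAU sequence into continuous motion. Both stages here are
# toy stand-ins; the real models and CAU vocabulary are the paper's own.
import torch
import torch.nn as nn

class CAUPredictor(nn.Module):                  # stage 1: music -> CAU ids
    def __init__(self, music_dim=128, n_caus=100, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(music_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_caus)

    def forward(self, music):                   # (B, T, music_dim)
        h, _ = self.rnn(music)
        return self.cls(h).argmax(dim=-1)       # (B, T) CAU ids

class MotionInpainter(nn.Module):               # stage 2: CAU ids -> poses
    def __init__(self, n_caus=100, pose_dim=72, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_caus, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, cau_ids):
        h, _ = self.rnn(self.emb(cau_ids))
        return self.out(h)                      # continuous dance motion

music = torch.randn(1, 200, 128)
poses = MotionInpainter()(CAUPredictor()(music))
```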
arXiv Detail & Related papers (2020-09-16T12:38:19Z)
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion dataset is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)
- Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, that takes the style, rhythm, and melody of the music as control signals.
We capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance pair dataset.
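A minimal sketch of a music-conditioned autoregressive pose generator in this spirit follows; the control-feature dimensionality and the recurrence are illustrative assumptions, not DanceNet's architecture.

```python
# Sketch of an autoregressive pose model conditioned on per-frame music
# control features (standing in for style/rhythm/melody), DanceNet-style.
# The feature split and recurrence are illustrative assumptions.
import torch
import torch.nn as nn

class MusicConditionedAR(nn.Module):
    def __init__(self, pose_dim=72, ctrl_dim=96, hidden=256):
        super().__init__()
        self.rnn = nn.GRUCell(pose_dim + ctrl_dim, hidden)
        self.out = nn.Linear(hidden, pose_dim)

    def generate(self, ctrl, first_pose):
        # ctrl: (T, ctrl_dim) music controls; first_pose: (1, pose_dim) seed.
        h = torch.zeros(1, self.rnn.hidden_size)
        pose, poses = first_pose, []
        for t in range(ctrl.size(0)):           # one frame at a time
            h = self.rnn(torch.cat([pose, ctrl[t:t + 1]], dim=-1), h)
            pose = self.out(h)                  # feed prediction back in
            poses.append(pose)
        return torch.cat(poses)                 # (T, pose_dim)

gen = MusicConditionedAR()
seq = gen.generate(torch.randn(120, 96), torch.zeros(1, 72))
```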
arXiv Detail & Related papers (2020-02-02T17:18:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.