Reimagining Dance: Real-time Music Co-creation between Dancers and AI
- URL: http://arxiv.org/abs/2506.12008v1
- Date: Fri, 13 Jun 2025 17:56:53 GMT
- Title: Reimagining Dance: Real-time Music Co-creation between Dancers and AI
- Authors: Olga Vechtomova, Jeff Bos
- Abstract summary: We present a system that enables dancers to dynamically shape musical environments through their movements. Our multi-modal architecture creates a coherent musical composition by intelligently combining pre-recorded musical clips in response to dance movements.
- Score: 5.708964539699851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dance performance traditionally follows a unidirectional relationship where movement responds to music. While AI has advanced in various creative domains, its application in dance has primarily focused on generating choreography from musical input. We present a system that enables dancers to dynamically shape musical environments through their movements. Our multi-modal architecture creates a coherent musical composition by intelligently combining pre-recorded musical clips in response to dance movements, establishing a bidirectional creative partnership where dancers function as both performers and composers. Through correlation analysis of performance data, we demonstrate emergent communication patterns between movement qualities and audio features. This approach reconceptualizes the role of AI in performing arts as a responsive collaborator that expands possibilities for both professional dance performance and improvisational artistic expression across broader populations.
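The correlation analysis mentioned in the abstract can be pictured with a small sketch. This is not the authors' code: the movement qualities (velocity, expansion), audio features (RMS energy, spectral centroid), and data below are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): Pearson correlations
# between movement-quality time series and audio-feature time series.
# Feature names and data are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

def correlate_features(movement: dict, audio: dict) -> dict:
    """Correlate every movement feature with every audio feature.

    movement, audio: feature name -> 1-D array sampled on a common time grid.
    Returns {(movement_name, audio_name): (r, p_value)}.
    """
    results = {}
    for m_name, m_series in movement.items():
        for a_name, a_series in audio.items():
            n = min(len(m_series), len(a_series))      # trim to common length
            r, p = pearsonr(m_series[:n], a_series[:n])
            results[(m_name, a_name)] = (r, p)
    return results

# Toy usage with synthetic data standing in for logged performance features.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)                             # 60 s sampled at 10 Hz
movement = {"velocity": np.abs(np.sin(t)) + 0.1 * rng.normal(size=600),
            "expansion": np.cos(t) + 0.1 * rng.normal(size=600)}
audio = {"rms_energy": np.abs(np.sin(t + 0.2)) + 0.1 * rng.normal(size=600),
         "spectral_centroid": rng.random(600)}
for pair, (r, p) in correlate_features(movement, audio).items():
    print(pair, f"r={r:.2f}", f"p={p:.3f}")
```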
Related papers
- ChoreoMuse: Robust Music-to-Dance Video Generation with Style Transfer and Beat-Adherent Motion [10.21851621470535]
We introduce ChoreoMuse, a diffusion-based framework that uses SMPL format parameters and their variations as intermediaries between music and video generation. ChoreoMuse supports style-controllable, high-fidelity dance video generation across diverse musical genres and individual dancer characteristics. Our method employs a novel music encoder, MotionTune, to capture motion cues from audio, ensuring that the generated choreography closely follows the beat and expressive qualities of the input music.
arXiv Detail & Related papers (2025-07-26T07:17:50Z) - Dyads: Artist-Centric, AI-Generated Dance Duets [6.67162793750123]
Existing AI-generated dance methods primarily train on motion capture data from solo dance performances. This work addresses both needs of the field by proposing an AI method to model the complex interactions between pairs of dancers.
arXiv Detail & Related papers (2025-03-05T22:58:03Z) - Invisible Strings: Revealing Latent Dancer-to-Dancer Interactions with Graph Neural Networks [6.67162793750123]
We use Graph Neural Networks to highlight and interpret the intricate connections shared by two dancers. We demonstrate the potential for graph-based methods to construct alternate models of the collaborative dynamics of duets.
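As a rough illustration of the graph-based idea (not the paper's model), the sketch below runs one round of mean-aggregation message passing over a graph whose nodes are the joints of two dancers; the joint count, edges, weights, and feature sizes are invented for the example.

```python
# Minimal sketch (not the paper's model): one round of mean-aggregation
# message passing over a graph whose nodes are the joints of two dancers.
# Joint counts, edges, weights, and feature sizes are invented for the example.
import numpy as np

def message_passing(features: np.ndarray, edges: list,
                    w_self: np.ndarray, w_nbr: np.ndarray) -> np.ndarray:
    """features: (num_nodes, d) per-joint features, e.g. positions/velocities."""
    num_nodes, _ = features.shape
    agg = np.zeros_like(features)
    deg = np.zeros(num_nodes)
    for i, j in edges:                                   # undirected edges
        agg[i] += features[j]; deg[i] += 1
        agg[j] += features[i]; deg[j] += 1
    agg /= np.maximum(deg, 1)[:, None]                   # mean over neighbours
    return np.maximum(features @ w_self + agg @ w_nbr, 0.0)   # linear + ReLU

# Two dancers with 3 joints each; the edge (2, 3) is the dancer-to-dancer link.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)]
w_self, w_nbr = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(message_passing(x, edges, w_self, w_nbr).shape)    # -> (6, 8)
```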
arXiv Detail & Related papers (2025-03-04T20:08:31Z) - Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
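The autoregressive prediction loop can be sketched as follows. This is not Duolando's code; the `model` interface, vocabulary size, and greedy decoding are assumptions made for illustration.

```python
# Minimal sketch (not Duolando): greedy autoregressive decoding of follower
# motion tokens conditioned on music tokens and the leader's motion tokens.
# The `model` interface, vocabulary size, and token contents are assumptions.
import torch

@torch.no_grad()
def generate_follower(model, music_tokens, leader_tokens,
                      start_token: int, num_steps: int):
    """Predict one follower token per step from all context seen so far."""
    follower = [start_token]
    for t in range(num_steps):
        logits = model(music_tokens[: t + 1],            # music heard so far
                       leader_tokens[: t + 1],           # leader's motion so far
                       torch.tensor(follower))           # follower's own history
        follower.append(int(logits.argmax()))            # greedy next-token choice
    return follower[1:]                                  # drop the start token

# Toy stand-in for a trained model: random logits over a 512-token vocabulary.
dummy_model = lambda music, leader, follower: torch.randn(512)
music = torch.zeros(16, dtype=torch.long)
leader = torch.zeros(16, dtype=torch.long)
print(generate_follower(dummy_model, music, leader, start_token=0, num_steps=8))
```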
arXiv Detail & Related papers (2024-03-27T17:57:02Z) - Learning Music-Dance Representations through Explicit-Implicit Rhythm Synchronization [22.279424952432677]
Music-dance representation can be applied to three downstream tasks: (a) dance classification, (b) music-dance retrieval, and (c) music-dance.
We derive the dance rhythms from visual appearance and motion cues, inspired by music rhythm analysis. The visual rhythms are then temporally aligned with their music counterparts, which are extracted from the amplitude of the sound intensity.
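A minimal sketch of this kind of pipeline (not the paper's implementation): an audio rhythm curve derived from sound intensity, a visual rhythm curve derived from joint motion, and alignment via cross-correlation. The file path, frame rates, and pose format are placeholders.

```python
# Minimal sketch (not the paper's pipeline): an audio rhythm curve from sound
# intensity, a visual rhythm curve from joint motion, and alignment via the
# lag that maximises their cross-correlation. Paths, frame rates, and the pose
# format are placeholders; in practice both curves are resampled to one rate.
import numpy as np
import librosa

def audio_rhythm(path: str, hop_length: int = 512) -> np.ndarray:
    """Onset-strength envelope, which rises with sharp gains in sound intensity."""
    y, sr = librosa.load(path)
    return librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop_length)

def visual_rhythm(joint_positions: np.ndarray) -> np.ndarray:
    """joint_positions: (frames, joints, 2 or 3). Rhythm ~ per-frame motion magnitude."""
    velocity = np.diff(joint_positions, axis=0)
    return np.linalg.norm(velocity, axis=-1).sum(axis=-1)

def best_lag(a: np.ndarray, b: np.ndarray) -> int:
    """Lag (in frames) by which b should be delayed to best match a."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    xcorr = np.correlate(a, b, mode="full")
    return int(np.argmax(xcorr)) - (len(b) - 1)

# Toy check with synthetic poses: a rhythm curve against a copy delayed 5 frames.
rng = np.random.default_rng(0)
poses = rng.normal(size=(300, 17, 2)).cumsum(axis=0)
v = visual_rhythm(poses)
delayed = np.roll(v, 5) + 0.05 * rng.normal(size=len(v))
print(best_lag(delayed, v))                              # prints approximately 5
```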
arXiv Detail & Related papers (2022-07-07T09:44:44Z) - Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z) - Dual Learning Music Composition and Dance Choreography [57.55406449959893]
Music and dance have always co-existed as pillars of human activities, contributing immensely to cultural, social, and entertainment functions.
Recent research works have studied generative models for dance sequences conditioned on music.
We propose a novel extension, where we jointly model both tasks in a dual learning approach.
arXiv Detail & Related papers (2022-01-28T09:20:28Z) - Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographs from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
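These two distances can be illustrated with the POT library. This is not MDOT-Net itself; the feature arrays are random stand-ins for dance and music features, and uniform sample weights are an assumption.

```python
# Minimal sketch (not MDOT-Net): the two distances computed with the POT
# library (`pip install pot`). The feature arrays are random stand-ins for
# dance and music features; uniform sample weights are an assumption.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
generated_dance = rng.normal(size=(100, 16))   # features of generated dance frames
real_dance = rng.normal(size=(120, 16))        # features of reference dance frames
music = rng.normal(size=(120, 12))             # music features (a different space)

# Optimal transport cost between generated and real dance feature distributions,
# usable as an authenticity score (lower = closer to the real distribution).
a, b = ot.unif(len(generated_dance)), ot.unif(len(real_dance))
M = ot.dist(generated_dance, real_dance)       # pairwise squared Euclidean costs
wasserstein_cost = ot.emd2(a, b, M)

# Gromov-Wasserstein cost compares intra-domain structure, so it can relate
# dance and music features even though they live in different spaces.
C_dance = ot.dist(real_dance, real_dance)
C_music = ot.dist(music, music)
gw_cost = ot.gromov.gromov_wasserstein2(
    C_dance, C_music, ot.unif(len(real_dance)), ot.unif(len(music)),
    loss_fun="square_loss")
print(wasserstein_cost, gw_cost)
```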
arXiv Detail & Related papers (2021-12-03T09:37:26Z) - Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z) - Feel The Music: Automatically Generating A Dance For An Input Song [58.653867648572]
We present a general computational approach that enables a machine to generate a dance for any input music.
We encode intuitive, flexible heuristics for what a 'good' dance is: the structure of the dance should align with the structure of the music.
arXiv Detail & Related papers (2020-06-21T20:29:50Z) - Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, to take the style, rhythm and melody of music as the control signals.
We capture several synchronized music-dance pairs performed by professional dancers, and build a high-quality music-dance pair dataset.
arXiv Detail & Related papers (2020-02-02T17:18:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.