Invisible Strings: Revealing Latent Dancer-to-Dancer Interactions with Graph Neural Networks
- URL: http://arxiv.org/abs/2503.04816v1
- Date: Tue, 04 Mar 2025 20:08:31 GMT
- Title: Invisible Strings: Revealing Latent Dancer-to-Dancer Interactions with Graph Neural Networks
- Authors: Luis Vitor Zerkowski, Zixuan Wang, Ilya Vidrin, Mariel Pettee
- Abstract summary: We use Graph Neural Networks to highlight and interpret the intricate connections shared by two dancers. We demonstrate the potential for graph-based methods to construct alternate models of the collaborative dynamics of duets.
- Score: 6.67162793750123
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Dancing in a duet often requires a heightened attunement to one's partner: their orientation in space, their momentum, and the forces they exert on you. Dance artists who work in partnered settings might have a strong embodied understanding in the moment of how their movements relate to their partner's, but typical documentation of dance fails to capture these varied and subtle relationships. Working closely with dance artists interested in deepening their understanding of partnering, we leverage Graph Neural Networks (GNNs) to highlight and interpret the intricate connections shared by two dancers. Using a video-to-3D-pose extraction pipeline, we extract 3D movements from curated videos of contemporary dance duets, apply a dedicated pre-processing to improve the reconstruction, and train a GNN to predict weighted connections between the dancers. By visualizing and interpreting the predicted relationships between the two movers, we demonstrate the potential for graph-based methods to construct alternate models of the collaborative dynamics of duets. Finally, we offer some example strategies for how to use these insights to inform a generative and co-creative studio practice.
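The abstract describes the modeling step only at a high level. As a minimal sketch of that step, a GNN that scores weighted connections between the joints of two dancers from their 3D poses, the PyTorch module below may help; the joint count, layer sizes, single round of message passing, and sigmoid edge scorer are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: score cross-dancer joint connections with a small GNN.
# Dimensions, layer choices, and names are assumptions for illustration only.
import torch
import torch.nn as nn

class DuetEdgeScorer(nn.Module):
    def __init__(self, num_joints: int = 17, in_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)       # per-joint 3D position -> embedding
        self.message = nn.Linear(2 * hidden, hidden)  # message along intra-dancer bone edges
        self.edge_mlp = nn.Sequential(                # scores one cross-dancer joint pair
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.num_joints = num_joints

    def forward(self, pose_a, pose_b, skeleton_edges):
        # pose_a, pose_b: (num_joints, 3) single-frame 3D poses for each dancer
        # skeleton_edges: (E, 2) long tensor of directed intra-dancer bones
        # (list each bone in both directions for symmetric passing)
        h = self.encode(torch.stack([pose_a, pose_b]))  # (2, J, hidden)
        src, dst = skeleton_edges[:, 0], skeleton_edges[:, 1]
        # One round of message passing over each skeleton (sum aggregation).
        msgs = self.message(torch.cat([h[:, src], h[:, dst]], dim=-1))  # (2, E, hidden)
        agg = torch.zeros_like(h).index_add_(1, dst, msgs)
        h = torch.relu(h + agg)
        # Score every joint of dancer A against every joint of dancer B.
        ha = h[0].unsqueeze(1).expand(-1, self.num_joints, -1)  # (J, J, hidden)
        hb = h[1].unsqueeze(0).expand(self.num_joints, -1, -1)
        weights = torch.sigmoid(self.edge_mlp(torch.cat([ha, hb], dim=-1)))
        return weights.squeeze(-1)  # (J, J) connection weights in [0, 1]
```

The resulting (J, J) weight matrix can be rendered as weighted lines drawn between the two skeletons, which is the kind of visualization of predicted dancer-to-dancer relationships the abstract describes.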
Related papers
- Dyads: Artist-Centric, AI-Generated Dance Duets [6.67162793750123]
Existing AI-generated dance methods primarily train on motion capture data from solo dance performances. This work addresses both needs of the field by proposing an AI method to model the complex interactions between pairs of dancers.
arXiv Detail & Related papers (2025-03-05T22:58:03Z)
- X-Dancer: Expressive Music to Human Dance Video Generation [26.544761204917336]
X-Dancer is a novel zero-shot music-driven image animation pipeline.
It creates diverse and long-range lifelike human dance videos from a single static image.
arXiv Detail & Related papers (2025-02-24T18:47:54Z)
- Synergy and Synchrony in Couple Dances [62.88254856013913]
We study to what extent social interaction influences one's behavior in the setting of two dancers dancing as a couple.
We first consider a baseline in which we predict a dancer's future moves conditioned only on their past motion without regard to their partner.
We then investigate the advantage of taking social information into account by conditioning also on the motion of their dancing partner.
arXiv Detail & Related papers (2024-09-06T17:59:01Z)
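As a rough illustration of the comparison in the Synergy and Synchrony entry above (a sketch under assumed shapes and architecture, not the paper's model), the only difference between the past-only baseline and the socially conditioned model is whether the partner's motion enters the input:

```python
# Illustrative sketch: predict a dancer's next pose from their own past
# motion, optionally conditioned on the partner's motion. All shapes and
# the architecture are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    def __init__(self, pose_dim: int = 51, hidden: int = 128, use_partner: bool = False):
        super().__init__()
        # pose_dim = 51 assumes 17 joints x 3 coordinates, flattened per frame.
        self.use_partner = use_partner
        in_dim = pose_dim * (2 if use_partner else 1)
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, own_past, partner_past=None):
        # own_past, partner_past: (batch, time, pose_dim) pose sequences
        x = own_past
        if self.use_partner:
            assert partner_past is not None, "social model needs the partner's motion"
            x = torch.cat([own_past, partner_past], dim=-1)
        _, h = self.rnn(x)
        return self.head(h[-1])  # (batch, pose_dim): predicted next pose

# baseline = MotionPredictor(use_partner=False)  # conditioned on own past only
# social   = MotionPredictor(use_partner=True)   # also sees the partner's motion
```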
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
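The Duolando entry above centers on autoregressively predicting tokenized follower motion conditioned on music and the leader. A toy version of that factorization might look like the following; the vocabulary sizes, per-frame token alignment, and model are assumptions, and positional encodings are omitted for brevity.

```python
# Toy sketch of autoregressive follower-token prediction conditioned on music
# and the leader's motion. Vocabulary sizes, the embedding scheme, and model
# size are illustrative assumptions; positional encodings are omitted.
import torch
import torch.nn as nn

class FollowerGPT(nn.Module):
    def __init__(self, vocab: int = 512, d_model: int = 256, n_layer: int = 4):
        super().__init__()
        self.tok = nn.ModuleDict({
            "music": nn.Embedding(vocab, d_model),
            "leader": nn.Embedding(vocab, d_model),
            "follower": nn.Embedding(vocab, d_model),
        })
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layer)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, music, leader, follower):
        # All inputs: (batch, time) token ids, assumed aligned per frame.
        x = (self.tok["music"](music)
             + self.tok["leader"](leader)
             + self.tok["follower"](follower))
        t = x.size(1)
        # Causal mask: True above the diagonal blocks attention to the future.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.decoder(x, mask=mask)
        return self.head(h)  # (batch, time, vocab): next-follower-token logits
```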
- Dance with You: The Diversity Controllable Dancer Generation via Diffusion Models [27.82646255903689]
We introduce a novel multi-dancer synthesis task called partner dancer generation.
The core of this task is to ensure the controllable diversity of the generated partner dancer.
To address the lack of multi-person datasets, we introduce AIST-M, a new dataset for partner dancer generation.
arXiv Detail & Related papers (2023-08-23T15:54:42Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes units from the choreographic memory into a fluent dance coherent with the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
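Both distances named in the MDOT-Net entry above are available in the POT library (https://pythonot.github.io/). The snippet below uses random stand-in features purely to show what each distance compares; it is not the paper's training objective.

```python
# Illustrative use of the two distances named above, via the POT library
# (pip install pot). Random features stand in for dance / music
# representations; this is not the MDOT-Net objective itself.
import numpy as np
import ot

rng = np.random.default_rng(0)
dance_gen = rng.normal(size=(100, 16))   # generated dance features (samples x dims)
dance_real = rng.normal(size=(100, 16))  # real dance features, same space

# Optimal transport (Wasserstein) distance: compares two distributions
# living in the same space, here generated vs. real dance features.
a = np.full(100, 1 / 100)                # uniform weights on each sample
M = ot.dist(dance_gen, dance_real)       # pairwise squared Euclidean costs
w2 = ot.emd2(a, a, M)

# Gromov-Wasserstein distance: compares distributions in *different* spaces
# (dance vs. music) through their internal pairwise-distance structure.
music = rng.normal(size=(100, 8))        # music features in a different space
C1 = ot.dist(dance_gen, dance_gen)
C2 = ot.dist(music, music)
gw = ot.gromov.gromov_wasserstein2(C1, C2, a, a, loss_fun="square_loss")

print(f"OT distance: {w2:.3f}, GW distance: {gw:.3f}")
```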
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion dataset is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.