Synergy and Synchrony in Couple Dances
- URL: http://arxiv.org/abs/2409.04440v1
- Date: Fri, 6 Sep 2024 17:59:01 GMT
- Title: Synergy and Synchrony in Couple Dances
- Authors: Vongani Maluleke, Lea Müller, Jathushan Rajasegaran, Georgios Pavlakos, Shiry Ginosar, Angjoo Kanazawa, Jitendra Malik
- Abstract summary: We study to what extent social interaction influences one's behavior in the setting of two dancers dancing as a couple.
We first consider a baseline in which we predict a dancer's future moves conditioned only on their past motion without regard to their partner.
We then investigate the advantage of taking social information into account by conditioning also on the motion of their dancing partner.
- Score: 62.88254856013913
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper asks to what extent social interaction influences one's behavior. We study this in the setting of two dancers dancing as a couple. We first consider a baseline in which we predict a dancer's future moves conditioned only on their past motion without regard to their partner. We then investigate the advantage of taking social information into account by conditioning also on the motion of their dancing partner. We focus our analysis on Swing, a dance genre with tight physical coupling for which we present an in-the-wild video dataset. We demonstrate that single-person future motion prediction in this context is challenging. Instead, we observe that prediction greatly benefits from considering the interaction partners' behavior, resulting in surprisingly compelling couple dance synthesis results (see supp. video). Our contributions are a demonstration of the advantages of socially conditioned future motion prediction and an in-the-wild, couple dance video dataset to enable future research in this direction. Video results are available on the project website: https://von31.github.io/synNsync
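To make the conditioning scheme concrete, below is a minimal sketch of socially conditioned future motion prediction. This is not the authors' model: the pose dimensionality, the transformer encoder, and all module and variable names are assumptions for illustration. It contrasts the single-person baseline (conditioning only on a dancer's own past motion) with the social variant that also conditions on the partner's past motion.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of socially conditioned motion prediction.
# Pose representation, dimensions, and architecture are assumptions,
# not the paper's actual implementation.

POSE_DIM = 72   # assumed per-frame pose parameterization
HIDDEN = 256

class MotionPredictor(nn.Module):
    def __init__(self, social: bool):
        super().__init__()
        self.social = social
        # When social, concatenate the partner's pose to the dancer's own pose.
        in_dim = POSE_DIM * (2 if social else 1)
        self.embed = nn.Linear(in_dim, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(HIDDEN, POSE_DIM)  # predict the dancer's next pose

    def forward(self, own_past, partner_past=None):
        # own_past, partner_past: (batch, time, POSE_DIM)
        x = own_past
        if self.social:
            x = torch.cat([own_past, partner_past], dim=-1)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])  # next-frame pose from the last timestep

# Usage: the socially conditioned model sees both dancers' histories.
baseline = MotionPredictor(social=False)
social = MotionPredictor(social=True)
own = torch.randn(8, 120, POSE_DIM)      # 120 past frames of the dancer
partner = torch.randn(8, 120, POSE_DIM)  # partner's past motion
next_pose_baseline = baseline(own)
next_pose_social = social(own, partner)
```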
Related papers
- May the Dance be with You: Dance Generation Framework for Non-Humanoids [6.029154142020362]
We propose a framework for non-humanoid agents to learn how to dance from human videos.
Our framework works in two processes: (1) training a reward model which perceives the relationship between optical flow and music.
Experiment results show that generated dance motion can align with the music beat properly.
arXiv Detail & Related papers (2024-05-30T06:43:55Z)
- Automatic Dance Video Segmentation for Understanding Choreography [10.053913399613764]
We propose a method to automatically segment a dance video into each movement.
To build our training dataset, we annotate segmentation points to dance videos in the AIST Dance Video Database.
The evaluation study shows that the proposed method can estimate segmentation points with high accuracy.
arXiv Detail & Related papers (2024-05-30T06:19:01Z)
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
- DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance [50.01162760878841]
We present DCM, a new multi-modal 3D dataset that combines camera movement with dance motion and music audio.
This dataset encompasses 108 dance sequences (3.2 hours) of paired dance-camera-music data from the anime community.
We propose DanceCamera3D, a transformer-based diffusion model that incorporates a novel body attention loss and a condition separation strategy.
arXiv Detail & Related papers (2024-03-20T15:24:57Z)
- Dance with You: The Diversity Controllable Dancer Generation via Diffusion Models [27.82646255903689]
We introduce a novel multi-dancer synthesis task called partner dancer generation.
The core of this task is to ensure the controllable diversity of the generated partner dancer.
To address the lack of multi-person datasets, we introduce AIST-M, a new dataset for partner dancer generation.
arXiv Detail & Related papers (2023-08-23T15:54:42Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- DanceIt: Music-inspired Dancing Video Synthesis [38.87762996956861]
We propose to reproduce this inherent human capability of dancing to music within a computer vision system.
The proposed system consists of three modules.
The generated dancing videos match the content and rhythm of the music.
arXiv Detail & Related papers (2020-09-17T02:29:13Z)
- Feel The Music: Automatically Generating A Dance For An Input Song [58.653867648572]
We present a general computational approach that enables a machine to generate a dance for any input music.
We encode intuitive, flexible heuristics for what a 'good' dance is: the structure of the dance should align with the structure of the music.
arXiv Detail & Related papers (2020-06-21T20:29:50Z)
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms the existing state of the art on automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-06-11T00:08:25Z)
- Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, to take the style, rhythm and melody of music as the control signals.
We capture several synchronized music-dance pairs by professional dancers, and build a high-quality music-dance pair dataset.
arXiv Detail & Related papers (2020-02-02T17:18:31Z)