Dance with You: The Diversity Controllable Dancer Generation via
Diffusion Models
- URL: http://arxiv.org/abs/2308.13551v2
- Date: Mon, 4 Sep 2023 13:12:13 GMT
- Title: Dance with You: The Diversity Controllable Dancer Generation via
Diffusion Models
- Authors: Siyue Yao, Mingjie Sun, Bingliang Li, Fengyu Yang, Junle Wang, Ruimao
Zhang
- Abstract summary: We introduce a novel multi-dancer synthesis task called partner dancer generation.
The core of this task is to ensure the controllable diversity of the generated partner dancer.
To address the lack of multi-person datasets, we introduce AIST-M, a new dataset for partner dancer generation.
- Score: 27.82646255903689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, digital humans for interpersonal interaction in virtual
environments have gained significant attention. In this paper, we introduce a
novel multi-dancer synthesis task called partner dancer generation, which
involves synthesizing virtual human dancers capable of performing dance with
users. The task aims to control the pose diversity between the lead dancer and
the partner dancer. The core of this task is to ensure the controllable
diversity of the generated partner dancer while maintaining temporal
coordination with the lead dancer. This scenario differs from earlier research
on generating dance motions driven by music, as our emphasis is on
automatically designing partner dancer postures according to a pre-defined
diversity, the pose of the lead dancer, and the accompanying music. To
achieve this objective, we propose a three-stage framework called
Dance-with-You (DanY). Initially, we employ a 3D Pose Collection stage to
collect a wide range of basic dance poses as references for motion generation.
Then, we introduce a hyper-parameter that coordinates the similarity between
dancers by masking poses, preventing the generation of sequences that are
either over-diverse or overly uniform. To avoid rigid movements, we design a
Dance Pre-generated stage to pre-generate these masked poses instead of filling
them with zeros. After that, a Dance Motion Transfer stage is applied with the
leader sequence and music, in which a multi-conditional sampling formula is
rewritten to transfer the pre-generated poses into a sequence in the partner's
style. In practice, to address the lack of multi-person datasets, we introduce
AIST-M, a new dataset for partner dancer generation, which is publicly
available. Comprehensive evaluations on our AIST-M dataset demonstrate that
the proposed DanY can synthesize satisfactory partner dancer results with
controllable diversity.
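The abstract describes the diversity-controlled masking only at a high level. As a rough, hypothetical sketch (not the authors' implementation), the snippet below illustrates how a scalar diversity hyper-parameter could decide which frames of the partner sequence are masked and later filled with pre-generated poses rather than zeros; all names (`build_diversity_mask`, `fill_masked_frames`, `reference_poses`, `pregen_poses`) are illustrative assumptions.

```python
import numpy as np

def build_diversity_mask(num_frames: int, diversity: float, seed: int = 0) -> np.ndarray:
    """Boolean mask over frames; True marks frames to be regenerated.

    diversity in [0, 1]: 0 keeps the partner close to the collected
    reference poses, 1 allows every frame to be regenerated.
    """
    rng = np.random.default_rng(seed)
    n_masked = int(round(float(np.clip(diversity, 0.0, 1.0)) * num_frames))
    mask = np.zeros(num_frames, dtype=bool)
    if n_masked > 0:
        mask[rng.choice(num_frames, size=n_masked, replace=False)] = True
    return mask

def fill_masked_frames(reference_poses: np.ndarray,
                       pregen_poses: np.ndarray,
                       mask: np.ndarray) -> np.ndarray:
    """Keep reference poses where the mask is False and substitute
    pre-generated poses (instead of zeros) where it is True."""
    out = reference_poses.copy()
    out[mask] = pregen_poses[mask]
    return out

# Usage: 60 frames of (J joints x 3) poses with moderate diversity.
T, J = 60, 24
reference_poses = np.zeros((T, J, 3))    # stand-in for poses from the 3D Pose Collection stage
pregen_poses = np.random.randn(T, J, 3)  # stand-in for the pre-generation stage output
mask = build_diversity_mask(T, diversity=0.4)
partner_init = fill_masked_frames(reference_poses, pregen_poses, mask)
```

In DanY, the resulting sequence would then be passed to the diffusion-based Dance Motion Transfer stage, conditioned on the leader motion and the music, to render it in a partner style.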
Related papers
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
arXiv Detail & Related papers (2024-03-27T17:57:02Z) - DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance [50.01162760878841]
We present DCM, a new multi-modal 3D dataset that combines camera movement with dance motion and music audio.
This dataset encompasses 108 dance sequences (3.2 hours) of paired dance-camera-music data from the anime community.
We propose DanceCamera3D, a transformer-based diffusion model that incorporates a novel body attention loss and a condition separation strategy.
arXiv Detail & Related papers (2024-03-20T15:24:57Z) - Controllable Group Choreography using Contrastive Diffusion [9.524877757674176]
Music-driven group choreography holds significant potential for a wide range of industrial applications.
We introduce a Group Contrastive Diffusion (GCD) strategy to enhance the connection between dancers and their group.
We demonstrate the effectiveness of our approach in producing visually captivating and consistent group dance motions.
arXiv Detail & Related papers (2023-10-29T11:59:12Z) - FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance
Generation [33.9261932800456]
FineDance is the largest music-dance paired dataset with the most dance genres.
To address monotonous and unnatural hand movements existing in previous methods, we propose a full-body dance generation network.
To further enhance the genre-matching and long-term stability of generated dances, we propose a Genre&Coherent aware Retrieval Module.
arXiv Detail & Related papers (2022-12-07T16:10:08Z) - BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z) - Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic
Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes these units into a fluent dance coherent with the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z) - Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographs from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
arXiv Detail & Related papers (2021-12-03T09:37:26Z) - Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)