Controllable Group Choreography using Contrastive Diffusion
- URL: http://arxiv.org/abs/2310.18986v2
- Date: Fri, 3 Nov 2023 13:24:43 GMT
- Title: Controllable Group Choreography using Contrastive Diffusion
- Authors: Nhat Le, Tuong Do, Khoa Do, Hien Nguyen, Erman Tjiputra, Quang D.
Tran, Anh Nguyen
- Abstract summary: Music-driven group choreography holds significant potential for a wide range of industrial applications.
We introduce a Group Contrastive Diffusion (GCD) strategy to enhance the connection between dancers and their group.
We demonstrate the effectiveness of our approach in producing visually captivating and consistent group dance motions.
- Score: 9.524877757674176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Music-driven group choreography poses a considerable challenge but holds
significant potential for a wide range of industrial applications. The ability
to generate synchronized and visually appealing group dance motions that are
aligned with music opens up opportunities in many fields such as entertainment,
advertising, and virtual performances. However, most recent works either cannot
generate high-fidelity long-term motions or fail to offer a controllable
experience. In this work, we aim to address the demand for
high-quality and customizable group dance generation by effectively governing
the consistency and diversity of group choreographies. In particular, we
utilize a diffusion-based generative approach to enable the synthesis of a
flexible number of dancers and long-term group dances, while ensuring coherence
to the input music. Ultimately, we introduce a Group Contrastive Diffusion
(GCD) strategy to enhance the connection between dancers and their group,
enabling control over the consistency or diversity level of the synthesized
group animation via the classifier-guidance sampling technique.
Through extensive experiments and evaluation, we demonstrate the effectiveness
of our approach in producing visually captivating and consistent group dance
motions. The experimental results show the capability of our method to achieve
the desired levels of consistency and diversity, while maintaining the overall
quality of the generated group choreography. The source code can be found at
https://aioz-ai.github.io/GCD
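To make the control mechanism concrete, the sketch below illustrates how classifier-guidance sampling can steer a diffusion model toward a desired group-consistency level. It is a minimal PyTorch illustration, not the authors' implementation: the `denoiser`, the `consistency_score` network, and the guidance scale `gamma` are assumed placeholders. A larger positive `gamma` pushes samples toward higher group consistency, while a smaller or negative value favors diversity.

```python
# Minimal sketch (not the authors' code): one DDPM-style reverse step with
# classifier guidance, where the gradient of an assumed group-consistency
# score g(x_t, t) shifts the posterior mean.
import torch

def guided_reverse_step(denoiser, consistency_score, x_t, t,
                        gamma, alpha_t, alpha_bar_t, sigma_t):
    # Noise prediction from the (assumed) music-conditioned motion denoiser.
    eps = denoiser(x_t, t)

    # Gradient of the consistency score w.r.t. the noisy group motion.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        score = consistency_score(x_in, t).sum()
        grad = torch.autograd.grad(score, x_in)[0]

    # Standard DDPM posterior mean, shifted along the consistency gradient.
    mean = (x_t - (1.0 - alpha_t) / (1.0 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5
    mean = mean + gamma * sigma_t ** 2 * grad

    # Sample x_{t-1}; a full sampler would omit the noise term at t == 0.
    return mean + sigma_t * torch.randn_like(x_t)
```

Running this step from t = T down to 1 with a fixed gamma yields one sampling trajectory; varying the guidance scale is what exposes the consistency-diversity trade-off described in the abstract.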
Related papers
- Lodge++: High-quality and Long Dance Generation with Vivid Choreography Patterns [48.54956784928394]
Lodge++ is a choreography framework to generate high-quality, ultra-long, and vivid dances given the music and desired genre.
To handle the challenges in computational efficiency, Lodge++ adopts a two-stage strategy to produce dances from coarse to fine.
Lodge++ is validated by extensive experiments, which show that our method can rapidly generate ultra-long dances suitable for various dance genres.
arXiv Detail & Related papers (2024-10-27T09:32:35Z)
- Scalable Group Choreography via Variational Phase Manifold Learning [8.504657927912076]
We propose a phase-based variational generative model for group dance generation that learns a generative manifold.
Our method achieves high-fidelity group dance motion and enables the generation with an unlimited number of dancers.
arXiv Detail & Related papers (2024-07-26T16:02:37Z)
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
- Dance with You: The Diversity Controllable Dancer Generation via Diffusion Models [27.82646255903689]
We introduce a novel multi-dancer synthesis task called partner dancer generation.
The core of this task is to ensure the controllable diversity of the generated partner dancer.
To address the lack of multi-person datasets, we introduce AIST-M, a new dataset for partner dancer generation.
arXiv Detail & Related papers (2023-08-23T15:54:42Z)
- Music-Driven Group Choreography [10.501572863039852]
AIOZ-GDANCE is a new large-scale dataset for music-driven group dance generation.
We show that naively applying single-dance generation techniques to create group dance motion may lead to unsatisfactory results.
We propose a new method that takes an input music sequence and a set of 3D positions of dancers to efficiently produce multiple group-coherent choreographies.
arXiv Detail & Related papers (2023-03-22T06:26:56Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes units into a fluent dance coherent with the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music.
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)