Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure
- URL: http://arxiv.org/abs/2111.12159v1
- Date: Tue, 23 Nov 2021 21:26:31 GMT
- Title: Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure
- Authors: Andreas Aristidou, Anastasios Yiannakidis, Kfir Aberman, Daniel
Cohen-Or, Ariel Shamir, Yiorgos Chrysanthou
- Abstract summary: We present a music-driven motion synthesis framework that generates long-term sequences of human motions synchronized with the input beats.
Our framework enables the generation of diverse motions that are controlled by the content of the music, not only by its beat.
- Score: 47.09425316677689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthesizing human motion with a global structure, such as a choreography, is
a challenging task. Existing methods tend to concentrate on local smooth pose
transitions and neglect the global context or the theme of the motion. In this
work, we present a music-driven motion synthesis framework that generates
long-term sequences of human motions that are synchronized with the input
beats and jointly form a global structure respecting a specific dance genre.
In addition, our framework enables the generation of diverse motions that are
controlled by the content of the music, not only by its beat. Our
music-driven dance synthesis framework is a hierarchical system that consists
of three levels: pose, motif, and choreography. The pose level consists of an
LSTM component that generates temporally coherent sequences of poses. The motif
level guides sets of consecutive poses to form a movement that belongs to a
specific distribution, using a novel motion perceptual loss. The choreography
level selects the order of the performed movements and drives the system to
follow the global structure of a dance genre. Our results demonstrate the
effectiveness of our music-driven framework in generating natural and
consistent movements across various dance types, with control over the content
of the synthesized motions and respect for the overall structure of the dance.
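To make the hierarchy concrete, here is a minimal PyTorch sketch of the two lower levels, not the authors' implementation: a pose-level LSTM that maps per-frame music features and previous poses to the next poses, and a motif-level perceptual loss that compares pose windows in the feature space of a separately trained motif encoder. All module names, feature dimensions, and the encoder itself are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PoseLSTM(nn.Module):
    """Pose level (sketch): music features + previous poses -> next poses."""

    def __init__(self, music_dim=32, pose_dim=72, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(music_dim + pose_dim, hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, music, prev_poses):
        # music: (B, T, music_dim); prev_poses: (B, T, pose_dim), teacher-forced.
        h, _ = self.lstm(torch.cat([music, prev_poses], dim=-1))
        return self.head(h)  # (B, T, pose_dim): predicted poses per frame


def motif_perceptual_loss(motif_encoder, generated, reference):
    """Motif level (sketch): match deep features of pose windows instead of
    raw joint values, nudging generated movements toward the reference
    motif's distribution. `motif_encoder` is assumed pretrained and frozen."""
    f_gen = motif_encoder(generated)   # (B, feat_dim)
    f_ref = motif_encoder(reference)   # (B, feat_dim)
    return torch.mean((f_gen - f_ref) ** 2)
```

Comparing windows in a learned feature space rather than joint-by-joint lets a loss of this kind steer generated movements toward a motif's distribution without dictating exact poses; the choreography level would sit above this, choosing which motif target to supply for each segment.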
Related papers
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's movements, and the follower's own prior movements (a minimal decoding sketch follows this entry).
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
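As a rough illustration of Duolando's autoregressive conditioning, the sketch below predicts the follower's next motion token from time-aligned music, leader, and follower token streams with a causal transformer. The tokenization, vocabulary size, and layer sizes are invented for the example; only the overall GPT-style decoding pattern is taken from the abstract.

```python
import torch
import torch.nn as nn


class FollowerGPT(nn.Module):
    """Sketch of GPT-style follower prediction (hypothetical sizes)."""

    def __init__(self, vocab=512, dim=256, heads=4, layers=4, max_len=1024):
        super().__init__()
        # One embedding table per conditioning stream, summed per frame.
        self.music_emb = nn.Embedding(vocab, dim)
        self.leader_emb = nn.Embedding(vocab, dim)
        self.follower_emb = nn.Embedding(vocab, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, music, leader, follower):
        # All inputs: (B, T) integer token ids, time-aligned per frame.
        T = follower.shape[1]
        pos = torch.arange(T, device=follower.device)
        x = (self.music_emb(music) + self.leader_emb(leader)
             + self.follower_emb(follower) + self.pos_emb(pos))
        # Causal mask so step t only attends to steps <= t.
        causal = torch.triu(torch.full((T, T), float('-inf'),
                                       device=follower.device), diagonal=1)
        h = self.backbone(x, mask=causal)
        return self.head(h)  # logits for the follower's next token per step
```

At inference time one would sample a token from the last step's logits, append it to the follower stream, and repeat.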
- Bidirectional Autoregressive Diffusion Model for Dance Generation [26.449135437337034]
We propose a Bidirectional Autoregressive Diffusion Model (BADM) for music-to-dance generation.
A bidirectional encoder is built to enforce that the generated dance is harmonious in both the forward and backward directions.
To make the generated dance motion smoother, a local information decoder is built for local motion enhancement.
arXiv Detail & Related papers (2024-02-06T19:42:18Z)
- Dance Style Transfer with Cross-modal Transformer [17.216186480300756]
CycleDance is a dance style transfer system that transforms an existing motion clip in one dance style into a motion clip in another dance style.
Our method extends an existing CycleGAN architecture for modeling audio sequences and integrates multimodal transformer encoders to account for music context.
arXiv Detail & Related papers (2022-08-19T15:48:30Z)
- BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis [123.73677487809418]
We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
arXiv Detail & Related papers (2022-07-20T18:03:54Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes units into a fluent dance coherent with the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographies from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution, and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music (a small sketch of both distances follows this entry).
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
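Both of MDOT-Net's distances are easy to illustrate with the POT library; in the sketch below the random features are stand-ins for learned dance and music embeddings, so everything except the POT calls themselves is an assumption made for the example.

```python
import numpy as np
import ot  # POT: pip install pot

rng = np.random.default_rng(0)
dance_feats = rng.normal(size=(64, 16))   # 64 generated dance segments
real_feats = rng.normal(size=(64, 16))    # 64 real dance segments
music_feats = rng.normal(size=(64, 12))   # 64 aligned music segments

a = np.full(64, 1 / 64)  # uniform weights over segments
b = np.full(64, 1 / 64)

# Optimal transport (Wasserstein) distance between generated and real dance
# features: low cost suggests the generated distribution looks authentic.
M = ot.dist(dance_feats, real_feats)  # pairwise squared-Euclidean costs
w2 = ot.emd2(a, b, M)

# Gromov-Wasserstein compares intra-domain geometry, so dance and music can
# live in different feature spaces: it matches pairwise-distance structure.
C1 = ot.dist(dance_feats, dance_feats)
C2 = ot.dist(music_feats, music_feats)
gw = ot.gromov.gromov_wasserstein2(C1, C2, a, b, 'square_loss')

print(f"OT distance (authenticity): {w2:.3f}")
print(f"GW distance (music correspondence): {gw:.3f}")
```

The appeal of Gromov-Wasserstein here is exactly that mismatch of spaces: dance and music features need no common metric, only comparable internal structure.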
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)
- Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, that takes the style, rhythm, and melody of the music as control signals (a sketch of extracting such signals follows this entry).
We capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance dataset.
arXiv Detail & Related papers (2020-02-02T17:18:31Z)
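A sketch of extracting DanceNet-style control signals with librosa is given below; the choice of proxies (tempo and beat positions for rhythm, chroma for melody, a user-supplied one-hot label for style) is an assumption for illustration, not the paper's recipe.

```python
import librosa
import numpy as np


def music_control_signals(path, style_id, num_styles=4):
    """Hypothetical control-signal extraction for a DanceNet-like model."""
    y, sr = librosa.load(path)
    # Rhythm: global tempo plus per-beat frame positions.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    # Melody proxy: chroma energy per pitch class over time.
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    # Style: a one-hot label chosen by the user (assumed labeling scheme).
    style = np.eye(num_styles)[style_id]
    return tempo, beat_frames, chroma, style
```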
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.