BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis
- URL: http://arxiv.org/abs/2207.10120v2
- Date: Fri, 22 Jul 2022 13:02:35 GMT
- Title: BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis
- Authors: Davide Moltisanti, Jinyi Wu, Bo Dai, Chen Change Loy
- Abstract summary: We introduce a new dataset aiming to challenge common assumptions in dance motion synthesis.
We focus on breakdancing which features acrobatic moves and tangled postures.
Our efforts produced the BRACE dataset, which contains over 3 hours and 30 minutes of densely annotated poses.
- Score: 123.73677487809418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models for audio-conditioned dance motion synthesis map music
features to dance movements. Models are trained to associate motion patterns with
audio patterns, usually without an explicit knowledge of the human body. This
approach relies on a few assumptions: strong music-dance correlation,
controlled motion data and relatively simple poses and movements. These
characteristics are found in all existing datasets for dance motion synthesis,
and indeed recent methods can achieve good results. We introduce a new dataset
aiming to challenge these common assumptions, compiling a set of dynamic dance
sequences displaying complex human poses. We focus on breakdancing which
features acrobatic moves and tangled postures. We source our data from the Red
Bull BC One competition videos. Estimating human keypoints from these videos is
difficult due to the complexity of the dance, as well as the recording setup
with multiple moving cameras. We adopt a hybrid labelling pipeline leveraging deep
estimation models as well as manual annotations to obtain good quality keypoint
sequences at a reduced cost. Our efforts produced the BRACE dataset, which
contains over 3 hours and 30 minutes of densely annotated poses. We test
state-of-the-art methods on BRACE, showing their limitations when evaluated on
complex sequences. Our dataset can readily foster advances in dance motion
synthesis. With intricate poses and swift movements, models are forced to go
beyond learning a mapping between modalities and reason more effectively about
body structure and movements.
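As a concrete picture of a hybrid labelling pipeline like the one described above, here is a minimal Python sketch: an automatic estimator labels every frame, and frames whose mean keypoint confidence falls below a threshold are routed to manual annotation. The `estimate_pose` stub, the 17-joint layout, and the 0.6 threshold are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

CONF_THRESHOLD = 0.6  # hypothetical cut-off for trusting automatic estimates

def estimate_pose(frame):
    """Stand-in for a deep pose estimator: returns (keypoints, confidences).
    A real pipeline would run a trained 2D keypoint detector here."""
    keypoints = np.random.rand(17, 2)   # 17 joints, (x, y) in [0, 1]
    confidences = np.random.rand(17)    # per-joint confidence scores
    return keypoints, confidences

def hybrid_label(frames):
    """Label every frame automatically; flag uncertain frames for a human."""
    poses, needs_manual = [], []
    for i, frame in enumerate(frames):
        keypoints, confidences = estimate_pose(frame)
        if confidences.mean() < CONF_THRESHOLD:
            needs_manual.append(i)      # route to manual annotation
        poses.append(keypoints)
    return np.stack(poses), needs_manual

frames = [None] * 100                   # placeholder video frames
poses, to_review = hybrid_label(frames)
print(f"{len(to_review)} of {len(frames)} frames flagged for manual annotation")
```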
Related papers
- Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment [87.20240797625648]
We introduce a novel task within the field of 3D dance generation, termed dance accompaniment.
It requires the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm.
We propose a GPT-based model, Duolando, which autoregressively predicts the subsequent tokenized motion conditioned on the coordinated information of the music, the leader's and the follower's movements.
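The autoregressive scheme described above can be sketched as next-token prediction with a causal transformer whose inputs carry per-step conditioning. In the toy sketch below, the vocabulary size, feature dimensions, and the choice to add fused music-plus-leader features to the token embeddings are all assumptions for illustration; Duolando's actual tokenization and conditioning differ in detail.

```python
import torch
import torch.nn as nn

class TinyMotionGPT(nn.Module):
    """Toy causal transformer over discrete motion tokens (illustrative only)."""
    def __init__(self, vocab=512, dim=128, ctx_dim=128, heads=4, layers=2):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.cond = nn.Linear(ctx_dim, dim)  # per-step music + leader features
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, context):
        x = self.tok(tokens) + self.cond(context)   # add conditioning per step
        T = x.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.head(self.blocks(x, mask=causal))

model = TinyMotionGPT()
tokens = torch.zeros(1, 1, dtype=torch.long)  # start token
context = torch.randn(1, 32, 128)             # fused music + leader features
for _ in range(16):                           # sample 16 follower tokens
    logits = model(tokens, context[:, : tokens.size(1)])
    nxt = torch.multinomial(logits[:, -1].softmax(dim=-1), 1)
    tokens = torch.cat([tokens, nxt], dim=1)
print(tokens.shape)                           # torch.Size([1, 17])
```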
arXiv Detail & Related papers (2024-03-27T17:57:02Z)
- DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance [50.01162760878841]
We present DCM, a new multi-modal 3D dataset that combines camera movement with dance motion and music audio.
This dataset encompasses 108 dance sequences (3.2 hours) of paired dance-camera-music data from the anime community.
We propose DanceCamera3D, a transformer-based diffusion model that incorporates a novel body attention loss and a condition separation strategy.
arXiv Detail & Related papers (2024-03-20T15:24:57Z)
- QEAN: Quaternion-Enhanced Attention Network for Visual Dance Generation [6.060426136203966]
We propose a Quaternion-Enhanced Attention Network (QEAN) for visual dance synthesis from a quaternion perspective.
First, the Spin Position Embedding (SPE) module embeds position information into self-attention in a rotational manner, leading to better learning of features of movement and audio sequences.
Second, the Quaternion Rotary Attention (QRA) module represents and fuses 3D motion features and audio features as series of quaternions, enabling the model to better learn the temporal coordination of music and dance.
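To make the quaternion formulation concrete, the sketch below packs paired motion and audio features into quaternion channels and fuses them with the Hamilton product. This is a generic illustration of quaternion-valued feature fusion; the shapes and the unit-normalization step are assumptions, not the exact QRA computation.

```python
import numpy as np

def hamilton(q1, q2):
    """Hamilton product of quaternion arrays shaped (..., 4)."""
    w1, x1, y1, z1 = np.moveaxis(q1, -1, 0)
    w2, x2, y2, z2 = np.moveaxis(q2, -1, 0)
    return np.stack([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ], axis=-1)

T, D = 60, 16                            # 60 timesteps, 16 quaternion channels
motion = np.random.randn(T, D, 4)        # motion features as quaternions
audio = np.random.randn(T, D, 4)         # audio features as quaternions
audio /= np.linalg.norm(audio, axis=-1, keepdims=True)  # unit quaternions rotate
fused = hamilton(motion, audio)          # rotation-style fusion of both streams
print(fused.shape)                       # (60, 16, 4)
```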
arXiv Detail & Related papers (2024-03-18T09:58:43Z)
- DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation [89.50310360658791]
We present a novel cascaded motion diffusion model, DiffDance, designed for high-resolution, long-form dance generation.
This model comprises a music-to-dance diffusion model and a sequence super-resolution diffusion model.
We demonstrate that DiffDance is capable of generating realistic dance sequences that align effectively with the input music.
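The cascade reads as two reverse-diffusion stages: sample a coarse, low-frame-rate dance from music features, then upsample it with a second diffusion model conditioned on the coarse result. The sketch below shows only this control flow; the stub denoisers, toy noise schedule, and shapes are placeholders, not DiffDance's trained networks.

```python
import numpy as np

def ddpm_sample(denoise, shape, cond, steps=50):
    """Generic reverse-diffusion loop with a toy linear noise schedule."""
    x = np.random.randn(*shape)
    for t in reversed(range(steps)):
        eps_hat = denoise(x, t, cond)            # predicted noise
        alpha = 1.0 - 0.02 * (t + 1) / steps     # toy schedule
        x = (x - (1 - alpha) * eps_hat) / np.sqrt(alpha)
        if t > 0:
            x += 0.01 * np.random.randn(*shape)  # small stochastic term
    return x

# Stub denoisers standing in for trained networks.
stage1 = lambda x, t, music: 0.1 * x             # music -> coarse dance
stage2 = lambda x, t, coarse: 0.1 * x            # coarse dance -> refined dance

music = np.random.randn(128)                      # placeholder music features
coarse = ddpm_sample(stage1, (30, 72), music)     # 30 frames, 72-D pose
refined = ddpm_sample(stage2, (120, 72), coarse)  # 4x temporal super-resolution
print(coarse.shape, refined.shape)
```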
arXiv Detail & Related papers (2023-08-05T16:18:57Z)
- Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure [47.09425316677689]
We present a music-driven motion synthesis framework that generates long-term sequences of human motions synchronized with the input beats.
Our framework enables the generation of diverse motions that are controlled by the content of the music, not only by its beat.
arXiv Detail & Related papers (2021-11-23T21:26:31Z)
- Transflower: probabilistic autoregressive dance generation with multimodal attention [31.308435764603658]
We present a novel probabilistic autoregressive architecture that models the distribution over future poses with a normalizing flow conditioned on previous poses as well as music context.
We also introduce the currently largest 3D dance-motion dataset, obtained with a variety of motion-capture technologies, and including both professional and casual dancers.
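A normalizing flow conditioned on previous poses and music, as described above, can be illustrated with a single conditioned affine coupling layer: an invertible map between Gaussian noise and the next pose whose scale and shift depend on the context. The weights, dimensions, and single-layer design below are toy assumptions; Transflower stacks many such invertible transforms.

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_D, CTX_D, HID = 8, 6, 16
W1 = rng.normal(size=(POSE_D // 2 + CTX_D, HID)) * 0.1
W2 = rng.normal(size=(HID, POSE_D))              # outputs scale and shift

def coupling(x, context, inverse=False):
    """One affine coupling layer conditioned on (previous poses + music)."""
    d = POSE_D // 2
    x1, x2 = x[:d], x[d:]
    h = np.tanh(np.concatenate([x1, context]) @ W1) @ W2
    log_s, t = h[:d], h[d:]
    if not inverse:
        y2 = x2 * np.exp(log_s) + t              # transform second half
    else:
        y2 = (x2 - t) * np.exp(-log_s)
    return np.concatenate([x1, y2])

context = rng.normal(size=CTX_D)                 # previous poses + music features
z = rng.normal(size=POSE_D)                      # sample from base Gaussian
pose = coupling(z, context, inverse=True)        # flow maps noise -> next pose
z_back = coupling(pose, context)                 # invertibility check
print(np.allclose(z, z_back))                    # True
```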
arXiv Detail & Related papers (2021-06-25T20:14:28Z)
- DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer [23.51701359698245]
In this paper, we reformulate music-conditioned dance generation as a two-stage process, i.e., key pose generation followed by in-between parametric motion curve prediction.
We propose a large-scale music-conditioned 3D dance dataset, called PhantomDance, that is accurately labeled by experienced animators.
Experiments demonstrate that the proposed method, even trained by existing datasets, can generate fluent, performative, and music-matched 3D dances.
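The two-stage formulation reads naturally as "place key poses, then fill the in-betweens with parametric curves". Below is a minimal sketch using cubic Hermite interpolation between hypothetical beat-aligned key poses; the frame indices, pose dimensionality, and finite-difference tangents are illustrative choices, not the paper's learned motion curves.

```python
import numpy as np

def hermite(p0, p1, m0, m1, u):
    """Cubic Hermite curve between key poses p0, p1 with tangents m0, m1."""
    u2, u3 = u * u, u ** 3
    return ((2*u3 - 3*u2 + 1) * p0 + (u3 - 2*u2 + u) * m0
            + (-2*u3 + 3*u2) * p1 + (u3 - u2) * m1)

# Stage 1 (assumed): a network places one key pose per beat; here random.
key_times = np.array([0, 30, 60, 90])            # beat-aligned frame indices
key_poses = np.random.randn(4, 72)               # 72-D pose per key frame

# Stage 2: fill in-between frames with a parametric motion curve.
tangents = np.gradient(key_poses, axis=0)        # finite-difference tangents
frames = []
for i in range(len(key_times) - 1):
    for f in range(key_times[i], key_times[i + 1]):
        u = (f - key_times[i]) / (key_times[i + 1] - key_times[i])
        frames.append(hermite(key_poses[i], key_poses[i + 1],
                              tangents[i], tangents[i + 1], u))
motion = np.stack(frames)
print(motion.shape)                              # (90, 72)
```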
arXiv Detail & Related papers (2021-03-18T12:17:38Z)
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion dataset is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms existing state-of-the-art methods on automatic metrics and in human evaluation.
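The curriculum strategy described above is akin to scheduled sampling: training starts with ground-truth poses as decoder inputs and gradually feeds back the model's own predictions, so training conditions approach the autoregressive test-time regime. A minimal sketch of such a schedule follows; the linear decay is an assumption, and the paper's exact curriculum may differ.

```python
import random

def teacher_forcing_ratio(epoch, total_epochs):
    """Linearly decay from full teacher forcing to fully autoregressive."""
    return max(0.0, 1.0 - epoch / total_epochs)

def next_input(gt_pose, predicted_pose, epoch, total_epochs):
    """Pick the next decoder input according to the curriculum schedule."""
    if random.random() < teacher_forcing_ratio(epoch, total_epochs):
        return gt_pose          # early training: condition on ground truth
    return predicted_pose       # late training: condition on own predictions

for epoch in (0, 25, 50, 75, 100):
    print(epoch, round(teacher_forcing_ratio(epoch, 100), 2))
```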
arXiv Detail & Related papers (2020-06-11T00:08:25Z)
- Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, that takes the style, rhythm, and melody of music as control signals.
We capture several synchronized music-dance pairs performed by professional dancers, and build a high-quality music-dance pair dataset.
arXiv Detail & Related papers (2020-02-02T17:18:31Z)