Feel The Music: Automatically Generating A Dance For An Input Song
- URL: http://arxiv.org/abs/2006.11905v2
- Date: Tue, 23 Jun 2020 20:11:23 GMT
- Title: Feel The Music: Automatically Generating A Dance For An Input Song
- Authors: Purva Tendulkar, Abhishek Das, Aniruddha Kembhavi, Devi Parikh
- Abstract summary: We present a general computational approach that enables a machine to generate a dance for any input music.
We encode intuitive, flexible heuristics for what a 'good' dance is: the structure of the dance should align with the structure of the music.
- Score: 58.653867648572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a general computational approach that enables a machine to
generate a dance for any input music. We encode intuitive, flexible heuristics
for what a 'good' dance is: the structure of the dance should align with the
structure of the music. This flexibility allows the agent to discover creative
dances. Human studies show that participants find our dances to be more
creative and inspiring compared to meaningful baselines. We also evaluate how
perception of creativity changes based on different presentations of the dance.
Our code is available at https://github.com/purvaten/feel-the-music.
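One way to make the structure-alignment heuristic concrete is to compare self-similarity matrices of the music and of the dance over time. The sketch below is a minimal illustration of that idea, not the paper's exact objective; the feature representations and the correlation-based score are assumptions (the linked repository contains the authors' actual alignment measures).

```python
import numpy as np

def self_similarity(features: np.ndarray) -> np.ndarray:
    """Cosine self-similarity over time for a (T, D) feature matrix."""
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    return normed @ normed.T  # (T, T) structure matrix

def structure_alignment_score(music_feats: np.ndarray, dance_feats: np.ndarray) -> float:
    """Correlate the music and dance self-similarity matrices.

    Both inputs are assumed to be sampled at the same T time steps; a higher
    score means the dance repeats/changes where the music repeats/changes.
    """
    m = self_similarity(music_feats)
    d = self_similarity(dance_feats)
    return float(np.corrcoef(m.ravel(), d.ravel())[0, 1])
```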
Related papers
- Lodge++: High-quality and Long Dance Generation with Vivid Choreography Patterns [48.54956784928394]
Lodge++ is a choreography framework to generate high-quality, ultra-long, and vivid dances given the music and desired genre.
To handle the challenges in computational efficiency, Lodge++ adopts a two-stage strategy to produce dances from coarse to fine.
Lodge++ is validated by extensive experiments, which show that our method can rapidly generate ultra-long dances suitable for various dance genres.
arXiv Detail & Related papers (2024-10-27T09:32:35Z)
- Flexible Music-Conditioned Dance Generation with Style Description Prompts [41.04549275897979]
We introduce Flexible Dance Generation with Style Description Prompts (DGSDP), a diffusion-based framework suitable for diversified tasks of dance generation.
The core component of this framework is Music-Conditioned Style-Aware Diffusion (MCSAD), which comprises a Transformer-based network and a music Style Modulation module.
The proposed framework successfully generates realistic dance sequences that are accurately aligned with music for a variety of tasks such as long-term generation, dance in-betweening, and dance inpainting.
arXiv Detail & Related papers (2024-06-12T04:55:14Z)
- May the Dance be with You: Dance Generation Framework for Non-Humanoids [6.029154142020362]
We propose a framework for non-humanoid agents to learn how to dance from human videos.
Our framework works in two processes, the first of which trains a reward model that perceives the relationship between optical flow and music.
Experiment results show that generated dance motion can align with the music beat properly.
arXiv Detail & Related papers (2024-05-30T06:43:55Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory [92.81383016482813]
We propose a novel music-to-dance framework, Bailando, for driving 3D characters to dance following a piece of music.
We introduce an actor-critic Generative Pre-trained Transformer (GPT) that composes units to a fluent dance coherent to the music.
Our proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-24T13:06:43Z)
- Music-to-Dance Generation with Optimal Transport [48.92483627635586]
We propose a Music-to-Dance with Optimal Transport Network (MDOT-Net) for learning to generate 3D dance choreographs from music.
We introduce an optimal transport distance for evaluating the authenticity of the generated dance distribution and a Gromov-Wasserstein distance to measure the correspondence between the dance distribution and the input music (a minimal sketch of the Gromov-Wasserstein idea appears after this list).
arXiv Detail & Related papers (2021-12-03T09:37:26Z)
- DanceIt: Music-inspired Dancing Video Synthesis [38.87762996956861]
We propose to reproduce this inherent human capability within a computer vision system.
The proposed system consists of three modules.
The generated dancing videos match the content and rhythm of the music.
arXiv Detail & Related papers (2020-09-17T02:29:13Z)
- Learning to Generate Diverse Dance Motions with Transformer [67.43270523386185]
We introduce a complete system for dance motion synthesis.
A massive dance motion data set is created from YouTube videos.
A novel two-stream motion transformer generative model can generate motion sequences with high flexibility.
arXiv Detail & Related papers (2020-08-18T22:29:40Z)
- Music2Dance: DanceNet for Music-driven Dance Generation [11.73506542921528]
We propose a novel autoregressive generative model, DanceNet, to take the style, rhythm and melody of music as the control signals.
We capture several synchronized music-dance pairs performed by professional dancers, and build a high-quality music-dance pair dataset.
arXiv Detail & Related papers (2020-02-02T17:18:31Z)
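The MDOT-Net entry above mentions a Gromov-Wasserstein distance for measuring how well the dance distribution corresponds to the input music. The sketch below is a rough illustration of that idea, not the MDOT-Net implementation; it uses the POT library as an assumed stand-in, and the per-frame feature inputs and uniform frame weights are assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed stand-in, not MDOT-Net's code)

def gw_correspondence(dance_feats: np.ndarray, music_feats: np.ndarray) -> float:
    """Gromov-Wasserstein loss between the internal structure of dance and music.

    dance_feats: (T_d, D_d) per-frame dance features;
    music_feats: (T_m, D_m) per-frame music features.
    """
    C_dance = ot.dist(dance_feats, dance_feats)  # pairwise distances within the dance
    C_music = ot.dist(music_feats, music_feats)  # pairwise distances within the music
    p = ot.unif(C_dance.shape[0])                # uniform mass over dance frames
    q = ot.unif(C_music.shape[0])                # uniform mass over music frames
    # Lower values indicate better structural correspondence between dance and music.
    return float(ot.gromov.gromov_wasserstein2(C_dance, C_music, p, q,
                                               loss_fun='square_loss'))
```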