TaikoNation: Patterning-focused Chart Generation for Rhythm Action Games
- URL: http://arxiv.org/abs/2107.12506v1
- Date: Mon, 26 Jul 2021 22:55:57 GMT
- Title: TaikoNation: Patterning-focused Chart Generation for Rhythm Action Games
- Authors: Emily Halina and Matthew Guzdial
- Abstract summary: Patterning is a key identifier of high quality rhythm game content, seen as a necessary component in human rankings.
We establish a new approach for chart generation that produces charts with more congruent, human-like patterning than seen in prior work.
- Score: 1.590611306750623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating rhythm game charts from songs via machine learning has been a
problem of increasing interest in recent years. However, all existing systems
struggle to replicate human-like patterning: the placement of game objects in
relation to each other to form congruent patterns based on events in the song.
Patterning is a key identifier of high quality rhythm game content, seen as a
necessary component in human rankings. We establish a new approach for chart
generation that produces charts with more congruent, human-like patterning than
seen in prior work.
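The abstract defines patterning as the placement of game objects relative to each other to form congruent patterns. As an illustration only (this is not the paper's evaluation method), one rough proxy for human-like patterning is the fraction of a generated chart's short object subsequences that also occur in a human-made chart; the function names and toy charts below are hypothetical:

```python
from collections import Counter

def ngram_patterns(chart, n=4):
    """Count length-n object subsequences ("patterns") in a chart.

    `chart` is a list of object symbols (e.g. "don"/"ka" notes in Taiko).
    """
    return Counter(tuple(chart[i:i + n]) for i in range(len(chart) - n + 1))

def pattern_overlap(generated, reference, n=4):
    """Fraction of the generated chart's n-grams that also occur in a
    human-made reference chart -- a rough proxy for congruent patterning."""
    gen = ngram_patterns(generated, n)
    ref = ngram_patterns(reference, n)
    total = sum(gen.values())
    if total == 0:
        return 0.0
    shared = sum(count for gram, count in gen.items() if gram in ref)
    return shared / total

human = ["don", "don", "ka", "don", "don", "don", "ka", "don"]
robot = ["don", "don", "ka", "don", "ka", "ka", "ka", "ka"]
print(pattern_overlap(robot, human))
```

A chart that reuses the reference's patterns scores near 1.0; one that places objects without regard to established patterns scores near 0.0.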
Related papers
- Beat-Aligned Spectrogram-to-Sequence Generation of Rhythm-Game Charts [18.938897917126408]
We formulate chart generation as a sequence generation task and train a Transformer using a large dataset.
We also introduce tempo-informed preprocessing and training procedures, some of which appear integral to successful training.
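A minimal sketch of what tempo-informed preprocessing can mean in practice (the function and parameters here are assumptions, not the paper's actual pipeline): snapping detected onset times onto a beat-subdivision grid derived from the song's BPM, so the sequence model predicts grid positions rather than raw timestamps:

```python
def quantize_to_grid(onsets_sec, bpm, subdivisions=4):
    """Snap onset times (in seconds) to the nearest beat subdivision.

    With bpm=120 one beat lasts 0.5 s, so subdivisions=4 gives a
    16th-note grid with a step of 0.125 s.
    """
    step = 60.0 / bpm / subdivisions  # duration of one grid step in seconds
    return [round(t / step) for t in onsets_sec]

# Slightly noisy onsets land cleanly on 16th-note grid steps 0, 4, 8.
print(quantize_to_grid([0.0, 0.49, 1.02], bpm=120))
```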
arXiv Detail & Related papers (2023-11-22T20:47:52Z)
- Graph-based Polyphonic Multitrack Music Generation [9.701208207491879]
This paper introduces a novel graph representation for music and a deep Variational Autoencoder that generates the structure and the content of musical graphs separately.
By separating the structure and content of musical graphs, it is possible to condition generation by specifying which instruments are played at certain times.
arXiv Detail & Related papers (2023-07-27T15:18:50Z)
- Visualizing Ensemble Predictions of Music Mood [4.5383186433033735]
We show that visualization techniques can effectively convey the popular prediction as well as uncertainty at different music sections along the temporal axis.
We introduce a new variant of ThemeRiver, called "dual-flux ThemeRiver", which allows viewers to observe and measure the most popular prediction more easily.
arXiv Detail & Related papers (2021-12-14T18:13:21Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Graph Jigsaw Learning for Cartoon Face Recognition [79.29656077338828]
It is difficult to learn a shape-oriented representation for cartoon face recognition with convolutional neural networks (CNNs).
We propose the GraphJigsaw that constructs jigsaw puzzles at various stages in the classification network and solves the puzzles with the graph convolutional network (GCN) in a progressive manner.
Our proposed GraphJigsaw consistently outperforms other face recognition or jigsaw-based methods on two popular cartoon face datasets.
arXiv Detail & Related papers (2021-07-14T08:01:06Z)
- A framework to compare music generative models using automatic evaluation metrics extended to rhythm [69.2737664640826]
This paper builds on a framework proposed in previous research that did not consider rhythm, makes a series of design decisions, and then adds rhythm support to evaluate the performance of two RNN memory cells in the creation of monophonic music.
The model considers the handling of music transposition and the framework evaluates the quality of the generated pieces using automatic quantitative metrics based on geometry which have rhythm support added as well.
arXiv Detail & Related papers (2021-01-19T15:04:46Z)
- Can GAN originate new electronic dance music genres? -- Generating novel rhythm patterns using GAN with Genre Ambiguity Loss [0.0]
This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses if we can use deep learning to generate novel rhythms.
We extend the framework of Generative Adversarial Networks (GAN) and encourage it to diverge from the dataset's inherent distributions.
The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genres in the training dataset.
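One plausible realization of a genre ambiguity loss (a sketch under assumptions, not the paper's exact formulation) is the cross-entropy between a genre classifier's output distribution and the uniform distribution: the term is minimized when no single genre dominates, pushing the generator away from existing genres:

```python
import math

def genre_ambiguity_loss(genre_probs):
    """Cross-entropy between the classifier's genre distribution and the
    uniform distribution over k genres: H(u, p) = -(1/k) * sum(log p_i).
    Lower values mean the generated rhythm is harder to assign to any genre.
    """
    k = len(genre_probs)
    eps = 1e-12  # numerical guard against log(0)
    return -sum(math.log(p + eps) for p in genre_probs) / k

confident = [0.97, 0.01, 0.01, 0.01]  # clearly one genre: high loss
ambiguous = [0.25, 0.25, 0.25, 0.25]  # no dominant genre: minimal loss
print(genre_ambiguity_loss(confident), genre_ambiguity_loss(ambiguous))
```

In a full GAN this term would be added to the generator's adversarial loss, trading realism against distance from the training genres.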
arXiv Detail & Related papers (2020-11-25T23:22:12Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Artificial Neural Networks Jamming on the Beat [20.737171876839238]
The paper presents a large dataset of drum patterns alongside corresponding melodies.
By exploring a latent space of drum patterns, one can generate new drum patterns in a given music style.
A simple artificial neural network can be trained to generate melodies corresponding to these drum patterns used as inputs.
arXiv Detail & Related papers (2020-07-13T10:09:20Z)
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms the existing state-of-the-arts on automatic metrics and human evaluation.
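A common way to realize such a curriculum (shown here as an illustrative sketch, not necessarily the paper's exact schedule) is to decay the probability of feeding ground-truth frames back into the autoregressive decoder, so the model gradually learns to continue from its own predictions and accumulates less error over long sequences:

```python
def teacher_forcing_ratio(epoch, total_epochs, floor=0.0):
    """Linearly decay the probability of conditioning the decoder on
    ground-truth motion frames instead of its own predictions.
    Early epochs: mostly teacher forcing; late epochs: mostly free-running.
    """
    ratio = 1.0 - epoch / total_epochs
    return max(floor, ratio)

for epoch in (0, 50, 100):
    print(round(teacher_forcing_ratio(epoch, 100), 2))
```

At each decoding step during training, the decoder input would be sampled from the ground truth with this probability and from the model's previous output otherwise.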
arXiv Detail & Related papers (2020-06-11T00:08:25Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.