GenéLive! Generating Rhythm Actions in Love Live!
- URL: http://arxiv.org/abs/2202.12823v1
- Date: Fri, 25 Feb 2022 17:03:36 GMT
- Title: GenéLive! Generating Rhythm Actions in Love Live!
- Authors: Atsushi Takada, Daichi Yamazaki, Likun Liu, Yudai Yoshida, Nyamkhuu
Ganbat, Takayuki Shimotomai, Taiga Yamamoto, Daisuke Sakurai, Naoki Hamada
- Abstract summary: A rhythm action game is a music-based video game in which the player is challenged to issue commands at the right timings during a music session.
Before this work, the company generated the charts manually, which resulted in a costly business operation.
This paper presents how KLab applied a deep generative model for synthesizing charts, and shows how it has improved the chart production process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A rhythm action game is a music-based video game in which the player is
challenged to issue commands at the right timings during a music session. The
timings are rendered in the chart, which consists of visual symbols, called
notes, flying through the screen. KLab Inc., a Japan-based video game
developer, has operated rhythm action games including a title for the "Love
Live!" franchise, which became a hit across Asia and beyond. Before this work,
the company generated the charts manually, which resulted in a costly business
operation. This paper presents how KLab applied a deep generative model for
synthesizing charts, and shows how it has improved the chart production
process, reducing the business cost by half. Existing generative models
generated poor-quality charts for easier difficulty modes. We report how we
overcame this challenge through a multi-scaling model dedicated to rhythm
actions, by considering beats among other things. Our model, named GenéLive!,
is evaluated using production datasets at KLab as well as open datasets.
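The abstract describes a "multi-scaling" model that considers beats when generating charts for different difficulty modes. As a minimal illustrative sketch (not KLab's actual implementation), one beat-aware way to derive easier modes is to snap candidate note timings onto progressively coarser beat grids; all names, values, and the quantization scheme below are illustrative assumptions.

```python
def beat_grid(bpm: float, duration_s: float, subdivisions: int) -> list[float]:
    """Timestamps (seconds) of a beat grid with `subdivisions` slots per beat."""
    step = (60.0 / bpm) / subdivisions
    n = int(duration_s / step)
    return [i * step for i in range(n + 1)]

def snap_notes(onsets: list[float], grid: list[float]) -> list[float]:
    """Snap each candidate onset to the nearest grid slot, merging duplicates."""
    return sorted({min(grid, key=lambda t: abs(t - o)) for o in onsets})

# Hypothetical candidate onsets from a generative model (seconds).
onsets = [0.07, 0.52, 0.98, 1.27, 1.49, 2.04]

# Hard mode: 16th-note grid (4 slots per beat at 120 BPM);
# easy mode: quarter-note grid (1 slot per beat).
hard = snap_notes(onsets, beat_grid(120, 3.0, 4))
easy = snap_notes(onsets, beat_grid(120, 3.0, 1))
assert len(easy) <= len(hard)  # the coarser grid merges nearby notes
```

Snapping to the beat grid keeps notes musically aligned, and the coarser grid naturally thins out note density for easier modes.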
Related papers
- MusicFlow: Cascaded Flow Matching for Text Guided Music Generation [53.63948108922333]
MusicFlow is a cascaded text-to-music generation model based on flow matching.
We leverage masked prediction as the training objective, enabling the model to generalize to other tasks such as music infilling and continuation.
arXiv Detail & Related papers (2024-10-27T15:35:41Z)
- MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation)
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z)
- Beat-Aligned Spectrogram-to-Sequence Generation of Rhythm-Game Charts [18.938897917126408]
We formulate chart generation as a sequence generation task and train a Transformer using a large dataset.
We also introduce tempo-informed preprocessing and training procedures, some of which are suggested to be integral to successful training.
arXiv Detail & Related papers (2023-11-22T20:47:52Z)
- Level generation for rhythm VR games [0.0]
Ragnarock is a virtual reality rhythm game in which you play a Viking captain competing in a longship race.
With two hammers, the task is to crush the incoming runes in sync with epic Viking music.
The creation of beat maps takes hours.
This work aims to automate the process of beat map creation, also known as the task of learning to choreograph.
arXiv Detail & Related papers (2023-04-13T20:24:51Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
- A Graph-Based Method for Soccer Action Spotting Using Unsupervised Player Classification [75.93186954061943]
Action spotting involves understanding the dynamics of the game, the complexity of events, and the variation of video sequences.
In this work, we focus on the former by (a) identifying and representing the players, referees, and goalkeepers as nodes in a graph, and by (b) modeling their temporal interactions as sequences of graphs.
For the player identification task, our method obtains an overall performance of 57.83% average-mAP by combining it with other modalities.
arXiv Detail & Related papers (2022-11-22T15:23:53Z)
- TaikoNation: Patterning-focused Chart Generation for Rhythm Action Games [1.590611306750623]
Patterning is a key identifier of high quality rhythm game content, seen as a necessary component in human rankings.
We establish a new approach for chart generation that produces charts with more congruent, human-like patterning than seen in prior work.
arXiv Detail & Related papers (2021-07-26T22:55:57Z)
- PopMAG: Pop Music Accompaniment Generation [190.09996798215738]
We propose a novel MUlti-track MIDI representation (MuMIDI) which enables simultaneous multi-track generation in a single sequence.
MuMIDI enlarges the sequence length and brings the new challenge of long-term music modeling.
We call our system for pop music accompaniment generation PopMAG.
arXiv Detail & Related papers (2020-08-18T02:28:36Z)
- Compositional Video Synthesis with Action Graphs [112.94651460161992]
Videos of actions are complex signals containing rich compositional structure in space and time.
We propose to represent the actions in a graph structure called Action Graph and present the new "Action Graph To Video" synthesis task.
Our generative model for this task (AG2Vid) disentangles motion and appearance features, and by incorporating a scheduling mechanism for actions facilitates a timely and coordinated video generation.
arXiv Detail & Related papers (2020-06-27T09:39:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.