Level generation for rhythm VR games
- URL: http://arxiv.org/abs/2304.06809v1
- Date: Thu, 13 Apr 2023 20:24:51 GMT
- Title: Level generation for rhythm VR games
- Authors: Mariia Rizhko
- Abstract summary: Ragnarock is a virtual reality rhythm game in which you play a Viking captain competing in a longship race.
With two hammers, the task is to crush the incoming runes in sync with epic Viking music.
The creation of beat maps takes hours.
This work aims to automate the process of beat map creation, also known as the task of learning to choreograph.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ragnarock is a virtual reality (VR) rhythm game in which you play a Viking
captain competing in a longship race. With two hammers, the task is to crush
the incoming runes in sync with epic Viking music. The runes are defined by a
beat map which the player can manually create. The creation of beat maps takes
hours. This work aims to automate the process of beat map creation, also known
as the task of learning to choreograph. The assignment is broken down into
three parts: determining the timing of the beats (action placement),
determining where in space the runes connected with the chosen beats should be
placed (action selection) and web-application creation. For the first task of
action placement, extraction of predominant local pulse (PLP) information from
music recordings is used. This approach makes it possible to learn where, and
how many, beats should be placed. For the second task of action selection, a
recurrent neural network (RNN), specifically a gated recurrent unit (GRU), is
used to learn sequences of beats and their patterns, so that those patterns can
be reproduced to generate completely new levels. The last task was to build a
solution for non-technical players by combining the results of the first two
parts into an easy-to-use web application. For this task, the frontend was
built with JavaScript and React and the backend with Python and FastAPI.
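The action-placement step above can be illustrated with a simplified, single-window variant of the predominant local pulse (PLP) idea, written in plain NumPy. The function name `plp_beats`, the tempo search range, and the frame rate are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def plp_beats(onset_env, frame_rate, tempo_min=60, tempo_max=180):
    """Estimate beat times from an onset envelope via a predominant-pulse fit."""
    n = len(onset_env)
    t = np.arange(n) / frame_rate
    freqs = np.arange(tempo_min, tempo_max + 1) / 60.0  # candidate tempi in Hz
    # Fourier coefficient of the envelope at each candidate tempo
    coeffs = np.array([np.sum(onset_env * np.exp(-2j * np.pi * f * t))
                       for f in freqs])
    best = np.argmax(np.abs(coeffs))
    f, phase = freqs[best], np.angle(coeffs[best])
    # predominant local pulse: half-wave-rectified best-fitting sinusoid
    pulse = np.maximum(0.0, np.cos(2 * np.pi * f * t + phase))
    # beat candidates = local maxima of the pulse curve
    peaks = np.flatnonzero((pulse[1:-1] > pulse[:-2]) &
                           (pulse[1:-1] >= pulse[2:])) + 1
    return peaks / frame_rate
```

Feeding in an onset envelope with impulses every 0.5 s recovers beat times spaced 0.5 s apart (120 BPM). A full PLP implementation would use overlapping local windows rather than one global fit.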
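The action-selection step can be sketched with a minimal GRU cell written from scratch in NumPy. The gate equations are the standard GRU formulation (in one common gating convention); the four-lane rune vocabulary, hidden size, and all parameter names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_COLUMNS, HIDDEN = 4, 8   # assumed: 4 rune lanes, small hidden state

def init_params(n_in, n_h):
    g = lambda *s: rng.normal(0, 0.1, s)
    p = {k: g(n_h, n_in) for k in ("Wz", "Wr", "Wh")}
    p.update({k: g(n_h, n_h) for k in ("Uz", "Ur", "Uh")})
    p.update({k: np.zeros(n_h) for k in ("bz", "br", "bh")})
    return p

def gru_step(x, h, p):
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(p["Wz"] @ x + p["Uz"] @ h + p["bz"])                 # update gate
    r = sig(p["Wr"] @ x + p["Ur"] @ h + p["br"])                 # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])  # candidate
    return (1 - z) * h + z * h_cand

def predict_next_column(columns, p, w_out):
    """Run a rune-column sequence through the GRU, return P(next column)."""
    h = np.zeros(HIDDEN)
    for c in columns:
        x = np.eye(N_COLUMNS)[c]          # one-hot encode the rune position
        h = gru_step(x, h, p)
    logits = w_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over the columns
```

With trained weights, sampling repeatedly from this distribution would emit a new level; here the parameters are random, so the output is only a well-formed probability vector.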
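The FastAPI backend can be sketched as follows. This is a minimal illustration only: the `/generate` route, the request schema, and the four-column cycling rule are hypothetical stand-ins for the paper's actual service, which would call the trained model:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class BeatRequest(BaseModel):
    beat_times: list[float]   # output of the action-placement (PLP) step

@app.post("/generate")
def generate(req: BeatRequest):
    # hypothetical action selection: cycle runes across four columns;
    # the real backend would query the trained GRU instead
    notes = [{"time": t, "column": i % 4}
             for i, t in enumerate(req.beat_times)]
    return {"notes": notes}
```

A React frontend would POST the detected beat times as JSON and render the returned note list as a downloadable beat map.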
Related papers
- Beat-Aligned Spectrogram-to-Sequence Generation of Rhythm-Game Charts [18.938897917126408]
We formulate chart generation as a sequence generation task and train a Transformer using a large dataset.
We also introduce tempo-informed preprocessing and training procedures, some of which are suggested to be integral to successful training.
arXiv Detail & Related papers (2023-11-22T20:47:52Z)
- Video-Mined Task Graphs for Keystep Recognition in Instructional Videos [71.16703750980143]
Procedural activity understanding requires perceiving human actions in terms of a broader task.
We propose discovering a task graph automatically from how-to videos to represent probabilistically how people tend to execute keysteps.
We show the impact: more reliable zero-shot keystep localization and improved video representation learning.
arXiv Detail & Related papers (2023-07-17T18:19:36Z)
- GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework [58.64512825534638]
Symbolic music generation aims to create musical notes, which can help users compose music.
We introduce a framework known as GETMusic, with "GET" standing for "GEnerate music Tracks".
GETScore represents musical notes as tokens and organizes tokens in a 2D structure, with tracks stacked vertically and progressing horizontally over time.
Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with any arbitrary source-target track combinations.
arXiv Detail & Related papers (2023-05-18T09:53:23Z)
- TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration [75.37311932218773]
We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities.
Our approach can generate realistic and coherent dance movements conditioned on both text and music while maintaining comparable performance with the two single modalities.
arXiv Detail & Related papers (2023-04-05T12:58:33Z)
- Setting the rhythm scene: deep learning-based drum loop generation from arbitrary language cues [0.0]
We present a novel method that generates two bars of a four-piece drum pattern that embodies the "mood" of a language cue.
We envision this tool as a composition aid for electronic music and audiovisual soundtrack production, or as an improvisation tool for live performance.
In order to produce the training samples for this model, besides manual annotation of the "scene" or "mood" terms, we have designed a novel method to extract the consensus drum track of any song.
arXiv Detail & Related papers (2022-09-20T21:53:35Z)
- GenéLive! Generating Rhythm Actions in Love Live! [1.3912598476882783]
A rhythm action game is a music-based video game in which the player is challenged to issue commands at the right timings during a music session.
Before this work, the company generated the charts manually, which resulted in a costly business operation.
This paper presents how KLab applied a deep generative model for synthesizing charts, and shows how it has improved the chart production process.
arXiv Detail & Related papers (2022-02-25T17:03:36Z)
- Supervised Chorus Detection for Popular Music Using Convolutional Neural Network and Multi-task Learning [10.160205869706965]
This paper presents a novel supervised approach to detecting the chorus segments in popular music.
We propose a convolutional neural network with a multi-task learning objective, which simultaneously fits two temporal activation curves.
We also propose a post-processing method that jointly takes into account the chorus and boundary predictions to produce binary output.
arXiv Detail & Related papers (2021-03-26T04:32:08Z)
- Can GAN originate new electronic dance music genres? -- Generating novel rhythm patterns using GAN with Genre Ambiguity Loss [0.0]
This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses if we can use deep learning to generate novel rhythms.
We extend the framework of Generative Adversarial Networks (GANs) and encourage it to diverge from the dataset's inherent distributions.
The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genres in the training dataset.
arXiv Detail & Related papers (2020-11-25T23:22:12Z)
- Lets Play Music: Audio-driven Performance Video Generation [58.77609661515749]
We propose a new task named Audio-driven Performance Video Generation (APVG).
APVG aims to synthesize the video of a person playing a certain instrument guided by a given music audio clip.
arXiv Detail & Related papers (2020-11-05T03:13:46Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms existing state-of-the-art methods on automatic metrics and in human evaluation.
arXiv Detail & Related papers (2020-06-11T00:08:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.