LooPy: A Research-Friendly Mix Framework for Music Information Retrieval
on Electronic Dance Music
- URL: http://arxiv.org/abs/2305.01051v1
- Date: Mon, 1 May 2023 19:30:47 GMT
- Title: LooPy: A Research-Friendly Mix Framework for Music Information Retrieval
on Electronic Dance Music
- Authors: Xinyu Li
- Abstract summary: We present a Python package for automated EDM audio generation as an infrastructure for MIR on EDM songs.
We provide a framework to build professional-level templates that can render a well-produced track from a specified melody and chords.
Experiments show that our mixes can achieve the same quality as the original reference songs produced by world-famous artists.
- Score: 8.102989872457156
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Music information retrieval (MIR) has gone through explosive development
with the advancement of deep learning in recent years. However, music genres
like electronic dance music (EDM) have always been relatively less investigated
than others. Considering its wide range of applications, we present a
Python package for automated EDM audio generation as an infrastructure for MIR
on EDM songs, to mitigate the difficulty of acquiring labelled data. It is a
convenient tool that can easily be appended to the end of many symbolic
music generation pipelines. Inside this package, we provide a framework to
build professional-level templates that can render a well-produced track from
a specified melody and chords, or produce massive numbers of tracks given only
a specific key by our probabilistic symbolic melody generator. Experiments show
that our mixes can achieve the same quality as the original reference songs
produced by world-famous artists, with respect to both subjective and objective
criteria. Our code is available at https://github.com/Gariscat/loopy and the
official project site is online at https://loopy4edm.com .
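The package workflow described above (render a finished track from a given melody and chord progression, or generate many tracks from just a key via the probabilistic symbolic melody generator) can be pictured with the short Python sketch below. The class and function names (Template, generate_melody, render) are hypothetical stand-ins rather than the actual loopy API; consult the repository for the real interface.

```python
# Hypothetical sketch of the workflow the abstract describes; these names are
# NOT the real loopy API (see https://github.com/Gariscat/loopy for that).
from dataclasses import dataclass
from typing import List, Tuple
import random

Note = Tuple[int, float, float]  # (MIDI pitch, onset in beats, duration in beats)

@dataclass
class Template:
    """Stand-in for a professional-level mix template (synth presets, FX, mix chain)."""
    name: str
    bpm: int = 128

    def render(self, melody: List[Note], chords: List[List[int]]) -> str:
        # A real template would synthesize each layer, apply effects and mix down;
        # here we only return a nominal output path to show the call shape.
        return f"{self.name}_{self.bpm}bpm_{len(melody)}notes.wav"

ROOTS = {"C": 60, "D": 62, "E": 64, "F": 65, "G": 67, "A": 69, "B": 71}

def generate_melody(key: str = "C", bars: int = 8, seed: int = 0) -> List[Note]:
    """Toy probabilistic melody generator: a random walk over the major scale of `key`."""
    rng = random.Random(seed)
    scale = [0, 2, 4, 5, 7, 9, 11]            # major-scale intervals in semitones
    root, degree, melody = ROOTS[key], 0, []
    for beat in range(bars * 4):
        degree = max(0, min(len(scale) - 1, degree + rng.choice([-1, 0, 1])))
        melody.append((root + scale[degree], float(beat), 1.0))
    return melody

if __name__ == "__main__":
    template = Template(name="progressive_house_lead")
    melody = generate_melody(key="A", bars=8, seed=42)    # vary the seed for massive generation
    chords = [[57, 60, 64], [53, 57, 60], [55, 59, 62]]   # a simple progression as plain pitch sets
    print(template.render(melody, chords))
```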
Related papers
- Benchmarking Sub-Genre Classification For Mainstage Dance Music [6.042939894766715]
This work introduces a novel benchmark comprising a new dataset and a baseline.
Our dataset extends the number of sub-genres to cover the most recent mainstage live sets by top DJs worldwide at music festivals.
For the baseline, we developed deep learning models that outperform current state-of-the-art multimodal language models.
arXiv Detail & Related papers (2024-09-10T17:54:00Z) - MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation).
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z) - InstructME: An Instruction Guided Music Edit And Remix Framework with
Latent Diffusion Models [42.2977676825086]
In this paper, we develop InstructME, an Instruction guided Music Editing and remixing framework based on latent diffusion models.
Our framework fortifies the U-Net with multi-scale aggregation in order to maintain consistency before and after editing.
Our proposed method significantly surpasses preceding systems in music quality, text relevance and harmony.
arXiv Detail & Related papers (2023-08-28T07:11:42Z) - MARBLE: Music Audio Representation Benchmark for Universal Evaluation [79.25065218663458]
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standard assessment of representations from all open-sourced pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z) - Simple and Controllable Music Generation [94.61958781346176]
MusicGen is a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens.
Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns.
arXiv Detail & Related papers (2023-06-08T15:31:05Z) - GETMusic: Generating Any Music Tracks with a Unified Representation and
Diffusion Framework [58.64512825534638]
Symbolic music generation aims to create musical notes, which can help users compose music.
We introduce a framework known as GETMusic, with "GET" standing for "GEnerate music Tracks".
GETScore represents musical notes as tokens and organizes tokens in a 2D structure, with tracks stacked vertically and progressing horizontally over time (a toy sketch of this grid layout appears after this list).
Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with any arbitrary source-target track combinations.
arXiv Detail & Related papers (2023-05-18T09:53:23Z) - Musika! Fast Infinite Waveform Music Generation [0.0]
We introduce Musika, a music generation system that can be trained on hundreds of hours of music using a single consumer GPU.
We achieve this by first learning a compact invertible representation of spectrogram magnitudes and phases with adversarial autoencoders.
A latent coordinate system enables generating arbitrarily long sequences of excerpts in parallel, while a global context vector allows the music to remain stylistically coherent through time.
arXiv Detail & Related papers (2022-08-18T08:31:15Z) - Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z) - PopMAG: Pop Music Accompaniment Generation [190.09996798215738]
We propose a novel MUlti-track MIDI representation (MuMIDI) which enables simultaneous multi-track generation in a single sequence.
MuMIDI enlarges the sequence length and brings the new challenge of long-term music modeling.
We call our system for pop music accompaniment generation PopMAG.
arXiv Detail & Related papers (2020-08-18T02:28:36Z) - POP909: A Pop-song Dataset for Music Arrangement Generation [10.0454303747519]
We propose POP909, a dataset which contains multiple versions of the piano arrangements of 909 popular songs created by professional musicians.
The main body of the dataset contains the vocal melody, the lead instrument melody, and the piano accompaniment for each song in MIDI format, which are aligned to the original audio files.
We provide the annotations of tempo, beat, key, and chords, where the tempo curves are hand-labeled and the others are produced by MIR algorithms.
arXiv Detail & Related papers (2020-08-17T08:08:14Z)
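As a side note on the GETMusic entry above: its GETScore layout, with tracks stacked vertically and time running horizontally, can be pictured as a small 2D token grid. The sketch below uses invented token IDs and NumPy purely for illustration; GETMusic's real vocabulary, padding scheme, and diffusion model are not reproduced here.

```python
import numpy as np

# Toy GETScore-style grid: rows are tracks, columns are time steps.
# Token IDs below are invented for the example; GETMusic's actual vocabulary differs.
PAD, MASK = 0, -1
N_TRACKS, N_STEPS = 3, 16                  # e.g. lead, chords, bass over 16 sixteenth-note steps

score = np.full((N_TRACKS, N_STEPS), PAD, dtype=np.int64)
score[0, 0:4] = [72, 74, 76, 77]           # lead track: a short ascending line
score[1, ::4] = 201                        # chord track: one chord token per beat
score[2, ::8] = 36                         # bass track: a root note every two beats

# Any source-to-target track combination becomes "fill in the masked rows":
target = score.copy()
target[2, :] = MASK                        # condition on lead + chords, generate the bass row
print(target)
```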