ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal
Production
- URL: http://arxiv.org/abs/2307.05328v1
- Date: Tue, 11 Jul 2023 15:19:47 GMT
- Authors: Jackson Loth, Pedro Sarmento, CJ Carr, Zack Zukowski and Mathieu
Barthet
- Abstract summary: We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a custom dataset of 173 progressive metal songs.
Our model is able to generate multiple guitar, bass guitar, drums, piano and orchestral parts.
We demonstrate the value of the model by using it as a tool to create a progressive metal song, fully produced and mixed by a human metal producer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work in the field of symbolic music generation has shown value in
using a tokenization based on the GuitarPro format, a symbolic representation
supporting guitar expressive attributes, as an input and output representation.
We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a
custom dataset of 173 progressive metal songs, for the purposes of creating
compositions from that genre through a human-AI partnership. Our model is able
to generate multiple guitar, bass guitar, drums, piano and orchestral parts. We
examine the validity of the generated music using a mixed methods approach by
combining quantitative analyses following a computational musicology paradigm
and qualitative analyses following a practice-based research paradigm. Finally,
we demonstrate the value of the model by using it as a tool to create a
progressive metal song, fully produced and mixed by a human metal producer
based on AI-generated music.
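The GuitarPro-based tokenization the abstract refers to (introduced with DadaGP) flattens a score's tempo, measures, and per-string note events into a linear token sequence a Transformer can model. The sketch below is a minimal illustration of that idea; the token names and the `(string, fret, duration)` event format are simplifying assumptions for this example, not the exact DadaGP vocabulary.

```python
# Minimal sketch of a DadaGP-style token encoding for a short guitar phrase.
# Token names ("tempo:", "new_measure", "distorted0:note:...", "wait:") are
# illustrative assumptions only; the real DadaGP vocabulary differs.

def encode_notes(notes, tempo=120):
    """Flatten (string, fret, duration) note events into a token sequence."""
    tokens = [f"tempo:{tempo}", "new_measure"]
    for string, fret, duration in notes:
        # One token for the note event on a given string/fret,
        # one token advancing time by the note's duration (in ticks).
        tokens.append(f"distorted0:note:s{string}:f{fret}")
        tokens.append(f"wait:{duration}")
    tokens.append("end")
    return tokens

# A low-E-string riff fragment: two eighth-note-length events, one longer one.
phrase = [(6, 0, 480), (6, 3, 480), (5, 5, 960)]
print(encode_notes(phrase))
```

A sequence model fine-tuned on such token streams can then be sampled autoregressively and decoded back into a GuitarPro file, which is the round trip the DadaGP encoder/decoder provides.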
Related papers
- MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation).
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z)
- Combinatorial music generation model with song structure graph analysis [18.71152526968065]
We construct a graph that uses information such as note sequence and instrument as node features, while the correlation between note sequences acts as the edge feature.
We trained a Graph Neural Network to obtain node representations in the graph, then used the node representations as input to a U-Net to generate CONLON pianoroll image latents.
arXiv Detail & Related papers (2023-12-24T04:09:30Z)
- Simple and Controllable Music Generation [94.61958781346176]
MusicGen is a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens.
Unlike prior work, MusicGen is comprised of a single-stage transformer LM together with efficient token interleaving patterns.
arXiv Detail & Related papers (2023-06-08T15:31:05Z)
- GTR-CTRL: Instrument and Genre Conditioning for Guitar-Focused Music Generation with Transformers [14.025337055088102]
We use the DadaGP dataset for guitar tab music generation, a corpus of over 26k songs in GuitarPro and token formats.
We introduce methods to condition a Transformer-XL deep learning model to generate guitar tabs based on desired instrumentation and genre.
Results indicate that the GTR-CTRL methods provide more flexibility and control for guitar-focused symbolic music generation than an unconditioned model.
arXiv Detail & Related papers (2023-02-10T17:43:03Z)
- Symphony Generation with Permutation Invariant Language Model [57.75739773758614]
We present a symbolic symphony music generation solution, SymphonyNet, based on a permutation invariant language model.
A novel transformer decoder architecture is introduced as backbone for modeling extra-long sequences of symphony tokens.
Our empirical results show that our proposed approach can generate coherent, novel, complex and harmonious symphonies comparable to human compositions.
arXiv Detail & Related papers (2022-05-10T13:08:49Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models [25.15855175804765]
DadaGP is a new symbolic music dataset comprising 26,181 song scores in the GuitarPro format covering 739 musical genres.
DadaGP is released with an encoder/decoder which converts GuitarPro files to tokens and back.
We present results of a use case in which DadaGP is used to train a Transformer-based model to generate new songs in GuitarPro format.
arXiv Detail & Related papers (2021-07-30T14:21:36Z)
- Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method utilizes both an acoustic model, trained for the task of automatic speech recognition, together with melody extracted features to drive a waveform-based generator.
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
- The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-composed Music through Quantitative Measures [36.49582705724548]
This paper presents the Jazz Transformer, a generative model that utilizes a neural sequence model called the Transformer-XL for modeling lead sheets of Jazz music.
We then conduct a series of computational analysis of the generated compositions from different perspectives.
Our work presents, in an analytical manner, why machine-generated music to date still falls short of human artistry, and sets goals for future work on automatic composition to pursue.
arXiv Detail & Related papers (2020-08-04T03:32:59Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)