"Melatonin": A Case Study on AI-induced Musical Style
- URL: http://arxiv.org/abs/2208.08968v1
- Date: Thu, 18 Aug 2022 17:17:53 GMT
- Title: "Melatonin": A Case Study on AI-induced Musical Style
- Authors: Emmanuel Deruty, Maarten Grachten
- Abstract summary: "Melatonin" is a song produced by extensive use of BassNet, an AI tool originally designed to generate bass lines.
We identify style characteristics of the song in relation to the affordances of the tool, highlighting manifestations of style in terms of both idiom and sound.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although the use of AI tools in music composition and production is steadily increasing, as witnessed by the newly founded AI Song Contest, analysis of music produced using these tools is still relatively uncommon as a means to gain insight into the ways AI tools impact music production. In this paper we present a case study of "Melatonin", a song produced by extensive use of BassNet, an AI tool originally designed to generate bass lines. Through analysis of the artists' workflow and song project, we identify style characteristics of the song in relation to the affordances of the tool, highlighting manifestations of style in terms of both idiom and sound.
Related papers
- SongCreator: Lyrics-based Universal Song Generation [53.248473603201916]
SongCreator is a song-generation system designed to tackle the challenge of generating songs with both vocals and accompaniment given lyrics.
The model features two novel designs: a meticulously designed dual-sequence language model (DSLM) to capture the information of vocals and accompaniment for song generation, and a series of attention mask strategies for the DSLM.
Experiments demonstrate the effectiveness of SongCreator by achieving state-of-the-art or competitive performances on all eight tasks.
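The summary above does not specify how the DSLM's attention masks are laid out. As a rough, hypothetical illustration only (none of these names come from the paper), the sketch below builds a block attention mask over two concatenated token streams, with causal self-attention within each stream and optional full cross-attention between them:

```python
import torch

def dual_sequence_mask(len_a: int, len_b: int, cross: bool = True) -> torch.Tensor:
    """Hypothetical attention mask for two concatenated token streams.

    Stream A (e.g. vocals) occupies positions [0, len_a); stream B
    (e.g. accompaniment) occupies [len_a, len_a + len_b). Each stream
    attends to itself causally; `cross` additionally lets each position
    attend to the entire other stream. True = attention allowed.
    """
    n = len_a + len_b
    mask = torch.zeros(n, n, dtype=torch.bool)
    # Causal self-attention within each stream.
    mask[:len_a, :len_a] = torch.tril(torch.ones(len_a, len_a, dtype=torch.bool))
    mask[len_a:, len_a:] = torch.tril(torch.ones(len_b, len_b, dtype=torch.bool))
    if cross:
        mask[:len_a, len_a:] = True   # A attends to all of B
        mask[len_a:, :len_a] = True   # B attends to all of A
    return mask

print(dual_sequence_mask(3, 2, cross=False).int())
```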
arXiv Detail & Related papers (2024-09-09T19:37:07Z)
- Play Me Something Icy: Practical Challenges, Explainability and the Semantic Gap in Generative AI Music [0.0]
This pictorial aims to critically consider the nature of text-to-audio and text-to-music generative tools in the context of explainable AI.
arXiv Detail & Related papers (2024-08-13T22:42:05Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation).
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
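The exact SMT-ABC serialization is not given in this summary; the following is a minimal sketch of the underlying idea only, assuming a hypothetical helper that emits bar k of every track before bar k+1 of any track, so that measures stay aligned in a single token stream:

```python
def synchronize_tracks(tracks: dict[str, list[str]]) -> str:
    """Interleave bars from multiple ABC tracks, bar by bar.

    `tracks` maps a track name to its list of ABC bar strings.
    Hypothetical serialization; SMT-ABC's real format may differ.
    """
    n_bars = max(len(bars) for bars in tracks.values())
    out = []
    for k in range(n_bars):
        for name, bars in tracks.items():
            bar = bars[k] if k < len(bars) else "z4"  # pad short tracks with a rest
            out.append(f"[V:{name}] {bar} |")
        out.append("")  # blank line closes the synchronized bar group
    return "\n".join(out)

score = synchronize_tracks({
    "melody": ["C2 E2 G2 c2", "B2 G2 E2 C2"],
    "bass":   ["C,8",         "G,,8"],
})
print(score)
```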
arXiv Detail & Related papers (2024-04-09T15:35:52Z)
- Automatic Time Signature Determination for New Scores Using Lyrics for Latent Rhythmic Structure [0.0]
We propose a novel approach that only uses lyrics as input to automatically generate a fitting time signature for lyrical songs.
In this paper, our best experimental results reveal a 97.6% F1 score and a 0.996 Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC).
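For reference, the two reported metrics are standard classification scores; a toy computation with scikit-learn (the labels below are made up, not the paper's data) looks like this:

```python
from sklearn.metrics import f1_score, roc_auc_score

# Toy example only -- not the paper's data or model.
y_true  = [1, 0, 1, 1, 0, 1, 0, 1]   # ground-truth time-signature class
y_pred  = [1, 0, 1, 1, 0, 1, 1, 1]   # hard predictions
y_score = [0.9, 0.2, 0.8, 0.7, 0.1, 0.95, 0.6, 0.85]  # predicted probabilities

print(f"F1:  {f1_score(y_true, y_pred):.3f}")
print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")
```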
arXiv Detail & Related papers (2023-11-27T01:44:02Z)
- Performance Conditioning for Diffusion-Based Multi-Instrument Music Synthesis [15.670399197114012]
We propose enhancing control of multi-instrument synthesis by conditioning a generative model on a specific performance and recording environment.
Performance conditioning is a tool that directs the generative model to synthesize music with the style and timbre of specific instruments taken from specific performances.
Our prototype is evaluated using uncurated performances with diverse instrumentation and achieves state-of-the-art FAD realism scores.
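The summary does not describe the conditioning mechanism itself. One common pattern, shown here as a sketch rather than the paper's implementation, is to map a performance/recording-environment identifier to a learned embedding that is added to the model's conditioning input:

```python
import torch
import torch.nn as nn

class PerformanceConditioning(nn.Module):
    """Hypothetical sketch: a learned per-performance embedding added
    to the synthesis model's score-derived conditioning."""
    def __init__(self, num_performances: int, cond_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_performances, cond_dim)

    def forward(self, base_cond: torch.Tensor, perf_id: torch.Tensor) -> torch.Tensor:
        # base_cond: (batch, time, cond_dim) conditioning features
        # perf_id:   (batch,) integer performance identifiers
        return base_cond + self.embed(perf_id)[:, None, :]

cond = PerformanceConditioning(num_performances=12, cond_dim=64)
x = cond(torch.zeros(2, 100, 64), torch.tensor([3, 7]))
print(x.shape)  # torch.Size([2, 100, 64])
```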
arXiv Detail & Related papers (2023-09-21T17:44:57Z)
- A Survey of AI Music Generation Tools and Models [0.9421843976231371]
We classify music generation approaches into three categories: parameter-based, text-based, and visual-based.
Our survey highlights the diverse possibilities and functional features of these tools, which cater to a wide range of users.
Our survey offers critical insights into the underlying mechanisms and challenges of AI music generation.
arXiv Detail & Related papers (2023-08-24T00:49:08Z)
- ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production [0.0]
We extend prior work on GuitarPro tablature generation by fine-tuning a pre-trained Transformer model on ProgGP, a custom dataset of 173 progressive metal songs.
Our model is able to generate multiple guitar, bass guitar, drums, piano and orchestral parts.
We demonstrate the value of the model by using it as a tool to create a progressive metal song, fully produced and mixed by a human metal producer.
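As a generic illustration of such a fine-tuning step (the checkpoint name, token strings, and hyperparameters below are placeholders, not the paper's setup), a causal language model can be fine-tuned on tokenized tablature with Hugging Face Transformers:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical setup: "gpt2" and the token strings stand in for the
# paper's actual pre-trained checkpoint and GuitarPro-derived tokens.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

songs = ["<bar> E5 G5 <bar>", "<bar> A2 C3 <bar>"]  # one token string per song

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal LM: labels mirror inputs
    return enc

ds = Dataset.from_dict({"text": songs}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="proggp-ft", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()
```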
arXiv Detail & Related papers (2023-07-11T15:19:47Z)
- GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework [58.64512825534638]
Symbolic music generation aims to create musical notes, which can help users compose music.
We introduce a framework known as GETMusic, with "GET" standing for "GEnerate music Tracks".
GETScore represents musical notes as tokens and organizes tokens in a 2D structure, with tracks stacked vertically and progressing horizontally over time.
Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with any arbitrary source-target track combinations.
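The 2D layout can be pictured as a track-by-time grid of tokens; a minimal sketch follows, in which the track names, pad token, and pitch-duration tokens are invented for illustration and are not GETMusic's actual vocabulary:

```python
import numpy as np

TRACKS = ["melody", "bass", "drums"]   # rows: one per track
N_STEPS = 8                            # columns: time steps
PAD = "<pad>"

# A GETScore-like grid: tracks stacked vertically,
# time progressing horizontally.
score = np.full((len(TRACKS), N_STEPS), PAD, dtype=object)
score[0, 0] = "C4_q"   # hypothetical pitch-duration tokens
score[1, 0] = "C2_h"
score[2, 0] = "kick"

for track, row in zip(TRACKS, score):
    print(f"{track:>6}: {' '.join(str(t) for t in row)}")
```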
arXiv Detail & Related papers (2023-05-18T09:53:23Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Artificial Musical Intelligence: A Survey [51.477064918121336]
Music has become an increasingly prevalent domain of machine learning and artificial intelligence research.
This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit.
arXiv Detail & Related papers (2020-06-17T04:46:32Z)