LyricJam Sonic: A Generative System for Real-Time Composition and
Musical Improvisation
- URL: http://arxiv.org/abs/2210.15638v1
- Date: Thu, 27 Oct 2022 17:27:58 GMT
- Title: LyricJam Sonic: A Generative System for Real-Time Composition and
Musical Improvisation
- Authors: Olga Vechtomova, Gaurav Sahu
- Abstract summary: LyricJam Sonic is a novel tool for musicians to rediscover previous recordings, re-contextualize them with other recordings, and create original live music compositions in real-time.
A bi-modal AI-driven approach uses generated lyric lines to find matching audio clips from the artist's past studio recordings.
The intent is to keep the artists in a state of creative flow conducive to music creation rather than taking them into an analytical/critical state of deliberately searching for past audio segments.
- Score: 13.269034230828032
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electronic music artists and sound designers have unique workflow practices
that necessitate specialized approaches for developing music information
retrieval and creativity support tools. Furthermore, electronic music
instruments, such as modular synthesizers, have near-infinite possibilities for
sound creation and can be combined to create unique and complex audio paths.
The process of discovering interesting sounds is often serendipitous and
impossible to replicate. For this reason, many musicians in electronic genres
record audio output at all times while they work in the studio. Subsequently,
it is difficult for artists to rediscover audio segments that might be suitable
for use in their compositions from thousands of hours of recordings. In this
paper, we describe LyricJam Sonic -- a novel creative tool for musicians to
rediscover their previous recordings, re-contextualize them with other
recordings, and create original live music compositions in real-time. A
bi-modal AI-driven approach uses generated lyric lines to find matching audio
clips from the artist's past studio recordings, and uses them to generate new
lyric lines, which in turn are used to find other clips, thus creating a
continuous and evolving stream of music and lyrics. The intent is to keep the
artists in a state of creative flow conducive to music creation rather than
taking them into an analytical/critical state of deliberately searching for
past audio segments. The system can run in either a fully autonomous mode
without user input, or in a live performance mode, where the artist plays live
music, while the system "listens" and creates a continuous stream of music and
lyrics in response.
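The paper describes this loop in prose only; as a rough illustration, the alternating lyric-to-audio-to-lyric cycle could be sketched as below. All names here (sonic_loop, text_encoder, lyric_model, the clip index) are hypothetical placeholders, not the authors' actual models or API.

```python
# Minimal sketch of the bi-modal feedback loop described in the abstract:
# a generated lyric retrieves a matching past audio clip, and that clip in
# turn conditions the next lyric line. All models and names are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_clip(lyric_vec: np.ndarray, clip_index: dict) -> str:
    """Return the id of the past studio clip whose embedding best matches the lyric."""
    return max(clip_index, key=lambda cid: cosine_similarity(lyric_vec, clip_index[cid]))

def sonic_loop(seed_lyric: str, clip_index, lyric_model, text_encoder, steps: int = 8):
    """Yield a continuous stream of (lyric, clip_id) pairs, alternating between
    lyric generation and audio retrieval as the abstract describes."""
    lyric = seed_lyric
    for _ in range(steps):
        clip_id = retrieve_clip(text_encoder(lyric), clip_index)
        yield lyric, clip_id
        # The retrieved clip conditions the next generated lyric line, closing the loop.
        lyric = lyric_model.generate(conditioned_on=clip_id)
```

In the live performance mode described above, the seed would presumably come from encoding the artist's incoming audio stream rather than from a fixed seed lyric.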
Related papers
- MusicFlow: Cascaded Flow Matching for Text Guided Music Generation [53.63948108922333]
MusicFlow is a cascaded text-to-music generation model based on flow matching.
We leverage masked prediction as the training objective, enabling the model to generalize to other tasks such as music infilling and continuation.
arXiv Detail & Related papers (2024-10-27T15:35:41Z)
- MuVi: Video-to-Music Generation with Semantic Alignment and Rhythmic Synchronization [52.498942604622165]
This paper presents MuVi, a framework to generate music that aligns with video content.
MuVi analyzes video content through a specially designed visual adaptor to extract contextually and temporally relevant features.
We show that MuVi demonstrates superior performance in both audio quality and temporal synchronization.
arXiv Detail & Related papers (2024-10-16T18:44:56Z)
- SongCreator: Lyrics-based Universal Song Generation [53.248473603201916]
SongCreator is a song-generation system designed to tackle the challenge of generating songs with both vocals and accompaniment given lyrics.
The model features two novel designs: a meticulously designed dual-sequence language model (DSLM) to capture the information of vocals and accompaniment for song generation, and a series of attention mask strategies for DSLM.
Experiments demonstrate the effectiveness of SongCreator by achieving state-of-the-art or competitive performances on all eight tasks.
arXiv Detail & Related papers (2024-09-09T19:37:07Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- Exploring Musical Roots: Applying Audio Embeddings to Empower Influence Attribution for a Generative Music Model [6.476298483207895]
We develop a methodology to identify similar pieces of music audio in a manner that is useful for understanding training data attribution.
We compare the effect of applying CLMR and CLAP embeddings to similarity measurement in a set of 5 million audio clips used to train VampNet.
This work aims to incorporate automated influence attribution into generative modeling, which promises to let model creators and users move from ignorant appropriation to informed creation.
arXiv Detail & Related papers (2024-01-25T22:20:42Z)
- GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework [58.64512825534638]
Symbolic music generation aims to create musical notes, which can help users compose music.
We introduce a framework known as GETMusic, with "GET" standing for "GEnerate music Tracks".
GETScore represents musical notes as tokens and organizes tokens in a 2D structure, with tracks stacked vertically and progressing horizontally over time.
Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with any arbitrary source-target track combinations (a sketch of this track-by-time layout follows the list below).
arXiv Detail & Related papers (2023-05-18T09:53:23Z)
- SingSong: Generating musical accompaniments from singing [35.819589427197464]
We present SingSong, a system that generates instrumental music to accompany input vocals.
In a pairwise comparison with the same vocal inputs, listeners expressed a significant preference for instrumentals generated by SingSong.
arXiv Detail & Related papers (2023-01-30T04:53:23Z)
- Flat latent manifolds for music improvisation between human and machine [9.571383193449648]
We consider a music-generating algorithm as a counterpart to a human musician, in a setting where reciprocal improvisation is to lead to new experiences.
In the learned model, we generate novel musical sequences by quantification in latent space.
We provide empirical evidence for our method via a set of experiments on music and we deploy our model for an interactive jam session with a professional drummer.
arXiv Detail & Related papers (2022-02-23T09:00:17Z)
- LyricJam: A system for generating lyrics for live instrumental music [11.521519161773288]
We describe a real-time system that receives a live audio stream from a jam session and generates lyric lines that are congruent with the live music being played.
Two novel approaches are proposed to align the learned latent spaces of audio and text representations.
arXiv Detail & Related papers (2021-06-03T16:06:46Z)
- LoopNet: Musical Loop Synthesis Conditioned On Intuitive Musical Parameters [12.72202888016628]
LoopNet is a feed-forward generative model for creating loops conditioned on intuitive parameters.
We leverage Music Information Retrieval (MIR) models as well as a large collection of public loop samples in our study.
arXiv Detail & Related papers (2021-05-21T14:24:34Z)
- Foley Music: Learning to Generate Music from Videos [115.41099127291216]
Foley Music is a system that can synthesize plausible music for a silent video clip about people playing musical instruments.
We first identify two key intermediate representations for a successful video to music generator: body keypoints from videos and MIDI events from audio recordings.
We present a Graph-Transformer framework that can accurately predict MIDI event sequences in accordance with the body movements.
arXiv Detail & Related papers (2020-07-21T17:59:06Z)
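As a side note on the GETScore representation mentioned for GETMusic above: the summary describes notes as tokens arranged in a 2D grid, with tracks stacked vertically and time progressing horizontally. A minimal sketch of such a grid, using entirely made-up token IDs and track roles (not the paper's actual vocabulary), might look like this:

```python
import numpy as np

# Hypothetical token vocabulary: 0 = rest/padding, other integers = note tokens.
PAD = 0
TRACKS = ["melody", "bass", "drums"]   # rows: tracks stacked vertically (illustrative)
NUM_STEPS = 16                         # columns: time progressing horizontally

# A GETScore-style grid: one row per track, one column per time step.
score = np.full((len(TRACKS), NUM_STEPS), PAD, dtype=np.int64)

# A few made-up note tokens for the melody track (row 0).
score[0, :4] = [60, 62, 64, 65]

# A source-target setting: condition on the melody row and ask a model to fill
# the remaining rows, i.e. an arbitrary source-target track combination.
source_tracks, target_tracks = [0], [1, 2]
print(score[source_tracks])
```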