Automatic Time Signature Determination for New Scores Using Lyrics for
Latent Rhythmic Structure
- URL: http://arxiv.org/abs/2311.15480v2
- Date: Sun, 28 Jan 2024 19:53:22 GMT
- Title: Automatic Time Signature Determination for New Scores Using Lyrics for
Latent Rhythmic Structure
- Authors: Callie C. Liao, Duoduo Liao, Jesse Guessford
- Abstract summary: We propose a novel approach that uses only lyrics as input to automatically generate a fitting time signature for lyrical songs.
Our best experimental results reveal a 97.6% F1 score and a 0.996 Receiver Operating Characteristic (ROC) Area Under the Curve (AUC).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There has recently been a sharp increase in interest in Artificial
Intelligence-Generated Content (AIGC). Despite this, musical components such as
time signatures have not been studied sufficiently to form an algorithmic
determination approach for new compositions, especially lyrical songs. This is
likely due to the neglect of musical details, which are critical for
constructing a robust framework. Specifically, time signatures establish the
fundamental rhythmic structure for almost all aspects of a song, including the
phrases and notes. In this paper, we propose a novel approach that uses only
lyrics as input to automatically generate a fitting time signature for lyrical
songs and uncover the latent rhythmic structure utilizing explainable machine
learning models. In particular, we devise multiple methods that are associated
with discovering lyrical patterns and creating new features that simultaneously
contain lyrical, rhythmic, and statistical information. In this approach, our
best experimental results reveal a 97.6% F1 score and a 0.996 Receiver
Operating Characteristic (ROC) Area Under the Curve (AUC). In
conclusion, our research automatically generates time signatures directly from
lyrics for new scores using machine learning, an innovative approach to an
understudied component of musicology that contributes significantly to the
future of Artificial Intelligence (AI) music generation.
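To make the abstract's pipeline concrete, here is a minimal sketch of the kind of lyrics-to-time-signature classification it describes: hand-crafted lyrical features feed an explainable classifier, evaluated with F1 and ROC AUC. The feature set and the random-forest model are illustrative assumptions, not the paper's actual design.

```python
# A minimal sketch (assumed design, not the paper's): toy lyrical features
# feed an explainable classifier that predicts the time signature.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

def lyric_features(lyrics):
    """Toy lyrical/statistical features (illustrative, not the paper's)."""
    lines = [ln for ln in lyrics.splitlines() if ln.strip()]
    words_per_line = [len(ln.split()) for ln in lines] or [0]
    return [
        len(lines),                                 # number of phrases
        sum(words_per_line) / len(words_per_line),  # mean words per line
        max(words_per_line),                        # longest line
        min(words_per_line),                        # shortest line
    ]

def train_and_evaluate(songs):
    """songs: list of (lyrics, time_signature) pairs, e.g. ("...", "4/4")."""
    X = [lyric_features(lyr) for lyr, _ in songs]
    y = [1 if ts == "4/4" else 0 for _, ts in songs]  # binary: 4/4 vs. other
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print("F1     :", f1_score(y_te, clf.predict(X_te)))
    print("ROC AUC:", roc_auc_score(y_te, scores))
    print("feature importances:", clf.feature_importances_)  # explainability
```

Feature importances from tree ensembles are one simple route to the explainability the paper emphasizes; its actual features combine lyrical, rhythmic, and statistical information.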
Related papers
- Attention-guided Spectrogram Sequence Modeling with CNNs for Music Genre Classification [0.0]
We present an innovative model for classifying music genres using attention-based temporal signature modeling.
Our approach captures the most temporally significant moments within each piece, crafting a unique "signature" for genre identification.
This work bridges the gap between technical classification tasks and the nuanced, human experience of genre.
arXiv Detail & Related papers (2024-11-18T21:57:03Z)
- MuDiT & MuSiT: Alignment with Colloquial Expression in Description-to-Song Generation [18.181382408551574]
We propose a novel task of Colloquial Description-to-Song Generation.
It focuses on aligning the generated content with colloquial human expressions.
This task is aimed at bridging the gap between colloquial language understanding and auditory expression within an AI model.
arXiv Detail & Related papers (2024-07-03T15:12:36Z)
- Unsupervised Melody-Guided Lyrics Generation [84.22469652275714]
We propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data.
We leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
arXiv Detail & Related papers (2023-05-12T20:57:20Z)
- Multimodal Lyrics-Rhythm Matching [0.0]
We propose a novel multimodal lyrics-rhythm matching approach that specifically matches key components of lyrics and music with each other.
We use audio instead of sheet music with readily available metadata, which creates more challenges yet increases the application flexibility of our method.
Our experimental results reveal a 0.81 probability of matching on average, and around 30% of the songs have a probability of 0.9 or higher of keywords landing on strong beats.
arXiv Detail & Related papers (2023-01-06T22:24:53Z)
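As a rough illustration of the matching statistic reported in the entry above, the following sketch computes the fraction of lyrical keywords whose onsets fall on strong beats. The onset times, beat grid, and tolerance are assumptions; a real system would extract them from audio and lyrics.

```python
# A minimal sketch (assumptions: keyword onset times in seconds and a
# strong-beat grid are given as inputs).
def strong_beat_match_probability(keyword_onsets, strong_beats, tol=0.07):
    """Fraction of keyword onsets within `tol` seconds of a strong beat."""
    if not keyword_onsets:
        return 0.0
    hits = sum(
        any(abs(onset - beat) <= tol for beat in strong_beats)
        for onset in keyword_onsets
    )
    return hits / len(keyword_onsets)

# Example: 3 of 4 keywords land on strong beats -> probability 0.75
p = strong_beat_match_probability([0.0, 1.98, 3.4, 6.05], [0.0, 2.0, 4.0, 6.0])
```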
- Re-creation of Creations: A New Paradigm for Lyric-to-Melody Generation [158.54649047794794]
Re-creation of Creations (ROC) is a new paradigm for lyric-to-melody generation.
ROC achieves good lyric-melody feature alignment in lyric-to-melody generation.
arXiv Detail & Related papers (2022-08-11T08:44:47Z)
- A framework to compare music generative models using automatic evaluation metrics extended to rhythm [69.2737664640826]
This paper takes the framework proposed in previous research, which did not consider rhythm, makes a series of design decisions, and then adds rhythm support to evaluate the performance of two RNN memory cells in the creation of monophonic music.
The model handles music transposition, and the framework evaluates the quality of the generated pieces using automatic, geometry-based quantitative metrics that have likewise been extended with rhythm support.
arXiv Detail & Related papers (2021-01-19T15:04:46Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Music Generation with Temporal Structure Augmentation [0.0]
The proposed method augments a connectionist generation model with a count-down to the song's conclusion and meter markers as extra input features.
An RNN architecture with LSTM cells is trained on the Nottingham folk music dataset in a supervised sequence learning setup.
Experiments show an improved prediction performance for both types of annotation.
arXiv Detail & Related papers (2020-04-21T19:19:58Z)
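A minimal sketch of the feature augmentation described in the entry above: each timestep's input is extended with a count-down to the song's conclusion and a meter marker flagging downbeats. The encodings are assumptions; the paper's exact representation is not reproduced here.

```python
import numpy as np

def augment_with_structure(features, beats_per_bar):
    """features: (timesteps, dims) array, one timestep per beat (assumed)."""
    t = features.shape[0]
    # Normalized count-down to the final timestep (1.0 at the start, 0.0 at the end).
    countdown = (np.arange(t)[::-1] / max(t - 1, 1)).reshape(-1, 1)
    # Meter marker: flag the first beat of each bar (assumed encoding).
    downbeat = (np.arange(t) % beats_per_bar == 0).astype(float).reshape(-1, 1)
    return np.concatenate([features, countdown, downbeat], axis=1)

# e.g. 16 beats of 8-dim note features in 4/4 time -> a (16, 10) model input
x = augment_with_structure(np.zeros((16, 8)), beats_per_bar=4)
```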
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic, and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
- Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions [14.786601824794369]
We present a model for composing melodies given a user-specified symbolic scenario combined with a previous music context.
Our model is capable of generating long melodies by treating 8-beat note sequences as basic units, and shares a consistent rhythm pattern structure with another specific song.
Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns.
arXiv Detail & Related papers (2020-02-05T06:23:44Z)
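For the 8-beat basic units mentioned in the last entry, here is a minimal sketch of how a note sequence might be chunked into such units; the (pitch, duration-in-beats) note encoding is an assumption for illustration.

```python
def split_into_units(notes, beats_per_unit=8.0):
    """notes: list of (pitch, duration_beats) pairs; returns consecutive
    units that each cover at least `beats_per_unit` beats."""
    units, current, filled = [], [], 0.0
    for pitch, duration in notes:
        current.append((pitch, duration))
        filled += duration
        if filled >= beats_per_unit:   # unit boundary reached
            units.append(current)
            current, filled = [], 0.0
    if current:                        # keep any trailing partial unit
        units.append(current)
    return units

# Sixteen quarter notes (1 beat each) split into two 8-beat units.
units = split_into_units([(60, 1.0)] * 16)
assert len(units) == 2
```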