Music Embedding: A Tool for Incorporating Music Theory into
Computational Music Applications
- URL: http://arxiv.org/abs/2104.11880v1
- Date: Sat, 24 Apr 2021 04:32:45 GMT
- Title: Music Embedding: A Tool for Incorporating Music Theory into
Computational Music Applications
- Authors: SeyyedPooya HekmatiAthar and Mohd Anwar
- Abstract summary: It is important to digitally represent music in a music-theoretic and concise manner.
Existing approaches for representing music are ineffective in terms of utilizing music theory.
- Score: 0.3553493344868413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in digital technologies have enabled researchers to develop
a variety of Computational Music applications. Such applications are required
to capture, process, and generate data related to music. Therefore, it is
important to digitally represent music in a music-theoretic and concise manner.
Existing approaches for representing music are ineffective in terms of
utilizing music theory. In this paper, we address the disconnect between music
theory and computational music by developing an open-source representation tool
based on music theory. Through a wide range of use cases, we analyze classical
music pieces to show the usefulness of the developed music embedding.
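The abstract describes a representation tool grounded in music theory. As a minimal illustrative sketch (not the paper's actual implementation), a music-theory-aware representation can encode a melody as the intervals between successive notes rather than absolute pitches, which makes the representation invariant to transposition:

```python
# Illustrative sketch only: an interval-based melody representation.
# All function and variable names here are hypothetical, not from the paper.

def to_intervals(midi_pitches):
    """Convert absolute MIDI note numbers to successive semitone intervals."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

# Two transpositions of the same motif share one interval representation.
c_major_motif = [60, 62, 64, 65, 67]  # C D E F G
d_major_motif = [62, 64, 66, 67, 69]  # same motif, up a whole tone

assert to_intervals(c_major_motif) == to_intervals(d_major_motif) == [2, 2, 1, 2]
```

Transposition invariance is one reason a theory-based encoding can be more concise than raw pitch sequences: analyses such as motif detection need only one representative per melodic shape.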
Related papers
- Do Music Generation Models Encode Music Theory? [10.987131058422742]
We introduce SynTheory, a synthetic MIDI and audio music theory dataset consisting of tempos, time signatures, notes, intervals, scales, chords, and chord progressions concepts.
We then propose a framework to probe for these music theory concepts in music foundation models and assess how strongly they encode these concepts within their internal representations.
Our findings suggest that music theory concepts are discernible within foundation models and that the degree to which they are detectable varies by model size and layer.
arXiv Detail & Related papers (2024-10-01T17:06:30Z) - MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z) - A Dataset and Baselines for Measuring and Predicting the Music Piece Memorability [16.18336216092687]
We focus on measuring and predicting music memorability.
We train baselines to predict and analyze music memorability.
We demonstrate that while there is room for improvement, predicting music memorability with limited data is possible.
arXiv Detail & Related papers (2024-05-21T14:57:04Z) - MuPT: A Generative Symbolic Music Pretrained Transformer [56.09299510129221]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation).
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z) - GETMusic: Generating Any Music Tracks with a Unified Representation and
Diffusion Framework [58.64512825534638]
Symbolic music generation aims to create musical notes, which can help users compose music.
We introduce a framework known as GETMusic, with "GET" standing for "GEnerate music Tracks".
GETScore represents musical notes as tokens and organizes tokens in a 2D structure, with tracks stacked vertically and progressing horizontally over time.
Our proposed representation, coupled with the non-autoregressive generative model, empowers GETMusic to generate music with arbitrary source-target track combinations.
arXiv Detail & Related papers (2023-05-18T09:53:23Z) - A Dataset for Greek Traditional and Folk Music: Lyra [69.07390994897443]
This paper presents a dataset for Greek Traditional and Folk music that includes 1570 pieces, summing to around 80 hours of data.
The dataset incorporates YouTube timestamped links for retrieving audio and video, along with rich metadata regarding instrumentation, geography, and genre.
arXiv Detail & Related papers (2022-11-21T14:15:43Z) - Models of Music Cognition and Composition [0.0]
We first motivate why music is relevant to cognitive scientists and give an overview of the approaches to computational modelling of music cognition.
We then review literature on the various models of music perception, including non-computational models, computational non-cognitive models and computational cognitive models.
arXiv Detail & Related papers (2022-08-14T16:27:59Z) - MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training [97.91071692716406]
Symbolic music understanding refers to the understanding of music from symbolic data.
MusicBERT is a large-scale pre-trained model for music understanding.
arXiv Detail & Related papers (2021-06-10T10:13:05Z) - Adaptive music: Automated music composition and distribution [0.0]
We present Melomics: an algorithmic composition method based on evolutionary search.
The system has exhibited a high creative power and versatility to produce music of different types.
It has also enabled the emergence of a set of completely novel applications.
arXiv Detail & Related papers (2020-07-25T09:38:06Z) - Artificial Musical Intelligence: A Survey [51.477064918121336]
Music has become an increasingly prevalent domain of machine learning and artificial intelligence research.
This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit.
arXiv Detail & Related papers (2020-06-17T04:46:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.