The Music Meta Ontology: a flexible semantic model for the
interoperability of music metadata
- URL: http://arxiv.org/abs/2311.03942v1
- Date: Tue, 7 Nov 2023 12:35:15 GMT
- Title: The Music Meta Ontology: a flexible semantic model for the
interoperability of music metadata
- Authors: Jacopo de Berardinis, Valentina Anita Carriero, Albert
Meroño-Peñuela, Andrea Poltronieri, Valentina Presutti
- Abstract summary: We introduce the Music Meta ontology to describe music metadata related to artists, compositions, performances, recordings, and links.
We provide a first evaluation of the model, alignments to other schemas, and support for data transformation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The semantic description of music metadata is a key requirement for the
creation of music datasets that can be aligned, integrated, and accessed for
information retrieval and knowledge discovery. It is nonetheless an open
challenge due to the complexity of musical concepts arising from different
genres, styles, and periods -- standing to benefit from a lingua franca to
accommodate various stakeholders (musicologists, librarians, data engineers,
etc.). To initiate this transition, we introduce the Music Meta ontology, a
rich and flexible semantic model to describe music metadata related to artists,
compositions, performances, recordings, and links. We follow eXtreme Design
methodologies and best practices for data engineering, to reflect the
perspectives and the requirements of various stakeholders into the design of
the model, while leveraging ontology design patterns and accounting for
provenance at different levels (claims, links). After presenting the main
features of Music Meta, we provide a first evaluation of the model, alignments
to other schemas (Music Ontology, DOREMUS, Wikidata), and support for data
transformation.
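The scope of the model described above (artists, compositions, performances, recordings, and links, with provenance attached at the level of individual claims) can be illustrated with a minimal sketch. Note that every class, property, and namespace name below is a hypothetical placeholder for illustration only, not the published Music Meta vocabulary:

```python
# Minimal sketch of the kind of metadata graph an ontology like
# Music Meta targets. All names (composedBy, performanceOf, ...)
# and the namespace are illustrative placeholders, NOT the
# published Music Meta vocabulary.

EX = "http://example.org/mm/"  # hypothetical namespace

def uri(local: str) -> str:
    """Build a full URI from a local name in the example namespace."""
    return EX + local

# Triples linking an artist to a composition, a performance of it,
# and a recording of that performance.
triples = [
    (uri("composition/moonlight"), uri("composedBy"), uri("artist/beethoven")),
    (uri("performance/p1"), uri("performanceOf"), uri("composition/moonlight")),
    (uri("recording/r1"), uri("recordingOf"), uri("performance/p1")),
]

# Claim-level provenance: each statement carries its own source,
# mirroring the abstract's "provenance at different levels (claims, links)".
provenance = {
    t: {"source": "https://example.org/catalogue", "confidence": 0.9}
    for t in triples
}

def describe(subject: str) -> list:
    """Return all triples with the given subject."""
    return [t for t in triples if t[0] == subject]

print(describe(uri("recording/r1")))
```

In practice such a graph would be built with an RDF library and aligned to the schemas named in the abstract; the sketch only shows the chain artist → composition → performance → recording and how per-claim provenance can sit alongside it.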
Related papers
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- MuPT: A Generative Symbolic Music Pretrained Transformer [73.47607237309258]
We explore the application of Large Language Models (LLMs) to the pre-training of music.
To address the challenges associated with misaligned measures from different tracks during generation, we propose a Synchronized Multi-Track ABC Notation (SMT-ABC Notation)
Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set.
arXiv Detail & Related papers (2024-04-09T15:35:52Z)
- WikiMuTe: A web-sourced dataset of semantic descriptions for music audio [7.4327407361824935]
We present WikiMuTe, a new and open dataset containing rich semantic descriptions of music.
The data is sourced from Wikipedia's rich catalogue of articles covering musical works.
We train a model that jointly learns text and audio representations and performs cross-modal retrieval.
arXiv Detail & Related papers (2023-12-14T18:38:02Z)
- From West to East: Who can understand the music of the others better? [91.78564268397139]
We leverage transfer learning methods to derive insights about similarities between different music cultures.
We use two Western music datasets, two traditional/folk datasets coming from eastern Mediterranean cultures, and two datasets belonging to Indian art music.
Three deep audio embedding models are trained and transferred across domains, including two CNN-based and a Transformer-based architecture, to perform auto-tagging for each target domain dataset.
arXiv Detail & Related papers (2023-07-19T07:29:14Z)
- MARBLE: Music Audio Representation Benchmark for Universal Evaluation [79.25065218663458]
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks over 8 publicly available datasets, providing a fair and standard assessment of the representations of all open-sourced pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z)
- A Dataset for Greek Traditional and Folk Music: Lyra [69.07390994897443]
This paper presents a dataset for Greek Traditional and Folk music that includes 1570 pieces, summing to around 80 hours of data.
The dataset incorporates YouTube timestamped links for retrieving audio and video, along with rich metadata on instrumentation, geography, and genre.
arXiv Detail & Related papers (2022-11-21T14:15:43Z)
- MATT: A Multiple-instance Attention Mechanism for Long-tail Music Genre Classification [1.8275108630751844]
Imbalanced music genre classification is a crucial task in the Music Information Retrieval (MIR) field.
Most of the existing models are designed for class-balanced music datasets.
We propose a novel mechanism named Multi-instance Attention (MATT) to boost the performance for identifying tail classes.
arXiv Detail & Related papers (2022-09-09T03:52:44Z)
- The HaMSE Ontology: Using Semantic Technologies to support Music Representation Interoperability and Musicological Analysis [0.34265828682659694]
We propose HaMSE, an ontology capable of describing musical features that can assist musicological research.
To do this, HaMSE allows alignment between different music representation systems and describes a set of musicological features that enable music analysis at different levels.
arXiv Detail & Related papers (2022-02-11T18:26:24Z)
- Multi-task Learning with Metadata for Music Mood Classification [0.0]
Mood recognition is an important problem in music informatics and has key applications in music discovery and recommendation.
We propose a multi-task learning approach in which a shared model is simultaneously trained for mood and metadata prediction tasks.
Applying our technique to existing state-of-the-art convolutional neural networks for mood classification consistently improves their performance.
arXiv Detail & Related papers (2021-10-10T11:36:34Z)
- MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training [97.91071692716406]
Symbolic music understanding refers to understanding music from symbolic data.
MusicBERT is a large-scale pre-trained model for music understanding.
arXiv Detail & Related papers (2021-06-10T10:13:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.