The HaMSE Ontology: Using Semantic Technologies to support Music
Representation Interoperability and Musicological Analysis
- URL: http://arxiv.org/abs/2202.05817v1
- Date: Fri, 11 Feb 2022 18:26:24 GMT
- Title: The HaMSE Ontology: Using Semantic Technologies to support Music
Representation Interoperability and Musicological Analysis
- Authors: Andrea Poltronieri and Aldo Gangemi
- Abstract summary: We propose HaMSE, an ontology capable of describing musical features that can assist musicological research.
To do this, HaMSE allows the alignment between different music representation systems and describes a set of musicological features that enable music analysis at different levels.
- Score: 0.34265828682659694
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The use of Semantic Technologies - in particular the Semantic Web - has
proven to be a valuable tool for describing the cultural heritage domain and
artistic practices. However, the panorama of ontologies for musicological
applications seems to be limited and restricted to specific applications. In
this research, we propose HaMSE, an ontology capable of describing musical
features that can assist musicological research. More specifically, HaMSE
proposes to address issues that have been affecting musicological research for
decades: the representation of music and the relationship between quantitative
and qualitative data. To do this, HaMSE allows the alignment between different
music representation systems and describes a set of musicological features that
enable music analysis at different granularity levels.
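The abstract does not reproduce the HaMSE vocabulary itself, so the following Python sketch (using rdflib) only illustrates the kind of cross-representation alignment described above: one abstract note event linked to both a symbolic (score-based) and an audio-based description. The namespace IRI and all class and property names (NoteEvent, SymbolicEvent, hasAudioRealisation, etc.) are assumptions made for illustration, not terms from the published ontology.

```python
# Hypothetical sketch of aligning two representations of the same note event.
# The namespace and all term names below are illustrative assumptions,
# not the actual HaMSE vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

HAMSE = Namespace("http://example.org/hamse#")  # assumed namespace, not the real IRI

g = Graph()
g.bind("hamse", HAMSE)

# One abstract note event, to be linked to its rendering in two representation systems.
note = HAMSE["note/1"]
g.add((note, RDF.type, HAMSE.NoteEvent))

# Symbolic (score-based) description, e.g. derived from a MusicXML encoding.
sym = HAMSE["symbolic/1"]
g.add((sym, RDF.type, HAMSE.SymbolicEvent))
g.add((sym, HAMSE.pitchName, Literal("C4")))
g.add((sym, HAMSE.measure, Literal(3, datatype=XSD.integer)))

# Audio-based description, e.g. an onset detected in a recording.
aud = HAMSE["audio/1"]
g.add((aud, RDF.type, HAMSE.AudioEvent))
g.add((aud, HAMSE.onsetSeconds, Literal(4.21, datatype=XSD.decimal)))

# The alignment itself: both descriptions refer to the same musical event,
# so qualitative (score-level) and quantitative (signal-level) data can be queried together.
g.add((note, HAMSE.hasSymbolicRealisation, sym))
g.add((note, HAMSE.hasAudioRealisation, aud))

print(g.serialize(format="turtle"))
```

Given such a graph, a SPARQL query could then retrieve, for instance, all note events whose symbolic pitch is C4 together with their audio onset times, which is the kind of multi-granularity analysis the abstract describes.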
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- The Music Meta Ontology: a flexible semantic model for the
interoperability of music metadata [0.39373541926236766]
We introduce the Music Meta ontology to describe music metadata related to artists, compositions, performances, recordings, and links.
We provide a first evaluation of the model, alignments to other schemas, and support for data transformation.
arXiv Detail & Related papers (2023-11-07T12:35:15Z)
- A Dataset for Greek Traditional and Folk Music: Lyra [69.07390994897443]
This paper presents a dataset for Greek Traditional and Folk music that includes 1570 pieces, totalling around 80 hours of data.
The dataset incorporates timestamped YouTube links for retrieving audio and video, along with rich metadata on instrumentation, geography, and genre.
arXiv Detail & Related papers (2022-11-21T14:15:43Z)
- Concept-Based Techniques for "Musicologist-friendly" Explanations in a
Deep Music Classifier [5.442298461804281]
We focus on more human-friendly explanations based on high-level musical concepts.
Our research targets trained systems (post-hoc explanations) and explores two approaches.
We demonstrate both techniques on an existing symbolic composer classification system, showcase their potential, and highlight their intrinsic limitations.
arXiv Detail & Related papers (2022-08-26T07:45:29Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A
study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with
Visual Computing for Improved Music Video Analysis [91.3755431537592]
This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective.
The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone.
The experiments are conducted for three MIR tasks: Artist Identification, Music Genre Classification, and Cross-Genre Classification.
arXiv Detail & Related papers (2020-02-01T17:57:14Z)