The Music Note Ontology
- URL: http://arxiv.org/abs/2304.00986v1
- Date: Thu, 30 Mar 2023 10:51:10 GMT
- Title: The Music Note Ontology
- Authors: Andrea Poltronieri and Aldo Gangemi
- Abstract summary: Music Note Ontology is an ontology for modelling music notes and their realisation.
It addresses the relation between a note represented in a symbolic representation system, and its realisation.
- Score: 0.34265828682659694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we propose the Music Note Ontology, an ontology for modelling
music notes and their realisation. The ontology addresses the relation between
a note represented in a symbolic representation system, and its realisation,
i.e. a musical performance. This work therefore aims to solve the modelling and
representation issues that arise when analysing the relationships between
abstract symbolic features and the corresponding physical features of an audio
signal. The ontology is composed of three different Ontology Design Patterns
(ODP), which model the structure of the score (Score Part Pattern), the note in
the symbolic notation (Music Note Pattern) and its realisation (Musical Object
Pattern).
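The three-pattern decomposition described in the abstract can be sketched in plain Python. This is a minimal illustrative mirror of the idea, not the ontology itself: the class and attribute names below are assumptions for illustration, not the actual terms or IRIs defined by the Music Note Ontology.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified mirror of the three ODPs from the abstract.
# Names are illustrative assumptions, not the ontology's actual terms.

@dataclass
class MusicalObject:
    """Musical Object Pattern: the physical realisation in the audio signal."""
    onset_seconds: float      # where the note sounds in the recording
    frequency_hz: float       # measured fundamental frequency
    duration_seconds: float   # measured sounding duration

@dataclass
class MusicNote:
    """Music Note Pattern: the note in the symbolic notation."""
    pitch: str                                  # e.g. "A4"
    notated_duration: str                       # e.g. "quarter"
    realised_by: List[MusicalObject] = field(default_factory=list)

@dataclass
class ScorePart:
    """Score Part Pattern: the structure of the score."""
    name: str
    notes: List[MusicNote] = field(default_factory=list)

# A symbolic A4 quarter note and one performed realisation of it:
violin = ScorePart(name="Violin I")
a4 = MusicNote(pitch="A4", notated_duration="quarter")
violin.notes.append(a4)
a4.realised_by.append(
    MusicalObject(onset_seconds=1.50, frequency_hz=440.0, duration_seconds=0.48)
)
```

The point of the decomposition is visible in the `realised_by` link: one symbolic note can map to several physical realisations (one per performance), which is exactly the symbolic-to-audio relation the ontology models.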
Related papers
- Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- Iconic Gesture Semantics [87.00251241246136]
Informational evaluation is spelled out as extended exemplification (extemplification) in terms of perceptual classification of a gesture's visual iconic model.
We argue that the perceptual classification of instances of visual communication requires a notion of meaning different from Frege/Montague frameworks.
An iconic gesture semantics is introduced which covers the full range from gesture representations over model-theoretic evaluation to inferential interpretation in dynamic semantic frameworks.
arXiv Detail & Related papers (2024-04-29T13:58:03Z)
- Combinatorial music generation model with song structure graph analysis [18.71152526968065]
We construct a graph that uses information such as note sequence and instrument as node features, while the correlation between note sequences acts as the edge feature.
We trained a Graph Neural Network to obtain node representation in the graph, then we use node representation as input of Unet to generate CONLON pianoroll image latent.
arXiv Detail & Related papers (2023-12-24T04:09:30Z)
- Motif-Centric Representation Learning for Symbolic Music [5.781931021964343]
We learn the implicit relationship between motifs and their variations via representation learning.
A regularization-based method, VICReg, is adopted for pretraining, while contrastive learning is used for fine-tuning.
We visualize the acquired motif representations, offering an intuitive comprehension of the overall structure of a music piece.
arXiv Detail & Related papers (2023-09-19T13:09:03Z)
- Score Transformer: Generating Musical Score from Note-level Representation [2.3554584457413483]
We train the Transformer model to transcribe note-level representation into appropriate music notation.
We also explore an effective notation-level token representation to work with the model.
arXiv Detail & Related papers (2021-12-01T09:08:01Z)
- Signal-domain representation of symbolic music for learning embedding spaces [2.28438857884398]
We introduce a novel representation of symbolic music data, which transforms a polyphonic score into a continuous signal.
We show that our signal-like representation leads to better reconstruction and disentangled features.
arXiv Detail & Related papers (2021-09-08T06:36:02Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z)
- Embeddings as representation for symbolic music [0.0]
A representation technique that allows encoding music in a way that contains musical meaning would improve the results of any model trained for computer music tasks.
In this paper, we experiment with embeddings to represent musical notes from 3 different variations of a dataset and analyze if the model can capture useful musical patterns.
arXiv Detail & Related papers (2020-05-19T13:04:02Z)
- Music Gesture for Visual Sound Separation [121.36275456396075]
"Music Gesture" is a keypoint-based structured representation to explicitly model the body and finger movements of musicians when they perform music.
We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals.
arXiv Detail & Related papers (2020-04-20T17:53:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.