Modeling Bends in Popular Music Guitar Tablatures
- URL: http://arxiv.org/abs/2308.12307v1
- Date: Tue, 22 Aug 2023 07:50:58 GMT
- Title: Modeling Bends in Popular Music Guitar Tablatures
- Authors: Alexandre D'Hooge, Louis Bigo, Ken Déguernel
- Abstract summary: Tablature notation is widely used in popular music to transcribe and share guitar musical content.
This paper focuses on bends, which make it possible to progressively shift the pitch of a note, thereby circumventing the physical limitations of the discrete fretted fingerboard.
Experiments are performed on a corpus of 932 lead guitar tablatures of popular music and show that a decision tree successfully predicts bend occurrences with an F1 score of 0.71 and a limited number of false positive predictions.
- Score: 49.64902130083662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tablature notation is widely used in popular music to transcribe and share
guitar musical content. As a complement to standard score notation, tablatures
transcribe performance gesture information including finger positions and a
variety of guitar-specific playing techniques such as slides,
hammer-on/pull-off, or bends. This paper focuses on bends, which make it
possible to progressively shift the pitch of a note, thereby circumventing the
physical limitations of the discrete fretted fingerboard. We propose a
set of 25 high-level features, computed for each note of the tablature, to
study how bend occurrences can be predicted from their past and future
short-term context. Experiments are performed on a corpus of 932 lead guitar
tablatures of popular music and show that a decision tree successfully predicts
bend occurrences with an F1 score of 0.71 and a limited number of false positive
predictions, demonstrating promising applications to assist the arrangement of
non-guitar music into guitar tablatures.
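To make the setup concrete, here is a minimal sketch of the pipeline the abstract describes, assuming scikit-learn: a decision tree over 25 per-note context features predicting bend occurrences. The placeholder data, feature semantics, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming scikit-learn and synthetic placeholder data.
# Only the framing (25 per-note features, decision tree, F1) comes from the
# abstract; feature semantics and hyperparameters are illustrative guesses.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# X: one row per tablature note, 25 columns of high-level features computed
# from the note's short-term past/future context (fret, string, duration,
# intervals to neighbouring notes, ...); y: 1 if the note carries a bend.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 25))           # placeholder feature matrix
y = (rng.random(5000) < 0.1).astype(int)  # placeholder bend labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = DecisionTreeClassifier(max_depth=8, class_weight="balanced",
                             random_state=0)
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))  # paper reports 0.71
```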
Related papers
- TapToTab : Video-Based Guitar Tabs Generation using AI and Audio Analysis [0.0]
This paper introduces an advanced approach leveraging deep learning, specifically YOLO models, for real-time fretboard detection.
Experimental results demonstrate substantial improvements in detection accuracy and robustness compared to traditional techniques.
This paper aims to revolutionize guitar instruction by automating the creation of guitar tabs from video recordings.
arXiv Detail & Related papers (2024-09-13T08:17:15Z)
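As a rough illustration of the detection step in the TapToTab entry above, here is a minimal sketch with the ultralytics YOLO API; the generic yolov8n.pt checkpoint and the video path are placeholders, since an actual system would fine-tune the weights on fretboard images.

```python
# Minimal sketch, not TapToTab's code: frame-by-frame object detection with
# a YOLO model. A real system would fine-tune the weights on fretboard data;
# the checkpoint and video filename below are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                         # generic pretrained weights
results = model("guitar_lesson.mp4", stream=True)  # lazy per-frame inference
for r in results:
    for box in r.boxes:  # candidate bounding boxes in this frame
        print(r.names[int(box.cls)], box.xyxy.squeeze().tolist())
```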
- MIDI-to-Tab: Guitar Tablature Inference via Masked Language Modeling [6.150307957212576]
We introduce a novel deep learning solution to symbolic guitar tablature estimation.
We train an encoder-decoder Transformer model in a masked language modeling paradigm to assign notes to strings.
The model is first pre-trained on DadaGP, a dataset of over 25K tablatures, and then fine-tuned on a curated set of professionally transcribed guitar performances.
arXiv Detail & Related papers (2024-08-09T12:25:23Z)
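A minimal sketch of the masked-prediction idea in the MIDI-to-Tab entry above, assuming PyTorch: string labels are masked out and predicted from pitch context. This encoder-only toy with made-up dimensions simplifies the paper's encoder-decoder Transformer and is not the published model.

```python
# Minimal sketch, not the MIDI-to-Tab model: string assignment framed as
# masked prediction. Each note is embedded from its pitch, and the model
# predicts which of the 6 strings it is played on wherever the label is MASK.
import torch
import torch.nn as nn

N_PITCH, N_STRING, MASK = 128, 6, 6   # MASK = extra "unknown string" id

class StringAssigner(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.pitch_emb = nn.Embedding(N_PITCH, d_model)
        self.string_emb = nn.Embedding(N_STRING + 1, d_model)  # +1 for MASK
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, N_STRING)

    def forward(self, pitches, strings):
        # strings holds MASK at the positions whose label must be inferred
        x = self.pitch_emb(pitches) + self.string_emb(strings)
        return self.head(self.encoder(x))  # (batch, seq, N_STRING) logits

model = StringAssigner()
pitches = torch.randint(40, 88, (2, 16))  # toy MIDI pitches
strings = torch.full((2, 16), MASK)       # all string labels masked
print(model(pitches, strings).argmax(-1)) # predicted string per note
```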
- Guitar Chord Diagram Suggestion for Western Popular Music [43.58572466488356]
Chord diagrams are used by guitar players to show where and how to play a chord on the fretboard.
We show that some chord diagrams are over-represented in Western popular music and that some chords can be played in more than 20 different ways.
We argue that taking context into account can improve the variety and the quality of chord diagram suggestion, and compare this approach with a model taking only the current chord label into account.
arXiv Detail & Related papers (2024-07-15T07:44:13Z)
- From MIDI to Rich Tablatures: an Automatic Generative System incorporating Lead Guitarists' Fingering and Stylistic choices [42.362388367152256]
We propose a system that can generate, from simple MIDI melodies, tablatures enriched by fingerings, articulations, and expressive techniques.
The quality of the derived tablatures and the high configurability of the proposed approach can have several practical impacts.
arXiv Detail & Related papers (2024-07-12T07:18:24Z)
- At Your Fingertips: Extracting Piano Fingering Instructions from Videos [45.643494669796866]
We consider the AI task of automating the extraction of fingering information from videos.
We show how to perform this task with high accuracy using a combination of deep-learning modules.
We run the resulting system on 90 videos, yielding high-quality piano fingering information for 150K notes.
arXiv Detail & Related papers (2023-03-07T09:09:13Z)
- GTR-CTRL: Instrument and Genre Conditioning for Guitar-Focused Music Generation with Transformers [14.025337055088102]
We use the DadaGP dataset for guitar tab music generation, a corpus of over 26k songs in GuitarPro and token formats.
We introduce methods to condition a Transformer-XL deep learning model to generate guitar tabs based on desired instrumentation and genre.
Results indicate that the GTR-CTRL methods provide more flexibility and control for guitar-focused symbolic music generation than an unconditioned model.
arXiv Detail & Related papers (2023-02-10T17:43:03Z)
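A minimal sketch of the control-token conditioning described in the GTR-CTRL entry above: genre and instrumentation tokens are prepended to the generation prompt. The token names are invented for illustration and do not reflect the actual DadaGP vocabulary.

```python
# Minimal sketch, not the GTR-CTRL code: conditioning a token-based music
# generator by prefixing the prompt with genre and instrumentation control
# tokens. Token names below are illustrative assumptions.
GENRES = {"genre:rock", "genre:metal", "genre:folk"}
INSTRUMENTS = {"inst:distorted0", "inst:bass", "inst:drums"}

def make_prompt(genre, instruments, start_token="start"):
    """Build the control-token prefix fed to the language model."""
    assert genre in GENRES
    return [genre] + [i for i in instruments if i in INSTRUMENTS] + [start_token]

prompt = make_prompt("genre:metal", ["inst:distorted0", "inst:drums"])
print(prompt)  # ['genre:metal', 'inst:distorted0', 'inst:drums', 'start']
# A Transformer-XL trained with such prefixes learns to continue the sequence
# with tablature tokens matching the requested genre and instrumentation.
```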
- A Data-Driven Methodology for Considering Feasibility and Pairwise Likelihood in Deep Learning Based Guitar Tablature Transcription Systems [18.247508110198698]
In this work, symbolic tablature is leveraged to estimate the pairwise likelihood of notes on the guitar.
The output layer of a baseline tablature transcription model is reformulated, such that an inhibition loss can be incorporated to discourage the co-activation of unlikely note pairs.
This naturally enforces playability constraints for guitar, and yields tablature which is more consistent with the symbolic data used to estimate pairwise likelihoods.
arXiv Detail & Related papers (2022-04-17T22:10:37Z)
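A minimal sketch of the inhibition-loss idea in the entry above, assuming PyTorch: joint activations of string/fret class pairs are penalized in proportion to how unlikely the pair is. Shapes and weights are placeholder assumptions, not the paper's implementation.

```python
# Minimal sketch, not the paper's implementation: an inhibition loss that
# penalizes co-activation of string/fret pairs, weighted by how rarely the
# pair co-occurs in symbolic tablature data.
import torch

def inhibition_loss(act, inhib):
    """act: (batch, C) activations in [0, 1] over string/fret classes.
    inhib: (C, C) weights, large where a pair rarely co-occurs."""
    pair = act.unsqueeze(2) * act.unsqueeze(1)   # (batch, C, C) co-activations
    return (pair * inhib).sum(dim=(1, 2)).mean()

C = 6 * 20                  # e.g. 6 strings x 20 fret classes
act = torch.rand(8, C)      # placeholder transcription activations
inhib = torch.rand(C, C)    # placeholder: 1 - estimated pairwise likelihood
inhib.fill_diagonal_(0)     # a note never inhibits itself
print(inhibition_loss(act, inhib))
```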
- MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training [97.91071692716406]
Symbolic music understanding refers to understanding music from symbolic data such as MIDI.
MusicBERT is a large-scale pre-trained model for music understanding.
arXiv Detail & Related papers (2021-06-10T10:13:05Z)
- Music Gesture for Visual Sound Separation [121.36275456396075]
"Music Gesture" is a keypoint-based structured representation to explicitly model the body and finger movements of musicians when they perform music.
We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals.
arXiv Detail & Related papers (2020-04-20T17:53:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.