Sequence-to-Sequence Piano Transcription with Transformers
- URL: http://arxiv.org/abs/2107.09142v1
- Date: Mon, 19 Jul 2021 20:33:09 GMT
- Title: Sequence-to-Sequence Piano Transcription with Transformers
- Authors: Curtis Hawthorne, Ian Simon, Rigel Swavely, Ethan Manilow, Jesse Engel
- Abstract summary: We show that equivalent performance can be achieved using a generic encoder-decoder Transformer with standard decoding methods.
We demonstrate that the model can learn to translate spectrogram inputs directly to MIDI-like output events for several transcription tasks.
- Score: 6.177271244427368
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic Music Transcription has seen significant progress in recent years
by training custom deep neural networks on large datasets. However, these
models have required extensive domain-specific design of network architectures,
input/output representations, and complex decoding schemes. In this work, we
show that equivalent performance can be achieved using a generic
encoder-decoder Transformer with standard decoding methods. We demonstrate that
the model can learn to translate spectrogram inputs directly to MIDI-like
output events for several transcription tasks. This sequence-to-sequence
approach simplifies transcription by jointly modeling audio features and
language-like output dependencies, thus removing the need for task-specific
architectures. These results point toward possibilities for creating new Music
Information Retrieval models by focusing on dataset creation and labeling
rather than custom model design.
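To make the described approach concrete, below is a minimal, hypothetical sketch of a generic encoder-decoder Transformer that maps spectrogram frames to MIDI-like event tokens, in the spirit of the abstract. It is not the authors' implementation (the paper's exact architecture, vocabulary, and hyperparameters are not given here); all dimensions, names, and the use of PyTorch's `nn.Transformer` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpectrogramToMIDITranscriber(nn.Module):
    """Hypothetical sketch: generic encoder-decoder Transformer that
    translates spectrogram frames into MIDI-like event tokens."""

    def __init__(self, n_mels=229, vocab_size=1000, d_model=512,
                 nhead=8, num_layers=6, max_len=2048):
        super().__init__()
        # Project each spectrogram frame to the model dimension.
        self.input_proj = nn.Linear(n_mels, d_model)
        # Embeddings for output event tokens and (learned) positions.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.output_head = nn.Linear(d_model, vocab_size)

    def forward(self, spectrogram, event_tokens):
        # spectrogram: (batch, n_frames, n_mels)
        # event_tokens: (batch, n_tokens), previously decoded events
        src = self.input_proj(spectrogram)
        src = src + self.pos_emb(torch.arange(src.size(1), device=src.device))
        tgt = self.token_emb(event_tokens)
        tgt = tgt + self.pos_emb(torch.arange(tgt.size(1), device=tgt.device))
        # Causal mask so each output token attends only to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(
            tgt.size(1)).to(tgt.device)
        out = self.transformer(src, tgt, tgt_mask=causal)
        return self.output_head(out)  # logits over the MIDI-like event vocab


if __name__ == "__main__":
    model = SpectrogramToMIDITranscriber()
    spec = torch.randn(2, 512, 229)            # batch of mel-spectrogram frames
    events = torch.randint(0, 1000, (2, 64))   # shifted MIDI-like event tokens
    logits = model(spec, events)
    print(logits.shape)                        # torch.Size([2, 64, 1000])
```

At inference time, such a model would be decoded with standard autoregressive methods (e.g. greedy or beam search) rather than a task-specific decoding scheme, which is the simplification the abstract emphasizes.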