Byte Pair Encoding for Symbolic Music
- URL: http://arxiv.org/abs/2301.11975v3
- Date: Mon, 13 Nov 2023 18:24:41 GMT
- Title: Byte Pair Encoding for Symbolic Music
- Authors: Nathan Fradet, Nicolas Gutowski, Fabien Chhel, Jean-Pierre Briot
- Abstract summary: Byte Pair Encoding significantly decreases the sequence length while increasing the vocabulary size.
We leverage the embedding capabilities of such models with more expressive tokens, resulting in both better results and faster inference in generation and classification tasks.
The source code is shared on GitHub, along with a companion website.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When used with deep learning, the symbolic music modality is often coupled
with language model architectures. To do so, the music needs to be tokenized,
i.e. converted into a sequence of discrete tokens. This can be achieved by
different approaches, as music can be composed of simultaneous tracks and of
simultaneous notes with several attributes. Until now, the proposed
tokenizations rely on small vocabularies of tokens describing the note
attributes and time events, resulting in fairly long token sequences, and a
sub-optimal use of the embedding space of language models. Recent research has
put effort into reducing the overall sequence length by merging embeddings or
combining tokens. In this paper, we show that Byte Pair Encoding, a compression
technique widely used for natural language, significantly decreases the
sequence length while increasing the vocabulary size. By doing so, we leverage
the embedding capabilities of such models with more expressive tokens,
resulting in both better results and faster inference in generation and
classification tasks. The source code is shared on GitHub, along with a
companion website. Finally, BPE is directly implemented in MidiTok, allowing
the reader to easily benefit from this method.
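For intuition, the following is a minimal, self-contained sketch of BPE learning over symbolic-music token sequences: the most frequent pair of adjacent token IDs is repeatedly merged into a new token until a target vocabulary size is reached. The toy corpus, function names, and vocabulary sizes are illustrative assumptions, not the MidiTok implementation described in the paper.

```python
# Illustrative sketch of BPE over symbolic-music token IDs (not MidiTok's code).
from collections import Counter

def most_frequent_pair(sequences):
    """Count adjacent token pairs across all sequences and return the most frequent one."""
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    return counts.most_common(1)[0][0] if counts else None

def merge_pair(seq, pair, new_id):
    """Replace every non-overlapping occurrence of `pair` in `seq` with `new_id`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def learn_bpe(sequences, base_vocab_size, target_vocab_size):
    """Learn merges until the vocabulary reaches `target_vocab_size`."""
    merges, next_id = {}, base_vocab_size
    while next_id < target_vocab_size:
        pair = most_frequent_pair(sequences)
        if pair is None:
            break
        merges[pair] = next_id
        sequences = [merge_pair(s, pair, next_id) for s in sequences]
        next_id += 1
    return merges, sequences

# Toy corpus: integer IDs standing in for Pitch/Velocity/Duration events.
corpus = [[60, 4, 12, 64, 4, 12, 67, 4, 12], [60, 4, 12, 62, 4, 12]]
merges, compressed = learn_bpe(corpus, base_vocab_size=200, target_vocab_size=205)
print(len(corpus[0]), "->", len(compressed[0]))  # the sequence shrinks as pairs are merged
```

In practice a user would rely on MidiTok's built-in BPE support rather than a hand-rolled loop like this; the sketch only shows why merging frequent pairs shortens sequences while growing the vocabulary.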
Related papers
- Beyond Literal Token Overlap: Token Alignability for Multilinguality [53.680462160878925]
We propose subword token alignability as a new way to understand the impact and quality of multilingual tokenisation.
In particular, this metric predicts multilinguality much better when scripts are disparate and the overlap of literal tokens is low.
We recommend our subword token alignability metric for identifying optimal language pairs for cross-lingual transfer.
arXiv Detail & Related papers (2025-02-10T13:50:12Z) - SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator [65.62084602011596]
Large Language Models (LLMs) have exhibited exceptional performance across a spectrum of natural language processing tasks.
We have identified a key pattern: certain seemingly meaningless special tokens (i.e., separators) contribute disproportionately to attention scores compared to semantically meaningful tokens.
We introduce SepLLM, a plug-and-play framework that accelerates inference by compressing these segments and eliminating redundant tokens.
arXiv Detail & Related papers (2024-12-16T18:58:57Z) - Whisper-GPT: A Hybrid Representation Audio Large Language Model [1.2328446298523066]
Whisper-GPT is a generative large language model (LLM) for speech and music that allows working with continuous audio representations and discrete tokens simultaneously as part of a single architecture.
We show how our architecture improves the perplexity and negative log-likelihood scores for the next token prediction compared to a token-based LLM for speech and music.
arXiv Detail & Related papers (2024-12-16T05:03:48Z) - Batching BPE Tokenization Merges [55.2480439325792]
BatchBPE is an open-source, pure-Python implementation of the Byte Pair Encoding algorithm.
It can be used to train a high-quality tokenizer on a basic laptop.
arXiv Detail & Related papers (2024-08-05T09:37:21Z) - Nested Music Transformer: Sequentially Decoding Compound Tokens in Symbolic Music and Audio Generation [2.668651175000492]
Representing symbolic music with compound tokens, where each token consists of several different sub-tokens, offers the advantage of reducing sequence length.
We introduce the Nested Music Transformer (NMT), an architecture tailored for decoding compound tokens autoregressively, similar to processing flattened tokens, but with low memory usage.
Experimental results show that applying the NMT to compound tokens improves perplexity on various symbolic music datasets and on discrete audio tokens from the MAESTRO dataset (a sketch of the compound-token idea appears after this list).
arXiv Detail & Related papers (2024-08-02T11:02:38Z) - Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, namely hidden transfer, which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z) - Toucan: Token-Aware Character Level Language Modeling [44.85590844938571]
Toucan is an augmentation to character-level models that makes them "token-aware."
We show significant speed-ups in character generation without a loss in language modeling performance.
Our approach also results in more long sequences being tokenized as single items.
arXiv Detail & Related papers (2023-11-15T00:57:51Z) - Linear-Time Modeling of Linguistic Structure: An Order-Theoretic
Perspective [97.57162770792182]
Tasks that model the relation between pairs of tokens in a string are a vital part of understanding natural language.
We show that these exhaustive comparisons can be avoided, and, moreover, the complexity can be reduced to linear by casting the relation between tokens as a partial order over the string.
Our method predicts real numbers for each token in a string in parallel and sorts the tokens accordingly, resulting in total orders of the tokens in the string.
arXiv Detail & Related papers (2023-05-24T11:47:35Z) - From Words to Music: A Study of Subword Tokenization Techniques in
Symbolic Music Generation [1.9188864062289432]
Subword tokenization has been widely successful in text-based natural language processing tasks with Transformer-based models.
We apply subword tokenization on top of musical tokenization schemes and find that it enables the generation of longer songs in the same amount of time.
Our study suggests that subword tokenization is a promising technique for symbolic music generation and may have broader implications for music composition.
arXiv Detail & Related papers (2023-04-18T12:46:12Z) - Fast End-to-End Speech Recognition via a Non-Autoregressive Model and
Cross-Modal Knowledge Transferring from BERT [72.93855288283059]
We propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
The model consists of an encoder, a decoder, and a position-dependent summarizer (PDS).
arXiv Detail & Related papers (2021-02-15T15:18:59Z) - Compound Word Transformer: Learning to Compose Full-Song Music over
Dynamic Directed Hypergraphs [34.976342712112476]
We present a conceptually different approach that takes into account the type of the tokens, such as note types and metric types.
We show that the resulting model can be viewed as a learner over dynamic directed hypergraphs.
Our experiments show that, compared to state-of-the-art models, the proposed model converges 5-10 times faster during training.
arXiv Detail & Related papers (2021-01-07T06:57:34Z)
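Both the Nested Music Transformer and the Compound Word Transformer above group a note event's attributes into a single compound token rather than several consecutive tokens, which is what shortens the sequence. Below is a minimal, hypothetical sketch of that idea; the module name, vocabulary sizes, and the choice of summing sub-token embeddings are illustrative assumptions, not either paper's implementation.

```python
# Illustrative sketch (not the papers' code): a compound token groups the sub-tokens
# of one note event into a single sequence position, so a piece with N notes yields
# roughly N steps instead of ~4*N flattened tokens.
import torch
import torch.nn as nn

class CompoundEmbedding(nn.Module):
    """Embed (pitch, velocity, duration, position) sub-tokens and fuse them into one vector per step."""
    def __init__(self, vocab_sizes, d_model=256):
        super().__init__()
        self.embeddings = nn.ModuleList(nn.Embedding(v, d_model) for v in vocab_sizes)

    def forward(self, sub_tokens):  # sub_tokens: (batch, steps, n_subtokens)
        parts = [emb(sub_tokens[..., i]) for i, emb in enumerate(self.embeddings)]
        return torch.stack(parts, dim=0).sum(dim=0)  # (batch, steps, d_model)

# Hypothetical vocabulary sizes for pitch, velocity, duration, and position sub-tokens.
embed = CompoundEmbedding(vocab_sizes=[128, 32, 64, 96])
notes = torch.randint(0, 32, (1, 10, 4))  # 10 note events -> 10 steps (vs ~40 flattened tokens)
print(embed(notes).shape)                 # torch.Size([1, 10, 256])
```

A decoder over these fused step embeddings then predicts the sub-tokens of the next event, which is the part each paper designs differently.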