A Vocabulary-Free Multilingual Neural Tokenizer for End-to-End Task Learning
- URL: http://arxiv.org/abs/2204.10815v1
- Date: Fri, 22 Apr 2022 16:50:49 GMT
- Title: A Vocabulary-Free Multilingual Neural Tokenizer for End-to-End Task Learning
- Authors: Md Mofijul Islam, Gustavo Aguilar, Pragaash Ponnusamy, Clint Solomon Mathialagan, Chengyuan Ma, Chenlei Guo
- Abstract summary: Subword tokenization is a commonly used input pre-processing step in most recent NLP models.
We propose a vocabulary-free neural tokenizer by distilling segmentation information from subword tokenization.
Our tokenizer consistently improves performance on multilingual (NLI) and code-switching (sentiment analysis) tasks.
- Score: 8.052271364177988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subword tokenization is a commonly used input pre-processing step in most
recent NLP models. However, it limits the models' ability to leverage
end-to-end task learning. Its frequency-based vocabulary creation compromises
tokenization in low-resource languages, leading models to produce suboptimal
representations. Additionally, the dependency on a fixed vocabulary limits the
subword models' adaptability across languages and domains. In this work, we
propose a vocabulary-free neural tokenizer by distilling segmentation
information from heuristic-based subword tokenization. We pre-train our
character-based tokenizer by processing unique words from a multilingual corpus,
thereby extensively increasing word diversity across languages. Unlike the
predefined and fixed vocabularies in subword methods, our tokenizer allows
end-to-end task learning, resulting in optimal task-specific tokenization. The
experimental results show that replacing the subword tokenizer with our neural
tokenizer consistently improves performance on multilingual (NLI) and
code-switching (sentiment analysis) tasks, with larger gains in low-resource
languages. Additionally, our neural tokenizer exhibits a robust performance on
downstream tasks when adversarial noise is present (typos and misspelling),
further increasing the initial improvements over statistical subword
tokenizers.
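
As a hedged illustration of the distillation step described above (not the authors' released code): one simple way to transfer segmentation knowledge from a heuristic subword tokenizer into a vocabulary-free, character-based model is to convert each subword segmentation into per-character boundary labels that the neural tokenizer learns to predict. The helper below and the hardcoded example pieces are assumptions for illustration only; in practice the pieces would come from a tokenizer such as BPE or SentencePiece.

```python
# Minimal sketch: turning a subword segmentation into character-level
# boundary labels (1 = starts a segment, 0 = continues one), which a
# character-based neural tokenizer can be trained to predict and later
# refine end-to-end with the downstream task.
from typing import List


def boundary_labels(word: str, pieces: List[str]) -> List[int]:
    """Label each character of `word` by whether it begins a subword piece."""
    assert "".join(pieces) == word, "pieces must concatenate back to the word"
    labels: List[int] = []
    for piece in pieces:
        labels.append(1)                       # first character of a piece
        labels.extend([0] * (len(piece) - 1))  # remaining characters
    return labels


if __name__ == "__main__":
    # Suppose a heuristic tokenizer segments "tokenization" as ["token", "ization"].
    word, pieces = "tokenization", ["token", "ization"]
    print(boundary_labels(word, pieces))
    # [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```

Because the targets are defined over characters rather than a fixed vocabulary, the resulting tokenizer has no out-of-vocabulary tokens, which is what allows the segmentation to be adapted end-to-end to each downstream task.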
Related papers
- MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization [81.83460411131931]
In multilingual settings, non-Latin scripts and low-resource languages are usually disadvantaged in terms of language models' utility, efficiency, and cost.
We propose MAGNET, an adaptive gradient-based subword tokenization method, to reduce over-segmentation in multilingual settings.
arXiv Detail & Related papers (2024-07-11T18:59:21Z)
- An Analysis of BPE Vocabulary Trimming in Neural Machine Translation [56.383793805299234]
Vocabulary trimming is a postprocessing step that replaces rare subwords with their component subwords.
We show that vocabulary trimming fails to improve performance and can even cause heavy degradation.
arXiv Detail & Related papers (2024-03-30T15:29:49Z)
- Rethinking Tokenization: Crafting Better Tokenizers for Large Language Models [0.0]
Tokenization significantly influences the performance of language models (LMs).
This paper traces the evolution of tokenizers from word-level to subword-level, analyzing how they balance tokens and types.
The Less-is-Better (LiB) model could be a new approach to LLM tokenization.
arXiv Detail & Related papers (2024-03-01T10:03:07Z)
- Task-Adaptive Tokenization: Enhancing Long-Form Text Generation Efficacy in Mental Health and Beyond [66.07002187192448]
We propose task-adaptive tokenization as a way to adapt the generation pipeline to the specifics of a downstream task.
We introduce a strategy for building a specialized vocabulary and a vocabulary merging protocol.
We find that our task-adaptive tokenization approach brings a significant improvement in generation performance while using up to 60% fewer tokens.
arXiv Detail & Related papers (2023-10-09T00:20:59Z)
- Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation and Overlap Across Languages [3.716965622352967]
We propose new criteria to evaluate the quality of lexical representation and vocabulary overlap observed in sub-word tokenizers.
Our findings show that vocabulary overlap across languages can actually be detrimental to certain downstream tasks.
arXiv Detail & Related papers (2023-05-26T18:06:49Z)
- CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models [77.45934004406283]
We systematically study decompounding, the task of splitting compound words into their constituents.
We introduce a dataset of 255k compound and non-compound words across 56 diverse languages obtained from Wiktionary.
We introduce a novel methodology to train dedicated models for decompounding.
arXiv Detail & Related papers (2023-05-23T16:32:27Z)
- Semantic Tokenizer for Enhanced Natural Language Processing [32.605667552915854]
We present a novel tokenizer that uses semantics to drive vocabulary construction.
The tokenizer more than doubles the number of wordforms represented in the vocabulary.
arXiv Detail & Related papers (2023-04-24T19:33:41Z)
- Impact of Tokenization on Language Models: An Analysis for Turkish [2.4660652494309936]
We train tokenizers and pretrain medium-sized language models using the RoBERTa pretraining procedure on the Turkish split of the OSCAR corpus.
Our experiments, supported by statistical tests, reveal that the morphological-level tokenizer performs competitively with de facto tokenizers.
We find that increasing the vocabulary size improves the performance of morphological and word-level tokenizers more than that of de facto tokenizers.
arXiv Detail & Related papers (2022-04-19T12:01:46Z)
- Between words and characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP [22.772546707304766]
We show how hybrid approaches of words and characters as well as subword-based approaches based on learned segmentation have been proposed and evaluated.
We conclude that there is no silver-bullet solution for all applications, and there likely never will be.
arXiv Detail & Related papers (2021-12-20T13:04:18Z)
- Charformer: Fast Character Transformers via Gradient-based Subword Tokenization [50.16128796194463]
We propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model.
We introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters.
We additionally introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level.
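A hedged, simplified sketch of the GBST idea follows (not the official Charformer implementation; the module name, block sizes, and dimensions are assumptions): mean-pool character embeddings into candidate blocks of several sizes, score each candidate, and mix them with a softmax so the choice of subword granularity stays differentiable.

```python
# Simplified sketch of soft, gradient-based subword tokenization:
# pool character embeddings into blocks of several sizes, score each
# candidate block, and take a softmax-weighted mixture per position.
import torch
import torch.nn.functional as F
from torch import nn


class SoftBlockMixer(nn.Module):
    def __init__(self, dim: int, block_sizes=(1, 2, 3, 4)):
        super().__init__()
        self.block_sizes = block_sizes
        self.score = nn.Linear(dim, 1)  # relevance score per candidate block

    def forward(self, char_emb: torch.Tensor) -> torch.Tensor:
        # char_emb: (batch, seq_len, dim) character embeddings
        seq_len = char_emb.size(1)
        candidates = []
        for b in self.block_sizes:
            # mean-pool sliding blocks of size b along the sequence axis
            pooled = F.avg_pool1d(char_emb.transpose(1, 2), kernel_size=b,
                                  stride=1).transpose(1, 2)
            # right-pad so every block size contributes seq_len positions
            pooled = F.pad(pooled, (0, 0, 0, seq_len - pooled.size(1)))
            candidates.append(pooled)
        stacked = torch.stack(candidates, dim=2)         # (batch, seq, sizes, dim)
        weights = F.softmax(self.score(stacked), dim=2)  # soft choice of block size
        return (weights * stacked).sum(dim=2)            # (batch, seq, dim)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)          # 16 characters, 64-dim embeddings
    print(SoftBlockMixer(64)(x).shape)  # torch.Size([2, 16, 64])
```

The real GBST additionally offsets and downsamples the pooled blocks; this sketch keeps only the soft, differentiable mixing that makes the segmentation learnable end-to-end.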
arXiv Detail & Related papers (2021-06-23T22:24:14Z)
- On the Importance of Word Order Information in Cross-lingual Sequence Labeling [80.65425412067464]
Cross-lingual models that fit the word order of the source language might fail to handle target languages.
We investigate whether making models insensitive to the word order of the source language can improve the adaptation performance in target languages.
arXiv Detail & Related papers (2020-01-30T03:35:44Z)