A Multi-dimensional Evaluation of Tokenizer-free Multilingual Pretrained
Models
- URL: http://arxiv.org/abs/2210.07111v1
- Date: Thu, 13 Oct 2022 15:47:09 GMT
- Title: A Multi-dimensional Evaluation of Tokenizer-free Multilingual Pretrained
Models
- Authors: Jimin Sun, Patrick Fernandes, Xinyi Wang, Graham Neubig
- Abstract summary: We show that subword-based models might still be the most practical choice in many settings.
We encourage future work in tokenizer-free methods to consider these factors when designing and evaluating new models.
- Score: 87.7086269902562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work on tokenizer-free multilingual pretrained models shows promising
results in improving cross-lingual transfer and reducing engineering overhead
(Clark et al., 2022; Xue et al., 2022). However, these works mainly focus on
reporting accuracy on a limited set of tasks and data settings, placing less
emphasis on other important factors when tuning and deploying the models in
practice, such as memory usage, inference speed, and fine-tuning data
robustness. We attempt to fill this gap by performing a comprehensive empirical
comparison of multilingual tokenizer-free and subword-based models considering
these various dimensions. Surprisingly, we find that subword-based models might
still be the most practical choice in many settings, achieving better
performance at lower inference latency and memory usage. Based on these
results, we encourage future work in tokenizer-free methods to consider these
factors when designing and evaluating new models.
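The latency and memory dimensions above can be probed with a small benchmark. Below is a minimal sketch, not the paper's evaluation code: it assumes the HuggingFace transformers and PyTorch libraries, a CUDA GPU, and the publicly released google/mt5-small (subword) and google/byt5-small (byte-level, tokenizer-free) checkpoints.

```python
# Hedged sketch: compare inference latency and peak GPU memory of a subword
# model (mT5) against a byte-level, tokenizer-free model (ByT5).
import time
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def benchmark(model_name: str, text: str, n_runs: int = 10) -> None:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to("cuda").eval()
    inputs = tokenizer(text, return_tensors="pt").to("cuda")

    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(n_runs):
            model.generate(**inputs, max_new_tokens=32)
    torch.cuda.synchronize()
    latency_ms = (time.perf_counter() - start) / n_runs * 1000

    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    print(f"{model_name}: {inputs['input_ids'].shape[1]} input units, "
          f"{latency_ms:.1f} ms/generation, {peak_mib:.0f} MiB peak memory")

text = "Tokenizer-free models operate directly on bytes or characters."
benchmark("google/mt5-small", text)   # subword (SentencePiece) baseline
benchmark("google/byt5-small", text)  # tokenizer-free: many more input units per sentence
```

Note that the same sentence becomes far more input units for ByT5 than for mT5, which is one source of the latency and memory gap discussed in the abstract.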
Related papers
- Ensembling Finetuned Language Models for Text Classification [55.15643209328513]
Finetuning is a common practice across different communities to adapt pretrained models to particular tasks.
Ensembles of neural networks are typically used to boost performance and provide reliable uncertainty estimates.
We present a metadataset with predictions from five large finetuned models on six datasets and report results of different ensembling strategies (a minimal probability-averaging sketch appears after this list).
arXiv Detail & Related papers (2024-10-25T09:15:54Z)
- EmbedLLM: Learning Compact Representations of Large Language Models [28.49433308281983]
We propose EmbedLLM, a framework designed to learn compact vector representations of Large Language Models.
We introduce an encoder-decoder approach for learning such embeddings, along with a systematic framework to evaluate their effectiveness.
Empirical results show that EmbedLLM outperforms prior methods in model routing both in accuracy and latency.
arXiv Detail & Related papers (2024-10-03T05:43:24Z)
- Collaborative decoding of critical tokens for boosting factuality of large language models [57.504894664689]
Finetuned and aligned models show improved instruction-following and safe-generation abilities.
The common practice of using sampling during generation also increases the chance of hallucination.
We introduce a collaborative decoding framework to harness the high factuality within pretrained models through the concept of critical tokens.
arXiv Detail & Related papers (2024-02-28T01:53:37Z) - On the Analysis of Cross-Lingual Prompt Tuning for Decoder-based
Multilingual Model [49.81429697921861]
We study the interaction between parameter-efficient fine-tuning (PEFT) and cross-lingual tasks in multilingual autoregressive models.
We show that prompt tuning is more effective in enhancing the performance of low-resource languages than fine-tuning.
arXiv Detail & Related papers (2023-11-14T00:43:33Z) - Unleashing the Multilingual Encoder Potential: Boosting Zero-Shot
Performance via Probability Calibration [12.424785560515094]
Pretrained multilingual encoder models can directly perform zero-shot multilingual tasks or linguistic probing by reformulating the input examples into cloze-style prompts.
This method is limited by the model's bias toward predicting label words that occurred frequently during pretraining.
We combine the models with calibration techniques that modify the probabilities of the label words predicted by the models (a minimal calibration sketch appears after this list).
arXiv Detail & Related papers (2023-10-08T08:31:05Z) - The Languini Kitchen: Enabling Language Modelling Research at Different
Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z)
- Leveraging Synthetic Targets for Machine Translation [5.302421715411791]
We show that training models on synthetic targets outperforms training on the actual ground-truth data.
We provide a preliminary analysis of whether this boost in performance is linked to ease of optimization or to the more deterministic nature of the predictions.
arXiv Detail & Related papers (2023-05-07T07:42:22Z)
- Multi Task Learning For Zero Shot Performance Prediction of Multilingual Models [12.759281077118567]
Massively multilingual Transformer-based language models have been observed to be surprisingly effective at zero-shot transfer across languages.
We build upon some of the existing techniques for predicting the zero-shot performance on a task, by modeling it as a multi-task learning problem.
arXiv Detail & Related papers (2022-05-12T14:47:03Z)
- Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low-resource languages is a challenging task because the patterns are hard to predict.
This work presents a comparison of a neural model and character language models with varying amounts of target language data.
Our usage scenario is interactive correction with nearly zero initial training examples, improving the models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
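For the ensembling entry above, the core operation is combining per-class probabilities from several finetuned models. The following is a minimal sketch of probability averaging, not the cited paper's code; the prediction arrays are stand-ins.

```python
# Hedged sketch of a simple ensembling strategy over finetuned classifiers:
# average the per-class probabilities predicted by each model, then take argmax.
import numpy as np

def ensemble_predict(prob_matrices: list[np.ndarray]) -> np.ndarray:
    """prob_matrices: one (n_examples, n_classes) probability array per model."""
    mean_probs = np.mean(np.stack(prob_matrices, axis=0), axis=0)
    return mean_probs.argmax(axis=-1)

# Stand-in predictions from three hypothetical finetuned models (2 examples, 3 classes).
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])
m2 = np.array([[0.5, 0.3, 0.2], [0.2, 0.5, 0.3]])
m3 = np.array([[0.6, 0.3, 0.1], [0.3, 0.3, 0.4]])
print(ensemble_predict([m1, m2, m3]))  # [0 1]
```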
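For the probability-calibration entry, a widely used recipe (shown here as an illustrative stand-in, not necessarily the cited paper's exact method) rescales each label word's probability by its probability under a content-free input, countering the pretraining frequency bias.

```python
# Hedged sketch of label-word probability calibration for cloze-style zero-shot
# classification: divide by the label word's probability under a content-free prompt.
import numpy as np

def calibrate(label_probs: np.ndarray, content_free_probs: np.ndarray) -> np.ndarray:
    """Both arrays have shape (n_label_words,); returns renormalized scores."""
    scores = label_probs / content_free_probs
    return scores / scores.sum()

# Stand-in numbers: the model favors the first label word a priori.
raw = np.array([0.55, 0.45])    # P(label word | actual input)
prior = np.array([0.70, 0.30])  # P(label word | content-free input such as "N/A")
print(calibrate(raw, prior))    # roughly [0.34, 0.66] -- calibration flips the decision
```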