Vocabulary Transfer for Medical Texts
- URL: http://arxiv.org/abs/2208.02554v1
- Date: Thu, 4 Aug 2022 09:53:22 GMT
- Title: Vocabulary Transfer for Medical Texts
- Authors: Vladislav D. Mosin, Ivan P. Yamshchikov
- Abstract summary: Vocabulary transfer is a transfer learning subtask in which language models are fine-tuned with corpus-specific tokenization instead of the default one.
We demonstrate that vocabulary transfer is especially beneficial for medical text processing.
- Score: 7.195824023358536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vocabulary transfer is a transfer learning subtask in which language models
are fine-tuned with corpus-specific tokenization instead of the default
tokenization used during pretraining. This usually improves the resulting
performance of the model, and in this paper we demonstrate that vocabulary
transfer is especially beneficial for medical text processing. Using three
different medical natural language processing datasets, we show that vocabulary
transfer provides up to ten additional percentage points of downstream
classifier accuracy.
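As a rough illustration of the technique, the sketch below shows how a fine-tuning pipeline might initialize embeddings for a corpus-specific tokenizer, assuming the HuggingFace transformers library; the local tokenizer path and the copy/average heuristics are our illustrative assumptions, not necessarily the authors' exact procedure.
```python
# Sketch: vocabulary transfer. Retrain the tokenizer on the target (medical)
# corpus, then initialize the new embedding matrix from the pretrained one
# before fine-tuning. Heuristics here are illustrative, not the paper's own.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

old_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
# Hypothetical path to a tokenizer retrained on the medical corpus.
new_tok = AutoTokenizer.from_pretrained("./medical-tokenizer")

old_vocab = old_tok.get_vocab()
old_emb = model.get_input_embeddings().weight.detach().clone()
# Fallback initialization: the mean of all pretrained embeddings.
new_emb = old_emb.mean(dim=0).repeat(len(new_tok), 1)

for token, new_id in new_tok.get_vocab().items():
    if token in old_vocab:
        # Token shared by both vocabularies: copy its pretrained embedding.
        new_emb[new_id] = old_emb[old_vocab[token]]
    else:
        # New token: average the embeddings of its old-vocabulary pieces.
        pieces = old_tok.encode(token, add_special_tokens=False)
        if pieces:
            new_emb[new_id] = old_emb[pieces].mean(dim=0)

model.resize_token_embeddings(len(new_tok))
model.get_input_embeddings().weight.data.copy_(new_emb)
# The model is then fine-tuned on the medical texts with new_tok as usual.
```
Copying shared tokens and averaging over a new token's old subword pieces is one common transfer heuristic; fine-tuning then proceeds with the new tokenizer.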
Related papers
- Cross-Lingual Transfer from Related Languages: Treating Low-Resource
Maltese as Multilingual Code-Switching [9.435669487585917]
We focus on Maltese, a Semitic language, with substantial influences from Arabic, Italian, and English, and notably written in Latin script.
We present a novel dataset annotated with word-level etymology.
We show that conditional transliteration based on word etymology yields the best results, surpassing fine-tuning with raw Maltese or Maltese processed with non-selective pipelines.
arXiv Detail & Related papers (2024-01-30T11:04:36Z)
- Don't lose the message while paraphrasing: A study on content preserving
style transfer [61.38460184163704]
Content preservation is critical for real-world applications of style transfer.
We compare various style transfer models, using formality transfer as the example domain.
We conduct a careful comparative study of several state-of-the-art style transfer techniques.
arXiv Detail & Related papers (2023-08-17T15:41:08Z)
- Direct Speech-to-speech Translation without Textual Annotation using
Bottleneck Features [13.44542301438426]
We propose a direct speech-to-speech translation model which can be trained without any textual annotation or content information.
Experiments on Mandarin-Cantonese speech translation demonstrate the feasibility of the proposed approach.
arXiv Detail & Related papers (2022-12-12T10:03:10Z)
- Detecting Text Formality: A Study of Text Classification Approaches [78.11745751651708]
This work presents the first, to our knowledge, systematic study of formality detection, covering statistical, neural-based, and Transformer-based machine learning methods.
We conducted three types of experiments: monolingual, multilingual, and cross-lingual.
The study shows that the Char BiLSTM model outperforms Transformer-based ones on the monolingual and multilingual formality classification tasks.
arXiv Detail & Related papers (2022-04-19T16:23:07Z)
- Oolong: Investigating What Makes Transfer Learning Hard with Controlled
Studies [21.350999136803843]
We systematically transform the language of the GLUE benchmark, altering one axis of cross-lingual variation at a time.
We find that models can largely recover from syntactic-style shifts, but cannot recover from vocabulary misalignment.
Our experiments provide insights into the factors of cross-lingual transfer that researchers should most focus on when designing language transfer scenarios.
arXiv Detail & Related papers (2022-02-24T19:00:39Z)
- Fine-Tuning Transformers: Vocabulary Transfer [0.30586855806896046]
Transformers are responsible for the vast majority of recent advances in natural language processing.
This paper studies whether corpus-specific tokenization used for fine-tuning improves the resulting performance of the model.
arXiv Detail & Related papers (2021-12-29T14:22:42Z)
- Transcribing Natural Languages for The Deaf via Neural Editing Programs [84.0592111546958]
We study the task of glossification, whose aim is to transcribe natural spoken language sentences into ordered sign language glosses for the Deaf and hard-of-hearing community.
Previous sequence-to-sequence language models often fail to capture the rich connections between the two distinct languages, leading to unsatisfactory transcriptions.
We observe that despite different grammars, glosses effectively simplify sentences to ease communication for deaf people, while sharing a large portion of their vocabulary with the original sentences.
arXiv Detail & Related papers (2021-12-17T16:21:49Z)
- DEEP: DEnoising Entity Pre-training for Neural Machine Translation [123.6686940355937]
It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus.
We propose DEEP, a DEnoising Entity Pre-training method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences.
arXiv Detail & Related papers (2021-11-14T17:28:09Z)
- AVocaDo: Strategy for Adapting Vocabulary to Downstream Domain [17.115865763783336]
We propose to treat the vocabulary as an optimizable parameter, updating it by expanding it with domain-specific terms.
We keep the embeddings of the added words from overfitting to downstream data with a regularization term that draws on knowledge learned by a pretrained language model.
arXiv Detail & Related papers (2021-10-26T06:26:01Z)
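A rough sketch of the vocabulary-adaptation mechanism described in the AVocaDo summary above, assuming the HuggingFace transformers library; the domain terms, the subword-average initialization, and the plain L2 regularizer are illustrative stand-ins rather than the paper's exact method.
```python
# Sketch: expand a pretrained vocabulary with domain-specific terms and
# regularize the added embeddings so they do not overfit downstream data.
# The term list and the L2 penalty are illustrative stand-ins.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

domain_terms = ["angioplasty", "hyperlipidemia"]  # hypothetical examples
# Record each term's subword decomposition before the terms become atomic.
pieces = {t: tok.encode(t, add_special_tokens=False) for t in domain_terms}
tok.add_tokens(domain_terms)
model.resize_token_embeddings(len(tok))

emb = model.get_input_embeddings().weight
anchors = {}
with torch.no_grad():
    for term in domain_terms:
        new_id = tok.convert_tokens_to_ids(term)
        anchor = emb[pieces[term]].mean(dim=0)
        emb[new_id] = anchor          # initialize from the subword pieces
        anchors[new_id] = anchor.clone()

def vocab_regularizer(model, anchors, weight=0.1):
    # L2 penalty keeping the added embeddings near their pretrained anchors;
    # a simple stand-in for the paper's actual regularization term.
    emb = model.get_input_embeddings().weight
    return weight * sum((emb[i] - a).pow(2).sum() for i, a in anchors.items())
```
During fine-tuning, vocab_regularizer(model, anchors) would simply be added to the downstream task loss.
- Grounded Compositional Outputs for Adaptive Language Modeling [59.02706635250856]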
A language model's vocabulary, typically selected before training and permanently fixed later, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary.
arXiv Detail & Related papers (2020-09-24T07:21:14Z)
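A toy sketch of a compositional output layer in the spirit of the summary above: output embeddings are pooled from character embeddings, so the layer's parameter count does not depend on the word vocabulary. The GRU pooling choice is our assumption; the paper's full model is grounded in richer information than spelling alone.
```python
# Toy sketch: compositional output embeddings. A word's output vector is
# pooled from its character embeddings, so parameters do not grow with the
# word vocabulary and unseen words can still be scored.
import torch
import torch.nn as nn

class CompositionalOutput(nn.Module):
    def __init__(self, n_chars: int, dim: int):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def word_embeddings(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (num_words, max_word_len) padded character ids.
        _, h = self.encoder(self.char_emb(char_ids))
        return h.squeeze(0)  # (num_words, dim)

    def forward(self, hidden: torch.Tensor, char_ids: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, dim). Scores over whatever word set char_ids spells
        # out, including words never seen during training.
        return hidden @ self.word_embeddings(char_ids).T

# Example: score 2 hidden states against a 5-word output set.
layer = CompositionalOutput(n_chars=128, dim=32)
scores = layer(torch.randn(2, 32), torch.randint(0, 128, (5, 12)))
print(scores.shape)  # torch.Size([2, 5])
```
- Translation Artifacts in Cross-lingual Transfer Learning [51.66536640084888]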
We show that machine translation can introduce subtle artifacts that have a notable impact on existing cross-lingual models.
In natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them.
We also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively.
arXiv Detail & Related papers (2020-04-09T17:54:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.