Task Grouping for Multilingual Text Recognition
- URL: http://arxiv.org/abs/2210.07423v1
- Date: Thu, 13 Oct 2022 23:54:23 GMT
- Title: Task Grouping for Multilingual Text Recognition
- Authors: Jing Huang, Kevin J Liang, Rama Kovvuri, Tal Hassner
- Abstract summary: We propose an automatic method for multilingual text recognition with a task grouping and assignment module using Gumbel-Softmax.
Experiments on MLT19 lend evidence to our hypothesis that a middle ground between combining all tasks and separating all tasks achieves a better configuration of task grouping/separation.
- Score: 28.036892501896983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing OCR methods focus on alphanumeric characters due to the
popularity of English and numbers, as well as their corresponding datasets. On
extending the characters to more languages, recent methods have shown that
training different scripts with different recognition heads can greatly improve
the end-to-end recognition accuracy compared to combining characters from all
languages in the same recognition head. However, we postulate that similarities
between some languages could allow sharing of model parameters and benefit from
joint training. Determining language groupings, however, is not immediately
obvious. To this end, we propose an automatic method for multilingual text
recognition with a task grouping and assignment module using Gumbel-Softmax,
introducing a task grouping loss and weighted recognition loss to allow for
simultaneous training of the models and grouping modules. Experiments on MLT19
lend evidence to our hypothesis that there is a middle ground between combining
every task together and separating every task that achieves a better
configuration of task grouping/separation.
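As a rough illustration of the idea (not the authors' released code), the sketch below shows a Gumbel-Softmax assignment of language tasks to a smaller set of shared recognition heads, with a weighted recognition loss and an illustrative grouping regularizer. All module and function names, and the specific form of the grouping loss, are assumptions made for this sketch.

```python
# Minimal PyTorch sketch (assumed names, not the paper's implementation):
# each language task picks one of K shared recognition heads via Gumbel-Softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAssigner(nn.Module):
    """Learnable soft assignment of T tasks (languages) to K recognition heads."""
    def __init__(self, num_tasks: int, num_heads: int, tau: float = 1.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_tasks, num_heads))
        self.tau = tau

    def forward(self, hard: bool = False) -> torch.Tensor:
        # One (approximately) one-hot row per task; differentiable w.r.t. the logits.
        return F.gumbel_softmax(self.logits, tau=self.tau, hard=hard, dim=-1)

def weighted_recognition_loss(per_head_losses: torch.Tensor, assignment: torch.Tensor) -> torch.Tensor:
    # per_head_losses[t, k]: recognition loss of head k on task t's batch.
    return (assignment * per_head_losses).sum(dim=-1).mean()

def grouping_loss(assignment: torch.Tensor) -> torch.Tensor:
    # Illustrative regularizer only: softly penalize the number of occupied heads,
    # so tasks are encouraged to share heads rather than each claiming its own.
    head_usage = assignment.sum(dim=0)            # expected number of tasks per head
    return (1.0 - torch.exp(-head_usage)).sum()   # soft count of non-empty heads

# Toy usage: 7 scripts, at most 4 shared heads.
assigner = TaskAssigner(num_tasks=7, num_heads=4)
per_head_losses = torch.rand(7, 4)                # stand-in for real per-head losses
A = assigner()
total = weighted_recognition_loss(per_head_losses, A) + 0.1 * grouping_loss(A)
total.backward()
```

In this sketch the assignment and the recognition heads are trained jointly, which mirrors the paper's goal of learning the grouping and the models simultaneously; the trade-off between sharing and separation is controlled by the regularizer weight.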
Related papers
- Improving Multi-lingual Alignment Through Soft Contrastive Learning [9.454626745893798]
We propose a novel method to align multi-lingual embeddings based on the similarity of sentences measured by a pre-trained mono-lingual embedding model.
Given translation sentence pairs, we train a multi-lingual model so that the similarity between cross-lingual embeddings follows the similarity of sentences measured by the mono-lingual teacher model.
arXiv Detail & Related papers (2024-05-25T09:46:07Z)
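A minimal sketch of the soft contrastive alignment summarized in the entry above, assuming PyTorch and hypothetical function names: the student's cross-lingual sentence similarities are pushed, via a KL-based soft objective, toward the similarities produced by a mono-lingual teacher. This is an illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def soft_alignment_loss(src_emb: torch.Tensor, tgt_emb: torch.Tensor,
                        teacher_emb: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    # src_emb, tgt_emb: (B, d) multilingual embeddings of translation pairs.
    # teacher_emb: (B, d) mono-lingual teacher embeddings of the source sentences.
    student_sim = F.normalize(src_emb, dim=-1) @ F.normalize(tgt_emb, dim=-1).T   # (B, B)
    teacher_sim = F.normalize(teacher_emb, dim=-1) @ F.normalize(teacher_emb, dim=-1).T
    # Soft targets from the teacher replace the usual hard one-hot contrastive labels.
    return F.kl_div(
        F.log_softmax(student_sim / tau, dim=-1),
        F.softmax(teacher_sim / tau, dim=-1),
        reduction="batchmean",
    )
```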
- FonMTL: Towards Multitask Learning for the Fon Language [1.9370453715137865]
We present the first exploratory approach to multitask learning for enhancing model capabilities in Natural Language Processing for the Fon language.
We leverage two language model heads as encoders to build shared representations for the inputs, and we use linear layer blocks for classification relative to each task.
Our results on the NER and POS tasks for Fon show competitive (or better) performance compared to several multilingual pretrained language models finetuned on single tasks.
arXiv Detail & Related papers (2023-08-28T03:26:21Z)
- Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning [98.60739735409243]
Cross-lingual transfer of language models trained on high-resource languages like English has been widely studied for many NLP tasks.
We introduce XSGD, a parallel and large-scale multilingual conversation dataset, for cross-lingual alignment pretraining.
To facilitate aligned cross-lingual representations, we develop an efficient prompt-tuning-based method for learning alignment prompts.
arXiv Detail & Related papers (2023-04-03T18:46:01Z)
- CSSL-MHTR: Continual Self-Supervised Learning for Scalable Multi-script Handwritten Text Recognition [16.987008461171065]
We explore the potential of continual self-supervised learning to alleviate the catastrophic forgetting problem in handwritten text recognition.
Our method consists of adding intermediate layers, called adapters, for each task, and efficiently distilling knowledge from the previous model while learning the current task.
We attain state-of-the-art performance on English, Italian and Russian scripts, whilst adding only a few parameters per task.
arXiv Detail & Related papers (2023-03-16T14:27:45Z)
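A rough sketch of the two ingredients mentioned in the entry above, per-task adapters and distillation from the previous model, under assumed PyTorch module names; the actual architecture and distillation targets in the paper differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small residual bottleneck added for each new task/script."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.relu(self.down(x)))

def distillation_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    # Keep current features close to those of the frozen previous-task model
    # to limit catastrophic forgetting while the new adapter is trained.
    return F.mse_loss(student_feat, teacher_feat.detach())
```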
- Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation [80.16548523140025]
We extend the vanilla pretrain-finetune pipeline with an extra code-switching restore task to bridge the gap between the pretrain and finetune stages.
Our approach could narrow the cross-lingual sentence representation distance and improve low-frequency word translation with trivial computational cost.
arXiv Detail & Related papers (2022-04-16T16:08:38Z)
- Rethinking End-to-End Evaluation of Decomposable Tasks: A Case Study on Spoken Language Understanding [101.24748444126982]
Decomposable tasks are complex and comprise a hierarchy of sub-tasks.
Existing benchmarks, however, typically hold out examples for only the surface-level sub-task.
We propose a framework to construct robust test sets using coordinate ascent over sub-task specific utility functions.
arXiv Detail & Related papers (2021-06-29T02:53:59Z)
- A Multiplexed Network for End-to-End, Multilingual OCR [20.818532124822713]
We propose an E2E approach, Multiplexed Multilingual Mask TextSpotter, that performs script identification at the word level and handles different scripts with different recognition heads.
Experiments show that our method outperforms the single-head model with a similar number of parameters in end-to-end recognition tasks.
We believe that our work is a step towards an end-to-end trainable and scalable multilingual multi-purpose OCR system.
arXiv Detail & Related papers (2021-03-29T23:53:49Z)
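A toy sketch of word-level script routing with per-script heads, loosely following the summary in the entry above; the linear "heads" stand in for real sequence recognition heads and all names are assumptions.

```python
import torch
import torch.nn as nn

class MultiplexedRecognizer(nn.Module):
    def __init__(self, feat_dim: int, charset_sizes: dict):
        super().__init__()
        self.script_classifier = nn.Linear(feat_dim, len(charset_sizes))  # word-level script ID
        self.heads = nn.ModuleDict({
            script: nn.Linear(feat_dim, n_chars)   # stand-in per-script recognition heads
            for script, n_chars in charset_sizes.items()
        })
        self.scripts = list(charset_sizes)

    def forward(self, word_feat: torch.Tensor):
        # Route the word feature to the head of the predicted script.
        script = self.scripts[self.script_classifier(word_feat).argmax(-1).item()]
        return script, self.heads[script](word_feat)

# Toy usage on a pooled word feature vector.
model = MultiplexedRecognizer(256, {"latin": 96, "arabic": 80, "cjk": 5000})
script, logits = model(torch.randn(256))
```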
- FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding [85.29270319872597]
We propose an enhanced fusion method that takes cross-lingual data as input for XLM finetuning.
During inference, the model makes predictions based on the text input in the target language and its translation in the source language.
To tackle this issue, we propose an additional KL-divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language.
arXiv Detail & Related papers (2020-09-10T22:42:15Z)
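The KL-divergence self-teaching term described in the entry above might look roughly like the following (assumed function name, simplified temperature handling; not the paper's code).

```python
import torch
import torch.nn.functional as F

def self_teaching_kl(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
                     tau: float = 1.0) -> torch.Tensor:
    # KL between the model's predictions and auto-generated soft pseudo-labels
    # for the translated text in the target language.
    return F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits.detach() / tau, dim=-1),
        reduction="batchmean",
    ) * (tau ** 2)
```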
- Pre-training via Paraphrasing [96.79972492585112]
We introduce MARGE, a pre-trained sequence-to-sequence model learned with an unsupervised multi-lingual paraphrasing objective.
We show it is possible to jointly learn to do retrieval and reconstruction, given only a random initialization.
For example, with no additional task-specific training we achieve BLEU scores of up to 35.8 for document translation.
arXiv Detail & Related papers (2020-06-26T14:43:43Z)
- CoSDA-ML: Multi-Lingual Code-Switching Data Augmentation for Zero-Shot Cross-Lingual NLP [68.2650714613869]
We propose a data augmentation framework to generate multi-lingual code-switching data to fine-tune mBERT.
Compared with the existing work, our method does not rely on bilingual sentences for training, and requires only one training process for multiple target languages.
arXiv Detail & Related papers (2020-06-11T13:15:59Z)
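A minimal sketch of dictionary-based code-switching augmentation in the spirit of the summary above; whether the method uses exactly this substitution scheme is an assumption, and the dictionaries, ratio, and function name are placeholders.

```python
import random

def code_switch(tokens, dictionaries, ratio: float = 0.3, seed=None):
    """Randomly replace a fraction of tokens with word-level translations,
    so no parallel sentences are required."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if rng.random() < ratio:
            lang = rng.choice(list(dictionaries))                  # pick a target language
            out.append(dictionaries[lang].get(tok.lower(), tok))   # fall back to the original token
        else:
            out.append(tok)
    return out

# Toy usage with tiny stand-in dictionaries.
dicts = {"de": {"good": "gut", "movie": "Film"}, "es": {"good": "bueno", "movie": "película"}}
print(code_switch("a good movie overall".split(), dicts, ratio=0.5, seed=0))
```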
- Learning not to Discriminate: Task Agnostic Learning for Improving Monolingual and Code-switched Speech Recognition [12.354292498112347]
We present further improvements over our previous work by using domain adversarial learning to train task models.
Our proposed technique leads to reductions in Word Error Rates (WER) in monolingual and code-switched test sets across three language pairs.
arXiv Detail & Related papers (2020-06-09T13:45:30Z)
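Domain adversarial learning of the kind mentioned in the last entry is commonly implemented with a gradient reversal layer; a generic PyTorch sketch (not the authors' code) is shown below.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scaled, sign-flipped gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lam)

# A domain discriminator on reversed features tries to tell monolingual from
# code-switched inputs, pushing the shared encoder toward task-agnostic features.
discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
features = torch.randn(8, 256, requires_grad=True)
domain_logits = discriminator(grad_reverse(features, lam=0.5))
```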
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.