Efficient Utilization of Large Pre-Trained Models for Low Resource ASR
- URL: http://arxiv.org/abs/2210.15445v3
- Date: Thu, 17 Aug 2023 13:49:08 GMT
- Title: Efficient Utilization of Large Pre-Trained Models for Low Resource ASR
- Authors: Peter Vieting, Christoph Lüscher, Julian Dierkes, Ralf Schlüter, Hermann Ney
- Abstract summary: We study a challenging low resource conversational telephony speech corpus from the medical domain in Vietnamese and German.
We show the benefits of using unsupervised techniques beyond simple fine-tuning of large pre-trained models.
- Score: 31.57758062484189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised representation learning has recently helped automatic speech
recognition (ASR) to tackle tasks with limited labeled data. Following this,
hardware limitations and applications give rise to the question of how to take
advantage of large pre-trained models efficiently and reduce their complexity.
In this work, we study a challenging low resource conversational telephony
speech corpus from the medical domain in Vietnamese and German. We show the
benefits of using unsupervised techniques beyond simple fine-tuning of large
pre-trained models, discuss how to adapt them to a practical telephony task
including bandwidth transfer and investigate different data conditions for
pre-training and fine-tuning. We outperform the project baselines by 22%
relative using pretraining techniques. Further gains of 29% can be achieved by
refinements of architecture and training and 6% by adding 0.8 h of in-domain
adaptation data.
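The abstract mentions simple fine-tuning of large pre-trained models as the baseline approach. As a hedged, minimal sketch (not the authors' pipeline), fine-tuning a public multilingual wav2vec 2.0 checkpoint with a CTC head via Hugging Face transformers could look as follows; the processor path is a placeholder.

```python
# Minimal sketch: CTC fine-tuning of a pre-trained wav2vec 2.0 model on a small
# labeled corpus. Checkpoint and processor names are assumptions, not the paper's setup.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("my-target-lang-processor")  # hypothetical
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",          # one public multilingual pre-trained model
    vocab_size=len(processor.tokenizer),
    pad_token_id=processor.tokenizer.pad_token_id,
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()                  # keep the convolutional front-end fixed
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(waveforms, transcripts, sampling_rate=16_000):
    """One gradient step on a batch of raw audio and reference transcripts."""
    inputs = processor(waveforms, sampling_rate=sampling_rate,
                       return_tensors="pt", padding=True)
    labels = processor.tokenizer(transcripts, return_tensors="pt", padding=True).input_ids
    labels = labels.masked_fill(labels == processor.tokenizer.pad_token_id, -100)
    loss = model(input_values=inputs.input_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```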
Related papers
- TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models [32.83440439290383]
We introduce TAIL (Task-specific Adapters for Imitation Learning), a framework for efficient adaptation to new control tasks.
Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques.
Our experiments in large-scale language-conditioned manipulation tasks suggest that TAIL with LoRA can achieve the best post-adaptation performance.
arXiv Detail & Related papers (2023-10-09T17:49:50Z)
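The TAIL entry above relies on LoRA-style parameter-efficient fine-tuning. A minimal, generic low-rank adapter around a frozen linear layer (not the TAIL codebase; rank and scaling are assumptions) can be sketched as:

```python
# LoRA sketch: the frozen base weight is augmented with a trainable low-rank update
# B @ A, so only r * (d_in + d_out) parameters are learned per adapted layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():     # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap, e.g., an attention projection of the frozen pre-trained backbone:
# layer.q_proj = LoRALinear(layer.q_proj, r=8)
```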
- GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks [73.88590165742721]
We propose a novel adversarial training technique that exploits auxiliary tasks under a limited set of training data.
Our approach extends single-task models into multi-task models during the min-max optimization of adversarial training.
We demonstrate that guided multi-task learning is an actionable and promising avenue to push further the boundaries of model robustness.
arXiv Detail & Related papers (2023-02-06T16:23:24Z)
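As a rough, generic illustration of the min-max setup described in the GAT entry above (single-step FGSM for brevity; the paper's Pareto-optimal task weighting is not reproduced, and all module names are assumptions):

```python
# Adversarial training step with a shared encoder, a main head, and an auxiliary head.
import torch
import torch.nn.functional as F

def adversarial_multitask_step(encoder, main_head, aux_head, x, y_main, y_aux,
                               optimizer, eps=8 / 255, aux_weight=0.5):
    # Inner maximization: perturb the input to increase the main-task loss.
    x_adv = x.clone().detach().requires_grad_(True)
    inner_loss = F.cross_entropy(main_head(encoder(x_adv)), y_main)
    grad, = torch.autograd.grad(inner_loss, x_adv)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Outer minimization: joint main + auxiliary objective on the adversarial example.
    feats = encoder(x_adv)
    loss = (F.cross_entropy(main_head(feats), y_main)
            + aux_weight * F.cross_entropy(aux_head(feats), y_aux))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```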
- Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability [53.27240222619834]
Knowledge Distillation as Efficient Pre-training aims to efficiently transfer the learned feature representation from pre-trained models to new student models for future downstream tasks.
Our method performs comparably with supervised pre-training counterparts on 3 downstream tasks and 9 downstream datasets while requiring 10x less data and 5x less pre-training time.
arXiv Detail & Related papers (2022-03-10T06:23:41Z)
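A hedged sketch of the core idea in the entry above, feature-level distillation on unlabeled data as a pre-training stage (the paper's exact loss and projection setup may differ; all names below are placeholders):

```python
# The student is trained to match the frozen teacher's feature representations;
# the distilled student is then fine-tuned on downstream tasks.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, projector, images, optimizer):
    """One step of matching student features to teacher features with an MSE loss."""
    teacher.eval()
    with torch.no_grad():
        t_feat = teacher(images)           # pooled backbone features of the teacher
    s_feat = projector(student(images))    # project student dim -> teacher dim
    loss = F.mse_loss(s_feat, t_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```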
- Efficient Adapter Transfer of Self-Supervised Speech Models for Automatic Speech Recognition [0.1909808926064466]
Transformer-based models such as wav2vec 2.0 and HuBERT are leading the field in the speech domain.
We propose applying adapters to wav2vec 2.0 to reduce the number of parameters required for downstream ASR tasks.
arXiv Detail & Related papers (2022-02-07T14:20:54Z)
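The adapter approach above inserts small trainable modules into the otherwise frozen wav2vec 2.0 encoder. A minimal, generic bottleneck adapter (dimensions and placement are assumptions, not the paper's exact configuration):

```python
# Houlsby-style bottleneck adapter added after a transformer sub-layer; only the
# adapters and the CTC output layer are trained, the backbone stays frozen.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int = 1024, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)     # start as a near-identity residual branch
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Freezing pattern during fine-tuning (sketch):
# for p in wav2vec_model.parameters(): p.requires_grad = False
# for p in adapter_modules.parameters(): p.requires_grad = True
```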
- Improving Neural Machine Translation by Denoising Training [95.96569884410137]
We present a simple and effective pretraining strategy, Denoising Training (DoT), for neural machine translation.
We update the model parameters with source- and target-side denoising tasks at the early stage and then tune the model normally.
Experiments show that DoT consistently improves neural machine translation performance across 12 bilingual and 16 multilingual directions.
arXiv Detail & Related papers (2022-01-19T00:11:38Z)
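As a hedged illustration of a token-level denoising task of the kind DoT applies on the source and target sides (the specific noise functions and probabilities are assumptions):

```python
# Randomly mask or drop tokens; the model is trained to reconstruct the clean sequence.
import random

def add_noise(tokens, mask_token="<mask>", mask_prob=0.15, drop_prob=0.05):
    noised = []
    for tok in tokens:
        r = random.random()
        if r < drop_prob:
            continue                      # drop the token entirely
        elif r < drop_prob + mask_prob:
            noised.append(mask_token)     # replace with a mask symbol
        else:
            noised.append(tok)
    return noised

# Early stage: train the seq2seq model on (add_noise(src) -> src) and
# (add_noise(tgt) -> tgt) reconstruction; afterwards, tune normally on (src -> tgt).
```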
- BigSSL: Exploring the Frontier of Large-Scale Semi-Supervised Learning for Automatic Speech Recognition [126.5605160882849]
We find that the combination of pre-training, self-training and scaling up model size greatly increases data efficiency.
We report on the universal benefits gained from using big pre-trained and self-trained models for a large set of downstream tasks.
arXiv Detail & Related papers (2021-09-27T17:59:19Z)
- Self-Supervised Pretraining Improves Self-Supervised Pretraining [83.1423204498361]
Self-supervised pretraining requires expensive and lengthy computation and large amounts of data, and is sensitive to data augmentation.
This paper explores Hierarchical PreTraining (HPT), which decreases convergence time and improves accuracy by initializing the pretraining process with an existing pretrained model.
We show HPT converges up to 80x faster, improves accuracy across tasks, and improves the robustness of the self-supervised pretraining process to changes in the image augmentation policy or amount of pretraining data.
arXiv Detail & Related papers (2021-03-23T17:37:51Z)
- Recognizing More Emotions with Less Data Using Self-supervised Transfer Learning [0.0]
We propose a novel transfer learning method for speech emotion recognition.
With as few as 125 examples per emotion class, we were able to reach higher accuracy than a strong baseline trained on 8 times more data.
arXiv Detail & Related papers (2020-11-11T06:18:31Z)
- Don't Stop Pretraining: Adapt Language Models to Domains and Tasks [81.99843216550306]
We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks.
A second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains.
Adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining.
arXiv Detail & Related papers (2020-04-23T04:21:19Z)
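The entry above describes continued pretraining on unlabeled domain or task data. A hedged sketch of domain-adaptive pretraining (DAPT) with the Hugging Face Trainer; the output directory, dataset variable, and hyperparameters are placeholders, not the paper's setup:

```python
# Continue the masked-LM objective of an already pre-trained RoBERTa on unlabeled
# in-domain text before task fine-tuning.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="dapt-out",             # placeholder
                         per_device_train_batch_size=16,
                         num_train_epochs=1)

trainer = Trainer(model=model, args=args,
                  train_dataset=in_domain_dataset,          # placeholder: tokenized unlabeled domain text
                  data_collator=collator)
trainer.train()

# Task-adaptive pretraining (TAPT) is the same recipe run on the unlabeled text of
# the task corpus itself, optionally after DAPT.
```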
This list is automatically generated from the titles and abstracts of the papers on this site.