Knowledge Transfer from Pre-trained Language Models to Cif-based Speech
Recognizers via Hierarchical Distillation
- URL: http://arxiv.org/abs/2301.13003v2
- Date: Sun, 28 May 2023 16:13:52 GMT
- Title: Knowledge Transfer from Pre-trained Language Models to Cif-based Speech
Recognizers via Hierarchical Distillation
- Authors: Minglun Han, Feilong Chen, Jing Shi, Shuang Xu, Bo Xu
- Abstract summary: Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks.
We propose hierarchical knowledge distillation (HKD) for continuous integrate-and-fire (CIF) based ASR models.
Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale pre-trained language models (PLMs) have shown great potential in
natural language processing tasks. Leveraging the capabilities of PLMs to
enhance automatic speech recognition (ASR) systems has also emerged as a
promising research direction. However, previous works may be limited by the
inflexible structures of PLMs and by insufficient utilization of the knowledge within PLMs. To
alleviate these problems, we propose the hierarchical knowledge distillation
(HKD) on the continuous integrate-and-fire (CIF) based ASR models. To transfer
knowledge from PLMs to the ASR models, HKD employs cross-modal knowledge
distillation with contrastive loss at the acoustic level and knowledge
distillation with regression loss at the linguistic level. Compared with the
original CIF-based model, our method achieves 15% and 9% relative error rate
reduction on the AISHELL-1 and LibriSpeech datasets, respectively.
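As a rough illustration of the two distillation levels described in the abstract, the sketch below pairs an InfoNCE-style contrastive loss over token-synchronous CIF acoustic embeddings and PLM token embeddings with an L2 regression loss over linguistic states. All function names, tensor shapes, loss weights, and the temperature are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def acoustic_contrastive_loss(acoustic_emb, text_emb, temperature=0.1):
    """InfoNCE-style contrastive loss aligning token-synchronous CIF
    acoustic embeddings (T, D) with PLM token embeddings (T, D)."""
    a = F.normalize(acoustic_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = a @ t.T / temperature                       # (T, T) similarities
    targets = torch.arange(a.size(0), device=a.device)   # diagonal positives
    return F.cross_entropy(logits, targets)

def linguistic_regression_loss(decoder_states, plm_states):
    """L2 regression pulling ASR linguistic states toward PLM hidden states."""
    return F.mse_loss(decoder_states, plm_states)

def hkd_loss(acoustic_emb, text_emb, decoder_states, plm_states,
             lambda_a=1.0, lambda_l=1.0):
    """Combine both levels; the weights here are illustrative, not tuned."""
    return (lambda_a * acoustic_contrastive_loss(acoustic_emb, text_emb)
            + lambda_l * linguistic_regression_loss(decoder_states, plm_states))
```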
Related papers
- An Effective Automated Speaking Assessment Approach to Mitigating Data Scarcity and Imbalanced Distribution
Self-supervised learning (SSL) has shown stellar performance compared to traditional methods.
However, SSL-based automated speaking assessment (ASA) systems face at least three data-related challenges.
These challenges include limited annotated data, an uneven distribution of learner proficiency levels, and non-uniform score intervals between different CEFR proficiency levels.
arXiv Detail & Related papers (2024-04-11T09:06:49Z)
- It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition
Large language models (LLMs) can be successfully used for generative error correction (GER) on top of the automatic speech recognition (ASR) output.
In this work, we aim to overcome the limitation that GER sees only the ASR text by infusing acoustic information before generating the predicted transcription, through a novel late-fusion solution termed Uncertainty-Aware Dynamic Fusion (UADF).
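As a loose sketch of what an uncertainty-driven late fusion could look like (the actual UADF formulation, calibration, and fusion rule are defined in that paper and likely differ), one option is to weight the acoustic distribution more heavily when the LLM is uncertain:

```python
import torch

def uncertainty_aware_fusion(llm_logits, asr_logits, tau=0.5):
    """Fuse per-step token distributions from an LLM and an ASR decoder.
    Both inputs are (vocab_size,) logits for one decoding step; `tau` is
    an assumed threshold controlling how fast mass shifts to acoustics."""
    p_llm = torch.softmax(llm_logits, dim=-1)
    p_asr = torch.softmax(asr_logits, dim=-1)
    # Normalized entropy in [0, 1] as the LLM's uncertainty signal.
    entropy = -(p_llm * torch.log(p_llm + 1e-9)).sum()
    entropy = entropy / torch.log(torch.tensor(float(p_llm.numel())))
    alpha = torch.clamp(entropy / tau, max=1.0)  # dynamic fusion weight
    return (1.0 - alpha) * p_llm + alpha * p_asr
```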
arXiv Detail & Related papers (2024-02-08T07:21:45Z)
- Large Language Models are Efficient Learners of Noise-Robust Speech Recognition
Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR).
In this work, we extend the benchmark to noisy conditions and investigate whether we can teach LLMs to perform denoising for GER.
Experiments on various recent LLMs demonstrate that our approach achieves up to 53.9% correction improvement in terms of word error rate.
arXiv Detail & Related papers (2024-01-19T01:29:27Z)
- HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models
We introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction.
The proposed benchmark contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs of N-best hypotheses and corresponding accurate transcriptions.
With a reasonable prompt, an LLM's generative capability can even correct tokens that are missing from the N-best list.
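As a hedged sketch of this idea (the benchmark's actual prompt format is not shown here; `build_ger_prompt` and its wording are hypothetical), assembling an error-correction prompt from an N-best list might look like this:

```python
def build_ger_prompt(nbest):
    """Assemble a generative error-correction prompt from an N-best list.
    `nbest` is a list of hypothesis strings from an ASR decoder; the
    instruction wording is illustrative, not an official prompt."""
    lines = [f"{i + 1}. {hyp}" for i, hyp in enumerate(nbest)]
    return (
        "The following are N-best hypotheses from a speech recognizer.\n"
        + "\n".join(lines)
        + "\nReport the most likely correct transcription, fixing any "
          "recognition errors, even words missing from every hypothesis:"
    )

# Example usage with a hypothetical 3-best list:
prompt = build_ger_prompt([
    "the cat sat on the mat",
    "the cat sat on a mat",
    "the cats at on the mat",
])
```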
arXiv Detail & Related papers (2023-09-27T14:44:10Z)
- Exploring the Integration of Large Language Models into Automatic Speech Recognition Systems: An Empirical Study
This paper explores the integration of Large Language Models (LLMs) into Automatic Speech Recognition (ASR) systems.
Our primary focus is to investigate the potential of using an LLM's in-context learning capabilities to enhance the performance of ASR systems.
arXiv Detail & Related papers (2023-07-13T02:31:55Z)
- Knowledge Rumination for Pre-trained Language Models
We propose a new paradigm dubbed Knowledge Rumination to help a pre-trained language model utilize its related latent knowledge without retrieving it from an external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z)
- Sequence-level self-learning with multiple hypotheses
We develop new self-learning techniques with an attention-based sequence-to-sequence (seq2seq) model for automatic speech recognition (ASR).
In contrast to conventional unsupervised learning approaches, we adopt the multi-task learning (MTL) framework.
Our experiment results show that our method can reduce the WER on the British speech data from 14.55% to 10.36% compared to the baseline model trained with the US English data only.
arXiv Detail & Related papers (2021-12-10T20:47:58Z)
- Knowledge distillation from language model to acoustic model: a hierarchical multi-task learning approach
Cross-modal knowledge distillation is a major topic of speech recognition research.
We propose an acoustic model structure with multiple auxiliary output layers for cross-modal distillation.
We extend the proposed method to a hierarchical distillation method using LMs trained on different units.
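A minimal sketch of such a structure, assuming a Transformer encoder with a character-level auxiliary head tapped at a shallow depth and a subword-level head at a deeper one; the depths, dimensions, and vocabulary sizes are invented for illustration, not taken from the paper:

```python
import torch.nn as nn

class AcousticModelWithAuxHeads(nn.Module):
    """Encoder with auxiliary output layers at intermediate depths.
    Each head would be supervised by a language model trained on a
    different unit (characters shallow, subwords deep)."""
    def __init__(self, dim=256, vocab_char=60, vocab_subword=5000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.lower = nn.TransformerEncoder(layer, num_layers=6)
        self.upper = nn.TransformerEncoder(layer, num_layers=6)
        self.char_head = nn.Linear(dim, vocab_char)        # char-LM target
        self.subword_head = nn.Linear(dim, vocab_subword)  # subword-LM target

    def forward(self, x):
        # x: (batch, time, dim) acoustic features.
        h_low = self.lower(x)
        h_high = self.upper(h_low)
        return self.char_head(h_low), self.subword_head(h_high)
```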
arXiv Detail & Related papers (2021-10-20T08:42:10Z)
- An Approach to Improve Robustness of NLP Systems against ASR Errors
Speech-enabled systems typically first convert audio to text through an automatic speech recognition model and then feed the text to downstream natural language processing modules.
The errors of the ASR system can seriously degrade the performance of the NLP modules.
Previous work has shown that data augmentation, injecting ASR-style noise during training, is an effective way to address this problem.
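One common way to realize this idea, sketched below under assumptions (a word-level confusion table and made-up error probabilities, not that paper's method), is to randomly substitute, delete, and insert words in clean training text:

```python
import random

def inject_asr_noise(words, sub_table, p_sub=0.1, p_del=0.03, p_ins=0.03):
    """Corrupt a clean transcript with ASR-like errors for augmentation.
    `sub_table` maps a word to acoustically confusable alternatives
    (e.g., mined from real ASR confusion pairs); probabilities are
    illustrative defaults."""
    noisy = []
    for w in words:
        r = random.random()
        if r < p_del:
            continue                              # deletion error
        if r < p_del + p_sub and w in sub_table:
            w = random.choice(sub_table[w])       # substitution error
        noisy.append(w)
        if random.random() < p_ins:
            noisy.append(random.choice(words))    # insertion error
    return noisy

# e.g. inject_asr_noise("recognize speech".split(),
#                       {"recognize": ["wreck a nice"]})
```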
arXiv Detail & Related papers (2021-03-25T05:15:43Z)
- Joint Contextual Modeling for ASR Correction and Language Understanding
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained on small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.