CTC-synchronous Training for Monotonic Attention Model
- URL: http://arxiv.org/abs/2005.04712v3
- Date: Thu, 6 Aug 2020 10:07:04 GMT
- Title: CTC-synchronous Training for Monotonic Attention Model
- Authors: Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara
- Abstract summary: Backward probabilities cannot be leveraged in the alignment marginalization process during training due to the left-to-right dependency in the decoder.
We propose CTC-synchronous training (CTC-ST), in which MoChA uses CTC alignments to learn optimal monotonic alignments.
The entire model is jointly optimized so that the expected boundaries from MoChA are synchronized with the alignments.
- Score: 43.0382262234792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monotonic chunkwise attention (MoChA) has been studied for the online
streaming automatic speech recognition (ASR) based on a sequence-to-sequence
framework. In contrast to connectionist temporal classification (CTC), backward
probabilities cannot be leveraged in the alignment marginalization process
during training due to left-to-right dependency in the decoder. This results in
the error propagation of alignments to subsequent token generation. To address
this problem, we propose CTC-synchronous training (CTC-ST), in which MoChA uses
CTC alignments to learn optimal monotonic alignments. Reference CTC alignments
are extracted from a CTC branch sharing the same encoder with the decoder. The
entire model is jointly optimized so that the expected boundaries from MoChA
are synchronized with the alignments. Experimental evaluations of the TEDLIUM
release-2 and Librispeech corpora show that the proposed method significantly
improves recognition, especially for long utterances. We also show that CTC-ST
can bring out the full potential of SpecAugment for MoChA.
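The synchronization idea in the abstract can be sketched in a few lines: MoChA's expected token boundaries (under its monotonic attention distribution) are pulled toward reference boundaries extracted from the CTC branch. This is a minimal illustrative sketch; the absolute-distance loss, the function names, and the toy data are assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of the CTC-synchronous training (CTC-ST) objective.
# The distance measure (absolute difference) and all names are
# illustrative assumptions, not the paper's verbatim loss.

def expected_boundary(attn_weights):
    """Expected boundary frame index under a monotonic attention
    distribution (a list of per-frame probabilities for one token)."""
    return sum(t * p for t, p in enumerate(attn_weights))

def ctc_sync_loss(mocha_attn, ctc_boundaries):
    """Mean absolute distance between MoChA's expected boundaries and
    the reference boundaries from the CTC branch, over output tokens."""
    assert len(mocha_attn) == len(ctc_boundaries)
    total = 0.0
    for attn, b_ref in zip(mocha_attn, ctc_boundaries):
        total += abs(expected_boundary(attn) - b_ref)
    return total / len(ctc_boundaries)

# Toy example: two output tokens over a 5-frame utterance.
attn = [
    [0.0, 0.8, 0.2, 0.0, 0.0],  # expected boundary = 1.2
    [0.0, 0.0, 0.1, 0.8, 0.1],  # expected boundary = 3.0
]
ctc_ref = [1, 3]  # boundaries from the CTC branch's alignment
loss = ctc_sync_loss(attn, ctc_ref)  # = (0.2 + 0.0) / 2 = 0.1
```

In the paper's setup both the CTC branch and the MoChA decoder share one encoder, so minimizing a term like this jointly with the main losses encourages the decoder's emission points to track the CTC alignment.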
Related papers
- CR-CTC: Consistency regularization on CTC for improved speech recognition [18.996929774821822]
Connectionist Temporal Classification (CTC) is a widely used method for automatic speech recognition (ASR)
However, it often falls short in recognition performance compared to transducer models or systems combining CTC with an attention-based encoder-decoder (CTC/AED).
We propose the Consistency-Regularized CTC (CR-CTC), which enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram.
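The consistency term described above can be sketched as a symmetric KL divergence between per-frame CTC output distributions from two augmented views; the augmentation and the CTC model themselves are omitted here, and the divergence choice and toy distributions are illustrative assumptions.

```python
import math

# Hypothetical sketch of a CR-CTC-style consistency term: symmetric KL
# between the per-frame CTC distributions from two augmented views of
# the same utterance. The toy probability vectors stand in for real
# model outputs.

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def consistency_loss(dist_a, dist_b):
    """Symmetric KL averaged over frames; dist_* are lists of
    per-frame probability vectors over the CTC vocabulary."""
    total = 0.0
    for p, q in zip(dist_a, dist_b):
        total += 0.5 * (kl(p, q) + kl(q, p))
    return total / len(dist_a)

# Two frames, vocabulary of 3 symbols (e.g. blank + 2 labels).
view_a = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
view_b = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]
loss = consistency_loss(view_a, view_b)  # small positive value
```

The loss is zero when the two views agree exactly and grows as their distributions diverge, which is what pushes the model toward augmentation-invariant predictions.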
arXiv Detail & Related papers (2024-10-07T14:56:07Z)
- Fast Context-Biasing for CTC and Transducer ASR models with CTC-based Word Spotter [57.64003871384959]
This work presents a new approach to fast context-biasing with CTC-based Word Spotter.
The proposed method matches CTC log-probabilities against a compact context graph to detect potential context-biasing candidates.
The results demonstrate a significant acceleration of the context-biasing recognition with a simultaneous improvement in F-score and WER.
arXiv Detail & Related papers (2024-06-11T09:37:52Z)
- Self-distillation Regularized Connectionist Temporal Classification Loss for Text Recognition: A Simple Yet Effective Approach [14.69981874614434]
We show how to better optimize a text recognition model from the perspective of loss functions.
CTC-based methods, widely used in practice for their good balance between performance and inference speed, still grapple with accuracy degradation.
We propose a self-distillation scheme for CTC-based models to address this issue.
arXiv Detail & Related papers (2023-08-17T06:32:57Z)
- Bayes risk CTC: Controllable CTC alignment in Sequence-to-Sequence tasks [63.189632935619535]
Bayes risk CTC (BRCTC) is proposed to enforce the desired characteristics of the predicted alignment.
By using BRCTC with another preference for early emissions, we obtain an improved performance-latency trade-off for online models.
arXiv Detail & Related papers (2022-10-14T03:55:36Z)
- CTC Alignments Improve Autoregressive Translation [145.90587287444976]
We argue that CTC does in fact make sense for translation if applied in a joint CTC/attention framework.
Our proposed joint CTC/attention models outperform pure-attention baselines across six benchmark translation tasks.
arXiv Detail & Related papers (2022-10-11T07:13:50Z)
- Alignment Knowledge Distillation for Online Streaming Attention-based Speech Recognition [46.69852287267763]
This article describes an efficient training method for online streaming attention-based encoder-decoder (AED) automatic speech recognition (ASR) systems.
The proposed method significantly reduces recognition errors and emission latency simultaneously.
The best MoChA system shows performance comparable to that of the RNN-transducer (RNN-T).
arXiv Detail & Related papers (2021-02-28T08:17:38Z)
- Intermediate Loss Regularization for CTC-based Speech Recognition [58.33721897180646]
We present a simple and efficient auxiliary loss function for automatic speech recognition (ASR) based on the connectionist temporal classification (CTC) objective.
We evaluate the proposed method on various corpora, reaching a word error rate (WER) of 9.9% on the WSJ corpus and a character error rate (CER) of 5.2% on the AISHELL-1 corpus.
arXiv Detail & Related papers (2021-02-05T15:01:03Z)
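The auxiliary-loss idea in the last entry is commonly realized by interpolating the final CTC loss with CTC losses attached to intermediate encoder layers. The sketch below shows only that interpolation; the weight value and the placeholder loss numbers are illustrative assumptions, since computing real CTC losses would require a full model.

```python
# Hypothetical sketch of intermediate-loss regularization for CTC:
# the final-layer CTC loss is interpolated with the mean of auxiliary
# CTC losses from intermediate encoder layers. The loss values and the
# weight are placeholders, not figures from the paper.

def intermediate_ctc_objective(final_loss, intermediate_losses, weight=0.3):
    """(1 - weight) * final CTC loss + weight * mean of the auxiliary
    CTC losses computed at intermediate encoder layers."""
    aux = sum(intermediate_losses) / len(intermediate_losses)
    return (1 - weight) * final_loss + weight * aux

# Placeholder losses: final layer at 2.0, two intermediate taps.
total = intermediate_ctc_objective(2.0, [3.0, 2.5], weight=0.3)
# total = 0.7 * 2.0 + 0.3 * 2.75 = 2.225
```

The auxiliary terms act purely as a training-time regularizer; at inference only the final layer's CTC output is used.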
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.