Complementary Language Model and Parallel Bi-LRNN for False Trigger
Mitigation
- URL: http://arxiv.org/abs/2008.08113v1
- Date: Tue, 18 Aug 2020 18:21:33 GMT
- Title: Complementary Language Model and Parallel Bi-LRNN for False Trigger
Mitigation
- Authors: Rishika Agarwal, Xiaochuan Niu, Pranay Dighe, Srikanth Vishnubhotla,
Sameer Badaskar, Devang Naik
- Abstract summary: False trigger mitigation (FTM) is the process of detecting false trigger events and responding appropriately to the user.
We propose a novel solution that introduces a parallel ASR decoding process with a special language model trained on "out-of-domain" data sources.
- Score: 9.960986677222358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: False triggers in voice assistants are unintended invocations of the
assistant, which not only degrade the user experience but may also compromise
privacy. False trigger mitigation (FTM) is the process of detecting false
trigger events and responding appropriately to the user. In this paper, we propose
a novel solution to the FTM problem by introducing a parallel ASR decoding
process with a special language model trained on "out-of-domain" data
sources. This language model is complementary to the existing language model
optimized for the assistant task. A bidirectional lattice RNN (Bi-LRNN)
classifier trained from the lattices generated by the complementary language
model shows a $38.34\%$ relative reduction of the false trigger (FT) rate at
the fixed rate of $0.4\%$ false suppression (FS) of correct invocations,
compared to the current Bi-LRNN model. In addition, we propose to train a
parallel Bi-LRNN model based on the decoding lattices from both language
models, and examine various ways of implementation. The resulting model leads
to further reduction in the false trigger rate by $10.8\%$.
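The abstract describes training a parallel Bi-LRNN model over decoding lattices from both language models and examining various ways of combining them. One simple late-fusion baseline for such a setup is a weighted combination of the two classifiers' output logits; a minimal sketch (the function name, weights, and fusion rule here are illustrative assumptions, not the paper's actual implementation):

```python
import math

def sigmoid(x):
    """Logistic function mapping a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fused_false_trigger_score(logit_main, logit_comp,
                              weights=(0.5, 0.5), bias=0.0):
    """Hypothetical score-level fusion of two Bi-LRNN classifiers.

    logit_main: logit from the classifier on the assistant-task LM lattice.
    logit_comp: logit from the classifier on the complementary
                "out-of-domain" LM lattice.
    Returns the fused probability that the invocation is a false trigger.
    """
    # Weighted sum of the two logits, squashed to a probability.
    z = weights[0] * logit_main + weights[1] * logit_comp + bias
    return sigmoid(z)

# Example: the two classifiers disagree; the fused score sits in between.
score = fused_false_trigger_score(2.0, -1.0)
```

In practice the fusion weights would be learned jointly with the classifiers, and the decision threshold on the fused score tuned to hit a target operating point, such as the fixed 0.4% false-suppression rate reported in the abstract.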
Related papers
- SUTA-LM: Bridging Test-Time Adaptation and Language Model Rescoring for Robust ASR [58.31068047426522]
Test-Time Adaptation (TTA) adjusts models during inference to mitigate domain mismatch. Recent work explores combining TTA with external language models, using techniques like beam search rescoring or generative error correction. We propose SUTA-LM, a simple yet effective extension of SUTA with language model rescoring. Experiments on 18 diverse ASR datasets show that SUTA-LM achieves robust results across a wide range of domains.
arXiv Detail & Related papers (2025-06-10T02:50:20Z) - Unlocking the Potential of Model Merging for Low-Resource Languages [66.7716891808697]
Adapting large language models to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT).
We propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training.
Experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data.
arXiv Detail & Related papers (2024-07-04T15:14:17Z) - Generative error correction for code-switching speech recognition using
large language models [49.06203730433107]
Code-switching (CS) speech refers to the phenomenon of mixing two or more languages within the same sentence.
We propose to leverage large language models (LLMs) and lists of hypotheses generated by an ASR to address the CS problem.
arXiv Detail & Related papers (2023-10-17T14:49:48Z) - HyPoradise: An Open Baseline for Generative Speech Recognition with
Large Language Models [81.56455625624041]
We introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction.
The proposed benchmark contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs of N-best hypotheses.
With a reasonable prompt, the generative capability of LLMs can even correct tokens that are missing from the N-best list.
arXiv Detail & Related papers (2023-09-27T14:44:10Z) - Multi-blank Transducers for Speech Recognition [49.6154259349501]
In our proposed method, we introduce additional blank symbols, which consume two or more input frames when emitted.
We refer to the added symbols as big blanks, and the method multi-blank RNN-T.
With experiments on multiple languages and datasets, we show that multi-blank RNN-T methods could bring relative speedups of over +90%/+139%.
arXiv Detail & Related papers (2022-11-04T16:24:46Z) - Thutmose Tagger: Single-pass neural model for Inverse Text Normalization [76.87664008338317]
Inverse text normalization (ITN) is an essential post-processing step in automatic speech recognition.
We present a dataset preparation method based on the granular alignment of ITN examples.
One-to-one correspondence between tags and input words improves the interpretability of the model's predictions.
arXiv Detail & Related papers (2022-07-29T20:39:02Z) - Adaptive Discounting of Implicit Language Models in RNN-Transducers [33.63456351411599]
We show how a lightweight adaptive LM discounting technique can be used with any RNN-T architecture.
We obtain up to 4% and 14% relative reductions in overall WER and rare word PER, respectively, on a conversational, code-mixed Hindi-English ASR task.
arXiv Detail & Related papers (2022-02-21T08:44:56Z) - On Minimum Word Error Rate Training of the Hybrid Autoregressive
Transducer [40.63693071222628]
We study the minimum word error rate (MWER) training of the Hybrid Autoregressive Transducer (HAT).
From experiments with around 30,000 hours of training data, we show that MWER training can improve the accuracy of HAT models.
arXiv Detail & Related papers (2020-10-23T21:16:30Z) - Joint Contextual Modeling for ASR Correction and Language Understanding [60.230013453699975]
We propose multi-task neural approaches to perform contextual language correction on ASR outputs jointly with language understanding (LU).
We show that the error rates of off-the-shelf ASR and downstream LU systems can be reduced significantly, by 14% relative, with joint models trained using small amounts of in-domain data.
arXiv Detail & Related papers (2020-01-28T22:09:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.