DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation
Detection and Correction
- URL: http://arxiv.org/abs/2303.00171v1
- Date: Wed, 1 Mar 2023 01:53:11 GMT
- Title: DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation
Detection and Correction
- Authors: Raviteja Anantha, Kriti Bhasin, Daniela de la Parra Aguilar, Prabal
Vashisht, Becci Williamson, Srinivas Chappidi
- Abstract summary: We present a highly precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction.
We also propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW) with triplet loss.
Human evaluation shows our proposed approach improves pronunciation accuracy on average by 6% compared to strong phoneme-based and audio-based baselines.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Personal Digital Assistants (PDAs) - such as Siri, Alexa and Google
Assistant, to name a few - play an increasingly important role in accessing
information and completing tasks spanning multiple domains, for diverse groups
of users. A text-to-speech (TTS) module allows PDAs to interact in a natural,
human-like manner, and plays a vital role when the interaction involves people
with visual impairments or other disabilities. To cater to the needs of a
diverse set of users, an inclusive TTS system must recognize and correctly
pronounce text in different languages and dialects. Despite great progress in
speech synthesis, the pronunciation accuracy of named entities in a
multi-lingual setting still leaves considerable room for improvement. Existing
approaches to correcting named entity (NE) mispronunciations, such as
retraining Grapheme-to-Phoneme (G2P) models or maintaining a TTS pronunciation
dictionary, require expensive and time-consuming annotation of the ground-truth
pronunciation. In this work, we present a highly precise, PDA-compatible
pronunciation learning framework for the task of TTS mispronunciation detection
and correction. We also propose a novel mispronunciation detection model called
DTW-SiameseNet, which employs metric learning with a Siamese architecture for
Dynamic Time Warping (DTW) with triplet loss. We demonstrate that a
locale-agnostic, privacy-preserving solution to the problem of TTS
mispronunciation detection is feasible. We evaluate our approach on a
real-world dataset: a corpus of NE pronunciations from an anonymized audio
dataset of person names recorded by participants from 10 different locales.
Human evaluation shows our proposed approach improves pronunciation accuracy on
average by ~6% compared to strong phoneme-based and audio-based baselines.
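
The abstract's core recipe - a shared (Siamese) encoder, DTW as the distance between encoded pronunciations, and a triplet objective over (anchor, correct, mispronounced) examples - can be illustrated with a short sketch. The Python below is not the authors' implementation: the feature shapes, margin, and length normalization are assumptions, and gradient-based training through DTW would in practice need a differentiable relaxation such as Soft-DTW, which the abstract does not specify.

```python
# Minimal sketch (not the authors' code) of DTW as the learned metric
# inside a triplet objective. Shapes, margin, and normalization are
# illustrative assumptions.
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic DTW alignment cost between two (frames, dim) embedding
    sequences, e.g. the outputs of a shared (Siamese) encoder."""
    n, m = len(x), len(y)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])  # local frame distance
            # Standard DTW recursion: diagonal match, insertion, deletion.
            acc[i, j] = cost + min(acc[i - 1, j - 1],
                                   acc[i - 1, j],
                                   acc[i, j - 1])
    return acc[n, m] / (n + m)  # normalize away sequence-length effects

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the correct pronunciation (positive) toward the anchor and
    push the mispronunciation (negative) away, in DTW distance."""
    return max(0.0, dtw_distance(anchor, positive)
                    - dtw_distance(anchor, negative) + margin)

# Toy usage with random stand-ins for encoder outputs.
rng = np.random.default_rng(0)
anc, pos, neg = (rng.normal(size=(40, 16)),
                 rng.normal(size=(38, 16)),
                 rng.normal(size=(45, 16)))
print(triplet_loss(anc, pos, neg))
```

The loss reaches zero once the anchor is closer, by at least the margin, to the correct pronunciation than to the mispronounced one, which is the metric-learning behavior the abstract describes.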
Related papers
- An Initial Investigation of Language Adaptation for TTS Systems under Low-resource Scenarios [76.11409260727459]
This paper explores the language adaptation capability of ZMM-TTS, a recent SSL-based multilingual TTS system.
We demonstrate that the similarity in phonetics between the pre-training and target languages, as well as the language category, affects the target language's adaptation performance.
arXiv Detail & Related papers (2024-06-13T08:16:52Z)
- Learning Speech Representation From Contrastive Token-Acoustic Pretraining [57.08426714676043]
We propose "Contrastive Token-Acoustic Pretraining (CTAP)", which uses two encoders to bring phoneme and speech into a joint multimodal space.
The proposed CTAP model is trained on 210k speech and phoneme pairs, achieving minimally supervised TTS, VC, and ASR.
arXiv Detail & Related papers (2023-09-01T12:35:43Z)
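
As a side note on the CTAP entry above: "two encoders to bring phoneme and speech into a joint multimodal space" is a contrastive-pretraining pattern that can be sketched with a symmetric InfoNCE loss over paired embeddings. Everything below - the dimensions, temperature, and use of PyTorch - is an illustrative assumption, not the published CTAP model.

```python
# Illustrative sketch (not the CTAP implementation): symmetric InfoNCE
# pulling paired phoneme/speech embeddings together in a joint space.
import torch
import torch.nn.functional as F

def contrastive_loss(phoneme_emb, speech_emb, temperature=0.07):
    """phoneme_emb, speech_emb: (batch, dim); row i of each is a pair."""
    p = F.normalize(phoneme_emb, dim=-1)
    s = F.normalize(speech_emb, dim=-1)
    logits = p @ s.t() / temperature     # (batch, batch) cosine similarities
    targets = torch.arange(len(p))       # matched pairs sit on the diagonal
    # Symmetric: phoneme-to-speech and speech-to-phoneme retrieval.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```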
- Multilingual context-based pronunciation learning for Text-to-Speech [13.941800219395757]
Phonetic information and linguistic knowledge are essential components of a Text-to-Speech (TTS) front-end.
We showcase a multilingual unified front-end system that addresses any pronunciation-related task, typically handled by separate modules.
We find that the multilingual model is competitive across languages and tasks; however, some trade-offs exist compared to equivalent monolingual solutions.
arXiv Detail & Related papers (2023-07-31T14:29:06Z)
- IPA-CLIP: Integrating Phonetic Priors into Vision and Language Pretraining [8.129944388402839]
This paper inserts a phonetic prior into Contrastive Language-Image Pretraining (CLIP) via a pronunciation encoder.
IPA-CLIP comprises this pronunciation encoder and the original CLIP encoders (image and text).
arXiv Detail & Related papers (2023-03-06T13:59:37Z)
- Computer-assisted Pronunciation Training -- Speech synthesis is almost all you need [18.446969150062586]
Existing CAPT methods are not able to detect pronunciation errors with high accuracy.
We present three innovative techniques based on phoneme-to-phoneme (P2P), text-to-speech (T2S), and speech-to-speech (S2S) conversion.
We show that these techniques not only improve the accuracy of three machine learning models for detecting pronunciation errors but also help establish a new state-of-the-art in the field.
arXiv Detail & Related papers (2022-07-02T08:33:33Z)
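
The CAPT entry above rests on synthesizing pronunciation errors (P2P, T2S, S2S) to train error detectors. As a hypothetical illustration of the P2P ingredient only, the sketch below injects substitution, deletion, and insertion errors into a phoneme sequence; the error rates and the toy phoneme inventory are invented for the example and do not come from the paper.

```python
# Hypothetical sketch of P2P-style error injection: corrupt a correct
# phoneme sequence to simulate mispronunciations as training data.
import random

random.seed(0)
PHONEMES = ["AA", "AE", "IY", "K", "T", "S", "N", "R"]  # toy inventory

def perturb(phonemes, p_sub=0.1, p_del=0.05, p_ins=0.05):
    """Inject substitution/deletion/insertion errors into a sequence."""
    out = []
    for ph in phonemes:
        r = random.random()
        if r < p_del:
            continue                             # deletion: drop the phoneme
        elif r < p_del + p_sub:
            out.append(random.choice(PHONEMES))  # substitution: wrong phoneme
        else:
            out.append(ph)                       # keep the correct phoneme
        if random.random() < p_ins:
            out.append(random.choice(PHONEMES))  # insertion: spurious phoneme
    return out

print(perturb(["K", "AE", "T"]))  # e.g. a corrupted rendering of "cat"
```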
- Few-Shot Cross-Lingual TTS Using Transferable Phoneme Embedding [55.989376102986654]
This paper studies a transferable phoneme embedding framework that aims to deal with the cross-lingual text-to-speech problem under the few-shot setting.
We propose a framework that consists of a phoneme-based TTS model and a codebook module to project phonemes from different languages into a learned latent space.
arXiv Detail & Related papers (2022-06-27T11:24:40Z)
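
The few-shot entry above projects phonemes from different languages into a learned latent space via a codebook module. One plausible reading, sketched under assumption below, is soft attention from language-specific phoneme embeddings onto a shared learned codebook; the sizes, the attention form, and the name CodebookProjection are illustrative guesses rather than the paper's architecture.

```python
# Assumed sketch: map language-specific phoneme embeddings into a shared
# latent space by attending over a learned codebook.
import torch
import torch.nn as nn

class CodebookProjection(nn.Module):
    def __init__(self, dim: int = 256, codebook_size: int = 128):
        super().__init__()
        # Shared, language-agnostic latent codes.
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))

    def forward(self, phoneme_emb: torch.Tensor) -> torch.Tensor:
        # phoneme_emb: (batch, seq, dim) from any language's embedding table.
        attn = torch.softmax(phoneme_emb @ self.codebook.t(), dim=-1)
        return attn @ self.codebook  # (batch, seq, dim) in the shared space

proj = CodebookProjection()
shared = proj(torch.randn(2, 10, 256))
```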
- Dict-TTS: Learning to Pronounce with Prior Dictionary Knowledge for Text-to-Speech [88.22544315633687]
Polyphone disambiguation aims to capture accurate pronunciation knowledge from natural text sequences for reliable text-to-speech systems.
We propose Dict-TTS, a semantic-aware generative text-to-speech model with an online website dictionary.
Experimental results in three languages show that our model outperforms several strong baseline models in terms of pronunciation accuracy.
arXiv Detail & Related papers (2022-06-05T10:50:34Z)
- Towards Language Modelling in the Speech Domain Using Sub-word Linguistic Units [56.52704348773307]
We propose a novel LSTM-based generative speech LM built on linguistic units, including syllables and phonemes.
With a limited dataset, orders of magnitude smaller than that required by contemporary generative models, our model closely approximates babbling speech.
We show the effect of training with auxiliary text LMs, multitask learning objectives, and auxiliary articulatory features.
arXiv Detail & Related papers (2021-10-31T22:48:30Z)
- Dynamic Acoustic Unit Augmentation With BPE-Dropout for Low-Resource End-to-End Speech Recognition [62.94773371761236]
We consider building an effective end-to-end ASR system in low-resource setups with a high OOV rate.
We propose a method of dynamic acoustic unit augmentation based on the BPE-dropout technique.
Our monolingual Turkish Conformer established a competitive result with 22.2% character error rate (CER) and 38.9% word error rate (WER).
arXiv Detail & Related papers (2021-03-12T10:10:13Z)
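
For the BPE-dropout entry above, the underlying primitive - randomly skipping BPE merges at encode time so the same word yields varying subword segmentations - is available off the shelf. The sketch below shows that ingredient via the Hugging Face tokenizers library; "corpus.txt" and all hyperparameters are placeholders, and this is only the regularization primitive, not the paper's full dynamic augmentation method.

```python
# Illustrative use of BPE-dropout via the Hugging Face `tokenizers`
# library; the training file and hyperparameters are placeholders.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# dropout=0.1: at encode time, each learned BPE merge is skipped with
# 10% probability, producing varied segmentations of the same word.
tokenizer = Tokenizer(BPE(unk_token="[UNK]", dropout=0.1))
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train(["corpus.txt"],
                BpeTrainer(vocab_size=1000, special_tokens=["[UNK]"]))

# Two encodings of the same word can differ, giving the ASR model a
# changing set of acoustic-unit targets for the same audio.
print(tokenizer.encode("merhaba").tokens)
print(tokenizer.encode("merhaba").tokens)
```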
- Class LM and word mapping for contextual biasing in End-to-End ASR [4.989480853499918]
In recent years, all-neural, end-to-end (E2E) ASR systems have gained rapid interest in the speech recognition community.
In this paper, we propose an algorithm to train a context-aware E2E model and allow the beam search to traverse into the context FST during inference.
Although an E2E model does not need a pronunciation dictionary, it is interesting to make use of existing pronunciation knowledge to improve accuracy.
arXiv Detail & Related papers (2020-07-10T20:58:44Z)