Multilingual and crosslingual speech recognition using
phonological-vector based phone embeddings
- URL: http://arxiv.org/abs/2107.05038v1
- Date: Sun, 11 Jul 2021 12:56:47 GMT
- Authors: Chengrui Zhu, Keyu An, Huahuan Zheng, Zhijian Ou
- Abstract summary: We propose to join phonology-driven phone embedding (top-down) and deep neural network (DNN) based acoustic feature extraction (bottom-up) to calculate phone probabilities.
No inversion from acoustics to phonological features is required for speech recognition.
Experiments are conducted on the CommonVoice dataset (German, French, Spanish and Italian) and the AISHELL-1 dataset (Mandarin).
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of phonological features (PFs) potentially allows language-specific
phones to remain linked in training, which is highly desirable for information
sharing for multilingual and crosslingual speech recognition methods for
low-resourced languages. A drawback of previous methods using phonological
features is that bottom-up acoustic-to-PF extraction is itself difficult. In
this paper, we propose to join phonology-driven phone embedding (top-down) and
deep neural network (DNN) based acoustic feature
extraction (bottom-up) to calculate phone probabilities. The new method is
called JoinAP (Joining of Acoustics and Phonology). Remarkably, no inversion
from acoustics to phonological features is required for speech recognition. For
each phone in the IPA (International Phonetic Alphabet) table, we encode its
phonological features to a phonological-vector, and then apply linear or
nonlinear transformation of the phonological-vector to obtain the phone
embedding. A series of multilingual and crosslingual (both zero-shot and
few-shot) speech recognition experiments are conducted on the CommonVoice
dataset (German, French, Spanish and Italian) and the AISHELL-1 dataset
(Mandarin), and demonstrate the superiority of JoinAP with nonlinear phone
embeddings over both JoinAP with linear phone embeddings and the traditional
method with flat phone embeddings.
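As a rough illustration of the JoinAP idea described above, the sketch below encodes each phone's binary phonological features as a phonological-vector, derives a phone embedding by a linear or nonlinear transformation, and computes phone probabilities as inner products with a bottom-up acoustic feature. All names, dimensions, and the random stand-in data are assumptions for illustration, not the authors' code; note that no acoustics-to-PF inversion appears anywhere.

```python
# Hypothetical JoinAP sketch (assumed shapes and names, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

n_phones, pf_dim, emb_dim = 40, 24, 32  # assumed sizes
# 0/1 phonological features per IPA phone (stand-in for the real PF encoding)
phono_vecs = rng.integers(0, 2, size=(n_phones, pf_dim)).astype(float)

# Linear phone embedding: e_i = A @ p_i
A = rng.normal(scale=0.1, size=(emb_dim, pf_dim))
emb_linear = phono_vecs @ A.T                       # (n_phones, emb_dim)

# Nonlinear phone embedding: e_i = W2 @ relu(W1 @ p_i)
W1 = rng.normal(scale=0.1, size=(64, pf_dim))
W2 = rng.normal(scale=0.1, size=(emb_dim, 64))
emb_nonlinear = np.maximum(phono_vecs @ W1.T, 0) @ W2.T  # (n_phones, emb_dim)

# Bottom-up acoustic feature for one frame (stand-in for a DNN output)
h_t = rng.normal(size=emb_dim)

# Phone logits via inner product, then softmax to phone probabilities
logits = emb_nonlinear @ h_t                        # (n_phones,)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Because the embedding is computed top-down from phonological features, phones from different languages that share features stay linked in the embedding space, which is what enables the multilingual and crosslingual sharing claimed in the abstract.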
Related papers
- Discovering Phonetic Inventories with Crosslingual Automatic Speech
Recognition [71.49308685090324]
This paper investigates the influence of different factors (i.e., model architecture, phonotactic model, type of speech representation) on phone recognition in an unknown language.
We find that unique sounds, similar sounds, and tone languages remain a major challenge for phonetic inventory discovery.
arXiv Detail & Related papers (2022-01-26T22:12:55Z)
- Differentiable Allophone Graphs for Language-Universal Speech
Recognition [77.2981317283029]
Building language-universal speech recognition systems entails producing phonological units of spoken sound that can be shared across languages.
We present a general framework to derive phone-level supervision from only phonemic transcriptions and phone-to-phoneme mappings.
We build a universal phone-based speech recognition model with interpretable probabilistic phone-to-phoneme mappings for each language.
arXiv Detail & Related papers (2021-07-24T15:09:32Z)
- Phoneme Recognition through Fine Tuning of Phonetic Representations: a
Case Study on Luhya Language Varieties [77.2347265289855]
We focus on phoneme recognition using Allosaurus, a method for multilingual recognition based on phonetic annotation.
To evaluate in a challenging real-world scenario, we curate phone recognition datasets for Bukusu and Saamia, two varieties of the Luhya language cluster of western Kenya and eastern Uganda.
We find that fine-tuning of Allosaurus, even with just 100 utterances, leads to significant improvements in phone error rates.
arXiv Detail & Related papers (2021-04-04T15:07:55Z)
- Tusom2021: A Phonetically Transcribed Speech Dataset from an Endangered
Language for Universal Phone Recognition Experiments [7.286387368812729]
This paper presents a publicly available, phonetically transcribed corpus of 2255 utterances in the endangered Tangkhulic language East Tusom.
Because the dataset is transcribed in terms of phones, rather than phonemes, it is a better match for universal phone recognition systems than many larger datasets.
arXiv Detail & Related papers (2021-04-02T00:26:10Z)
- Acoustics Based Intent Recognition Using Discovered Phonetic Units for
Low Resource Languages [51.0542215642794]
We propose a novel acoustics based intent recognition system that uses discovered phonetic units for intent classification.
We present results for two language families, Indic and Romance, on two different intent recognition tasks.
arXiv Detail & Related papers (2020-11-07T00:35:31Z)
- AlloVera: A Multilingual Allophone Database [137.3686036294502]
AlloVera provides mappings from 218 allophones to phonemes for 14 languages.
We show that a "universal" allophone model, Allosaurus, built with AlloVera, outperforms "universal" phonemic models and language-specific models on a speech-transcription task.
arXiv Detail & Related papers (2020-04-17T02:02:18Z)
- Universal Phone Recognition with a Multilingual Allophone System [135.2254086165086]
We propose a joint model of language-independent phone and language-dependent phoneme distributions.
In multilingual ASR experiments over 11 languages, we find that this model improves testing performance by 2% phoneme error rate absolute.
Our recognizer achieves phone accuracy improvements of more than 17%, moving a step closer to speech recognition for all languages in the world.
arXiv Detail & Related papers (2020-02-26T21:28:57Z)
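The joint phone/phoneme modeling idea summarized in the last entry can be sketched as follows: a language-independent phone distribution is mapped to a language-dependent phoneme distribution through a fixed allophone membership matrix. The matrix, logits, and the max-pooling choice here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical allophone-mapping sketch (assumed data and pooling rule).
import numpy as np

n_phonemes, n_phones = 3, 6
# allophone[i, j] = 1 if universal phone j is an allophone of phoneme i
# in this language (a tiny made-up mapping)
allophone = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Language-independent phone logits from the acoustic model (stand-in values)
phone_logits = np.array([2.0, 0.5, 1.0, -1.0, 0.0, 3.0])

# Language-dependent phoneme logits: pool over each phoneme's allophones
masked = np.where(allophone > 0, phone_logits, -np.inf)
phoneme_logits = masked.max(axis=1)                # → [2.0, 1.0, 0.0]

phoneme_probs = np.exp(phoneme_logits - phoneme_logits.max())
phoneme_probs /= phoneme_probs.sum()
```

Keeping the phone layer language-independent while confining language-specific structure to the mapping is what lets one model serve many languages at once.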
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.