Differentiable Allophone Graphs for Language-Universal Speech
Recognition
- URL: http://arxiv.org/abs/2107.11628v1
- Date: Sat, 24 Jul 2021 15:09:32 GMT
- Title: Differentiable Allophone Graphs for Language-Universal Speech
Recognition
- Authors: Brian Yan, Siddharth Dalmia, David R. Mortensen, Florian Metze, Shinji
Watanabe
- Abstract summary: Building language-universal speech recognition systems entails producing phonological units of spoken sound that can be shared across languages.
We present a general framework to derive phone-level supervision from only phonemic transcriptions and phone-to-phoneme mappings.
We build a universal phone-based speech recognition model with interpretable probabilistic phone-to-phoneme mappings for each language.
- Score: 77.2981317283029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building language-universal speech recognition systems entails producing
phonological units of spoken sound that can be shared across languages. While
speech annotations at the language-specific phoneme or surface levels are
readily available, annotations at a universal phone level are relatively rare
and difficult to produce. In this work, we present a general framework to
derive phone-level supervision from only phonemic transcriptions and
phone-to-phoneme mappings with learnable weights represented using weighted
finite-state transducers, which we call differentiable allophone graphs. By
training multilingually, we build a universal phone-based speech recognition
model with interpretable probabilistic phone-to-phoneme mappings for each
language. These phone-based systems with learned allophone graphs can be used
by linguists to document new languages, build phone-based lexicons that capture
rich pronunciation variations, and re-evaluate the allophone mappings of
previously seen languages. We demonstrate the aforementioned benefits of our proposed framework
with a system trained on 7 diverse languages.
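The core mechanism described in the abstract can be sketched as a differentiable phone-to-phoneme mapping. The sketch below is a loose illustration, not the paper's method: the phone and phoneme inventories, the example posteriors, and the flap example are assumptions, and a per-phone softmax over arc weights stands in for the full weighted finite-state transducer machinery.

```python
import math

# Illustrative inventories (not the paper's actual 7-language setup):
# the flap [ɾ] is a classic allophone of both /t/ and /d/ in American English.
PHONES = ["t", "d", "ɾ"]        # universal phone set
PHONEMES = ["/t/", "/d/"]       # one language's phoneme set

# Mask of allowed phone-to-phoneme arcs from a hand-written mapping.
MASK = {
    ("t", "/t/"): 1,
    ("d", "/d/"): 1,
    ("ɾ", "/t/"): 1,
    ("ɾ", "/d/"): 1,
}

# One learnable scalar weight per allowed arc, as in a WFST with trainable
# arc weights; zero-initialized, so each phone starts with a uniform
# distribution over its allowed phonemes.
weights = {arc: 0.0 for arc in MASK}

def phoneme_posteriors(phone_post):
    """Map phone posteriors to phoneme posteriors through the allophone graph.

    Each phone distributes its probability mass over its allowed phonemes
    according to a softmax over that phone's arc weights, so phoneme-level
    supervision can backpropagate into the mapping.
    """
    out = {q: 0.0 for q in PHONEMES}
    for p, prob in zip(PHONES, phone_post):
        arcs = [q for q in PHONEMES if MASK.get((p, q))]
        if not arcs:
            continue
        z = sum(math.exp(weights[(p, q)]) for q in arcs)
        for q in arcs:
            out[q] += prob * math.exp(weights[(p, q)]) / z
    return out

# Example: the acoustic model mostly hears the flap [ɾ]; its mass is
# split between /t/ and /d/ under the uniform initialization.
print(phoneme_posteriors([0.2, 0.2, 0.6]))
```

Training would then adjust `weights` from phonemic transcriptions alone, which is what makes the learned arc weights interpretable as per-language allophone probabilities.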
Related papers
- Multilingual and crosslingual speech recognition using
phonological-vector based phone embeddings [20.93287944284448]
We propose to join phonology driven phone embedding (top-down) and deep neural network (DNN) based acoustic feature extraction (bottom-up) to calculate phone probabilities.
No inversion from acoustics to phonological features is required for speech recognition.
Experiments are conducted on the CommonVoice dataset (German, French, Spanish and Italian) and the AISHELL-1 dataset (Mandarin).
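The top-down half of the idea above can be sketched as computing a phone embedding from a fixed phonological feature vector instead of a free lookup table. The feature names, phone-to-feature table, and projection matrix below are illustrative assumptions, not the paper's actual phonological inventory or trained parameters.

```python
# Each phone is described by binary articulatory features; the embedding is
# derived from these features (top-down) rather than learned independently
# per phone, so phones sharing features get related embeddings.
FEATURES = ["voiced", "nasal", "labial", "coronal"]   # illustrative subset
PHONE_FEATURES = {
    "p": [0, 0, 1, 0],
    "b": [1, 0, 1, 0],
    "m": [1, 1, 1, 0],
    "t": [0, 0, 0, 1],
}

def embed(phone, projection):
    """Project a phone's phonological feature vector into embedding space."""
    feats = PHONE_FEATURES[phone]
    return [sum(f * w for f, w in zip(feats, row)) for row in projection]

# Toy 2-dimensional projection matrix (would be learned in practice).
W = [[0.5, 0.0, 1.0, -1.0],
     [0.0, 1.0, 0.0, 0.5]]
print(embed("b", W))  # -> [1.5, 0.0]
```

These embeddings would then be combined with bottom-up DNN acoustic features to score phones, with no inversion from acoustics back to phonological features required.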
arXiv Detail & Related papers (2021-07-11T12:56:47Z)
- Phoneme Recognition through Fine Tuning of Phonetic Representations: a
Case Study on Luhya Language Varieties [77.2347265289855]
We focus on phoneme recognition using Allosaurus, a method for multilingual recognition based on phonetic annotation.
To evaluate in a challenging real-world scenario, we curate phone recognition datasets for Bukusu and Saamia, two varieties of the Luhya language cluster of western Kenya and eastern Uganda.
We find that fine-tuning of Allosaurus, even with just 100 utterances, leads to significant improvements in phone error rates.
arXiv Detail & Related papers (2021-04-04T15:07:55Z)
- Acoustics Based Intent Recognition Using Discovered Phonetic Units for
Low Resource Languages [51.0542215642794]
We propose a novel acoustics based intent recognition system that uses discovered phonetic units for intent classification.
We present results for two language families, Indic languages and Romance languages, on two different intent recognition tasks.
arXiv Detail & Related papers (2020-11-07T00:35:31Z)
- That Sounds Familiar: an Analysis of Phonetic Representations Transfer
Across Languages [72.9927937955371]
We use the resources existing in other languages to train a multilingual automatic speech recognition model.
We observe significant improvements across all languages in the multilingual setting, and stark degradation in the crosslingual setting.
Our analysis uncovered that even the phones that are unique to a single language can benefit greatly from adding training data from other languages.
arXiv Detail & Related papers (2020-05-16T22:28:09Z)
- AlloVera: A Multilingual Allophone Database [137.3686036294502]
AlloVera provides mappings from 218 allophones to phonemes for 14 languages.
We show that a "universal" allophone model, Allosaurus, built with AlloVera, outperforms "universal" phonemic models and language-specific models on a speech-transcription task.
arXiv Detail & Related papers (2020-04-17T02:02:18Z)
- Universal Phone Recognition with a Multilingual Allophone System [135.2254086165086]
We propose a joint model of language-independent phone and language-dependent phoneme distributions.
In multilingual ASR experiments over 11 languages, we find that this model improves testing performance by 2% phoneme error rate absolute.
Our recognizer achieves phone accuracy improvements of more than 17%, moving a step closer to speech recognition for all languages in the world.
arXiv Detail & Related papers (2020-02-26T21:28:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.