Phoneme Recognition through Fine Tuning of Phonetic Representations: a
Case Study on Luhya Language Varieties
- URL: http://arxiv.org/abs/2104.01624v1
- Date: Sun, 4 Apr 2021 15:07:55 GMT
- Title: Phoneme Recognition through Fine Tuning of Phonetic Representations: a
Case Study on Luhya Language Varieties
- Authors: Kathleen Siminyu, Xinjian Li, Antonios Anastasopoulos, David
Mortensen, Michael R. Marlo, Graham Neubig
- Abstract summary: We focus on phoneme recognition using Allosaurus, a method for multilingual recognition based on phonetic annotation.
To evaluate in a challenging real-world scenario, we curate phone recognition datasets for Bukusu and Saamia, two varieties of the Luhya language cluster of western Kenya and eastern Uganda.
We find that fine-tuning of Allosaurus, even with just 100 utterances, leads to significant improvements in phone error rates.
- Score: 77.2347265289855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Models pre-trained on multiple languages have shown significant promise for
improving speech recognition, particularly for low-resource languages. In this
work, we focus on phoneme recognition using Allosaurus, a method for
multilingual recognition based on phonetic annotation, which incorporates
phonological knowledge through a language-dependent allophone layer that
associates a universal narrow phone-set with the phonemes that appear in each
language. To evaluate in a challenging real-world scenario, we curate phone
recognition datasets for Bukusu and Saamia, two varieties of the Luhya language
cluster of western Kenya and eastern Uganda. To our knowledge, these datasets
are the first of their kind. We carry out similar experiments on the dataset of
an endangered Tangkhulic language, East Tusom, a Tibeto-Burman language variety
spoken mostly in India. We explore both zero-shot and few-shot recognition by
fine-tuning using datasets of varying sizes (10 to 1000 utterances). We find
that fine-tuning of Allosaurus, even with just 100 utterances, leads to
significant improvements in phone error rates.
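To make the allophone layer concrete, a minimal sketch follows. It is written under assumptions noted here, not taken from the released Allosaurus implementation: the phone-to-phoneme association is represented as a hypothetical binary `allophone_mask`, and universal-phone logits are pooled into language-specific phoneme logits by taking the maximum over each phoneme's allophones. Class names and tensor shapes are illustrative.

```python
import torch
import torch.nn as nn


class AllophoneLayer(nn.Module):
    """Sketch of a language-dependent allophone layer (not the authors' code):
    pools universal-phone logits into language-specific phoneme logits by
    taking the maximum over each phoneme's allophones."""

    def __init__(self, allophone_mask: torch.Tensor):
        # allophone_mask: (num_phonemes, num_phones), 1.0 where the universal
        # phone is an allophone of the phoneme, 0.0 elsewhere.
        super().__init__()
        # log(0) = -inf, so adding this mask removes non-allophones from the max.
        self.register_buffer("log_mask", allophone_mask.float().log())

    def forward(self, phone_logits: torch.Tensor) -> torch.Tensor:
        # phone_logits: (batch, time, num_phones) from the shared universal encoder.
        masked = phone_logits.unsqueeze(2) + self.log_mask  # (batch, time, num_phonemes, num_phones)
        return masked.max(dim=-1).values                    # (batch, time, num_phonemes)


# Toy example: 3 universal phones, 2 phonemes in the target language.
mask = torch.tensor([[1.0, 1.0, 0.0],   # phoneme 0 is realized as phone 0 or 1
                     [0.0, 0.0, 1.0]])  # phoneme 1 is realized as phone 2
layer = AllophoneLayer(mask)
phone_logits = torch.randn(1, 5, 3)     # (batch=1, time=5, phones=3)
print(layer(phone_logits).shape)        # torch.Size([1, 5, 2])
```

In the few-shot setting described above, fine-tuning would presumably update the shared encoder (and, depending on the setup, the allophone mapping) on the 10 to 1000 target-language utterances.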
Related papers
- Discovering Phonetic Inventories with Crosslingual Automatic Speech
Recognition [71.49308685090324]
This paper investigates the influence of different factors (i.e., model architecture, phonotactic model, type of speech representation) on phone recognition in an unknown language.
We find that unique sounds, similar sounds, and tone languages remain a major challenge for phonetic inventory discovery.
arXiv Detail & Related papers (2022-01-26T22:12:55Z)
- Differentiable Allophone Graphs for Language-Universal Speech
Recognition [77.2981317283029]
Building language-universal speech recognition systems entails producing phonological units of spoken sound that can be shared across languages.
We present a general framework to derive phone-level supervision from only phonemic transcriptions and phone-to-phoneme mappings.
We build a universal phone-based speech recognition model with interpretable probabilistic phone-to-phoneme mappings for each language.
arXiv Detail & Related papers (2021-07-24T15:09:32Z)
- Multilingual and crosslingual speech recognition using
phonological-vector based phone embeddings [20.93287944284448]
We propose to join phonology-driven phone embeddings (top-down) and deep neural network (DNN) based acoustic feature extraction (bottom-up) to calculate phone probabilities (a rough sketch of this idea appears after this list).
No inversion from acoustics to phonological features is required for speech recognition.
Experiments are conducted on the CommonVoice dataset (German, French, Spanish and Italian) and the AISHELL-1 dataset (Mandarin).
arXiv Detail & Related papers (2021-07-11T12:56:47Z)
- Tusom2021: A Phonetically Transcribed Speech Dataset from an Endangered
Language for Universal Phone Recognition Experiments [7.286387368812729]
This paper presents a publicly available, phonetically transcribed corpus of 2255 utterances in the endangered Tangkhulic language East Tusom.
Because the dataset is transcribed in terms of phones, rather than phonemes, it is a better match for universal phone recognition systems than many larger datasets.
arXiv Detail & Related papers (2021-04-02T00:26:10Z)
- That Sounds Familiar: an Analysis of Phonetic Representations Transfer
Across Languages [72.9927937955371]
We use resources from other languages to train a multilingual automatic speech recognition model.
We observe significant improvements across all languages in the multilingual setting, and stark degradation in the crosslingual setting.
Our analysis uncovered that even the phones that are unique to a single language can benefit greatly from adding training data from other languages.
arXiv Detail & Related papers (2020-05-16T22:28:09Z)
- AlloVera: A Multilingual Allophone Database [137.3686036294502]
AlloVera provides mappings from 218 allophones to phonemes for 14 languages.
We show that a "universal" allophone model, Allosaurus, built with AlloVera, outperforms "universal" phonemic models and language-specific models on a speech-transcription task.
arXiv Detail & Related papers (2020-04-17T02:02:18Z)
- Universal Phone Recognition with a Multilingual Allophone System [135.2254086165086]
We propose a joint model of language-independent phone and language-dependent phoneme distributions.
In multilingual ASR experiments over 11 languages, we find that this model improves testing performance by 2% phoneme error rate absolute (a minimal PER computation is sketched after this list).
Our recognizer achieves phone accuracy improvements of more than 17%, moving a step closer to speech recognition for all languages in the world.
arXiv Detail & Related papers (2020-02-26T21:28:57Z)
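As referenced in the phonological-vector entry above, one way to picture joining a top-down phone embedding with bottom-up acoustic features is sketched below. This is a loose illustration under stated assumptions, not the authors' architecture: the `PhonologicalPhoneClassifier` class, the projection layers, and the feature vectors are all hypothetical.

```python
import torch
import torch.nn as nn


class PhonologicalPhoneClassifier(nn.Module):
    """Hypothetical sketch (not the authors' model): phone logits are dot
    products between bottom-up acoustic frame embeddings and top-down phone
    embeddings projected from fixed phonological feature vectors."""

    def __init__(self, phonological_vectors: torch.Tensor, acoustic_dim: int, embed_dim: int = 64):
        # phonological_vectors: (num_phones, num_features), e.g. binary
        # articulatory features such as [voiced, nasal, labial, ...].
        super().__init__()
        self.register_buffer("phon_vecs", phonological_vectors.float())
        self.phone_proj = nn.Linear(phonological_vectors.shape[1], embed_dim)  # top-down
        self.frame_proj = nn.Linear(acoustic_dim, embed_dim)                   # bottom-up

    def forward(self, acoustic_frames: torch.Tensor) -> torch.Tensor:
        # acoustic_frames: (batch, time, acoustic_dim) from an acoustic encoder.
        phone_emb = self.phone_proj(self.phon_vecs)   # (num_phones, embed_dim)
        frame_emb = self.frame_proj(acoustic_frames)  # (batch, time, embed_dim)
        return frame_emb @ phone_emb.T                # (batch, time, num_phones) logits


# Toy example: 4 phones described by 3 made-up phonological features.
phon = torch.tensor([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]], dtype=torch.float)
model = PhonologicalPhoneClassifier(phon, acoustic_dim=80)
frames = torch.randn(1, 10, 80)
print(model(frames).shape)  # torch.Size([1, 10, 4])
```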
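Several results above, like the main abstract, are reported as phone or phoneme error rates. As a reference point, here is a minimal sketch of the usual PER computation: the Levenshtein distance between hypothesis and reference phone sequences, normalized by reference length. The function name and toy sequences are illustrative, not the papers' scoring scripts.

```python
def phone_error_rate(reference: list[str], hypothesis: list[str]) -> float:
    """Levenshtein distance between phone sequences divided by reference length
    (the usual definition of PER; illustrative sketch only)."""
    # Dynamic-programming edit distance over phones
    # (substitutions, insertions, deletions).
    prev = list(range(len(hypothesis) + 1))
    for i, ref_phone in enumerate(reference, start=1):
        curr = [i]
        for j, hyp_phone in enumerate(hypothesis, start=1):
            cost = 0 if ref_phone == hyp_phone else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1] / max(len(reference), 1)


# Toy example with made-up phone sequences.
ref = ["b", "u", "k", "u", "s", "u"]
hyp = ["b", "u", "k", "s", "u"]               # one deletion
print(round(phone_error_rate(ref, hyp), 3))   # 0.167
```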