AccentDB: A Database of Non-Native English Accents to Assist Neural
Speech Recognition
- URL: http://arxiv.org/abs/2005.07973v1
- Date: Sat, 16 May 2020 12:38:30 GMT
- Title: AccentDB: A Database of Non-Native English Accents to Assist Neural
Speech Recognition
- Authors: Afroz Ahamad, Ankit Anand, Pranesh Bhargava
- Abstract summary: We first spell out the key requirements for creating a well-curated database of speech samples in non-native accents for training and testing robust ASR systems.
We then introduce AccentDB, one such database that contains samples of 4 Indian-English accents collected by us.
We present several accent classification models and evaluate them thoroughly against human-labelled accent classes.
- Score: 3.028098724882708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern Automatic Speech Recognition (ASR) technology has evolved to identify
the speech spoken by native speakers of a language very well. However,
identification of the speech spoken by non-native speakers continues to be a
major challenge. In this work, we first spell out the key requirements
for creating a well-curated database of speech samples in non-native accents
for training and testing robust ASR systems. We then introduce AccentDB, one
such database that contains samples of 4 Indian-English accents collected by
us, and a compilation of samples from 4 native-English, and a metropolitan
Indian-English accent. We also present an analysis on separability of the
collected accent data. Further, we present several accent classification models
and evaluate them thoroughly against human-labelled accent classes. We test the
generalization of our classifier models in a variety of setups of seen and
unseen data. Finally, we introduce the task of accent neutralization of
non-native accents to native accents using autoencoder models with
task-specific architectures. Thus, our work aims to aid ASR systems at every
stage of development with a database for training, classification models for
feature augmentation, and neutralization systems for acoustic transformations
of non-native accents of English.
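The abstract names two model families without giving their architectures in this summary: accent classifiers evaluated against human-labelled classes, and autoencoders for neutralizing non-native accents. The sketch below is a minimal, hypothetical PyTorch rendering of both ideas, assuming frame-level MFCC-style features, a 9-way accent label set (4 Indian-English + 4 native-English + 1 metropolitan Indian-English, following the accent count in the abstract), and illustrative layer sizes; it is not the paper's published configuration.

```python
# Hypothetical sketch of the two model families named in the abstract:
# (1) an accent classifier and (2) an autoencoder that maps non-native
# accented features toward a native-accent target. Feature type, dimensions,
# and layer sizes are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class AccentClassifier(nn.Module):
    """BiLSTM over frame-level features, mean-pooled into accent logits."""

    def __init__(self, n_features=39, n_accents=9, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_accents)

    def forward(self, x):              # x: (batch, frames, n_features)
        out, _ = self.encoder(x)
        pooled = out.mean(dim=1)       # average over time
        return self.head(pooled)       # accent logits


class AccentNeutralizer(nn.Module):
    """Autoencoder trained on parallel accent pairs: non-native features in,
    native-accent rendering of the same utterance as the target."""

    def __init__(self, n_features=39, bottleneck=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU(),
                                 nn.Linear(256, bottleneck), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(bottleneck, 256), nn.ReLU(),
                                 nn.Linear(256, n_features))

    def forward(self, x):              # x: (batch, frames, n_features)
        return self.dec(self.enc(x))


if __name__ == "__main__":
    feats = torch.randn(8, 200, 39)            # 8 clips, 200 frames, 39-dim features
    print(AccentClassifier()(feats).shape)     # torch.Size([8, 9])
    print(AccentNeutralizer()(feats).shape)    # torch.Size([8, 200, 39])
```

In the pipeline the abstract describes, the classifier's accent posteriors could serve as auxiliary inputs to an ASR front end (the "classification models for feature augmentation" role), while the neutralizer supplies the "acoustic transformations of non-native accents".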
Related papers
- Improving Pronunciation and Accent Conversion through Knowledge Distillation And Synthetic Ground-Truth from Native TTS [52.89324095217975]
Previous approaches to accent conversion mainly aimed at making non-native speech sound more native.
We develop a new AC approach that not only focuses on accent conversion but also improves the pronunciation of non-native accented speakers.
arXiv Detail & Related papers (2024-10-19T06:12:31Z)
- Accent conversion using discrete units with parallel data synthesized from controllable accented TTS [56.18382038512251]
The goal of accent conversion (AC) is to convert speech accents while preserving content and speaker identity.
Previous methods either required reference utterances during inference, did not preserve speaker identity well, or used one-to-one systems that could only be trained for each non-native accent.
This paper presents a promising AC model that can convert many accents into a native accent, overcoming these issues.
arXiv Detail & Related papers (2024-09-30T19:52:10Z)
- Improving Self-supervised Pre-training using Accent-Specific Codebooks [48.409296549372414]
We propose an accent-aware adaptation technique for self-supervised learning.
On the Mozilla Common Voice dataset, our proposed approach outperforms all other accent-adaptation approaches.
arXiv Detail & Related papers (2024-07-04T08:33:52Z)
- Transfer the linguistic representations from TTS to accent conversion with non-parallel data [7.376032484438044]
Accent conversion aims to convert the accent of a source speech to a target accent, preserving the speaker's identity.
This paper introduces a novel non-autoregressive framework for accent conversion that learns accent-agnostic linguistic representations and employs them to convert the accent in the source speech.
arXiv Detail & Related papers (2024-01-07T16:39:34Z)
- Accented Speech Recognition With Accent-specific Codebooks [53.288874858671576]
Speech accents pose a significant challenge to state-of-the-art automatic speech recognition (ASR) systems.
Degradation in performance across underrepresented accents is a severe deterrent to the inclusive adoption of ASR.
We propose a novel accent adaptation approach for end-to-end ASR systems using cross-attention with a trainable set of codebooks; a hedged sketch of this mechanism appears after this list.
arXiv Detail & Related papers (2023-10-24T16:10:58Z)
- CommonAccent: Exploring Large Acoustic Pretrained Models for Accent Classification Based on Common Voice [1.559929646151698]
We introduce a recipe aligned to the SpeechBrain toolkit for accent classification based on Common Voice 7.0 (English) and Common Voice 11.0 (Italian, German, and Spanish).
We establish a new state of the art for English accent classification, with accuracy as high as 95%.
arXiv Detail & Related papers (2023-05-29T17:53:35Z)
- Synthetic Cross-accent Data Augmentation for Automatic Speech Recognition [18.154258453839066]
We improve an accent-conversion model (ACM) which transforms native US-English speech into accented pronunciation.
We include phonetic knowledge in the ACM training to provide accurate feedback about how well certain pronunciation patterns were recovered in the synthesized waveform.
We evaluate our approach on native and non-native English datasets and find that synthetically accented data helps the ASR system better understand speech from seen accents.
arXiv Detail & Related papers (2023-03-01T20:05:19Z)
- ASR data augmentation in low-resource settings using cross-lingual multi-speaker TTS and cross-lingual voice conversion [49.617722668505834]
We show that our approach permits the application of speech synthesis and voice conversion to improve ASR systems using only one target-language speaker during model training.
It is possible to obtain promising ASR training results with our data augmentation method using only a single real speaker in a target language.
arXiv Detail & Related papers (2022-03-29T11:55:30Z)
- Acoustics Based Intent Recognition Using Discovered Phonetic Units for Low Resource Languages [51.0542215642794]
We propose a novel acoustics based intent recognition system that uses discovered phonetic units for intent classification.
We present results for two language families, Indic languages and Romance languages, on two different intent recognition tasks.
arXiv Detail & Related papers (2020-11-07T00:35:31Z)
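One entry above, "Accented Speech Recognition With Accent-specific Codebooks", describes cross-attention between an end-to-end ASR encoder and a trainable set of codebooks. The fragment below is a hedged, self-contained sketch of that general mechanism only; the codebook size, model dimension, and residual injection point are assumptions for illustration and do not reproduce that paper's implementation.

```python
# Hedged sketch: cross-attention from ASR encoder frames onto a trainable
# codebook. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class CodebookCrossAttention(nn.Module):
    def __init__(self, d_model=256, n_entries=32, n_heads=4):
        super().__init__()
        # Learnable codebook entries; a single flat table here for simplicity
        # (per-accent codebooks could be slices of a larger table).
        self.codebook = nn.Parameter(torch.randn(n_entries, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, encoder_frames):          # (batch, time, d_model)
        b = encoder_frames.size(0)
        kv = self.codebook.unsqueeze(0).expand(b, -1, -1)
        attended, _ = self.attn(query=encoder_frames, key=kv, value=kv)
        # Residual injection back into the ASR encoder stream.
        return self.norm(encoder_frames + attended)


if __name__ == "__main__":
    frames = torch.randn(2, 120, 256)              # dummy encoder output
    print(CodebookCrossAttention()(frames).shape)  # torch.Size([2, 120, 256])
```

Because the codebook is a plain nn.Parameter, it trains jointly with the rest of the encoder; how the attended output is combined with the encoder states is a design choice, and the residual-plus-LayerNorm used here is only one common option.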