Improving Children's Speech Recognition by Fine-tuning Self-supervised
Adult Speech Representations
- URL: http://arxiv.org/abs/2211.07769v1
- Date: Mon, 14 Nov 2022 22:03:36 GMT
- Authors: Renee Lu, Mostafa Shahin, Beena Ahmed
- Abstract summary: Children's speech recognition is a vital, yet largely overlooked domain when building inclusive speech technologies.
Recent advances in self-supervised learning have created a new opportunity for overcoming this problem of data scarcity.
We leverage self-supervised adult speech representations and use three well-known child speech corpora to build models for children's speech recognition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Children's speech recognition is a vital, yet largely overlooked domain when
building inclusive speech technologies. The major challenge impeding progress
in this domain is the lack of adequate child speech corpora; however, recent
advances in self-supervised learning have created a new opportunity for
overcoming this problem of data scarcity. In this paper, we leverage
self-supervised adult speech representations and use three well-known child
speech corpora to build models for children's speech recognition. We assess the
performance of fine-tuning on both native and non-native children's speech,
examine the effect of cross-domain child corpora, and investigate the minimum
amount of child speech required to fine-tune a model which outperforms a
state-of-the-art adult model. We also analyze speech recognition performance
across children's ages. Our results demonstrate that fine-tuning with
cross-domain child corpora leads to relative improvements of up to 46.08% and
45.53% for native and non-native child speech respectively, and absolute
improvements of 14.70% and 31.10%. We also show that with as little as 5 hours
of transcribed children's speech, it is possible to fine-tune a children's
speech recognition system that outperforms a state-of-the-art adult model
fine-tuned on 960 hours of adult speech.
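The relative and absolute improvement figures above determine the underlying error rates: an absolute gain of 14.70 points that is 46.08% relative implies a baseline WER of roughly 31.9%. A minimal sketch of that arithmetic (the helper name and the back-calculated WER values are illustrative, derived from the abstract's figures rather than quoted from the paper):

```python
def wer_improvements(baseline_wer: float, finetuned_wer: float):
    """Return (absolute, relative) WER improvement.

    absolute is in percentage points; relative is a fraction of the baseline.
    """
    absolute = baseline_wer - finetuned_wer
    relative = absolute / baseline_wer
    return absolute, relative

# Baseline implied by the abstract's native-speech figures:
# 14.70 absolute at 46.08% relative -> baseline ~31.9% WER, fine-tuned ~17.2%
implied_baseline = 14.70 / 0.4608
absolute, relative = wer_improvements(implied_baseline, implied_baseline - 14.70)
print(round(absolute, 2), round(relative, 4))  # -> 14.7 0.4608
```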
Related papers
- Evaluation of state-of-the-art ASR Models in Child-Adult Interactions [27.30130353688078]
Speech foundation models show a noticeable performance drop (15-20% absolute WER) for child speech compared to adult speech in the conversational setting.
We employ LoRA on the best-performing zero-shot model (whisper-large) to probe the effectiveness of fine-tuning in a low-resource setting.
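LoRA's core idea can be sketched independently of Whisper: freeze the pretrained weight and train only a low-rank additive update. The NumPy toy below is an illustrative sketch, not the paper's setup; the dimensions, rank, scaling, and zero-initialized up-projection follow standard LoRA conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 16, 4, 8               # hidden size, LoRA rank, scaling factor

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus low-rank update; because B = 0 at initialization,
    # the adapted layer reproduces the pretrained layer exactly.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d))
assert np.allclose(lora_forward(x), x @ W.T)  # no drift before training
```

Only A and B (2 * r * d parameters) are updated during fine-tuning, which is what makes the approach attractive in a low-resource child-speech setting.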
arXiv Detail & Related papers (2024-09-24T14:42:37Z)
- Towards Unsupervised Speech Recognition Without Pronunciation Models [57.222729245842054]
Most languages lack sufficient paired speech and text data to effectively train automatic speech recognition systems.
We propose the removal of reliance on a phoneme lexicon to develop unsupervised ASR systems.
We experimentally demonstrate that an unsupervised speech recognizer can emerge from joint speech-to-speech and text-to-text masked token-infilling.
arXiv Detail & Related papers (2024-06-12T16:30:58Z)
- Exploring Speech Foundation Models for Speaker Diarization in Child-Adult Dyadic Interactions [28.5211771482547]
We show that exemplary speech foundation models can achieve 39.5% and 62.3% relative reductions in Diarization Error Rate and Speaker Confusion Rate.
Our results highlight promising pathways for understanding and adopting speech foundation models to facilitate child speech understanding.
arXiv Detail & Related papers (2024-06-12T05:41:01Z)
- Child Speech Recognition in Human-Robot Interaction: Problem Solved? [0.024739484546803334]
We revisit a 2017 study on child speech recognition and show that performance has indeed increased.
The newcomer, OpenAI Whisper, does markedly better than the leading commercial cloud services.
While transcription is not yet perfect, the best model recognises 60.3% of sentences correctly, barring small grammatical differences.
arXiv Detail & Related papers (2024-04-26T13:14:28Z)
- Transfer Learning for Robust Low-Resource Children's Speech ASR with Transformers and Source-Filter Warping [11.584388304271029]
We propose a data augmentation technique based on the source-filter model of speech to close the domain gap between adult and children's speech.
Using this augmentation strategy, we apply transfer learning on a Transformer model pre-trained on adult data.
This model follows the recently introduced XLS-R architecture, a wav2vec 2.0 model pre-trained on several cross-lingual adult speech corpora.
arXiv Detail & Related papers (2022-06-19T12:57:47Z)
- Self-Supervised Speech Representation Learning: A Review [105.1545308184483]
Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains.
Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods.
This review presents approaches for self-supervised speech representation learning and their connection to other research areas.
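Of the three categories named above, the contrastive family (exemplified by wav2vec 2.0) trains a model to identify the true target among distractors. A minimal InfoNCE-style loss in NumPy, purely illustrative (the function name, candidate layout, and temperature value are our own choices, not from the review):

```python
import numpy as np

def info_nce(anchor, candidates, pos_index=0, temperature=0.1):
    """Contrastive loss: low when `anchor` is most similar to the positive candidate."""
    norms = np.linalg.norm(candidates, axis=1) * np.linalg.norm(anchor)
    sims = candidates @ anchor / (norms + 1e-9)   # cosine similarities
    logits = sims / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_index])

anchor = np.array([1.0, 0.0])
good = np.array([[0.9, 0.1], [-1.0, 0.0], [0.0, 1.0]])  # positive (index 0) close
bad = np.array([[0.0, 1.0], [-1.0, 0.0], [0.9, 0.1]])   # positive (index 0) far
assert info_nce(anchor, good) < info_nce(anchor, bad)
```

Generative methods instead reconstruct masked or future input, and predictive methods regress discrete or continuous targets; only the training objective differs.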
arXiv Detail & Related papers (2022-05-21T16:52:57Z)
- Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis [67.73554826428762]
We propose a novel audio-visual speech enhancement framework for high-fidelity telecommunications in AR/VR.
Our approach leverages audio-visual speech cues to generate the codes of a neural speech codec, enabling efficient synthesis of clean, realistic speech from noisy signals.
arXiv Detail & Related papers (2022-03-31T17:57:10Z)
- Speaker Identity Preservation in Dysarthric Speech Reconstruction by Adversarial Speaker Adaptation [59.41186714127256]
Dysarthric speech reconstruction (DSR) aims to improve the quality of dysarthric speech.
A speaker encoder (SE) optimized for speaker verification has been explored to control speaker identity.
We propose a novel multi-task learning strategy, i.e., adversarial speaker adaptation (ASA).
arXiv Detail & Related papers (2022-02-18T08:59:36Z)
- Recent Progress in the CUHK Dysarthric Speech Recognition System [66.69024814159447]
Disordered speech presents a wide spectrum of challenges to current data-intensive deep neural network (DNN) based automatic speech recognition technologies.
This paper presents recent research efforts at the Chinese University of Hong Kong to improve the performance of disordered speech recognition systems.
arXiv Detail & Related papers (2022-01-15T13:02:40Z)
- High Fidelity Speech Regeneration with Application to Speech Enhancement [96.34618212590301]
We propose a wav-to-wav generative model for speech that can generate 24 kHz speech in real time.
Inspired by voice conversion methods, we train to augment the speech characteristics while preserving the identity of the source.
arXiv Detail & Related papers (2021-01-31T10:54:27Z)
- Learning to Understand Child-directed and Adult-directed Speech [18.29692441616062]
Human language acquisition research indicates that child-directed speech helps language learners.
We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS).
We find indications that CDS helps in the initial stages of learning, but eventually, models trained on ADS reach comparable task performance, and generalize better.
arXiv Detail & Related papers (2020-05-06T10:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.