Spectro-Temporal Deep Features for Disordered Speech Assessment and
Recognition
- URL: http://arxiv.org/abs/2201.05554v1
- Date: Fri, 14 Jan 2022 16:56:43 GMT
- Title: Spectro-Temporal Deep Features for Disordered Speech Assessment and
Recognition
- Authors: Mengzhe Geng, Shansong Liu, Jianwei Yu, Xurong Xie, Shoukang Hu, Zi
Ye, Zengrui Jin, Xunying Liu, Helen Meng
- Abstract summary: Motivated by the spectro-temporal level differences between disordered and normal speech that systematically manifest in articulatory imprecision, decreased volume and clarity, slower speaking rates and increased dysfluencies, novel spectro-temporal subspace basis embedding deep features derived by SVD decomposition of speech spectrum are proposed.
Experiments conducted on the UASpeech corpus suggest the proposed spectro-temporal deep feature adapted systems consistently outperformed baseline i-Vector adaptation by up to 2.63% absolute (8.6% relative) reduction in word error rate (WER) with or without data augmentation.
- Score: 65.25325641528701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic recognition of disordered speech remains a highly challenging task
to date. Sources of variability commonly found in normal speech including
accent, age or gender, when further compounded with the underlying causes of
speech impairment and varying severity levels, create large diversity among
speakers. To this end, speaker adaptation techniques play a vital role in
current speech recognition systems. Motivated by the spectro-temporal level
differences between disordered and normal speech that systematically manifest
in articulatory imprecision, decreased volume and clarity, slower speaking
rates and increased dysfluencies, novel spectro-temporal subspace basis
embedding deep features derived by SVD decomposition of speech spectrum are
proposed to facilitate both accurate speech intelligibility assessment and
auxiliary feature based speaker adaptation of state-of-the-art hybrid DNN and
end-to-end disordered speech recognition systems. Experiments conducted on the
UASpeech corpus suggest the proposed spectro-temporal deep feature adapted
systems consistently outperformed baseline i-Vector adaptation by up to 2.63%
absolute (8.6% relative) reduction in word error rate (WER) with or without
data augmentation. Learning hidden unit contribution (LHUC) based speaker
adaptation was further applied. The final speaker adapted system using the
proposed spectral basis embedding features gave an overall WER of 25.6% on the
UASpeech test set of 16 dysarthric speakers.
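To make the core idea concrete, below is a minimal sketch in Python (NumPy/SciPy) of deriving spectro-temporal subspace bases via SVD of an utterance-level magnitude spectrogram, in the spirit of the spectral basis embedding features described above. The window configuration, the number of retained bases k, and the idea of feeding the bases into a separate embedding network are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (not the authors' code): spectro-temporal subspace bases via SVD.
# Window length, hop size and k are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def spectro_temporal_bases(waveform: np.ndarray, fs: int = 16000, k: int = 4):
    """Return the top-k spectral and temporal SVD bases of one utterance."""
    # Magnitude spectrogram: rows index frequency bins, columns index frames.
    _, _, Zxx = stft(waveform, fs=fs, nperseg=400, noverlap=240)  # ~25 ms / 10 ms
    S = np.abs(Zxx)

    # SVD: S = U diag(s) V^T. Columns of U span the spectral subspace of the
    # utterance; columns of V span its temporal subspace.
    U, s, Vt = np.linalg.svd(S, full_matrices=False)

    spectral_basis = U[:, :k]      # (n_freq, k): input to a spectral basis embedding net
    temporal_basis = Vt[:k, :].T   # (n_frames, k): input to a temporal basis embedding net
    return spectral_basis, temporal_basis, s[:k]

# Example with a dummy 1-second waveform; in the paper such bases are passed
# through a neural embedding network before serving as auxiliary adaptation features.
x = np.random.randn(16000).astype(np.float32)
spec_b, temp_b, sv = spectro_temporal_bases(x)
print(spec_b.shape, temp_b.shape, sv.shape)
```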
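The abstract also applies LHUC speaker adaptation on top of the proposed features. The sketch below illustrates the standard LHUC formulation (a per-speaker amplitude vector r = 2 * sigmoid(theta) rescaling hidden activations, with only theta updated on the target speaker's data); the layer size and initialization are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal LHUC sketch (an assumption, not the paper's implementation):
# speaker-dependent rescaling of a frozen layer's hidden activations.
import numpy as np

def lhuc_scale(hidden: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Apply LHUC scaling to hidden activations of shape [batch, dim]."""
    r = 2.0 / (1.0 + np.exp(-theta))   # element-wise amplitude in (0, 2)
    return hidden * r                  # broadcast over the batch dimension

# Example: one speaker-dependent theta vector per adapted hidden layer.
h = np.random.randn(8, 256)            # hypothetical hidden activations
theta = np.zeros(256)                   # initialised so r = 1 (identity scaling)
print(lhuc_scale(h, theta).shape)
```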
Related papers
- Homogeneous Speaker Features for On-the-Fly Dysarthric and Elderly Speaker Adaptation [71.31331402404662]
This paper proposes two novel data-efficient methods to learn dysarthric and elderly speaker-level features.
Speaker-regularized spectral basis embedding (SBE) features exploit a special regularization term to enforce homogeneity of speaker features during adaptation.
Feature-based learning hidden unit contributions (f-LHUC) conditioned on these regularized SBE features are shown to be insensitive to speaker-level data quantity in test-time adaptation.
arXiv Detail & Related papers (2024-07-08T18:20:24Z) - Use of Speech Impairment Severity for Dysarthric Speech Recognition [37.93801885333925]
This paper proposes a novel set of techniques to use both severity and speaker-identity in dysarthric speech recognition.
Experiments conducted on UASpeech suggest that incorporating speech impairment severity benefits state-of-the-art hybrid DNN, E2E Conformer and pre-trained Wav2vec 2.0 ASR systems.
arXiv Detail & Related papers (2023-05-18T02:42:59Z) - On-the-Fly Feature Based Rapid Speaker Adaptation for Dysarthric and
Elderly Speech Recognition [53.17176024917725]
Scarcity of speaker-level data limits the practical use of data-intensive, model-based speaker adaptation methods.
This paper proposes two novel forms of data-efficient, feature-based on-the-fly speaker adaptation methods.
arXiv Detail & Related papers (2022-03-28T09:12:24Z) - Speaker Adaptation Using Spectro-Temporal Deep Features for Dysarthric
and Elderly Speech Recognition [48.33873602050463]
Speaker adaptation techniques play a key role in personalization of ASR systems for such users.
Motivated by the spectro-temporal level differences between dysarthric, elderly and normal speech, novel spectro-temporal subspace basis deep embedding features are derived using SVD decomposition of the speech spectrum.
arXiv Detail & Related papers (2022-02-21T15:11:36Z) - Recent Progress in the CUHK Dysarthric Speech Recognition System [66.69024814159447]
Disordered speech presents a wide spectrum of challenges to current data-intensive deep neural network (DNN) based automatic speech recognition technologies.
This paper presents recent research efforts at the Chinese University of Hong Kong to improve the performance of disordered speech recognition systems.
arXiv Detail & Related papers (2022-01-15T13:02:40Z) - Investigation of Data Augmentation Techniques for Disordered Speech
Recognition [69.50670302435174]
This paper investigates a set of data augmentation techniques for disordered speech recognition.
Both normal and disordered speech were exploited in the augmentation process.
The final speaker adapted system constructed using the UASpeech corpus and the best augmentation approach based on speed perturbation produced up to 2.92% absolute word error rate (WER) reduction.
arXiv Detail & Related papers (2022-01-14T17:09:22Z)