Low-resource Accent Classification in Geographically-proximate Settings:
A Forensic and Sociophonetics Perspective
- URL: http://arxiv.org/abs/2206.12759v2
- Date: Wed, 29 Jun 2022 03:11:05 GMT
- Title: Low-resource Accent Classification in Geographically-proximate Settings:
A Forensic and Sociophonetics Perspective
- Authors: Qingcheng Zeng, Dading Chong, Peilin Zhou, Jie Yang
- Abstract summary: Accented speech recognition and accent classification are relatively under-explored research areas in speech technology.
Recent deep learning-based methods and Transformer-based pretrained models have achieved superb performance in both areas.
In this paper, we explored three main accent modelling methods combined with two different classifiers, based on recordings of 105 speakers from five urban varieties in Northern England.
- Score: 8.002498051045228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accented speech recognition and accent classification are relatively under-explored research areas in speech technology. Recently, deep learning-based methods and Transformer-based pretrained models have achieved superb performance in both areas. However, most accent classification work has focused on distinguishing broad varieties of English, and little attention has been paid to geographically-proximate accent classification, especially under the low-resource conditions that forensic speech science tasks usually encounter. In this paper, we explored three main accent modelling methods combined with two different classifiers, based on recordings of 105 speakers from five urban varieties in Northern England. Although speech representations generated from pretrained models generally perform better in downstream classification, traditional methods such as Mel Frequency Cepstral Coefficients (MFCCs) and formant measurements have specific strengths. These results suggest that in forensic phonetics scenarios where data are relatively scarce, a simple modelling method and classifier can be competitive with state-of-the-art pretrained speech models used as feature extractors, enabling faster estimation of accent information in practice. In addition, our findings cross-validate a new methodology for quantifying sociophonetic change.
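For concreteness, here is a minimal sketch of the kind of low-resource pipeline the abstract describes: mean-pooled MFCC features fed to a simple classifier. The synthetic waveforms, the five-way labels, and the SVM choice are placeholders, not the paper's exact setup.

```python
import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(y, sr=16000, n_mfcc=13):
    """Mean-pooled MFCCs as a fixed-length utterance representation."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # shape: (n_mfcc,)

# Synthetic stand-ins for the 105 speaker recordings and their five
# urban-variety labels; real audio would be loaded with librosa.load.
rng = np.random.default_rng(0)
waves = [rng.standard_normal(16000).astype(np.float32) for _ in range(105)]
labels = rng.integers(0, 5, size=105)

X = np.stack([mfcc_features(y) for y in waves])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=5).mean())
```

Pretrained representations would slot into the same pipeline by swapping mfcc_features for, e.g., pooled wav2vec 2.0 activations.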
Related papers
- Knowledge Distillation for Real-Time Classification of Early Media in Voice Communications [0.13124513975412253]
We propose a novel approach based on gradient-boosted trees that meets low-resource requirements.
We show that leveraging knowledge distillation and class aggregation techniques to train a simpler and smaller model accelerates the classification of early media in voice calls.
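As a rough illustration of the idea (not the authors' implementation), a small gradient-boosted tree can be trained to mimic a larger teacher's predictions over aggregated classes; the teacher and the class-aggregation map below are trivial stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))          # e.g. per-frame audio features
fine_labels = rng.integers(0, 8, size=500)  # fine-grained media classes

# Class aggregation: collapse 8 fine classes into 3 coarse ones.
aggregate = np.array([0, 0, 1, 1, 1, 2, 2, 2])

def teacher_predict(X):
    """Stand-in for a large teacher model's fine-grained predictions."""
    return fine_labels  # pretend the teacher is perfect here

student = GradientBoostingClassifier(n_estimators=50, max_depth=3)
student.fit(X, aggregate[teacher_predict(X)])  # hard-label distillation
```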
arXiv Detail & Related papers (2024-10-28T19:32:17Z) - GE2E-AC: Generalized End-to-End Loss Training for Accent Classification [13.266765406714942]
We propose GE2E-AC, in which we train a model to extract an accent embedding (AE) of an input utterance.
We experimentally show the effectiveness of the proposed GE2E-AC, compared to the baseline model trained with the conventional cross-entropy-based loss.
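A minimal PyTorch sketch of a GE2E-style loss over accent embeddings, following the generalized end-to-end formulation of Wan et al. (2018); the shapes and scale/bias values are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def ge2e_loss(emb, w, b):
    """emb: (n_accents, n_utts, dim) L2-normalised accent embeddings."""
    n_acc, n_utt, dim = emb.shape
    centroids = emb.mean(dim=1)                       # (n_acc, dim)
    # Exclusive centroids: leave the utterance itself out of its own centroid.
    excl = (emb.sum(dim=1, keepdim=True) - emb) / (n_utt - 1)
    # Cosine-like similarities (centroids not re-normalised, for brevity).
    sim = w * emb @ centroids.t() + b                 # (n_acc, n_utt, n_acc)
    own = w * (emb * excl).sum(-1) + b                # (n_acc, n_utt)
    idx = torch.arange(n_acc)
    sim[idx, :, idx] = own
    labels = idx.repeat_interleave(n_utt)
    return F.cross_entropy(sim.reshape(-1, n_acc), labels)

emb = F.normalize(torch.randn(5, 8, 256), dim=-1)     # 5 accents, 8 utts each
w, b = torch.tensor(10.0), torch.tensor(-5.0)
print(ge2e_loss(emb, w, b))
```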
arXiv Detail & Related papers (2024-07-19T04:44:16Z) - Self-supervised models of audio effectively explain human cortical
responses to speech [71.57870452667369]
We capitalize on the progress of self-supervised speech representation learning to create new state-of-the-art models of the human auditory system.
These results show that self-supervised models effectively capture the hierarchy of information relevant to different stages of speech processing in the human cortex.
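A hedged sketch of the standard encoding-model analysis implied here: ridge regression mapping self-supervised speech features to per-voxel responses. Both the feature and response matrices below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 768))   # e.g. wav2vec-style layer activations
Y = X @ rng.standard_normal((768, 50)) * 0.1 + rng.standard_normal((1000, 50))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(0, 4, 9)).fit(X_tr, Y_tr)

# Per-voxel prediction correlation, the usual evaluation metric.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(np.mean(r))
```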
arXiv Detail & Related papers (2022-05-27T22:04:02Z) - Self-Supervised Speech Representation Learning: A Review [105.1545308184483]
Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains.
Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods.
This review presents approaches for self-supervised speech representation learning and their connection to other research areas.
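As a small illustration of the contrastive family named above (an InfoNCE-style objective, not tied to any specific paper): anchors and positives would come from two views or segments of the same audio; here they are synthetic.

```python
import torch
import torch.nn.functional as F

anchor = F.normalize(torch.randn(32, 256), dim=-1)    # encoded frames
positive = F.normalize(anchor + 0.1 * torch.randn(32, 256), dim=-1)

logits = anchor @ positive.t() / 0.07                 # temperature-scaled
loss = F.cross_entropy(logits, torch.arange(32))      # i-th pairs match
print(loss)
```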
arXiv Detail & Related papers (2022-05-21T16:52:57Z) - A Highly Adaptive Acoustic Model for Accurate Multi-Dialect Speech
Recognition [80.87085897419982]
We propose a novel acoustic modeling technique for accurate multi-dialect speech recognition with a single AM.
Our proposed AM is dynamically adapted based on both dialect information and its internal representation, which results in a highly adaptive AM for handling multiple dialects simultaneously.
The experimental results on large-scale speech datasets show that the proposed AM outperforms all previous ones, reducing word error rates (WERs) by 8.11% relative to a single all-dialects AM and by 7.31% relative to dialect-specific AMs.
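One way to condition a layer on dialect information is a FiLM-style scale/shift modulation, sketched below; this names the general mechanism only, and the paper's exact adaptation scheme may differ.

```python
import torch

class DialectAdaptiveLayer(torch.nn.Module):
    """Hidden features modulated by a learned dialect embedding."""
    def __init__(self, dim, n_dialects):
        super().__init__()
        self.base = torch.nn.Linear(dim, dim)
        self.dialect_emb = torch.nn.Embedding(n_dialects, dim)
        self.to_scale = torch.nn.Linear(dim, dim)
        self.to_shift = torch.nn.Linear(dim, dim)

    def forward(self, h, dialect_id):
        d = self.dialect_emb(dialect_id)              # (batch, dim)
        return self.base(h) * (1 + self.to_scale(d)) + self.to_shift(d)

layer = DialectAdaptiveLayer(dim=80, n_dialects=4)
h = torch.randn(2, 80)                                # acoustic features
print(layer(h, torch.tensor([0, 3])).shape)
```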
arXiv Detail & Related papers (2022-05-06T06:07:09Z) - Quantifying Language Variation Acoustically with Few Resources [4.162663632560141]
Deep acoustic models might have learned linguistic information that transfers to low-resource languages.
We compute pairwise pronunciation differences averaged over 10 words for over 100 individual dialects from four (regional) languages.
Our results show that acoustic models outperform the (traditional) transcription-based approach without requiring phonetic transcriptions.
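A rough sketch of the acoustic approach: represent each word recording by a pooled deep-feature vector and average per-word distances between two dialects over a shared word list. The feature vectors here are random placeholders for real model activations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim = 10, 512
dialect_a = rng.standard_normal((n_words, dim))  # one vector per word
dialect_b = rng.standard_normal((n_words, dim))

def cosine_dist(u, v):
    return 1 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Pronunciation difference: mean per-word distance across the word list.
diff = np.mean([cosine_dist(a, b) for a, b in zip(dialect_a, dialect_b)])
print(diff)
```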
arXiv Detail & Related papers (2022-05-05T15:00:56Z) - Distant finetuning with discourse relations for stance classification [55.131676584455306]
We propose a new method to extract data with silver labels from raw text to finetune a model for stance classification.
We also propose a 3-stage training framework where the noisy level in the data used for finetuning decreases over different stages.
Our approach ranks 1st among 26 competing teams in the stance classification track of the NLPCC 2021 shared task Argumentative Text Understanding for AI Debater.
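A schematic of the noisy-to-clean staging idea, using scikit-learn's SGDClassifier and synthetic labels as a stand-in for the paper's transformer finetuning; in practice each stage would use a different dataset (a large silver corpus first, gold labels last), while one feature matrix is reused here for brevity.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y_gold = (X[:, 0] > 0).astype(int)

def noisy(y, rate):
    """Flip a fraction of labels to simulate silver-label noise."""
    flip = rng.random(len(y)) < rate
    return np.where(flip, 1 - y, y)

clf = SGDClassifier(loss="log_loss", random_state=0)
# Stage 1: noisiest silver labels; stage 3: clean gold labels.
for noise_rate in (0.3, 0.1, 0.0):
    clf.partial_fit(X, noisy(y_gold, noise_rate), classes=[0, 1])
```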
arXiv Detail & Related papers (2022-04-27T04:24:35Z) - MetaAudio: A Few-Shot Audio Classification Benchmark [2.294014185517203]
This work aims to alleviate few-shot learning's reliance on image-based benchmarks by offering the first comprehensive, public and fully reproducible audio-based alternative.
We compare the few-shot classification performance of a variety of techniques on seven audio datasets.
Our experimentation shows gradient-based meta-learning methods such as MAML and Meta-Curvature consistently outperform both metric and baseline methods.
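For reference, a compact sketch of the MAML inner/outer loop that such gradient-based meta-learners share; the toy linear model and task sampler below are placeholders, not the benchmark's actual backbones.

```python
import torch

model = torch.nn.Linear(16, 5)                 # toy few-shot classifier
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def sample_task():
    x = torch.randn(10, 16)
    y = torch.randint(0, 5, (10,))
    return (x[:5], y[:5]), (x[5:], y[5:])      # support / query split

for step in range(100):
    (xs, ys), (xq, yq) = sample_task()
    # Inner step: adapt a copy of the weights on the support set.
    fast = {n: p.clone() for n, p in model.named_parameters()}
    loss = loss_fn(xs @ fast["weight"].t() + fast["bias"], ys)
    grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
    fast = {n: p - 0.1 * g for (n, p), g in zip(fast.items(), grads)}
    # Outer step: evaluate the adapted weights on the query set.
    meta_loss = loss_fn(xq @ fast["weight"].t() + fast["bias"], yq)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```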
arXiv Detail & Related papers (2022-04-05T11:33:44Z) - Multi-Modal Pre-Training for Automated Speech Recognition [11.451227239633553]
We introduce a self-supervised learning technique based on masked language modeling to compute a global, multi-modal encoding of the environment in which the utterance occurs.
The resulting method can outperform baseline methods by up to 7% on LibriSpeech.
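A toy masked-prediction objective of the general kind described (a single token stream for brevity; the paper's setup is multi-modal and operates on the surrounding environment):

```python
import torch

vocab, dim = 100, 64
emb = torch.nn.Embedding(vocab + 1, dim)       # last index acts as [MASK]
enc = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2)
head = torch.nn.Linear(dim, vocab)
opt = torch.optim.Adam(
    [*emb.parameters(), *enc.parameters(), *head.parameters()])

tokens = torch.randint(0, vocab, (8, 20))      # 8 sequences of 20 tokens
mask = torch.rand(8, 20) < 0.15                # 15% of positions masked
inp = tokens.masked_fill(mask, vocab)          # replace with [MASK] id

logits = head(enc(emb(inp)))                   # (8, 20, vocab)
loss = torch.nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
opt.step()
```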
arXiv Detail & Related papers (2021-10-12T17:07:25Z) - A Review of Sound Source Localization with Deep Learning Methods [71.18444724397486]
This article is a review on deep learning methods for single and multiple sound source localization.
We provide an exhaustive topography of the neural-based localization literature in this context.
Tables summarizing the literature review are provided at the end of the review for a quick search of methods with a given set of target characteristics.
arXiv Detail & Related papers (2021-09-08T07:25:39Z) - Region Comparison Network for Interpretable Few-shot Image
Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
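A generic metric-based few-shot classification step (prototype comparison) for orientation; RCN's region-level comparison mechanism is more elaborate, and this only illustrates the method family.

```python
import torch

support = torch.randn(5, 3, 64)        # 5 classes, 3 shots, feature dim 64
query = torch.randn(10, 64)            # 10 query embeddings

prototypes = support.mean(dim=1)       # class prototypes, (5, 64)
dists = torch.cdist(query, prototypes) # (10, 5) query-to-class distances
pred = dists.argmin(dim=1)             # nearest-prototype prediction
print(pred)
```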
arXiv Detail & Related papers (2020-09-08T07:29:05Z)