Adversarial synthesis based data-augmentation for code-switched spoken
language identification
- URL: http://arxiv.org/abs/2205.15747v2
- Date: Wed, 1 Jun 2022 18:17:51 GMT
- Title: Adversarial synthesis based data-augmentation for code-switched spoken
language identification
- Authors: Parth Shastri, Chirag Patil, Poorval Wanere, Dr. Shrinivas Mahajan,
Dr. Abhishek Bhatt, Dr. Hardik Sailor
- Abstract summary: Spoken Language Identification (LID) is an important sub-task of Automatic Speech Recognition (ASR)
This study focuses on Indic language code-mixed with English.
Generative Adversarial Network (GAN) based data augmentation technique performed using Mel spectrograms for audio data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Spoken Language Identification (LID) is an important sub-task of Automatic
Speech Recognition (ASR) that is used to classify the language(s) in an audio
segment. Automatic LID plays a useful role in multilingual countries. In
many countries, identifying a language is hard because of multilingual
scenarios in which two or more languages are mixed together during
conversation. This phenomenon of speech is called code-mixing or
code-switching, and it occurs not only in India but also in many other
Asian countries. Such code-mixed data is hard to find, which further limits
the capabilities of spoken LID. Hence, this work primarily addresses the
data scarcity of the code-switched class using data augmentation. This study
focuses on Indic languages code-mixed with English; spoken LID is performed on
Hindi code-mixed with English. This research proposes a Generative Adversarial
Network (GAN) based data augmentation technique operating on Mel spectrograms
of the audio data. GANs have already been proven to represent the real data
distribution accurately in the image domain. The proposed research exploits these
capabilities of GANs in speech domains such as speech classification, automatic
speech recognition, etc. GANs are trained to generate Mel spectrograms of the
minority code-mixed class, which are then used to augment the data for the
classifier. Utilizing GANs gives an overall improvement of 3.5% in Unweighted
Average Recall compared to a Convolutional Recurrent Neural Network (CRNN)
classifier used as the baseline reference.
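As a rough illustration of the augmentation idea described in the abstract (not the paper's actual implementation, which would use a convolutional GAN on full Mel-spectrogram images), the sketch below trains a toy GAN, a linear generator against a logistic-regression discriminator, on stand-in "Mel frame" vectors for the minority code-mixed class, then mixes generated samples into the training pool. All dimensions, learning rates, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for minority-class Mel features: 8-dim vectors.
# A real pipeline would use e.g. 128 Mel bins x T frames per utterance.
DIM, ZDIM = 8, 4
real = rng.normal(loc=2.0, scale=0.5, size=(256, DIM))

# Generator: linear map z -> "frame". Discriminator: logistic regression.
Wg = rng.normal(scale=0.1, size=(DIM, ZDIM))
bg = np.zeros(DIM)
wd = rng.normal(scale=0.1, size=DIM)
bd = 0.0

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def generate(z):
    return z @ Wg.T + bg

lr = 0.05
for _ in range(500):
    xr = real[rng.integers(0, len(real), 32)]   # real minibatch
    z = rng.normal(size=(32, ZDIM))
    xf = generate(z)                            # fake minibatch

    # Discriminator step: ascend log D(xr) + log(1 - D(xf)).
    gr = sigmoid(xr @ wd + bd) - 1.0            # dLoss/dscore, real
    gf = sigmoid(xf @ wd + bd)                  # dLoss/dscore, fake
    wd -= lr * ((gr[:, None] * xr).mean(axis=0) + (gf[:, None] * xf).mean(axis=0))
    bd -= lr * (gr.mean() + gf.mean())

    # Generator step: non-saturating loss, descend -log D(G(z)).
    gg = sigmoid(generate(z) @ wd + bd) - 1.0
    Wg -= lr * np.outer(wd, (gg[:, None] * z).mean(axis=0))
    bg -= lr * gg.mean() * wd

# Augment the minority class with generated samples for classifier training.
synthetic = generate(rng.normal(size=(128, ZDIM)))
augmented = np.concatenate([real, synthetic])
```

In the paper's setting, the augmented Mel-spectrogram pool would then be used to train the CRNN classifier, balancing the scarce code-switched class against the majority monolingual classes.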
Related papers
- Multilingual self-supervised speech representations improve the speech
recognition of low-resource African languages with codeswitching [65.74653592668743]
Finetuning self-supervised multilingual representations reduces absolute word error rates by up to 20%.
In circumstances with limited training data finetuning self-supervised representations is a better performing and viable solution.
arXiv Detail & Related papers (2023-11-25T17:05:21Z)
- Speech collage: code-switched audio generation by collaging monolingual
corpora [50.356820349870986]
Speech Collage is a method that synthesizes CS data from monolingual corpora by splicing audio segments.
We investigate the impact of generated data on speech recognition in two scenarios.
arXiv Detail & Related papers (2023-09-27T14:17:53Z)
- Code-Switched Urdu ASR for Noisy Telephonic Environment using Data
Centric Approach with Hybrid HMM and CNN-TDNN [0.0]
Urdu is the 10th most widely spoken language in the world, with 231,295,440 speakers worldwide, yet it remains a resource-constrained language in ASR.
This paper describes an implementation framework of a resource efficient Automatic Speech Recognition/ Speech to Text System in a noisy call-center environment.
arXiv Detail & Related papers (2023-07-24T13:04:21Z)
- Language-agnostic Code-Switching in Sequence-To-Sequence Speech
Recognition [62.997667081978825]
Code-Switching (CS) refers to the phenomenon of alternately using words and phrases from different languages.
We propose a simple yet effective data augmentation in which audio and corresponding labels of different source languages are concatenated.
We show that this augmentation can even improve the model's performance by 5.03% WER on inter-sentential language switches not seen during training.
arXiv Detail & Related papers (2022-10-17T12:15:57Z)
- LAE: Language-Aware Encoder for Monolingual and Multilingual ASR [87.74794847245536]
A novel language-aware encoder (LAE) architecture is proposed to handle both situations by disentangling language-specific information.
Experiments conducted on Mandarin-English code-switched speech suggest that the proposed LAE is capable of discriminating different languages at the frame level.
arXiv Detail & Related papers (2022-06-05T04:03:12Z)
- Reducing language context confusion for end-to-end code-switching
automatic speech recognition [50.89821865949395]
We propose a language-related attention mechanism to reduce multilingual context confusion for the E2E code-switching ASR model.
By calculating the respective attention of multiple languages, our method can efficiently transfer language knowledge from rich monolingual data.
arXiv Detail & Related papers (2022-01-28T14:39:29Z)
- Transferring Knowledge Distillation for Multilingual Social Event
Detection [42.663309895263666]
Recently published graph neural networks (GNNs) show promising performance on social event detection tasks.
We present a GNN that incorporates cross-lingual word embeddings for detecting events in multilingual data streams.
Experiments on both synthetic and real-world datasets show the framework to be highly effective at detection in both multilingual data and in languages where training samples are scarce.
arXiv Detail & Related papers (2021-08-06T12:38:42Z)
- Exploiting Spectral Augmentation for Code-Switched Spoken Language
Identification [2.064612766965483]
We perform spoken LID on three Indian languages code-mixed with English.
This task was organized by the Microsoft research team as a spoken LID challenge.
arXiv Detail & Related papers (2020-10-14T14:37:03Z)
- kk2018 at SemEval-2020 Task 9: Adversarial Training for Code-Mixing
Sentiment Classification [18.41476971318978]
Code switching is a linguistic phenomenon that may occur within a multilingual setting where speakers share more than one language.
In this work, the domain transfer learning from state-of-the-art uni-language model ERNIE is tested on the code-mixing dataset.
Adversarial training with a multilingual model is used to achieve 1st place in the SemEval-2020 Task 9 Hindi-English sentiment classification competition.
arXiv Detail & Related papers (2020-09-08T12:20:04Z)
- Rnn-transducer with language bias for end-to-end Mandarin-English
code-switching speech recognition [58.105818353866354]
We propose an improved recurrent neural network transducer (RNN-T) model with language bias to alleviate the problem.
We use the language identities to bias the model to predict the CS points.
This promotes the model to learn the language identity information directly from transcription, and no additional LID model is needed.
arXiv Detail & Related papers (2020-02-19T12:01:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.