Towards Code-switched Classification Exploiting Constituent Language Resources
- URL: http://arxiv.org/abs/2011.01913v1
- Date: Tue, 3 Nov 2020 18:43:19 GMT
- Title: Towards Code-switched Classification Exploiting Constituent Language Resources
- Authors: Tanvi Dadu and Kartikey Pant
- Abstract summary: We convert code-switched data into its constituent languages to exploit both monolingual and cross-lingual settings.
We perform experiments on two downstream tasks, sarcasm detection and hate speech detection, in the English-Hindi code-switched setting.
- Score: 3.655021726150369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Code-switching is a commonly observed communicative phenomenon denoting a shift from one language to another within the same speech exchange. The analysis of code-switched data is often an arduous task owing to the limited availability of data. In this work, we propose converting code-switched data into its constituent high-resource languages to exploit both monolingual and cross-lingual settings. This conversion allows us to leverage the greater resource availability of the constituent languages for multiple downstream tasks.
We perform experiments on two downstream tasks, sarcasm detection and hate speech detection, in the English-Hindi code-switched setting. These experiments show increases of 22% and 42.5% in F1-score for sarcasm detection and hate speech detection, respectively, compared to the state-of-the-art.
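The abstract describes the conversion-based approach only at a high level. As an illustration (not the authors' implementation), the hypothetical sketch below shows one way such a two-step pipeline could be wired up with Hugging Face transformers: translate English-Hindi code-switched text into one constituent language, then apply a monolingual classifier. The model names and the helper function `classify_code_switched` are placeholder assumptions, not taken from the paper.

```python
# Hypothetical sketch of a "convert, then classify" pipeline for
# code-switched text. Model names are placeholders, not from the paper.
from transformers import pipeline

# Step 1: translate code-switched input into a constituent high-resource
# language (English here). Note: this OPUS-MT model expects Devanagari Hindi;
# Romanized Hinglish would first need language identification/transliteration.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hi-en")

# Step 2: classify the translated text with a monolingual English model.
# A generic sentiment classifier stands in for the paper's sarcasm/hate-speech
# classification heads.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def classify_code_switched(text: str) -> dict:
    """Translate code-switched text into English, then classify it."""
    english_text = translator(text)[0]["translation_text"]
    return classifier(english_text)[0]

if __name__ == "__main__":
    print(classify_code_switched("यह movie तो बहुत boring थी, great job guys"))
```

In the cross-lingual setting described in the abstract, the translation step could instead target the other constituent language (or be skipped entirely in favor of a multilingual encoder); the sketch above only illustrates the monolingual route.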
Related papers
- Code-switching in text and speech reveals information-theoretic audience design [5.3329709073809095]
We use language modeling to investigate the factors that influence code-switching.
Code-switching occurs when a speaker alternates between one language variety (the primary language) and another (the secondary language).
arXiv Detail & Related papers (2024-08-08T17:14:12Z)
- CoVoSwitch: Machine Translation of Synthetic Code-Switched Text Based on Intonation Units [0.0]
We synthesize code-switching data by replacing intonation units detected through PSST.
We evaluate the code-switching translation performance of two multilingual translation models, M2M-100 418M and NLLB-200 600M.
arXiv Detail & Related papers (2024-07-19T13:26:35Z)
- Zero Resource Code-switched Speech Benchmark Using Speech Utterance Pairs For Multiple Spoken Languages [49.6922490267701]
We introduce a new zero-resource code-switched speech benchmark designed to assess the code-switching capabilities of self-supervised speech encoders.
We showcase a baseline system of language modeling on discrete units to demonstrate how the code-switching abilities of speech encoders can be assessed.
arXiv Detail & Related papers (2023-10-04T17:58:11Z)
- Simple yet Effective Code-Switching Language Identification with Multitask Pre-Training and Transfer Learning [0.7242530499990028]
Code-switching is the linguistic phenomenon whereby, in casual settings, multilingual speakers mix words from different languages in one utterance.
We propose two novel approaches toward improving language identification accuracy on an English-Mandarin child-directed speech dataset.
Our best model achieves a balanced accuracy of 0.781 on a real English-Mandarin code-switching child-directed speech corpus and outperforms the previous baseline by 55.3%.
arXiv Detail & Related papers (2023-05-31T11:43:16Z)
- Code-Switching without Switching: Language Agnostic End-to-End Speech Translation [68.8204255655161]
We treat speech recognition and translation as one unified end-to-end speech translation problem.
By training LAST with both input languages, we decode speech into one target language, regardless of the input language.
arXiv Detail & Related papers (2022-10-04T10:34:25Z)
- LAE: Language-Aware Encoder for Monolingual and Multilingual ASR [87.74794847245536]
A novel language-aware encoder (LAE) architecture is proposed to handle both situations by disentangling language-specific information.
Experiments conducted on Mandarin-English code-switched speech suggest that the proposed LAE is capable of discriminating between different languages at the frame level.
arXiv Detail & Related papers (2022-06-05T04:03:12Z)
- Reducing language context confusion for end-to-end code-switching automatic speech recognition [50.89821865949395]
We propose a language-related attention mechanism to reduce multilingual context confusion for the E2E code-switching ASR model.
By calculating the respective attention of multiple languages, our method can efficiently transfer language knowledge from rich monolingual data.
arXiv Detail & Related papers (2022-01-28T14:39:29Z)
- Multilingual and code-switching ASR challenges for low resource Indian languages [59.2906853285309]
We focus on building multilingual and code-switching ASR systems through two different subtasks related to a total of seven Indian languages.
We provide a total of 600 hours of transcribed speech data, comprising train and test sets, in these languages.
We also provide a baseline recipe for both tasks, with a WER of 30.73% and 32.45% on the test sets of the multilingual and code-switching subtasks, respectively.
arXiv Detail & Related papers (2021-04-01T03:37:01Z)
- Transformer-Transducers for Code-Switched Speech Recognition [23.281314397784346]
We present an end-to-end ASR system using a transformer-transducer model architecture for code-switched speech recognition.
First, we introduce two auxiliary loss functions to handle the low-resource scenario of code-switching.
Second, we propose a novel mask-based training strategy with language ID information to improve the label encoder training towards intra-sentential code-switching.
arXiv Detail & Related papers (2020-11-30T17:27:41Z)
- Meta-Transfer Learning for Code-Switched Speech Recognition [72.84247387728999]
We propose a new learning method, meta-transfer learning, to transfer learning to a code-switched speech recognition system in a low-resource setting.
Our model learns to recognize individual languages and transfers that knowledge to better recognize mixed-language speech by conditioning the optimization on the code-switching data.
arXiv Detail & Related papers (2020-04-29T14:27:19Z)