Cross-lingual and Multilingual Spoken Term Detection for Low-Resource
Indian Languages
- URL: http://arxiv.org/abs/2011.06226v1
- Date: Thu, 12 Nov 2020 06:41:27 GMT
- Title: Cross-lingual and Multilingual Spoken Term Detection for Low-Resource
Indian Languages
- Authors: Sanket Shah, Satarupa Guha, Simran Khanuja, Sunayana Sitaram
- Abstract summary: Spoken Term Detection is the task of searching for words or phrases within audio, given either text or spoken input as a query.
We use state-of-the-art Hindi, Tamil and Telugu ASR systems cross-lingually for lexical Spoken Term Detection in ten low-resource Indian languages.
We show that it is possible to perform STD cross-lingually in a zero-shot manner without the need for any language-specific speech data.
- Score: 13.42517182688574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spoken Term Detection (STD) is the task of searching for words or phrases
within audio, given either text or spoken input as a query. In this work, we
use state-of-the-art Hindi, Tamil and Telugu ASR systems cross-lingually for
lexical Spoken Term Detection in ten low-resource Indian languages. Since no
publicly available dataset exists for Spoken Term Detection in these languages,
we create a new dataset using a publicly available TTS dataset. We report a
standard metric for STD, Mean Term Weighted Value (MTWV), and show that ASR
systems built in languages that are phonetically similar to the target
languages have higher accuracy; however, it is also possible to obtain high MTWV
scores for dissimilar languages by using a relaxed phone matching algorithm. We
propose a technique to bootstrap the Grapheme-to-Phoneme (g2p) mapping between
all the languages under consideration using publicly available resources. Gains
are obtained when we combine the output of multiple ASR systems and when we use
language-specific Language Models. We show that it is possible to perform STD
cross-lingually in a zero-shot manner without the need for any
language-specific speech data. We plan to make the STD dataset available for
other researchers interested in cross-lingual STD.
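For reference, MTWV is the standard NIST spoken-term-detection metric: the term-weighted value TWV at a decision threshold is 1 minus the sum of the average per-term miss probability and a weighted average false-alarm probability, and MTWV is the maximum TWV over all thresholds. A minimal sketch of this computation (not the authors' code; the beta = 999.9 weight follows the common NIST STD convention and is an assumption here):

```python
# Hedged sketch of TWV/MTWV as defined in the NIST STD evaluations.
# Inputs are per-term miss and false-alarm probabilities; beta = 999.9
# is the conventional NIST weighting, assumed rather than taken from the paper.

def term_weighted_value(p_miss, p_fa, beta=999.9):
    """TWV at one threshold: 1 - (mean P_miss + beta * mean P_FA)."""
    avg_miss = sum(p_miss) / len(p_miss)
    avg_fa = sum(p_fa) / len(p_fa)
    return 1.0 - (avg_miss + beta * avg_fa)

def mtwv(curves, beta=999.9):
    """Maximum TWV over candidate thresholds.

    curves: list of (p_miss_per_term, p_fa_per_term) pairs, one per threshold.
    """
    return max(term_weighted_value(pm, pf, beta) for pm, pf in curves)
```

A perfect system (no misses, no false alarms) scores TWV = 1.0; a system that misses every term scores 0.0 before false alarms are counted.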
Related papers
- An Initial Investigation of Language Adaptation for TTS Systems under Low-resource Scenarios [76.11409260727459]
This paper explores the language adaptation capability of ZMM-TTS, a recent SSL-based multilingual TTS system.
We demonstrate that the similarity in phonetics between the pre-training and target languages, as well as the language category, affects the target language's adaptation performance.
arXiv Detail & Related papers (2024-06-13T08:16:52Z)
- Zero-shot Sentiment Analysis in Low-Resource Languages Using a Multilingual Sentiment Lexicon [78.12363425794214]
We focus on zero-shot sentiment analysis tasks across 34 languages, including 6 high/medium-resource languages, 25 low-resource languages, and 3 code-switching datasets.
We demonstrate that pretraining using multilingual lexicons, without using any sentence-level sentiment data, achieves superior zero-shot performance compared to models fine-tuned on English sentiment datasets.
arXiv Detail & Related papers (2024-02-03T10:41:05Z)
- Visual Speech Recognition for Languages with Limited Labeled Data using Automatic Labels from Whisper [96.43501666278316]
This paper proposes a powerful Visual Speech Recognition (VSR) method for multiple languages.
We employ a Whisper model which can conduct both language identification and audio-based speech recognition.
By comparing VSR models trained on automatic labels with those trained on human-annotated labels, we show that automatic labels achieve similar VSR performance.
arXiv Detail & Related papers (2023-09-15T16:53:01Z)
- XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages [105.54207724678767]
Data scarcity is a crucial issue for the development of highly multilingual NLP systems.
We propose XTREME-UP, a benchmark defined by its focus on the scarce-data scenario rather than zero-shot.
XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies.
arXiv Detail & Related papers (2023-05-19T18:00:03Z)
- Learning to Speak from Text: Zero-Shot Multilingual Text-to-Speech with Unsupervised Text Pretraining [65.30528567491984]
This paper proposes a method for zero-shot multilingual TTS using text-only data for the target language.
The use of text-only data allows the development of TTS systems for low-resource languages.
Evaluation results demonstrate highly intelligible zero-shot TTS with a character error rate of less than 12% for an unseen language.
arXiv Detail & Related papers (2023-01-30T00:53:50Z)
- ASR2K: Speech Recognition for Around 2000 Languages without Audio [100.41158814934802]
We present a speech recognition pipeline that does not require any audio for the target language.
Our pipeline consists of three components: acoustic, pronunciation, and language models.
We build speech recognition for 1909 languages by combining the pipeline with Crubadan, a large n-gram database of endangered languages.
arXiv Detail & Related papers (2022-09-06T22:48:29Z)
- Transferring Knowledge Distillation for Multilingual Social Event Detection [42.663309895263666]
Recently published graph neural networks (GNNs) show promising performance at social event detection tasks.
We present a GNN that incorporates cross-lingual word embeddings for detecting events in multilingual data streams.
Experiments on both synthetic and real-world datasets show the framework to be highly effective at detection in both multilingual data and in languages where training samples are scarce.
arXiv Detail & Related papers (2021-08-06T12:38:42Z)
- MLS: A Large-Scale Multilingual Dataset for Speech Research [37.803100082550294]
The dataset is derived from read audiobooks from LibriVox.
It consists of 8 languages, including about 44.5K hours of English and a total of about 6K hours for other languages.
arXiv Detail & Related papers (2020-12-07T01:53:45Z)
- Multilingual Jointly Trained Acoustic and Written Word Embeddings [22.63696520064212]
We extend this idea to multiple low-resource languages.
We jointly train an AWE model and an AGWE model, using phonetically transcribed data from multiple languages.
The pre-trained models can then be used for unseen zero-resource languages, or fine-tuned on data from low-resource languages.
arXiv Detail & Related papers (2020-06-24T19:16:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.