Semi-supervised cross-lingual speech emotion recognition
- URL: http://arxiv.org/abs/2207.06767v2
- Date: Mon, 17 Jul 2023 06:11:59 GMT
- Title: Semi-supervised cross-lingual speech emotion recognition
- Authors: Mirko Agarla, Simone Bianco, Luigi Celona, Paolo Napoletano, Alexey
Petrovsky, Flavio Piccoli, Raimondo Schettini, Ivan Shanin
- Abstract summary: Cross-lingual Speech Emotion Recognition remains a challenge in real-world applications.
We propose a Semi-Supervised Learning (SSL) method for cross-lingual emotion recognition when only a few labeled examples in the target domain are available.
Our method is based on a Transformer and adapts to the new domain by exploiting a pseudo-labeling strategy on the unlabeled utterances.
- Score: 26.544999411050036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Performance in Speech Emotion Recognition (SER) on a single language has
increased greatly in the last few years thanks to the use of deep learning
techniques. However, cross-lingual SER remains a challenge in real-world
applications due to two main factors: the first is the large gap between the
source and the target domain distributions; the second is the much greater
availability of unlabeled utterances compared to labeled ones for the new
language. Taking these aspects into account, we propose a Semi-Supervised
Learning (SSL) method for cross-lingual emotion recognition when only a few
labeled examples in the target domain (i.e., the new language) are available.
Our method is based on a Transformer and adapts to the new domain by exploiting
a pseudo-labeling strategy on the unlabeled utterances. In particular, the use
of both hard and soft pseudo-labels is investigated. We thoroughly evaluate the
performance of the proposed method in a speaker-independent setup on both the
source and the new language and show its robustness across five languages
belonging to different linguistic strains. The experimental findings indicate
that the unweighted accuracy is increased by an average of 40% compared to
state-of-the-art methods.
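The abstract contrasts hard and soft pseudo-labels but gives no implementation details; the minimal sketch below only illustrates that distinction and is not the authors' method. The use of PyTorch, the four-class emotion set, the 0.9 confidence threshold, and the exact loss forms are assumptions introduced here for illustration; the Transformer SER backbone that produces the logits is omitted.

```python
# Minimal sketch of hard vs. soft pseudo-labeling for semi-supervised SER.
# NOT the paper's implementation: model, threshold, and loss weighting are
# illustrative assumptions only.
import torch
import torch.nn.functional as F

NUM_EMOTIONS = 4          # assumed label set, e.g. angry, happy, neutral, sad
CONF_THRESHOLD = 0.9      # hypothetical confidence cut-off for hard labels


def pseudo_label_loss(logits_unlabeled: torch.Tensor, mode: str = "hard") -> torch.Tensor:
    """Pseudo-label loss on unlabeled target-language utterances.

    logits_unlabeled: (batch, NUM_EMOTIONS) emotion logits from some SER backbone.
    mode: "hard" keeps only confident argmax labels; "soft" uses the full
          predicted distribution as the training target.
    """
    probs = F.softmax(logits_unlabeled, dim=-1)
    if mode == "hard":
        conf, pseudo = probs.max(dim=-1)
        mask = conf >= CONF_THRESHOLD              # discard low-confidence utterances
        if mask.sum() == 0:
            return logits_unlabeled.new_zeros(())
        return F.cross_entropy(logits_unlabeled[mask], pseudo[mask])
    # soft pseudo-labels: cross-entropy against the detached predicted distribution
    targets = probs.detach()
    return -(targets * F.log_softmax(logits_unlabeled, dim=-1)).sum(dim=-1).mean()


if __name__ == "__main__":
    # Toy batch of 8 unlabeled utterances already encoded into logits
    # by a Transformer-based SER backbone (not shown here).
    fake_logits = torch.randn(8, NUM_EMOTIONS)
    print("hard-label loss:", pseudo_label_loss(fake_logits, "hard").item())
    print("soft-label loss:", pseudo_label_loss(fake_logits, "soft").item())
```

In this sketch, hard labels trade coverage for reliability (only confident utterances contribute), while soft labels use every utterance but weight classes by the model's own uncertainty; how the paper balances or schedules these terms is not specified here.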
Related papers
- Crowdsourcing Lexical Diversity [7.569845058082537]
This paper proposes a novel crowdsourcing methodology for reducing bias in lexicons.
Crowd workers compare lexemes from two languages, focusing on domains rich in lexical diversity, such as kinship or food.
We validated our method by applying it to two case studies focused on food-related terminology.
arXiv Detail & Related papers (2024-10-30T15:45:09Z)
- A New Method for Cross-Lingual-based Semantic Role Labeling [5.992526851963307]
A deep learning algorithm is proposed to train semantic role labeling in English and Persian.
The results show significant improvements compared to Niksirt et al.'s model.
The development of cross-lingual methods for semantic role labeling holds promise.
arXiv Detail & Related papers (2024-08-28T16:06:12Z)
- Quantifying the Dialect Gap and its Correlates Across Languages [69.18461982439031]
This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.
arXiv Detail & Related papers (2023-10-23T17:42:01Z)
- Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning [80.08139343603956]
In cross-lingual named entity recognition, self-training is commonly used to bridge the linguistic gap.
In this work, we aim to improve self-training for cross-lingual NER by combining representation learning and pseudo label refinement.
Our proposed method, ContProto, mainly comprises two components: (1) contrastive self-training and (2) prototype-based pseudo-labeling.
arXiv Detail & Related papers (2023-05-23T02:52:16Z)
- Few-Shot Cross-Lingual Stance Detection with Sentiment-Based Pre-Training [32.800766653254634]
We present the most comprehensive study of cross-lingual stance detection to date.
We use 15 diverse datasets in 12 languages from 6 language families.
For our experiments, we build on pattern-exploiting training, proposing the addition of a novel label encoder.
arXiv Detail & Related papers (2021-09-13T15:20:06Z)
- AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
- Unsupervised Cross-Lingual Speech Emotion Recognition Using Domain Adversarial Neural Network [48.1535353007371]
Cross-domain Speech Emotion Recognition (SER) is still a challenging task due to the distribution shift between source and target domains.
We propose a Domain Adversarial Neural Network (DANN) based approach to mitigate this distribution shift problem for cross-lingual SER.
arXiv Detail & Related papers (2020-12-21T08:21:11Z)
- Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification [52.69730591919885]
We present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations.
We observe significant gains in effectiveness on document and intent classification for a diverse set of languages.
arXiv Detail & Related papers (2020-07-29T19:38:35Z)
- Robust Cross-lingual Embeddings from Parallel Sentences [65.85468628136927]
We propose a bilingual extension of the CBOW method which leverages sentence-aligned corpora to obtain robust cross-lingual word representations.
Our approach significantly improves cross-lingual sentence retrieval performance over all other approaches.
It also achieves parity with a deep RNN method on a zero-shot cross-lingual document classification task.
arXiv Detail & Related papers (2019-12-28T16:18:33Z)