Large Language Models for Cross-lingual Emotion Detection
- URL: http://arxiv.org/abs/2410.15974v1
- Date: Mon, 21 Oct 2024 13:00:09 GMT
- Title: Large Language Models for Cross-lingual Emotion Detection
- Authors: Ram Mohan Rao Kadiyala
- Abstract summary: This paper presents a detailed system description of our entry for the WASSA 2024 Task 2, focused on cross-lingual emotion detection.
We utilize a combination of large language models (LLMs) and their ensembles to effectively understand and categorize emotions across different languages.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a detailed system description of our entry for the WASSA 2024 Task 2, focused on cross-lingual emotion detection. We utilized a combination of large language models (LLMs) and their ensembles to effectively understand and categorize emotions across different languages. Our approach not only outperformed other submissions by a large margin, but also demonstrated the strength of integrating multiple models to enhance performance. Additionally, we conducted a thorough comparison of the benefits and limitations of each model used. An error analysis is included along with suggested areas for future improvement. This paper aims to offer a clear and comprehensive understanding of advanced techniques in emotion detection, making it accessible even to those new to the field.
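The paper itself does not include code, but as a rough illustration of the ensembling idea described in the abstract, here is a minimal sketch of majority voting over per-model emotion predictions. The label set, the model outputs, and the quorum rule are illustrative assumptions, not the authors' actual configuration.

```python
from collections import Counter

# Hypothetical label set; the actual task labels may differ.
EMOTIONS = ["joy", "sadness", "fear", "anger", "surprise", "disgust"]

def ensemble_vote(per_model_labels: list[list[str]]) -> list[str]:
    """Keep a label if at least half of the models predicted it."""
    counts = Counter(label for labels in per_model_labels for label in labels)
    quorum = len(per_model_labels) / 2
    return [e for e in EMOTIONS if counts[e] >= quorum]

# Example: three (hypothetical) LLMs disagree on the same text.
predictions = [
    ["sadness", "fear"],   # model A
    ["sadness"],           # model B
    ["sadness", "anger"],  # model C
]
print(ensemble_vote(predictions))  # ['sadness']
```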
Related papers
- Team A at SemEval-2025 Task 11: Breaking Language Barriers in Emotion Detection with Multilingual Models [0.06138671548064355]
This paper describes the system submitted by Team A to SemEval-2025 Task 11, "Bridging the Gap in Text-Based Emotion Detection".
The task involved identifying the perceived emotion of a speaker from text snippets, with each instance annotated with one of six emotions: joy, sadness, fear, anger, surprise, or disgust.
Among the various approaches explored, the best performance was achieved using multilingual embeddings combined with a fully connected layer (see the classifier sketch after this list).
arXiv Detail & Related papers (2025-02-27T07:59:01Z)
- Evaluating the Capabilities of Large Language Models for Multi-label Emotion Understanding [20.581470997286146]
We present EthioEmo, a multi-label emotion classification dataset for four Ethiopian languages.
We perform extensive experiments with an additional English multi-label emotion dataset from SemEval 2018 Task 1.
The results show that accurate multi-label emotion classification remains challenging, even for high-resource languages.
arXiv Detail & Related papers (2024-12-17T07:42:39Z)
- TEII: Think, Explain, Interact and Iterate with Large Language Models to Solve Cross-lingual Emotion Detection [5.942385193284472]
Cross-lingual emotion detection allows us to analyze global trends, public opinion, and social phenomena at scale.
Our system outperformed the baseline by more than 0.16 F1-score absolute, and ranked second amongst competing systems.
arXiv Detail & Related papers (2024-05-27T12:47:40Z)
- Understanding Cross-Lingual Alignment -- A Survey [52.572071017877704]
Cross-lingual alignment is the meaningful similarity of representations across languages in multilingual language models.
We survey the literature of techniques to improve cross-lingual alignment, providing a taxonomy of methods and summarising insights from throughout the field.
arXiv Detail & Related papers (2024-04-09T11:39:53Z)
- Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings [22.71166607645311]
We introduce a novel suite of state-of-the-art bilingual text embedding models.
These models are capable of processing lengthy text inputs with up to 8192 tokens.
We have significantly improved the model performance on STS tasks.
We have expanded the Massive Text Embedding Benchmark to include benchmarks for German and Spanish embedding models.
arXiv Detail & Related papers (2024-02-26T20:53:12Z)
- Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning [44.17782674872344]
We present a novel method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa.
Our method updates pre-trained network weights using contrastive learning so that text fragments exhibiting similar emotions are encoded nearby in the representation space (see the supervised-contrastive sketch after this list).
arXiv Detail & Related papers (2023-10-29T07:43:34Z)
- Improving Factuality and Reasoning in Language Models through Multiagent Debate [95.10641301155232]
We present a complementary approach to improve language responses where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer.
Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks.
Our approach can be applied directly to existing black-box models and uses the same procedure and prompts for all tasks we investigate (see the debate-loop sketch after this list).
arXiv Detail & Related papers (2023-05-23T17:55:11Z)
- Exploring Dimensionality Reduction Techniques in Multilingual Transformers [64.78260098263489]
This paper gives a comprehensive account of the impact of dimensionality reduction techniques on the performance of state-of-the-art multilingual Siamese Transformers.
It shows that average reductions in the number of dimensions of $91.58\% \pm 2.59\%$ and $54.65\% \pm 32.20\%$ are achievable, depending on the setting (see the PCA sketch after this list).
arXiv Detail & Related papers (2022-04-18T17:20:55Z)
- AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
- XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation [93.80733419450225]
This paper analyzes the current state of cross-lingual transfer learning.
We extend XTREME to XTREME-R, which consists of an improved set of ten natural language understanding tasks.
arXiv Detail & Related papers (2021-04-15T12:26:12Z)
- SpanEmo: Casting Multi-label Emotion Classification as Span-prediction [15.41237087996244]
We propose SpanEmo, a new model that casts multi-label emotion classification as span-prediction.
We introduce a loss function focused on modelling multiple co-existing emotions in the input sentence.
Experiments performed on the SemEval 2018 multi-label emotion data over three language sets demonstrate our method's effectiveness.
arXiv Detail & Related papers (2021-01-25T12:11:04Z)
- InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training [135.12061144759517]
We present an information-theoretic framework that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual texts.
We propose a new pre-training task based on contrastive learning.
By leveraging both monolingual and parallel corpora, we jointly train the pretext tasks to improve the cross-lingual transferability of pre-trained models (see the translation-pair contrastive sketch after this list).
arXiv Detail & Related papers (2020-07-15T16:58:01Z)
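The sketches below illustrate some of the techniques mentioned in the list above; all model names, labels, prompts, and data in them are illustrative assumptions rather than the respective authors' exact setups.

For the Team A system (multilingual embeddings combined with a fully connected layer), a minimal sketch using sentence-transformers and PyTorch:

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

# Assumed encoder; any multilingual sentence-embedding model would do.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
EMOTIONS = ["joy", "sadness", "fear", "anger", "surprise", "disgust"]

# Fully connected classification head on top of frozen embeddings.
head = nn.Linear(encoder.get_sentence_embedding_dimension(), len(EMOTIONS))

# Toy training step on made-up data; real training loops over a dataset.
texts = ["I can't believe we won!", "Das macht mir große Angst."]
labels = torch.tensor([0, 2])  # joy, fear

embeddings = encoder.encode(texts, convert_to_tensor=True)  # (batch, dim)
loss = nn.functional.cross_entropy(head(embeddings), labels)
loss.backward()  # gradients flow only into the head; the encoder stays frozen
print(float(loss))
```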
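For the retrofitting paper, a sketch of a supervised contrastive loss that pulls same-emotion texts together in the representation space, in the spirit of the described method rather than its exact objective; the batch and labels are made up:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z: torch.Tensor, labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style loss: embeddings sharing an emotion label are positives."""
    z = F.normalize(z, dim=1)
    eye = torch.eye(len(z), dtype=torch.bool)
    logits = (z @ z.T / temperature).masked_fill(eye, float("-inf"))
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    per_pair = torch.where(positives, log_prob, torch.zeros_like(log_prob))
    pos_counts = positives.sum(dim=1)
    valid = pos_counts > 0  # anchors that have at least one positive
    return -(per_pair[valid].sum(dim=1) / pos_counts[valid]).mean()

# Toy batch: four embeddings; rows 0 and 2 share the same emotion label.
z = torch.randn(4, 8, requires_grad=True)
labels = torch.tensor([3, 0, 3, 1])
print(supervised_contrastive_loss(z, labels))
```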
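For the multiagent-debate paper, a schematic of the round-based protocol: several model instances answer independently, then each revises after reading the others' answers. Here ask_llm is a stand-in for any chat-completion API, and the prompts are illustrative:

```python
def ask_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; wire to your LLM provider."""
    raise NotImplementedError

def debate(question: str, n_agents: int = 3, n_rounds: int = 2) -> list[str]:
    # Round 0: each agent answers the question independently.
    answers = [ask_llm(f"Answer with reasoning: {question}") for _ in range(n_agents)]
    # Later rounds: each agent sees the others' answers and may revise its own.
    for _ in range(n_rounds):
        revised = []
        for i in range(n_agents):
            others = "\n\n".join(a for j, a in enumerate(answers) if j != i)
            revised.append(ask_llm(
                f"Question: {question}\n\n"
                f"Other agents answered:\n{others}\n\n"
                f"Your previous answer:\n{answers[i]}\n\n"
                "Considering the other answers, give your updated final answer."
            ))
        answers = revised
    return answers  # a single answer is then typically picked by majority vote
```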
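For the dimensionality-reduction paper, a sketch of the general idea using PCA from scikit-learn; the paper evaluates several techniques, PCA being just one representative choice, and the embeddings here are random stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768)).astype("float32")  # stand-in encoder output

# Reduce 768 -> 64 dimensions (~92% fewer, in the ballpark of the figure above).
pca = PCA(n_components=64).fit(embeddings)
reduced = pca.transform(embeddings)

print(reduced.shape)                        # (1000, 64)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```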
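For InfoXLM, a sketch of a translation-pair contrastive objective (InfoNCE): each sentence representation should score its own translation higher than every other sentence in the batch. This illustrates the general idea rather than the paper's exact pretext task:

```python
import torch
import torch.nn.functional as F

def xl_contrastive_loss(src: torch.Tensor, tgt: torch.Tensor,
                        temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over parallel pairs: row i of src matches row i of tgt."""
    src = F.normalize(src, dim=1)
    tgt = F.normalize(tgt, dim=1)
    logits = src @ tgt.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(len(src))    # true pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy batch of four (hypothetical) source/translation sentence encodings.
src = torch.randn(4, 16, requires_grad=True)
tgt = torch.randn(4, 16)
print(xl_contrastive_loss(src, tgt))
```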
This list is automatically generated from the titles and abstracts of the papers on this site.