Empaths at SemEval-2025 Task 11: Retrieval-Augmented Approach to Perceived Emotions Prediction
- URL: http://arxiv.org/abs/2506.04409v1
- Date: Wed, 04 Jun 2025 19:41:24 GMT
- Title: Empaths at SemEval-2025 Task 11: Retrieval-Augmented Approach to Perceived Emotions Prediction
- Authors: Lev Morozov, Aleksandr Mogilevskii, Alexander Shirnin
- Abstract summary: EmoRAG is a system designed to detect perceived emotions in text for SemEval-2025 Task 11, Subtask A: Multi-label Emotion Detection. We focus on predicting the perceived emotions of the speaker from a given text snippet, labeling it with emotions such as joy, sadness, fear, anger, surprise, and disgust.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes EmoRAG, a system designed to detect perceived emotions in text for SemEval-2025 Task 11, Subtask A: Multi-label Emotion Detection. We focus on predicting the perceived emotions of the speaker from a given text snippet, labeling it with emotions such as joy, sadness, fear, anger, surprise, and disgust. Our approach does not require additional model training and only uses an ensemble of models to predict emotions. EmoRAG achieves results comparable to the best-performing systems, while being more efficient, scalable, and easier to implement.
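As a hedged illustration of the ensemble idea described in the abstract, the sketch below majority-votes per-label predictions from several models; the member predictors and the 0.5 decision threshold are assumptions for illustration, not EmoRAG's actual components.

```python
# Minimal sketch of ensemble-based multi-label emotion prediction.
# The predictors and the 0.5 voting threshold are illustrative
# assumptions, not EmoRAG's actual components.
from typing import Callable, Dict, List

EMOTIONS = ["joy", "sadness", "fear", "anger", "surprise", "disgust"]

Predictor = Callable[[str], Dict[str, float]]  # text -> per-label scores in [0, 1]

def ensemble_predict(text: str, predictors: List[Predictor]) -> List[str]:
    """Majority-vote each emotion label across the ensemble members."""
    votes = {label: 0 for label in EMOTIONS}
    for predict in predictors:
        scores = predict(text)
        for label in EMOTIONS:
            if scores.get(label, 0.0) >= 0.5:  # this member votes "present"
                votes[label] += 1
    # A label is predicted if more than half of the members voted for it.
    return [label for label, v in votes.items() if v > len(predictors) / 2]
```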
Related papers
- Team A at SemEval-2025 Task 11: Breaking Language Barriers in Emotion Detection with Multilingual Models
This paper describes the system submitted by Team A to SemEval 2025 Task 11, "Bridging the Gap in Text-Based Emotion Detection". The task involved identifying the perceived emotion of a speaker from text snippets, with each instance annotated with one of six emotions: joy, sadness, fear, anger, surprise, or disgust. Among the various approaches explored, the best performance was achieved using multilingual embeddings combined with a fully connected layer.
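A minimal sketch of that best-performing setup, assuming a frozen multilingual sentence encoder feeding a single fully connected layer; the 768-dimensional embedding and six-label head are illustrative assumptions.

```python
# Sketch of the reported best setup: a frozen multilingual sentence
# embedding followed by one fully connected classification layer.
# The embedding dimension (768) and six-label head are assumptions.
import torch
import torch.nn as nn

class EmbeddingClassifier(nn.Module):
    def __init__(self, embed_dim: int = 768, num_labels: int = 6):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_labels)

    def forward(self, sentence_embedding: torch.Tensor) -> torch.Tensor:
        # One logit per emotion; apply sigmoid for multi-label probabilities.
        return self.fc(sentence_embedding)

model = EmbeddingClassifier()
probs = torch.sigmoid(model(torch.randn(1, 768)))  # dummy sentence embedding
```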
arXiv Detail & Related papers (2025-02-27T07:59:01Z)
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
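A generic approximation of such a step-by-step emotion-cause prompt is sketched below; the actual ECR-Chain stages are defined in the paper, so the wording here is only an assumption.

```python
# A generic approximation of step-by-step emotion-cause reasoning;
# the real ECR-Chain stages come from the paper, not this template.
PROMPT = """Conversation:
{conversation}

Target utterance: "{target}" (expressed emotion: {emotion})

Reason step by step:
1. Summarize what the target speaker is reacting to.
2. Describe the speaker's inner state behind the emotion.
3. Identify which earlier utterances stimulated that state.

Answer with the indices of the causal utterances."""

def build_prompt(conversation: str, target: str, emotion: str) -> str:
    return PROMPT.format(conversation=conversation, target=target, emotion=emotion)
```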
arXiv Detail & Related papers (2024-05-17T15:45:08Z)
- CTSM: Combining Trait and State Emotions for Empathetic Response Model
Empathetic response generation endeavors to empower dialogue systems to perceive speakers' emotions and generate empathetic responses accordingly.
We propose Combining Trait and State emotions for Empathetic Response Model (CTSM)
To sufficiently perceive emotions in dialogue, we first construct and encode trait and state emotion embeddings.
We further enhance emotional perception capability through an emotion guidance module that guides emotion representation.
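A toy sketch of the trait/state idea, assuming two separate embedding tables fused by concatenation and a linear layer; the dimensions and fusion choice are illustrative, not CTSM's actual design.

```python
# Illustrative only: separate trait (stable) and state (contextual)
# emotion embeddings fused into one representation. Dimensions and
# the concatenation-based fusion are assumptions, not CTSM's design.
import torch
import torch.nn as nn

class TraitStateEncoder(nn.Module):
    def __init__(self, num_emotions: int = 32, dim: int = 128):
        super().__init__()
        self.trait = nn.Embedding(num_emotions, dim)  # persistent disposition
        self.state = nn.Embedding(num_emotions, dim)  # in-context emotion
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, emotion_ids: torch.Tensor) -> torch.Tensor:
        both = torch.cat([self.trait(emotion_ids), self.state(emotion_ids)], dim=-1)
        return self.fuse(both)

encoder = TraitStateEncoder()
vec = encoder(torch.tensor([3]))  # fused embedding for one emotion id
```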
arXiv Detail & Related papers (2024-03-22T10:45:13Z)
- Enhancing Emotional Generation Capability of Large Language Models via Emotional Chain-of-Thought
Large Language Models (LLMs) have shown remarkable performance in various emotion recognition tasks.
We propose the Emotional Chain-of-Thought (ECoT) to enhance the performance of LLMs on various emotional generation tasks.
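The sketch below approximates what an emotional chain-of-thought prompt might look like; the concrete ECoT steps are defined in the paper, so this template is an assumption.

```python
# Hedged sketch of an emotional chain-of-thought style prompt for
# emotion-aware generation; the actual ECoT steps are the paper's.
ECOT_PROMPT = """Context: {context}

Before replying, reason step by step:
1. What emotion is the user expressing?
2. Why might they feel this way?
3. What emotion should the reply convey?

Now write a reply that conveys that emotion."""
```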
arXiv Detail & Related papers (2024-01-12T16:42:10Z)
- emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation
We propose emotion2vec, a universal speech emotion representation model.
emotion2vec is pre-trained on unlabeled emotion data through self-supervised online distillation.
It outperforms state-of-the-art pre-trained universal models and emotion specialist models.
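A toy sketch of self-supervised online distillation, assuming an EMA-updated teacher whose outputs the student regresses; the architecture, feature shapes, and 0.999 decay are illustrative assumptions, not emotion2vec's.

```python
# Toy sketch of self-supervised online distillation: a student network
# matches the targets of an EMA-updated teacher on the same utterance.
# The architecture, feature shapes, and 0.999 decay are assumptions.
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is never trained by backprop

def ema_update(decay: float = 0.999) -> None:
    """Move teacher weights toward the student's (exponential moving average)."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.data.mul_(decay).add_(ps.data, alpha=1 - decay)

features = torch.randn(4, 100, 80)  # batch of log-mel-like frames
loss = nn.functional.mse_loss(student(features), teacher(features))
loss.backward()  # in training: optimizer.step(), then ema_update()
```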
arXiv Detail & Related papers (2023-12-23T07:46:55Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
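One plausible way to score such trigger identification is token-level overlap F1 between model output and human annotations, sketched below; EmoTrigger's exact protocol may differ.

```python
# Illustrative scoring of predicted vs. human-annotated emotion
# triggers by token-level F1; EmoTrigger's exact protocol may differ.
def trigger_f1(predicted: set[str], annotated: set[str]) -> float:
    """Token-level F1 between predicted and annotated trigger words."""
    overlap = len(predicted & annotated)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(annotated)
    return 2 * precision * recall / (precision + recall)

print(trigger_f1({"layoff", "notice"}, {"layoff", "sudden", "notice"}))
```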
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- VISU at WASSA 2023 Shared Task: Detecting Emotions in Reaction to News Stories Leveraging BERT and Stacked Embeddings
Our system, VISU, participated in the WASSA 2023 Shared Task (3) of Emotion Classification from essays written in reaction to news articles.
We focused on developing deep learning (DL) models that combine word embedding representations with tailored preprocessing strategies.
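A minimal sketch of the stacked-embedding idea, assuming per-text vectors from two encoders (e.g., a 768-dimensional BERT vector and a 300-dimensional word-embedding average) concatenated before a classifier head.

```python
# Sketch of "stacked" embeddings: representations from two encoders are
# concatenated before a classifier head. Encoder dimensions are assumptions.
import torch

def stack_embeddings(bert_vec: torch.Tensor, word_vec: torch.Tensor) -> torch.Tensor:
    """Concatenate per-text vectors from different embedding models."""
    return torch.cat([bert_vec, word_vec], dim=-1)

stacked = stack_embeddings(torch.randn(1, 768), torch.randn(1, 300))  # -> (1, 1068)
```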
arXiv Detail & Related papers (2023-07-27T19:42:22Z)
- Speech Synthesis with Mixed Emotions
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
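A hedged sketch of such an emotion attribute vector, assuming a fixed emotion inventory and weights renormalized to sum to one; the supported emotions listed are illustrative.

```python
# Hedged sketch: an emotion attribute vector that mixes emotions at
# run time by weighting and renormalizing intensities. The emotion
# inventory below is an assumption, not the paper's.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise"]

def emotion_attribute_vector(weights: dict[str, float]) -> list[float]:
    """Return one normalized intensity per supported emotion."""
    raw = [max(0.0, weights.get(e, 0.0)) for e in EMOTIONS]
    total = sum(raw) or 1.0
    return [w / total for w in raw]

# e.g. a mixture that is mostly sad with a touch of anger
vec = emotion_attribute_vector({"sad": 0.7, "angry": 0.3})
```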
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition
Emotion recognition in conversation (ERC) aims to analyze the speaker's state and identify their emotion in the conversation.
Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency.
We propose a new structure named Emoformer to extract multi-modal emotion vectors from different modalities and fuse them with the sentence vector to form an emotion capsule.
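An illustrative fusion step, assuming per-modality emotion vectors concatenated with the sentence vector and projected into a single capsule representation; the dimensions and linear fusion are assumptions, not Emoformer's design.

```python
# Illustrative fusion of per-modality emotion vectors with a sentence
# vector into one "emotion capsule"; dimensions and the linear
# projection are assumptions, not Emoformer's design.
import torch
import torch.nn as nn

class CapsuleFusion(nn.Module):
    def __init__(self, dim: int = 256, modalities: int = 3):
        super().__init__()
        self.fuse = nn.Linear(dim * (modalities + 1), dim)

    def forward(self, modality_vecs: list[torch.Tensor],
                sentence_vec: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat(modality_vecs + [sentence_vec], dim=-1))

fusion = CapsuleFusion()
capsule = fusion([torch.randn(1, 256) for _ in range(3)], torch.randn(1, 256))
```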
arXiv Detail & Related papers (2022-03-25T08:42:57Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
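A hypothetical data structure for such an emotion vector; the three attributes shown (direction on the circle, length as intensity, and a categorical emotion label) approximate the paper's description and are not taken from it verbatim.

```python
# Hypothetical emotion vector on a circle; the three attributes are an
# approximation of the paper's definition, not quoted from it.
import math
from dataclasses import dataclass

@dataclass
class EmotionVector:
    direction: float   # angle on the Emotion Circle, in radians
    length: float      # emotion intensity in [0, 1]
    emotion: str       # nearest categorical emotion on the circle

    def to_xy(self) -> tuple[float, float]:
        return (self.length * math.cos(self.direction),
                self.length * math.sin(self.direction))
```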
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction
This article proposes a unified end-to-end neural architecture capable of simultaneously encoding both the semantics and the emotions in a post.
Experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
arXiv Detail & Related papers (2021-06-06T06:26:15Z)