SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF)
- URL: http://arxiv.org/abs/2402.18944v1
- Date: Thu, 29 Feb 2024 08:20:06 GMT
- Title: SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF)
- Authors: Shivani Kumar, Md Shad Akhtar, Erik Cambria, Tanmoy Chakraborty
- Abstract summary: SemEval-2024 Task 10 is a shared task centred on identifying emotions in code-mixed dialogues.
This task comprises three distinct subtasks - emotion recognition in conversation for code-mixed dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip reasoning for English dialogues.
A total of 84 participants engaged in this task, with the most adept systems attaining F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks.
- Score: 61.49972925493912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SemEval-2024 Task 10, a shared task centred on identifying
emotions and finding the rationale behind their flips within monolingual
English and Hindi-English code-mixed dialogues. This task comprises three
distinct subtasks - emotion recognition in conversation for code-mixed
dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip
reasoning for English dialogues. Participating systems were tasked with
performing one or more of these subtasks automatically. The datasets for these
tasks comprise manually annotated conversations focusing on emotions and
triggers for emotion shifts (The task data is available at
https://github.com/LCS2-IIITD/EDiReF-SemEval2024.git). A total of 84
participants engaged in this task, with the most adept systems attaining
F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks. This paper
summarises the results and findings from 24 teams alongside their system
descriptions.
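As an illustration of the emotion-flip-reasoning setup described above (a sketch under assumptions, not the task's official data format or baseline): given a dialogue with per-utterance emotion labels, a first step is locating the points where a speaker's emotion changes relative to their own previous utterance; the reasoning subtask then asks which earlier utterances triggered each such flip. The `Utterance` structure and function names below are illustrative inventions.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    text: str
    emotion: str  # per-utterance emotion label (illustrative)

def find_emotion_flips(dialogue):
    """Return (index, old_emotion, new_emotion) for each utterance where a
    speaker's emotion differs from that speaker's previous utterance.
    Sketch only; the EDiReF datasets use their own annotation format."""
    last = {}   # speaker -> most recent emotion
    flips = []
    for i, u in enumerate(dialogue):
        if u.speaker in last and last[u.speaker] != u.emotion:
            flips.append((i, last[u.speaker], u.emotion))
        last[u.speaker] = u.emotion
    return flips

dialogue = [
    Utterance("A", "hey, all good?", "neutral"),
    Utterance("B", "yeah, fine", "neutral"),
    Utterance("A", "then why is the report late?!", "anger"),
]
print(find_emotion_flips(dialogue))  # [(2, 'neutral', 'anger')]
```

Identifying the trigger utterances for each detected flip (the actual reasoning step) is the harder part that the participating systems model.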
Related papers
- MasonTigers at SemEval-2024 Task 10: Emotion Discovery and Flip Reasoning in Conversation with Ensemble of Transformers and Prompting [0.0]
We present MasonTigers' participation in SemEval-2024 Task 10, a shared task aimed at identifying emotions in code-mixed dialogues.
Our team, MasonTigers, contributed to each subtask, focusing on developing methods for accurate emotion recognition and reasoning.
We attained impressive F1-scores of 0.78 for the first task and 0.79 for both the second and third tasks.
arXiv Detail & Related papers (2024-06-30T03:59:04Z) - SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE)
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
arXiv Detail & Related papers (2024-05-19T09:59:00Z) - SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection [68.858931667807]
Subtask A is a binary classification task determining whether a text is written by a human or generated by a machine.
Subtask B is to detect the exact source of a text, discerning whether it is written by a human or generated by a specific LLM.
Subtask C aims to identify the changing point within a text, at which the authorship transitions from human to machine.
arXiv Detail & Related papers (2024-04-22T13:56:07Z) - IITK at SemEval-2024 Task 10: Who is the speaker? Improving Emotion Recognition and Flip Reasoning in Conversations via Speaker Embeddings [4.679320772294786]
We propose a transformer-based speaker-centric model for the Emotion Flip Reasoning task.
For sub-task 3, the proposed approach achieves a 5.9-point F1-score improvement over the task baseline.
arXiv Detail & Related papers (2024-04-06T06:47:44Z) - LastResort at SemEval-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task [3.489826905722736]
SemEval 2024 introduces the task of Multimodal Emotion Cause Analysis in Conversations.
This paper proposes models that tackle this task as an utterance labeling and a sequence labeling problem.
On the official task leaderboard, our architecture ranked 8th with an F1-score of 0.1759.
arXiv Detail & Related papers (2024-04-02T16:32:49Z) - Learning from Emotions, Demographic Information and Implicit User Feedback in Task-Oriented Document-Grounded Dialogues [59.516187851808375]
We introduce FEDI, the first English dialogue dataset for task-oriented document-grounded dialogues annotated with demographic information, user emotions and implicit feedback.
Our experiments with FLAN-T5, GPT-2 and LLaMA-2 show that these data have the potential to improve task completion, the factual consistency of the generated responses, and user acceptance.
arXiv Detail & Related papers (2024-01-17T14:52:26Z) - KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z) - EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion in Task-Oriented Dialogue Systems [3.3010169113961325]
EmoWOZ is a large-scale manually emotion-annotated corpus of task-oriented dialogues.
It contains more than 11K dialogues with more than 83K emotion annotations of user utterances.
We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues.
arXiv Detail & Related papers (2021-09-10T15:00:01Z) - COSMIC: COmmonSense knowledge for eMotion Identification in Conversations [95.71018134363976]
We propose COSMIC, a new framework that incorporates different elements of commonsense such as mental states, events, and causal relations.
We show that COSMIC achieves new state-of-the-art results for emotion recognition on four different benchmark conversational datasets.
arXiv Detail & Related papers (2020-10-06T15:09:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.