Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks
- URL: http://arxiv.org/abs/2109.09366v1
- Date: Mon, 20 Sep 2021 08:33:38 GMT
- Title: Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks
- Authors: Gaël Guibon (LTCI, IP Paris), Matthieu Labeau (LTCI, IP Paris), Hélène Flamein, Luce Lefeuvre, Chloé Clavel (LTCI, IP Paris)
- Abstract summary: We place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow.
We tackle these challenges with Few-Shot Learning, under the hypothesis that it can serve conversational emotion classification across different languages and with sparse labels.
We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several recent studies on dyadic human-human interactions have focused on
conversations without specific business objectives. However, many companies
might benefit from studies dedicated to more precise environments, such as after-sales
services or customer satisfaction surveys. In this work, we place ourselves in the
scope of a live chat customer service in which we want to detect emotions and their
evolution in the conversation flow. This context leads to multiple challenges, ranging
from exploiting restricted, small and mostly unlabeled datasets to finding and adapting
methods suited to such a context. We tackle these challenges with Few-Shot Learning,
under the hypothesis that it can serve conversational emotion classification across
different languages and with sparse labels. We contribute by proposing a variation of
Prototypical Networks for sequence labeling in conversation, which we name ProtoSeq.
We test this method on two datasets in different languages: daily conversations in
English and customer service chat conversations in French. When applied to emotion
classification in conversations, our method proved competitive even compared to other
approaches.
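
The abstract describes ProtoSeq as a variation of Prototypical Networks for sequence labeling over the utterances of a conversation. The sketch below is a minimal, illustrative episode of that general idea in PyTorch, not the authors' released implementation: the GRU context encoder, the pre-computed utterance embeddings, the episode sizes, and all names (ProtoSeqSketch, utt_dim, hidden_dim) are assumptions made for the example. Class prototypes are the means of the support utterance encodings, and each query utterance is labeled by its nearest prototype.

```python
# Minimal sketch of a prototypical-network episode for emotion sequence
# labeling in conversations (illustrative; not the authors' ProtoSeq code).
# Assumptions: utterances arrive as fixed-size vectors, and every emotion
# class appears at least once in the support set of the episode.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProtoSeqSketch(nn.Module):
    def __init__(self, utt_dim: int, hidden_dim: int):
        super().__init__()
        # Contextualize pre-computed utterance embeddings along the conversation.
        self.context = nn.GRU(utt_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def encode(self, convs: torch.Tensor) -> torch.Tensor:
        # convs: (n_conv, n_utt, utt_dim) -> (n_conv, n_utt, 2 * hidden_dim)
        out, _ = self.context(convs)
        return out

    def forward(self, support, support_labels, query):
        s = self.encode(support)                       # (n_s, n_utt, d)
        q = self.encode(query)                         # (n_q, n_utt, d)
        d = s.size(-1)
        flat_s = s.reshape(-1, d)
        flat_y = support_labels.reshape(-1)
        n_classes = int(flat_y.max()) + 1
        # One prototype per emotion class: mean of its support utterance
        # encodings (assumes every class occurs in the support set).
        protos = torch.stack([flat_s[flat_y == c].mean(dim=0)
                              for c in range(n_classes)])
        # Score each query utterance by negative Euclidean distance to prototypes.
        logits = -torch.cdist(q.reshape(-1, d), protos)
        return logits.reshape(q.size(0), q.size(1), n_classes)


# Toy 4-way episode: 8 support and 2 query conversations of 6 utterances each,
# with 32-dimensional utterance embeddings (all sizes are arbitrary).
model = ProtoSeqSketch(utt_dim=32, hidden_dim=64)
support, support_labels = torch.randn(8, 6, 32), torch.randint(0, 4, (8, 6))
query, query_labels = torch.randn(2, 6, 32), torch.randint(0, 4, (2, 6))
logits = model(support, support_labels, query)         # (2, 6, 4)
loss = F.cross_entropy(logits.reshape(-1, 4), query_labels.reshape(-1))
```

In a full few-shot setup, such episodes would be sampled repeatedly from whatever labeled conversations are available, and the encoder would be trained end-to-end on the episode loss above.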
Related papers
- Self-Directed Turing Test for Large Language Models [56.64615470513102]
The Turing test examines whether AIs can exhibit human-like behaviour in natural language conversations.
Traditional Turing tests adopt a rigid dialogue format where each participant sends only one message each time.
This paper proposes the Self-Directed Turing Test, which extends the original test with a burst dialogue format.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- LastResort at SemEval-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task [3.489826905722736]
SemEval 2024 introduces the task of Multimodal Emotion Cause Analysis in Conversations.
This paper proposes models that tackle this task as an utterance labeling and a sequence labeling problem.
On the official leaderboard for the task, our architecture ranked 8th with an F1-score of 0.1759.
arXiv Detail & Related papers (2024-04-02T16:32:49Z)
- Searching for Snippets of Open-Domain Dialogue in Task-Oriented Dialogue Datasets [0.0]
Chit-chat/open-domain dialogues focus on holding a socially engaging talk with a user.
Task-oriented dialogues portray functional goals, such as making a restaurant reservation or booking a plane ticket.
Our study shows that sequences related to social talk are indeed naturally present, motivating further research on how chit-chat is integrated into task-oriented dialogues.
arXiv Detail & Related papers (2023-11-23T16:08:39Z)
- EmoTwiCS: A Corpus for Modelling Emotion Trajectories in Dutch Customer Service Dialogues on Twitter [9.2878798098526]
This paper introduces EmoTwiCS, a corpus of 9,489 Dutch customer service dialogues on Twitter that are annotated for emotion trajectories.
The term 'emotion trajectory' refers not only to the fine-grained emotions experienced by customers, but also to the event happening prior to the conversation and the responses made by the human operator.
arXiv Detail & Related papers (2023-10-10T11:31:11Z)
- End-to-End Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations [0.0]
We present our approach to constructing a large-scale real-life dataset (CusEmo) for continuous SER in customer service call center conversations.
We adopted the dimensional emotion annotation approach to capture the subtlety, complexity, and continuity of emotions in real-life call center conversations.
The study also addresses the challenges encountered during the application of the End-to-End (E2E) SER system to the dataset.
arXiv Detail & Related papers (2023-10-02T11:53:48Z)
- Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations [4.297070083645049]
This paper presents a multi-scale conversational context learning approach for speech emotion recognition.
We investigated this approach on both speech transcriptions and acoustic segments.
According to our tests, the context derived from previous tokens has a more significant influence on accurate prediction than that derived from the following tokens.
arXiv Detail & Related papers (2023-08-28T20:31:45Z)
- PLACES: Prompting Language Models for Social Conversation Synthesis [103.94325597273316]
We use a small set of expert-written conversations as in-context examples to synthesize a social conversation dataset using prompting.
We perform several thorough evaluations of our synthetic conversations compared to human-collected conversations.
arXiv Detail & Related papers (2023-02-07T05:48:16Z)
- Knowledge-Grounded Conversational Data Augmentation with Generative Conversational Networks [76.11480953550013]
We take a step towards automatically generating conversational data using Generative Conversational Networks.
We evaluate our approach on conversations with and without knowledge on the Topical Chat dataset.
arXiv Detail & Related papers (2022-07-22T22:37:14Z)
- Chat-Capsule: A Hierarchical Capsule for Dialog-level Emotion Analysis [70.98130990040228]
We propose a Context-based Hierarchical Attention Capsule (Chat-Capsule) model, which models both utterance-level and dialog-level emotions and their interrelations.
On a dialog dataset collected from customer support of an e-commerce platform, our model is also able to predict user satisfaction and emotion curve category.
arXiv Detail & Related papers (2022-03-23T08:04:30Z)
- Training Conversational Agents with Generative Conversational Networks [74.9941330874663]
We use Generative Conversational Networks to automatically generate data and train social conversational agents.
We evaluate our approach on TopicalChat with automatic metrics and human evaluators, showing that with 10% of seed data it performs close to the baseline that uses 100% of the data.
arXiv Detail & Related papers (2021-10-15T21:46:39Z)
- COSMIC: COmmonSense knowledge for eMotion Identification in Conversations [95.71018134363976]
We propose COSMIC, a new framework that incorporates different elements of commonsense such as mental states, events, and causal relations.
We show that COSMIC achieves new state-of-the-art results for emotion recognition on four different benchmark conversational datasets.
arXiv Detail & Related papers (2020-10-06T15:09:38Z)