Exploiting Pseudo Future Contexts for Emotion Recognition in Conversations
- URL: http://arxiv.org/abs/2306.15376v1
- Date: Tue, 27 Jun 2023 10:51:02 GMT
- Title: Exploiting Pseudo Future Contexts for Emotion Recognition in Conversations
- Authors: Yinyi Wei, Shuaipeng Liu, Hailei Yan, Wei Ye, Tong Mo, Guanglu Wan
- Abstract summary: We generate pseudo future contexts to improve emotion recognition in conversations.
For an utterance, we generate its future context with pre-trained language models.
These characteristics make pseudo future contexts easily fused with historical contexts and historical speaker-specific contexts.
- Score: 3.3961757428667925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the extensive accumulation of conversational data on the Internet,
emotion recognition in conversations (ERC) has received increasing attention.
Previous efforts of this task mainly focus on leveraging contextual and
speaker-specific features, or integrating heterogeneous external commonsense
knowledge. Among them, some heavily rely on future contexts, which, however,
are not always available in real-life scenarios. This fact inspires us to
generate pseudo future contexts to improve ERC. Specifically, for an utterance,
we generate its future context with pre-trained language models, potentially
containing extra beneficial knowledge in a conversational form homogeneous with
the historical ones. These characteristics make pseudo future contexts easily
fused with historical contexts and historical speaker-specific contexts,
yielding a conceptually simple framework systematically integrating
multi-contexts. Experimental results on four ERC datasets demonstrate our
method's superiority. Further in-depth analyses reveal that pseudo future
contexts can rival real ones to some extent, especially in relatively
context-independent conversations.
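The abstract's core idea is to give each utterance three context views: its historical context, its historical speaker-specific context, and a pseudo future context generated by a pre-trained language model. A minimal sketch of that construction, not the authors' code, is below; `generate_future` is a hypothetical stub standing in for the pre-trained generator, and the window size is an assumed illustration.

```python
# Minimal sketch (not the paper's implementation) of assembling the three
# context views described in the abstract for one target utterance.
# A conversation is a list of (speaker, utterance) pairs.

def generate_future(history):
    """Hypothetical stand-in for a pre-trained LM that continues the dialogue.

    The paper generates real text with a PLM; here we return a placeholder
    turn by the other speaker to keep the sketch self-contained."""
    speaker, _ = history[-1]
    other = "B" if speaker == "A" else "A"
    return [(other, "<pseudo future utterance>")]

def build_contexts(conversation, i, window=3):
    """Return (historical, speaker-specific, pseudo future) contexts for
    utterance i. All three share the same conversational form, which is
    what makes them easy to fuse downstream."""
    speaker_i, _ = conversation[i]
    historical = conversation[max(0, i - window) : i + 1]
    speaker_specific = [(s, u) for s, u in conversation[: i + 1] if s == speaker_i]
    pseudo_future = generate_future(conversation[: i + 1])
    return historical, speaker_specific, pseudo_future

conv = [("A", "I lost my keys."), ("B", "Oh no, where?"), ("A", "No idea.")]
hist, spk, fut = build_contexts(conv, 2)
```

In this toy run, the speaker-specific view keeps only A's turns, and the pseudo future view appends a generated turn after utterance 2, so all three views can be encoded the same way as ordinary dialogue history.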
Related papers
- Thread of Thought Unraveling Chaotic Contexts [133.24935874034782]
The "Thread of Thought" (ThoT) strategy draws inspiration from human cognitive processes.
In experiments, ThoT significantly improves reasoning performance compared to other prompting techniques.
arXiv Detail & Related papers (2023-11-15T06:54:44Z)
- History-Aware Hierarchical Transformer for Multi-session Open-domain Dialogue System [59.78425104243993]
We propose History-Aware Hierarchical Transformer (HAHT) for multi-session open-domain dialogue.
HAHT maintains a long-term memory of historical conversations and uses this history to understand the current conversation context.
Experimental results on a large-scale Multi-Session Conversation dataset suggest that the proposed HAHT model consistently outperforms baseline models.
arXiv Detail & Related papers (2023-02-02T06:54:33Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances beyond the sequential contextualization from PrLMs.
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts the strong PrLM baselines in four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- Precognition in Task-oriented Dialogue Understanding: Posterior Regularization by Future Context [8.59600111891194]
We propose to jointly model historical and future information through the posterior regularization method.
We minimize the KL divergence between the two posteriors to regularize our model during training.
Experiments on two dialogue datasets validate the effectiveness of our proposed method.
arXiv Detail & Related papers (2022-03-07T09:58:50Z)
- Conversational speech recognition leveraging effective fusion methods for cross-utterance language modeling [12.153618111267514]
We put forward disparate conversation history fusion methods for language modeling in automatic speech recognition.
A novel audio-fusion mechanism is introduced, which manages to fuse and utilize the acoustic embeddings of a current utterance and the semantic content of its corresponding conversation history.
To flesh out our ideas, we frame the ASR N-best hypothesis rescoring task as a prediction problem, leveraging BERT, an iconic pre-trained LM.
arXiv Detail & Related papers (2021-11-05T09:07:23Z)
- $C^3$: Compositional Counterfactual Contrastive Learning for Video-grounded Dialogues [97.25466640240619]
Video-grounded dialogue systems aim to integrate video understanding and dialogue understanding to generate responses relevant to both the dialogue and video context.
Most existing approaches employ deep learning models and have achieved remarkable performance, given the relatively small datasets available.
We propose a novel approach of Compositional Counterfactual Contrastive Learning to develop contrastive training between factual and counterfactual samples in video-grounded dialogues.
arXiv Detail & Related papers (2021-06-16T16:05:27Z)
- BERT Embeddings Can Track Context in Conversational Search [5.3222282321717955]
We develop a conversational search system that helps people search for information in a natural way.
The system is able to understand the context in which a question is posed, tracking the current state of the conversation and detecting mentions of previous questions and answers.
arXiv Detail & Related papers (2021-04-13T22:02:24Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Look Before you Speak: Visually Contextualized Utterances [88.58909442073858]
We create a task for predicting utterances in a video using both visual frames and transcribed speech as context.
By exploiting the large number of instructional videos online, we train a model to solve this task at scale, without the need for manual annotations.
Our model achieves state-of-the-art performance on a number of downstream VideoQA benchmarks.
arXiv Detail & Related papers (2020-12-10T14:47:02Z)
- Regularizing Dialogue Generation by Imitating Implicit Scenarios [38.22638543470511]
We improve generative dialogue systems by taking into account dialogue history and future conversation.
The conventional dialogue model that has no access to future conversations is effectively regularized.
Our approach significantly outperforms state-of-the-art baselines on diversity and relevance.
arXiv Detail & Related papers (2020-10-05T10:10:19Z)
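The "Precognition" entry above regularizes a history-only model toward a model that also sees future context by penalizing the KL divergence between their output distributions during training. A toy sketch of that loss, not the original implementation (function names and the weight are illustrative assumptions), could look like:

```python
# Toy sketch of posterior regularization by future context: the task loss is
# augmented with a KL term that pulls the history-only posterior toward the
# posterior of a model that also observed future turns (training time only).
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two categorical distributions over the same labels."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def regularized_loss(task_loss, p_history_only, p_with_future, weight=0.1):
    """Total training loss = task loss + weighted KL regularizer."""
    return task_loss + weight * kl_divergence(p_history_only, p_with_future)
```

At inference time only the history-only model is used, so no future context is required, which mirrors the motivation of the main paper.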
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.