Emotion-Regularized Conditional Variational Autoencoder for Emotional Response Generation
- URL: http://arxiv.org/abs/2104.08857v1
- Date: Sun, 18 Apr 2021 13:53:20 GMT
- Title: Emotion-Regularized Conditional Variational Autoencoder for Emotional Response Generation
- Authors: Yu-Ping Ruan and Zhen-Hua Ling
- Abstract summary: This paper presents an emotion-regularized conditional variational autoencoder (Emo-CVAE) model for generating emotional conversation responses.
Experimental results show that our Emo-CVAE model can learn a more informative and structured latent space than a conventional CVAE model.
- Score: 39.392929591449885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an emotion-regularized conditional variational
autoencoder (Emo-CVAE) model for generating emotional conversation responses.
In conventional CVAE-based emotional response generation, emotion labels are
simply used as additional conditions in prior, posterior and decoder networks.
Considering that emotion styles are naturally entangled with semantic contents
in the language space, the Emo-CVAE model utilizes emotion labels to regularize
the CVAE latent space by introducing an extra emotion prediction network. In
the training stage, the estimated latent variables are required to predict the
emotion labels and token sequences of the input responses simultaneously.
Experimental results show that our Emo-CVAE model can learn a more informative
and structured latent space than a conventional CVAE model and output responses
with better content and emotion performance than baseline CVAE and
sequence-to-sequence (Seq2Seq) models.
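To make the training setup concrete, here is a minimal PyTorch sketch of the objective the abstract describes: the latent variable drawn from the posterior must both reconstruct the response tokens and predict the emotion label, alongside the usual KL term. All module choices, dimensions, and names (the GRU encoders, the `emo_pred` head) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmoCVAE(nn.Module):
    """Minimal sketch: a CVAE whose latent z is regularized by an extra
    emotion prediction network (all sizes here are illustrative)."""
    def __init__(self, vocab=5000, emb=128, hid=256, z_dim=64, n_emotions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.ctx_enc = nn.GRU(emb, hid, batch_first=True)    # context encoder
        self.resp_enc = nn.GRU(emb, hid, batch_first=True)   # response encoder
        self.prior = nn.Linear(hid, 2 * z_dim)               # p(z | context)
        self.posterior = nn.Linear(2 * hid, 2 * z_dim)       # q(z | context, response)
        self.decoder = nn.GRU(emb + z_dim, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)
        self.emo_pred = nn.Linear(z_dim, n_emotions)         # the emotion regularizer

    def forward(self, ctx, resp, emo_label):
        _, hc = self.ctx_enc(self.embed(ctx))
        _, hr = self.resp_enc(self.embed(resp))
        mu_p, lv_p = self.prior(hc[-1]).chunk(2, dim=-1)
        mu_q, lv_q = self.posterior(torch.cat([hc[-1], hr[-1]], dim=-1)).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * lv_q).exp()  # reparameterization
        # 1) z must reconstruct the response tokens ...
        dec_in = torch.cat([self.embed(resp[:, :-1]),
                            z.unsqueeze(1).expand(-1, resp.size(1) - 1, -1)], dim=-1)
        logits = self.out(self.decoder(dec_in)[0])
        rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              resp[:, 1:].reshape(-1))
        # 2) ... and simultaneously predict the emotion label
        emo = F.cross_entropy(self.emo_pred(z), emo_label)
        # KL between the diagonal-Gaussian posterior and prior
        kl = 0.5 * (lv_p - lv_q + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp()
                    - 1).sum(-1).mean()
        return rec + kl + emo
```

The `emo` term is the regularizer: drop it and the model reduces to a conventional CVAE in which the emotion label is only an extra input condition.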
Related papers
- Expansion Quantization Network: An Efficient Micro-emotion Annotation and Detection Framework [2.0209172586699173]
We propose an all-labels and training-set label regression method to map label values to energy intensity levels.
This led to the establishment of the Emotion Quantization Network (EQN) framework for micro-emotion detection and annotation.
The EQN framework is the first to achieve automatic micro-emotion annotation with energy-level scores.
arXiv Detail & Related papers (2024-11-09T12:09:26Z)
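The summary above is terse, so the following is only a loose sketch of the final quantization step: mapping continuous per-emotion scores onto discrete energy-intensity levels. The level count and even binning are invented for illustration; EQN itself learns its mapping via all-label regression over the training set.

```python
import numpy as np

def to_energy_levels(label_scores: np.ndarray, n_levels: int = 5) -> np.ndarray:
    """Quantize per-emotion scores in [0, 1] into discrete energy levels.

    label_scores: (n_samples, n_emotions) soft scores, e.g. regression outputs.
    Returns integer levels in {0, ..., n_levels - 1}; 0 means 'absent'.
    """
    clipped = np.clip(label_scores, 0.0, 1.0)
    # Evenly spaced bins are an assumption made for this sketch only.
    return np.minimum((clipped * n_levels).astype(int), n_levels - 1)

scores = np.array([[0.05, 0.62, 0.91]])   # one sample, three micro-emotions
print(to_energy_levels(scores))           # -> [[0 3 4]]
```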
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
arXiv Detail & Related papers (2024-05-17T15:45:08Z)
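As a rough illustration of such a reasoning chain, the sketch below builds a step-by-step prompt for an instruction-following language model; the step ordering follows the chain idea described above, but the exact wording and format are assumptions, not the authors' template.

```python
def ecr_chain_prompt(dialogue: list[str], target_idx: int, emotion: str) -> str:
    """Sketch of a step-by-step emotion-cause reasoning prompt."""
    history = "\n".join(f"[{i}] {utt}" for i, utt in enumerate(dialogue))
    return (
        f"Conversation:\n{history}\n\n"
        f"The speaker of utterance [{target_idx}] expresses '{emotion}'.\n"
        "Reason step by step:\n"
        "1. Summarize the theme of the conversation.\n"
        "2. Describe the speaker's reaction in the target utterance.\n"
        "3. Infer the speaker's inner appraisal behind that reaction.\n"
        "4. Identify which earlier utterances are the causal stimulus.\n"
        "Answer with the utterance indices of the stimulus."
    )
```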
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- Inter Subject Emotion Recognition Using Spatio-Temporal Features From EEG Signal [4.316570025748204]
This work presents an easy-to-implement emotion recognition model that classifies emotions from EEG signals in a subject-independent manner.
The model is a CNN that combines regular, depthwise, and separable convolution layers to classify the emotions (see the sketch after this entry).
The model achieved an accuracy of 73.04%.
arXiv Detail & Related papers (2023-05-27T07:43:19Z)
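A minimal PyTorch sketch of such a stack (regular temporal convolution, depthwise spatial convolution, then a separable convolution) follows; kernel sizes, channel counts, and the electrode/sample dimensions are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class EEGEmotionNet(nn.Module):
    """Sketch of the conv stack described above; all sizes are assumed."""
    def __init__(self, n_electrodes=32, n_samples=512, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            # regular convolution over the time axis
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            # depthwise convolution across electrodes (groups == in_channels)
            nn.Conv2d(8, 16, kernel_size=(n_electrodes, 1), groups=8, bias=False),
            nn.BatchNorm2d(16), nn.ELU(), nn.AvgPool2d((1, 4)),
            # separable convolution = depthwise followed by pointwise 1x1
            nn.Conv2d(16, 16, kernel_size=(1, 16), padding=(0, 8), groups=16,
                      bias=False),
            nn.Conv2d(16, 16, kernel_size=1, bias=False),
            nn.BatchNorm2d(16), nn.ELU(), nn.AvgPool2d((1, 8)),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened feature size once
            feat = self.net(torch.zeros(1, 1, n_electrodes, n_samples)).shape[1]
        self.classify = nn.Linear(feat, n_classes)

    def forward(self, x):          # x: (batch, 1, electrodes, samples)
        return self.classify(self.net(x))
```

Depthwise and separable convolutions keep the parameter count small, which suits the limited size of typical EEG datasets.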
- EmotionIC: emotional inertia and contagion-driven dependency modeling for emotion recognition in conversation [34.24557248359872]
We propose an emotional inertia and contagion-driven dependency modeling approach (EmotionIC) for the ERC task.
Our EmotionIC consists of three main components, i.e., Identity Masked Multi-Head Attention (IMMHA), Dialogue-based Gated Recurrent Unit (DiaGRU), and Skip-chain Conditional Random Field (SkipCRF).
Experimental results show that our method can significantly outperform the state-of-the-art models on four benchmark datasets.
arXiv Detail & Related papers (2023-03-20T13:58:35Z)
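The following sketch illustrates only the masking intuition behind the IMMHA component: splitting attention into an intra-speaker view (inertia: a speaker's own emotional history) and an inter-speaker view (contagion: influence from others). The mask construction is an assumption for illustration; the paper's mechanism is more involved.

```python
import torch

def identity_attention_masks(speaker_ids: torch.Tensor):
    """Build intra- and inter-speaker attention masks from speaker IDs.

    speaker_ids: (batch, n_utterances) integer speaker labels.
    Returns two (batch, T, T) boolean masks over utterance pairs.
    """
    same = speaker_ids.unsqueeze(-1) == speaker_ids.unsqueeze(-2)   # (B, T, T)
    causal = torch.tril(torch.ones_like(same, dtype=torch.bool))    # past only
    intra = same & causal     # inertia: attend to one's own past utterances
    inter = ~same & causal    # contagion: attend to other speakers' past
    return intra, inter

speakers = torch.tensor([[0, 1, 0, 1, 0]])   # one dialogue, two speakers
intra_mask, inter_mask = identity_attention_masks(speakers)
```

Each mask could then drive a separate set of attention heads, e.g., by passing its inverse as `attn_mask` to `torch.nn.MultiheadAttention`, which treats `True` as a disallowed position.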
- EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition [2.359022633145476]
Emotion recognition in conversation (ERC) aims to analyze the speaker's state and identify their emotion in the conversation.
Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency.
We propose a new structure named Emoformer to extract multi-modal emotion vectors from different modalities and fuse them with the sentence vector to form an emotion capsule.
arXiv Detail & Related papers (2022-03-25T08:42:57Z)
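As a sketch of that fusion step, the snippet below concatenates per-modality emotion vectors with a sentence-level semantic vector into a single capsule vector; all dimensions and the single linear fusion layer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EmotionCapsule(nn.Module):
    """Sketch: fuse per-modality emotion vectors with the sentence vector."""
    def __init__(self, d_text=128, d_audio=128, d_visual=128,
                 d_sent=256, d_out=256):
        super().__init__()
        self.fuse = nn.Linear(d_text + d_audio + d_visual + d_sent, d_out)

    def forward(self, e_text, e_audio, e_visual, sentence_vec):
        # concatenate emotional tendency (per modality) with semantic content
        capsule = torch.cat([e_text, e_audio, e_visual, sentence_vec], dim=-1)
        return torch.tanh(self.fuse(capsule))
```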
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
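A minimal sketch of the late-fusion step: each fine-tuned unimodal model (e.g., a speaker-recognition-pretrained speech encoder and a BERT-based text encoder) emits class logits, and their probabilities are mixed afterward. The fixed 0.5 weight is an assumption; in practice it would be tuned on validation data.

```python
import torch

def late_fusion(speech_logits: torch.Tensor, text_logits: torch.Tensor,
                w_speech: float = 0.5) -> torch.Tensor:
    """Mix per-class probabilities from two unimodal emotion classifiers."""
    p_speech = speech_logits.softmax(dim=-1)
    p_text = text_logits.softmax(dim=-1)
    return w_speech * p_speech + (1.0 - w_speech) * p_text
```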
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
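A minimal sketch of a multi-head probing setup over a frozen contextualized encoder is given below; the 32-way emotion head matches the evaluation mentioned above, while the auxiliary valence head and all sizes are assumptions.

```python
import torch.nn as nn

class MultiHeadProbe(nn.Module):
    """Sketch: several small probing heads over a frozen encoder."""
    def __init__(self, encoder: nn.Module, d_model: int, n_emotions: int = 32):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # probe only; encoder stays fixed
        self.emotion_head = nn.Linear(d_model, n_emotions)
        self.valence_head = nn.Linear(d_model, 1)  # assumed auxiliary facet

    def forward(self, x):
        h = self.encoder(x)                  # (batch, d_model) pooled embedding
        return self.emotion_head(h), self.valence_head(h)
```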
- Towards a Unified Framework for Emotion Analysis [12.369106010767283]
EmoCoder is a modular encoder-decoder architecture that generalizes emotion analysis over different tasks.
EmoCoder learns an interpretable language-independent representation of emotions.
arXiv Detail & Related papers (2020-12-01T00:54:13Z)
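As a loose sketch of such a modular design, the snippet below routes task-specific emotion label formats through one shared representation; the two example tasks (categorical labels and 3-d valence-arousal-dominance scores) and all sizes are assumptions for illustration.

```python
import torch.nn as nn

class EmoCoder(nn.Module):
    """Sketch: task-specific encoders/decoders around a shared emotion space."""
    def __init__(self, d_shared=8, task_dims=None):
        super().__init__()
        task_dims = task_dims or {"categories": 5, "vad": 3}  # assumed tasks
        self.encoders = nn.ModuleDict(
            {t: nn.Linear(d, d_shared) for t, d in task_dims.items()})
        self.decoders = nn.ModuleDict(
            {t: nn.Linear(d_shared, d) for t, d in task_dims.items()})

    def forward(self, labels, src_task, dst_task):
        shared = self.encoders[src_task](labels)  # into the shared emotion space
        return self.decoders[dst_task](shared)    # out into another label format
```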
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
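The sketch below shows the basic conditioning pattern: a continuous two-dimensional emotion label (e.g., valence and arousal) is broadcast and concatenated into a generator's feature maps. The tiny encoder-decoder stands in for a real generative model and is purely illustrative.

```python
import torch
import torch.nn as nn

class EmotionConditionedEditor(nn.Module):
    """Sketch: condition an image-to-image model on a 2-D emotion label."""
    def __init__(self, img_channels=3, d_emotion=2, d_hidden=64):
        super().__init__()
        self.enc = nn.Conv2d(img_channels, d_hidden, 4, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(d_hidden + d_emotion, img_channels,
                                      4, stride=2, padding=1)

    def forward(self, img, emotion):          # emotion: (batch, 2) in [-1, 1]
        h = torch.relu(self.enc(img))         # (batch, d_hidden, H/2, W/2)
        e = emotion[:, :, None, None].expand(-1, -1, h.size(2), h.size(3))
        return torch.tanh(self.dec(torch.cat([h, e], dim=1)))
```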
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.