Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation
- URL: http://arxiv.org/abs/2204.11320v1
- Date: Sun, 24 Apr 2022 17:05:36 GMT
- Title: Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation
- Authors: Raman Goel, Seba Susan, Sachin Vashisht, and Armaan Dhanda
- Abstract summary: We propose an emotion-aware transformer encoder for capturing the emotional quotient in the user utterance.
An emotion detector module determines the affective state of the user in the initial phase.
A novel transformer encoder is proposed that adds and normalizes the word embedding with emotion embedding.
- Score: 6.557082555839738
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Modern-day conversational agents are trained to emulate the manner in which
humans communicate. To emotionally bond with the user, these virtual agents
need to be aware of the user's affective state. Transformers are the recent
state of the art in sequence-to-sequence learning, which involves training an
encoder-decoder model with word embeddings from utterance-response pairs. We
propose an emotion-aware transformer encoder that captures the emotional
quotient of the user utterance in order to generate human-like empathetic
responses. The contributions of our paper are as follows: 1) an emotion
detector module trained on the input utterances determines the affective state
of the user in the initial phase; 2) a novel transformer encoder is proposed
that adds and normalizes the word embedding with the emotion embedding, thereby
integrating the semantic and affective aspects of the input utterance; 3) the
encoder and decoder stacks follow the Transformer-XL architecture, the recent
state of the art in language modeling. Experiments on the benchmark Facebook AI
empathetic dialogue dataset confirm the efficacy of our model, as evidenced by
the higher BLEU-4 scores achieved for the generated responses compared to
existing methods. Emotionally intelligent virtual agents are now a reality, and
the inclusion of affect as a modality in all human-machine interfaces is
foreseen in the immediate future.
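As a concrete illustration of the add-and-normalize step described in the abstract, the following is a minimal PyTorch sketch of fusing a predicted emotion embedding with the token embeddings before the encoder. This is not the authors' implementation: the module name, dimensions, and emotion-class count are illustrative assumptions, and the emotion index is assumed to come from a separately trained emotion detector.

```python
import torch
import torch.nn as nn

class EmotionAwareEmbedding(nn.Module):
    """Hypothetical sketch: add an emotion embedding to each word embedding
    and layer-normalize the sum, as the abstract describes."""

    def __init__(self, vocab_size: int, num_emotions: int, d_model: int):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.emotion_emb = nn.Embedding(num_emotions, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, token_ids: torch.Tensor, emotion_id: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len); emotion_id: (batch,), predicted in an
        # earlier phase by the emotion detector module.
        words = self.word_emb(token_ids)                     # (B, T, d)
        emotion = self.emotion_emb(emotion_id).unsqueeze(1)  # (B, 1, d), broadcast over T
        return self.norm(words + emotion)                    # add & normalize

# Usage sketch: the fused embeddings would then feed a Transformer-XL style
# encoder-decoder stack (all sizes below are arbitrary).
embedder = EmotionAwareEmbedding(vocab_size=32000, num_emotions=32, d_model=512)
tokens = torch.randint(0, 32000, (2, 16))
emotions = torch.tensor([3, 7])          # emotion indices from the detector
print(embedder(tokens, emotions).shape)  # torch.Size([2, 16, 512])
```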
Related papers
- Multi-Modal Emotion Recognition by Text, Speech and Video Using Pretrained Transformers [1.0152838128195467]
Three input modalities, namely text, audio (speech), and video, are employed to generate multimodal feature vectors.
Features for each of these modalities are generated using pre-trained Transformer models with fine-tuning.
The best model, which combines feature-level fusion by concatenating feature vectors with classification using a Support Vector Machine, achieves an accuracy of 75.42%.
arXiv Detail & Related papers (2024-02-11T23:27:24Z)
- Attention-based Interactive Disentangling Network for Instance-level Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving the non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- Neural-Logic Human-Object Interaction Detection [67.4993347702353]
We present LogicHOI, a new HOI detector that leverages neural-logic reasoning and Transformers to infer feasible interactions between entities.
Specifically, we modify the self-attention mechanism in the vanilla Transformer, enabling it to reason over the ⟨human, action, object⟩ triplet and constitute novel interactions.
We formulate these two properties in first-order logic and ground them into continuous space to constrain the learning process of our approach, leading to improved performance and zero-shot generalization capabilities.
arXiv Detail & Related papers (2023-11-16T11:47:53Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from the speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Multimodal Emotion Recognition with High-level Speech and Text Features [8.141157362639182]
We propose a novel cross-representation speech model to perform emotion recognition on wav2vec 2.0 speech features.
We also train a CNN-based model to recognize emotions from text features extracted with Transformer-based models.
Our method is evaluated on the IEMOCAP dataset in a 4-class classification problem.
arXiv Detail & Related papers (2021-09-29T07:08:40Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction [55.47134146639492]
This article proposes a unified end-to-end neural architecture that is capable of simultaneously encoding the semantics and the emotions in a post.
Experiments on real-world data demonstrate that the proposed method outperforms state-of-the-art methods in terms of both content coherence and emotion appropriateness.
arXiv Detail & Related papers (2021-06-06T06:26:15Z)
- Multi-Task Learning of Generation and Classification for Emotion-Aware Dialogue Response Generation [9.398596037077152]
We propose a neural response generation model with multi-task learning of generation and classification, focusing on emotion.
Our model, based on BART, a pre-trained transformer encoder-decoder model, is trained to generate responses and recognize emotions simultaneously.
arXiv Detail & Related papers (2021-05-25T06:41:20Z)
- Emotion Eliciting Machine: Emotion Eliciting Conversation Generation based on Dual Generator [18.711852474600143]
We study the problem of positive emotion elicitation, which aims to generate responses that elicit positive emotion in the user.
We propose a weakly supervised Emotion Eliciting Machine (EEM) to address this problem.
EEM outperforms existing models in generating responses with positive emotion elicitation.
arXiv Detail & Related papers (2021-05-18T03:19:25Z)
- Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance, consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z)
- Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion [83.14445041096523]
Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.
We propose a speaker-independent emotional voice conversion framework that can convert anyone's emotion without the need for parallel data.
Experiments show that the proposed speaker-independent framework achieves competitive results for both seen and unseen speakers.
arXiv Detail & Related papers (2020-05-13T13:36:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.