Multi-Task Learning of Generation and Classification for Emotion-Aware
Dialogue Response Generation
- URL: http://arxiv.org/abs/2105.11696v1
- Date: Tue, 25 May 2021 06:41:20 GMT
- Title: Multi-Task Learning of Generation and Classification for Emotion-Aware
Dialogue Response Generation
- Authors: Tatsuya Ide and Daisuke Kawahara
- Abstract summary: We propose a neural response generation model with multi-task learning of generation and classification, focusing on emotion.
Our model based on BART, a pre-trained transformer encoder-decoder model, is trained to generate responses and recognize emotions simultaneously.
- Score: 9.398596037077152
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For a computer to naturally interact with a human, it needs to be human-like.
In this paper, we propose a neural response generation model with multi-task
learning of generation and classification, focusing on emotion. Our model based
on BART (Lewis et al., 2020), a pre-trained transformer encoder-decoder model,
is trained to generate responses and recognize emotions simultaneously.
Furthermore, we weight the losses for the tasks to control the update of
parameters. Automatic evaluations and crowdsourced manual evaluations show that
the proposed model makes generated responses more emotionally aware.
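The loss weighting described in the abstract (combining the generation and emotion-classification losses with per-task weights to control how strongly each task updates the shared parameters) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the actual model fine-tunes BART with an emotion-classification head, and all function names, weights, and numbers below are invented for the example.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the gold class under a probability distribution."""
    return -math.log(probs[target_idx])

def multitask_loss(gen_token_probs, gen_targets, cls_probs, cls_target,
                   w_gen=1.0, w_cls=0.5):
    """Weighted sum of a response-generation loss and an emotion-classification loss.

    gen_token_probs: per-decoding-step distributions over a toy vocabulary
    gen_targets:     gold token index at each step
    cls_probs:       distribution over emotion labels from a classification head
    cls_target:      gold emotion label index
    w_gen, w_cls:    task weights (hypothetical values) controlling how strongly
                     each loss updates the shared encoder-decoder parameters
    """
    # Token-level cross-entropy, averaged over decoding steps
    gen_loss = sum(cross_entropy(p, t)
                   for p, t in zip(gen_token_probs, gen_targets)) / len(gen_targets)
    # Utterance-level emotion classification loss
    cls_loss = cross_entropy(cls_probs, cls_target)
    return w_gen * gen_loss + w_cls * cls_loss

# Toy example: two decoding steps over a 3-token vocabulary,
# plus a 4-way emotion label prediction.
gen_probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
gen_targets = [0, 1]
cls_probs = [0.1, 0.6, 0.2, 0.1]
loss = multitask_loss(gen_probs, gen_targets, cls_probs, cls_target=1)
```

In a real training loop the two losses would come from the decoder's language-modeling head and a classifier over the encoder output, with the weights tuned so that emotion recognition shapes, but does not dominate, response generation.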
Related papers
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Computer Vision Estimation of Emotion Reaction Intensity in the Wild [1.5481864635049696]
We describe our submission to the newly introduced Emotional Reaction Intensity (ERI) Estimation challenge.
We developed four deep neural networks trained in the visual domain and a multimodal model trained with both visual and audio features to predict emotion reaction intensity.
arXiv Detail & Related papers (2023-03-19T19:09:41Z)
- Masked World Models for Visual Control [90.13638482124567]
We introduce a visual model-based RL framework that decouples visual representation learning and dynamics learning.
We demonstrate that our approach achieves state-of-the-art performance on a variety of visual robotic tasks.
arXiv Detail & Related papers (2022-06-28T18:42:27Z)
- Empathetic Response Generation with State Management [32.421924357260075]
The goal of empathetic response generation is to enhance the ability of dialogue systems to perceive and express emotions in conversations.
We propose a novel empathetic response generation model that can consider multiple state information including emotions and intents simultaneously.
Experimental results show that dynamically managing different information can help the model generate more empathetic responses.
arXiv Detail & Related papers (2022-05-07T16:17:28Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation [6.557082555839738]
We propose an emotion-aware transformer encoder for capturing the emotional quotient in the user utterance.
An emotion detector module determines the affective state of the user in the initial phase.
A novel transformer encoder is proposed that adds and normalizes the word embedding with emotion embedding.
arXiv Detail & Related papers (2022-04-24T17:05:36Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Affect-Driven Modelling of Robot Personality for Collaborative Human-Robot Interactions [16.40684407420441]
Collaborative interactions require social robots to adapt to the dynamics of human affective behaviour.
We propose a novel framework for personality-driven behaviour generation in social robots.
arXiv Detail & Related papers (2020-10-14T16:34:14Z)
- The BIRAFFE2 Experiment. Study in Bio-Reactions and Faces for Emotion-based Personalization for AI Systems [0.0]
We present a unified paradigm for capturing the emotional responses of different persons.
We provide a framework that can be easily used and extended for machine learning methods.
arXiv Detail & Related papers (2020-07-29T18:35:34Z)
- SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST creates new state-of-the-art on well-studied task-oriented dialog benchmarks.
arXiv Detail & Related papers (2020-05-11T17:58:34Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that a machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.