Emotion Twenty Questions Dialog System for Lexical Emotional
Intelligence
- URL: http://arxiv.org/abs/2210.02400v1
- Date: Wed, 5 Oct 2022 17:20:26 GMT
- Title: Emotion Twenty Questions Dialog System for Lexical Emotional
Intelligence
- Authors: Abe Kazemzadeh and Adedamola Sanusi and Huihui (Summer) Nie
- Abstract summary: This paper presents a web-based demonstration of Emotion Twenty Questions (EMO20Q), a dialog game whose purpose is to study how people describe emotions.
EMO20Q can also be used to develop artificially intelligent dialog agents that can play the game.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper presents a web-based demonstration of Emotion Twenty Questions
(EMO20Q), a dialog game whose purpose is to study how people describe emotions.
EMO20Q can also be used to develop artificially intelligent dialog agents that
can play the game. In previous work, an EMO20Q agent used a sequential Bayesian
machine learning model and could play the question-asking role. Newer
transformer-based neural machine learning models have made it possible to
develop an agent for the question-answering role.
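As a rough illustration of the question-asking role from the previous work mentioned above, the sketch below performs a standard sequential Bayesian update over a set of candidate emotion words. It is not the published EMO20Q model: the emotion vocabulary, the questions, and the likelihood values are hypothetical placeholders chosen only to make the update step concrete.

```python
# Minimal sequential Bayesian sketch of an EMO20Q-style question-asking agent.
# The emotion vocabulary, questions, and likelihoods below are hypothetical
# placeholders, not the published EMO20Q model or data.

EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise"]

# Toy likelihood table: P(answer == "yes" | emotion, question).
LIKELIHOOD = {
    "Is it a pleasant emotion?": {
        "happiness": 0.9, "sadness": 0.1, "anger": 0.1, "fear": 0.1, "surprise": 0.6},
    "Is it a high-arousal emotion?": {
        "happiness": 0.6, "sadness": 0.2, "anger": 0.9, "fear": 0.8, "surprise": 0.9},
}


def update_belief(belief, question, answer):
    """One sequential Bayes step: multiply the prior by the answer likelihood
    for each candidate emotion, then renormalize."""
    posterior = {}
    for emotion, prior in belief.items():
        p_yes = LIKELIHOOD[question][emotion]
        likelihood = p_yes if answer == "yes" else 1.0 - p_yes
        posterior[emotion] = prior * likelihood
    total = sum(posterior.values()) or 1.0
    return {e: p / total for e, p in posterior.items()}


# Uniform prior over the emotion vocabulary, then a short simulated dialog.
belief = {e: 1.0 / len(EMOTIONS) for e in EMOTIONS}
for question, answer in [("Is it a pleasant emotion?", "no"),
                         ("Is it a high-arousal emotion?", "yes")]:
    belief = update_belief(belief, question, answer)

best_guess = max(belief, key=belief.get)
print(f"Current best guess: {best_guess} (p={belief[best_guess]:.2f})")
```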
This demo paper describes the recent developments in the question-answering
role of the EMO20Q game, which requires the agent to respond to more open-ended
inputs. Furthermore, we describe the design of the system, including the
web-based front-end, the agent architecture and programming, and updates to the
software used in earlier work.
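For the question-answering role, which must handle open-ended player inputs, a minimal sketch of the general idea follows. It is not the agent described in this paper: instead of a model trained on EMO20Q dialog data, it simply prompts an off-the-shelf instruction-tuned transformer (google/flan-t5-base, chosen here as a stand-in) to answer yes/no/maybe about a hidden emotion word.

```python
# Sketch only: prompts a generic instruction-tuned transformer to play the
# question-answering role; the real EMO20Q agent is not this model.
from transformers import pipeline

answerer = pipeline("text2text-generation", model="google/flan-t5-base")


def answer_question(hidden_emotion: str, player_question: str) -> str:
    """Map an open-ended player question about the hidden emotion word
    onto a short answer ('yes', 'no', or a hedge)."""
    prompt = (
        f"You are playing twenty questions. The secret word is the emotion "
        f"'{hidden_emotion}'. Answer the question with yes, no, or maybe.\n"
        f"Question: {player_question}\nAnswer:"
    )
    result = answerer(prompt, max_new_tokens=5)
    return result[0]["generated_text"].strip().lower()


# Example turn: the agent holds "frustration" and the player asks an
# open-ended question rather than a strict yes/no one.
print(answer_question("frustration", "Would you feel it while stuck in traffic?"))
```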
The demo system will be available to collect pilot data during the ACII
conference, and this data will be used to inform future experiments and system
design.
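The overall architecture, a web-based front-end that relays each player turn to the dialog agent, might look roughly like the sketch below. The /turn route, JSON fields, and in-memory session store are hypothetical and are not the demo system's actual interface.

```python
# Minimal sketch of a web front-end wrapping an EMO20Q-style agent.
# The /turn endpoint, JSON fields, and in-memory session store are
# hypothetical placeholders, not the demo system's actual interface.
from flask import Flask, jsonify, request

app = Flask(__name__)
sessions = {}  # session_id -> dialog state (e.g., belief or hidden emotion)


def agent_reply(state: dict, user_utterance: str) -> str:
    """Placeholder for the dialog agent; a real system would call the
    question-asking or question-answering agent here."""
    state.setdefault("history", []).append(user_utterance)
    return "Is it a pleasant emotion?"  # canned question for illustration


@app.route("/turn", methods=["POST"])
def turn():
    payload = request.get_json(force=True)
    session_id = payload.get("session_id", "default")
    state = sessions.setdefault(session_id, {})
    reply = agent_reply(state, payload.get("utterance", ""))
    return jsonify({"reply": reply, "turn": len(state["history"])})


if __name__ == "__main__":
    app.run(debug=True)
```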
Related papers
- Personality-affected Emotion Generation in Dialog Systems [67.40609683389947]
We propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system.
We analyze the challenges in this task, i.e., (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information in the dialog context.
Results suggest that our method improves emotion generation performance by 13% in macro-F1 and 5% in weighted-F1 over the BERT-base model.
arXiv Detail & Related papers (2024-04-03T08:48:50Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- EmoTwiCS: A Corpus for Modelling Emotion Trajectories in Dutch Customer Service Dialogues on Twitter [9.2878798098526]
This paper introduces EmoTwiCS, a corpus of 9,489 Dutch customer service dialogues on Twitter that are annotated for emotion trajectories.
The term 'emotion trajectory' refers not only to the fine-grained emotions experienced by customers, but also to the events happening prior to the conversation and the responses made by the human operator.
arXiv Detail & Related papers (2023-10-10T11:31:11Z)
- The System Model and the User Model: Exploring AI Dashboard Design [79.81291473899591]
We argue that sophisticated AI systems should have dashboards, just like all other complicated devices.
We conjecture that, for many systems, the two most important models will be of the user and of the system itself.
Finding ways to identify, interpret, and display these two models should be a core part of interface research for AI.
arXiv Detail & Related papers (2023-05-04T00:22:49Z)
- Empathetic Response Generation with State Management [32.421924357260075]
The goal of empathetic response generation is to enhance the ability of dialogue systems to perceive and express emotions in conversations.
We propose a novel empathetic response generation model that can consider multiple state information including emotions and intents simultaneously.
Experimental results show that dynamically managing different information can help the model generate more empathetic responses.
arXiv Detail & Related papers (2022-05-07T16:17:28Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- An Appraisal Transition System for Event-driven Emotions in Agent-based Player Experience Testing [9.26240699624761]
We propose an automated player experience (PX) testing approach based on a formal model of event-based emotions.
A working prototype of the model is integrated on top of Aplib, a tactical agent programming library, to create intelligent PX test agents.
arXiv Detail & Related papers (2021-05-12T11:09:35Z)
- A Commonsense Reasoning Framework for Explanatory Emotion Attribution, Generation and Re-classification [3.464871689508835]
We present an explainable system for emotion attribution and recommendation, called DEGARI.
The system exploits the logic TCL to automatically generate novel commonsense semantic representations of compound emotions.
The generated emotions correspond to prototypes, i.e. commonsense representations of given concepts.
arXiv Detail & Related papers (2021-01-11T16:44:38Z)
- The Adapter-Bot: All-In-One Controllable Conversational Model [66.48164003532484]
We propose a dialogue model that uses a fixed backbone model such as DialoGPT and triggers on-demand dialogue skills via different adapters.
Depending on the skills, the model is able to process multiple knowledge types, such as text, tables, and empathetic responses.
We evaluate our model using automatic evaluation by comparing it with existing state-of-the-art conversational models.
arXiv Detail & Related papers (2020-08-28T10:59:31Z)
- The BIRAFFE2 Experiment. Study in Bio-Reactions and Faces for Emotion-based Personalization for AI Systems [0.0]
We present a unified paradigm for capturing the emotional responses of different people.
We provide a framework that can be easily used and extended for machine learning methods.
arXiv Detail & Related papers (2020-07-29T18:35:34Z)