HICEM: A High-Coverage Emotion Model for Artificial Emotional
Intelligence
- URL: http://arxiv.org/abs/2206.07593v1
- Date: Wed, 15 Jun 2022 15:21:30 GMT
- Title: HICEM: A High-Coverage Emotion Model for Artificial Emotional
Intelligence
- Authors: Benjamin Wortman and James Z. Wang
- Abstract summary: Next-generation artificial emotional intelligence (AEI) is taking center stage to address users' desire for deeper, more meaningful human-machine interaction.
Unlike theories of emotion, which have been the historical focus in psychology, emotion models are descriptive tools.
This work has broad implications in social robotics, human-machine interaction, mental healthcare, and computational psychology.
- Score: 9.153146173929935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As social robots and other intelligent machines enter the home, artificial
emotional intelligence (AEI) is taking center stage to address users' desire
for deeper, more meaningful human-machine interaction. To accomplish such
efficacious interaction, next-generation AEI needs comprehensive human
emotion models for training. Unlike theories of emotion, which have been the
historical focus in psychology, emotion models are descriptive tools. In
practice, the strongest models need robust coverage, which means defining the
smallest core set of emotions from which all others can be derived. To achieve
the desired coverage, we turn to word embeddings from natural language
processing. Using unsupervised clustering techniques, our experiments show that
with as few as 15 discrete emotion categories, we can provide maximum coverage
across six major languages--Arabic, Chinese, English, French, Spanish, and
Russian. In support of our findings, we also examine annotations from two
large-scale emotion recognition datasets to assess the validity of existing
emotion models compared to human perception at scale. Because robust,
comprehensive emotion models are foundational for developing real-world
affective computing applications, this work has broad implications in social
robotics, human-machine interaction, mental healthcare, and computational
psychology.
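The abstract's core method, unsupervised clustering of emotion-word embeddings to find a small covering set of categories, can be sketched as follows. The toy vocabulary, 2-D vectors, and farthest-point seeding below are illustrative assumptions for this sketch, not the authors' actual embeddings or algorithm:

```python
import numpy as np

# Toy 2-D "embeddings" for a few emotion words (illustrative values only;
# the paper clusters real multilingual word embeddings).
emotion_vecs = {
    "joy": [0.9, 0.8], "delight": [0.85, 0.75],
    "anger": [-0.8, 0.7], "rage": [-0.85, 0.65],
    "sadness": [-0.7, -0.8], "grief": [-0.75, -0.85],
}
words = list(emotion_vecs)
X = np.array([emotion_vecs[w] for w in words], dtype=float)

def farthest_point_init(X, k):
    # Deterministic seeding: start at X[0], then repeatedly add the point
    # farthest from all chosen centers (a common k-means seeding variant).
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[int(np.argmax(d))])
    return np.array(centers)

def kmeans(X, k, iters=50):
    centers = farthest_point_init(X, k)
    for _ in range(iters):
        # Assign each word to its nearest center, then recompute centers.
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X, k=3)
clusters = {}
for w, lab in zip(words, labels):
    clusters.setdefault(int(lab), []).append(w)
print(clusters)  # near-synonyms land in the same cluster
```

In the paper's setting, X would hold high-dimensional embeddings of emotion words in each of the six languages, and k would be varied to measure how coverage saturates (around 15 categories in the authors' experiments).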
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- CAPE: A Chinese Dataset for Appraisal-based Emotional Generation using Large Language Models [30.40159858361768]
We introduce a two-stage automatic data generation framework to create CAPE, a Chinese emotional corpus grounded in cognitive appraisal theory.
This corpus facilitates the generation of dialogues with contextually appropriate emotional responses by accounting for diverse personal and situational factors.
Our study shows the potential for advancing emotional expression in conversational agents, paving the way for more nuanced and meaningful human-computer interactions.
arXiv Detail & Related papers (2024-10-18T03:33:18Z)
- Emotion Detection through Body Gesture and Face [0.0]
The project addresses the challenge of emotion recognition by focusing on non-facial cues, specifically hand and body gestures.
Traditional emotion recognition systems mainly rely on facial expression analysis and often ignore the rich emotional information conveyed through body language.
The project aims to contribute to the field of affective computing by enhancing the ability of machines to interpret and respond to human emotions in a more comprehensive and nuanced way.
arXiv Detail & Related papers (2024-07-13T15:15:50Z)
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
arXiv Detail & Related papers (2024-05-17T15:45:08Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not treated as salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Modeling emotion for human-like behavior in future intelligent robots [0.913755431537592]
We show how neuroscience can help advance the current state of the art.
We argue that a stronger integration of emotion-related processes in robot models is critical for the design of human-like behavior.
arXiv Detail & Related papers (2020-09-30T17:32:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.