Emotions as abstract evaluation criteria in biological and artificial intelligences
- URL: http://arxiv.org/abs/2111.15275v1
- Date: Tue, 30 Nov 2021 10:49:04 GMT
- Title: Emotions as abstract evaluation criteria in biological and artificial intelligences
- Authors: Claudius Gros
- Abstract summary: We propose a framework which mimics emotions on a functional level.
Based on time allocation via emotional stationarity (TAES), emotions are implemented as abstract criteria.
The long-term goal of the agent, to align experience with character, is achieved by optimizing the frequencies with which individual tasks are selected.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biological as well as advanced artificial intelligences (AIs) need to decide
which goals to pursue. We review nature's solution to the time allocation
problem, which is based on a continuously readjusted categorical weighting
mechanism we experience introspectively as emotions. One observes
phylogenetically that the available number of emotional states increases hand
in hand with the cognitive capabilities of animals, and that rising levels of
intelligence entail ever larger sets of behavioral options. Our ability to
experience a multitude of potentially conflicting feelings is, in this view, not
a leftover of a more primitive heritage, but a generic mechanism for
attributing values to behavioral options that cannot be specified at birth. On
this account, emotions are essential for understanding the mind.
For concreteness, we propose and discuss a framework which mimics emotions on
a functional level. Based on time allocation via emotional stationarity (TAES),
emotions are implemented as abstract criteria, such as satisfaction, challenge
and boredom, which serve to evaluate activities that have been carried out. The
resulting timeline of experienced emotions is compared with the 'character' of
the agent, which is defined in terms of a preferred distribution of emotional
states. The long-term goal of the agent, to align experience with character, is
achieved by optimizing the frequencies with which individual tasks are selected.
Upon optimization, the statistics of experienced emotions become stationary.
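For concreteness at the code level, the sketch below shows one way a TAES-style loop could be set up. Everything in it is an illustrative assumption rather than the paper's implementation: the toy task-to-emotion probabilities, the choice of KL divergence as the experience-character mismatch, and the projected random-search optimizer.

```python
import numpy as np

# Minimal TAES-style sketch (illustrative assumptions throughout, not the
# paper's code): tasks elicit emotional evaluations with fixed probabilities,
# and the agent tunes its task-selection frequencies so that the statistics
# of experienced emotions match its 'character'.

rng = np.random.default_rng(0)

EMOTIONS = ["satisfaction", "challenge", "boredom"]  # criteria named in the abstract

# Hypothetical task-to-emotion model: rows are tasks, columns are emotions.
TASK_EMOTION_PROBS = np.array([
    [0.7, 0.2, 0.1],   # task 0: mostly satisfying
    [0.2, 0.7, 0.1],   # task 1: mostly challenging
    [0.1, 0.2, 0.7],   # task 2: mostly boring
])

# 'Character': the agent's preferred distribution over emotional states.
CHARACTER = np.array([0.5, 0.4, 0.1])

def experienced(task_freqs):
    """Emotion statistics produced by selecting tasks with these frequencies."""
    return task_freqs @ TASK_EMOTION_PROBS

def mismatch(task_freqs):
    """KL divergence from character to experience (one possible alignment measure)."""
    p = experienced(task_freqs)
    return float(np.sum(CHARACTER * np.log(CHARACTER / p)))

# Projected random search over the probability simplex.
freqs, best = np.ones(3) / 3, np.inf
for _ in range(5000):
    trial = np.clip(freqs + 0.05 * rng.normal(size=3), 1e-6, None)
    trial /= trial.sum()                     # stay on the simplex
    if (m := mismatch(trial)) < best:
        freqs, best = trial, m

print("task frequencies    :", np.round(freqs, 3))   # approaches ~[0.6, 0.4, 0.0]
print("experienced emotions:", np.round(experienced(freqs), 3))
print("character           :", CHARACTER)
```

Once the mismatch stops improving, the task frequencies freeze and the experienced-emotion statistics become stationary, which is the sense in which the agent's experience is aligned with its character.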
Related papers
- Disentangle Identity, Cooperate Emotion: Correlation-Aware Emotional Talking Portrait Generation [63.94836524433559]
DICE-Talk is a framework that disentangles identity from emotion and lets emotions with similar characteristics cooperate.
First, we develop a disentangled emotion embedder that jointly models audio-visual emotional cues through cross-modal attention.
Second, we introduce a correlation-enhanced emotion conditioning module with learnable Emotion Banks.
Third, we design an emotion discrimination objective that enforces affective consistency during the diffusion process.
arXiv Detail & Related papers (2025-04-25T05:28:21Z)
- AI with Emotions: Exploring Emotional Expressions in Large Language Models [0.0]
Large Language Models (LLMs) role-play as agents answering questions with specified emotional states.
Russell's Circumplex model characterizes emotions along the sleepy-activated (arousal) and pleasure-displeasure (valence) axes; a minimal sketch of this plane appears after the list below.
An evaluation showed that the emotional states of the generated answers were consistent with the specifications.
arXiv Detail & Related papers (2025-04-20T18:49:25Z)
- Modelling Emotions in Face-to-Face Setting: The Interplay of Eye-Tracking, Personality, and Temporal Dynamics [1.4645774851707578]
In this study, we showcase how integrating eye-tracking data, temporal dynamics, and personality traits can substantially enhance the detection of both perceived and felt emotions.
Our findings inform the design of future affective computing and human-agent systems.
arXiv Detail & Related papers (2025-03-18T13:15:32Z)
- Enhancing Emotional Generation Capability of Large Language Models via Emotional Chain-of-Thought [50.13429055093534]
Large Language Models (LLMs) have shown remarkable performance in various emotion recognition tasks.
We propose the Emotional Chain-of-Thought (ECoT) to enhance the performance of LLMs on various emotional generation tasks.
arXiv Detail & Related papers (2024-01-12T16:42:10Z)
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- x-enVENT: A Corpus of Event Descriptions with Experiencer-specific Emotion and Appraisal Annotations [13.324006587838523]
We argue that emotion classification should be performed in an integrated manner, including the different semantic roles that participate in an emotion episode.
Based on appraisal theories in psychology, we compile an English corpus of written event descriptions.
The descriptions depict emotion-eliciting circumstances, and they contain mentions of people who responded emotionally.
arXiv Detail & Related papers (2022-03-21T12:02:06Z)
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first time a stimuli-selection process has been introduced into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- A Multi-Componential Approach to Emotion Recognition and the Effect of Personality [0.0]
This paper applies a componential framework with a data-driven approach to characterize emotional experiences evoked during movie watching.
The results suggest that differences between various emotions can be captured by a few (at least 6) latent dimensions.
Results show that a componential model with a limited number of descriptors is still able to predict the level of experienced discrete emotion.
arXiv Detail & Related papers (2020-10-22T01:27:23Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle low-resource multimodal emotion recognition.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
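As referenced in the "AI with Emotions" entry above, Russell's Circumplex model places emotions in a valence-arousal plane. The snippet below sketches that idea; the emotion labels and coordinates are rough illustrative placements, assumptions for this sketch rather than values taken from any of the listed papers.

```python
import math

# Rough, assumed placements on Russell's circumplex: valence is the
# pleasure-displeasure axis, arousal the activated-sleepy axis.
CIRCUMPLEX = {
    "excited":   ( 0.7,  0.7),
    "content":   ( 0.8, -0.4),
    "sleepy":    ( 0.0, -0.9),
    "depressed": (-0.7, -0.5),
    "angry":     (-0.7,  0.7),
}

def nearest_emotion(valence: float, arousal: float) -> str:
    """Label a (valence, arousal) reading with the closest circumplex emotion."""
    return min(CIRCUMPLEX, key=lambda e: math.dist((valence, arousal), CIRCUMPLEX[e]))

print(nearest_emotion(0.6, 0.5))   # -> excited
```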