An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language
Model Game Agents
- URL: http://arxiv.org/abs/2309.05076v1
- Date: Sun, 10 Sep 2023 16:55:49 GMT
- Title: An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language
Model Game Agents
- Authors: Maximilian Croissant, Madeleine Frister, Guy Schofield, Cade McCall
- Abstract summary: This study tests the capabilities of large language models to solve emotional intelligence tasks and to simulate emotions.
It presents and evaluates a new chain-of-emotion architecture for emotion simulation within video games, based on psychological appraisal research.
- Score: 0.40964539027092906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of believable, natural, and interactive digital artificial
agents is a field of growing interest. Theoretical uncertainties and technical
barriers present considerable challenges to the field, particularly with
regards to developing agents that effectively simulate human emotions. Large
language models (LLMs) might address these issues by tapping common patterns in
situational appraisal. In three empirical experiments, this study tests the
capabilities of LLMs to solve emotional intelligence tasks and to simulate
emotions. It presents and evaluates a new chain-of-emotion architecture for
emotion simulation within video games, based on psychological appraisal
research. Results show that it outperforms standard LLM architectures on a
range of user experience and content analysis metrics. This study therefore
provides early evidence of how to construct and test affective agents based on
cognitive processes represented in language models.
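The abstract describes the chain-of-emotion architecture only at a conceptual level. Below is a minimal sketch of one possible reading of an appraisal-first prompting loop, in which the agent first produces an explicit appraisal of the situation and then conditions its in-character reply on the accumulated appraisal history. All names (call_llm, ChainOfEmotionAgent, the prompt wording) are illustrative assumptions, not the authors' published implementation.

```python
# Sketch of an appraisal-first ("chain-of-emotion") prompting loop.
# Hypothetical names throughout; not the paper's released code.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

@dataclass
class ChainOfEmotionAgent:
    persona: str                                       # fixed character description
    memory: list[str] = field(default_factory=list)    # running appraisal/emotion log

    def respond(self, situation: str, player_utterance: str) -> str:
        # Step 1: appraise the event before generating any dialogue, mirroring
        # appraisal theory (evaluate the event against the character's goals,
        # then name the resulting emotion).
        appraisal = call_llm(
            f"You are {self.persona}.\n"
            f"Situation: {situation}\n"
            f"The player says: {player_utterance}\n"
            "Appraise this event from the character's perspective (relevance to "
            "goals, pleasantness, control) and state in one sentence which "
            "emotion the character feels and why."
        )
        self.memory.append(appraisal)

        # Step 2: condition the in-character reply on the appraisal chain,
        # rather than asking for an 'emotional' reply in a single step.
        history = "\n- ".join(self.memory)
        reply_prompt = (
            f"You are {self.persona}.\n"
            f"Emotion history:\n- {history}\n"
            f"Situation: {situation}\n"
            f"The player says: {player_utterance}\n"
            "Reply in character, letting the current emotion shape tone and word choice."
        )
        return call_llm(reply_prompt)
```

The separation into an appraisal step and a response step is the point of contrast with a standard single-prompt agent, which the paper reports underperforming on user experience and content analysis metrics.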
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- Recent Advancement of Emotion Cognition in Large Language Models [40.23093997384297]
Emotion cognition in large language models (LLMs) is crucial for enhancing performance across various applications.
We explore the current landscape of research, which primarily revolves around emotion classification, emotionally rich response generation, and Theory of Mind assessments.
arXiv Detail & Related papers (2024-09-20T09:34:58Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Human Simulacra: Benchmarking the Personification of Large Language Models [38.21708264569801]
Large language models (LLMs) are recognized as systems that closely mimic aspects of human intelligence.
This paper introduces a framework for constructing virtual characters' life stories from the ground up.
Experimental results demonstrate that our constructed simulacra can produce personified responses that align with their target characters.
arXiv Detail & Related papers (2024-02-28T09:11:14Z)
- Enhancing Emotional Generation Capability of Large Language Models via Emotional Chain-of-Thought [50.13429055093534]
Large Language Models (LLMs) have shown remarkable performance in various emotion recognition tasks.
We propose the Emotional Chain-of-Thought (ECoT) to enhance the performance of LLMs on various emotional generation tasks.
arXiv Detail & Related papers (2024-01-12T16:42:10Z)
- Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models [47.890846082224066]
This work fills the gap by presenting CovidET-Appraisals, the most comprehensive dataset to date, assessing 24 appraisal dimensions.
CovidET-Appraisals presents an ideal testbed to evaluate the ability of large language models to automatically assess and explain cognitive appraisals.
arXiv Detail & Related papers (2023-10-22T19:12:17Z)
- Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation [5.534160116442057]
The subjectivity of emotions poses significant challenges in developing accurate and robust computational models.
This thesis examines critical facets of emotion recognition, beginning with the collection of diverse datasets.
To handle the challenge of non-representative training data, this work collects the Multimodal Stressed Emotion dataset.
arXiv Detail & Related papers (2023-09-06T02:45:42Z)
- A Hierarchical Regression Chain Framework for Affective Vocal Burst Recognition [72.36055502078193]
We propose a hierarchical framework, based on chain regression models, for affective recognition from vocal bursts.
To address the challenge of data sparsity, we also use self-supervised learning (SSL) representations with layer-wise and temporal aggregation modules.
The proposed systems participated in the ACII Affective Vocal Burst (A-VB) Challenge 2022 and ranked first in the "TWO" and "CULTURE" tasks.
arXiv Detail & Related papers (2023-03-14T16:08:45Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.