Can Generative Agents Predict Emotion?
- URL: http://arxiv.org/abs/2402.04232v2
- Date: Wed, 7 Feb 2024 17:27:09 GMT
- Title: Can Generative Agents Predict Emotion?
- Authors: Ciaran Regan, Nanami Iwahashi, Shogo Tanaka, Mizuki Oka
- Abstract summary: Large Language Models (LLMs) have demonstrated a number of human-like abilities; however, the empathic understanding and emotional state of LLMs have yet to be aligned with those of humans.
We investigate how the emotional state of generative LLM agents evolves as they perceive new events, introducing a novel architecture in which new experiences are compared to past memories.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have demonstrated a number of human-like
abilities; however, the empathic understanding and emotional state of LLMs have
yet to be aligned with those of humans. In this work, we investigate how the
emotional state of generative LLM agents evolves as they perceive new events,
introducing a novel architecture in which new experiences are compared to past
memories. Through this comparison, the agent gains the ability to understand
new experiences in context, which, according to the appraisal theory of emotion,
is vital in emotion creation. First, the agent perceives new experiences as
time-series text data. After perceiving each new input, the agent generates a
summary of past relevant memories, referred to as the norm, and compares the
new experience to this norm. Through this comparison, we can analyse how the
agent reacts to the new experience in context. The PANAS, a test of affect, is
administered to the agent, capturing its emotional state after the perception
of the new event. Finally, the new experience is added to the agent's memory to
be used in the creation of future norms. By creating multiple experiences in
natural language from emotionally charged situations, we test the proposed
architecture on a wide range of scenarios. The mixed results suggest that
introducing context can occasionally improve the emotional alignment of the
agent, but further study and comparison with human evaluators are necessary. We
hope that this paper is another step towards the alignment of generative
agents.
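To make the loop described in the abstract concrete (perceive an event, summarise relevant memories into a norm, compare, administer the PANAS, store the event), here is a minimal Python sketch. It assumes an OpenAI-style chat-completion client; the prompts, model name, and helper functions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the norm-comparison loop described in the abstract.
# All prompts, the model name, and helper functions are illustrative
# assumptions, not the authors' code. Assumes the `openai` Python package
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PANAS_ITEMS = [  # the 20 PANAS affect items, rated 1 (very slightly) to 5 (extremely)
    "interested", "distressed", "excited", "upset", "strong",
    "guilty", "scared", "hostile", "enthusiastic", "proud",
    "irritable", "alert", "ashamed", "inspired", "nervous",
    "determined", "attentive", "jittery", "active", "afraid",
]

def ask(prompt: str) -> str:
    """Single-turn chat-completion helper (model choice is an assumption)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def generate_norm(memory: list[str], new_event: str) -> str:
    """Summarise past memories relevant to the new event (the 'norm')."""
    return ask(
        "Summarise the past experiences below that are relevant to the new event.\n"
        "Past experiences:\n- " + "\n- ".join(memory) + f"\nNew event: {new_event}"
    )

def compare_to_norm(norm: str, new_event: str) -> str:
    """Appraise the new event against the norm (the appraisal-theory step)."""
    return ask(
        f"Given your usual experience:\n{norm}\n\n"
        f"You now experience: {new_event}\n"
        "How does this compare to what you normally experience, and how does it make you feel?"
    )

def administer_panas(appraisal: str) -> str:
    """Administer the PANAS questionnaire to capture the agent's affective state."""
    return ask(
        f"You just reflected as follows:\n{appraisal}\n\n"
        "Rate how you feel right now for each word on a 1-5 scale, one per line:\n"
        + ", ".join(PANAS_ITEMS)
    )

def perceive(memory: list[str], new_event: str) -> str:
    """One perception step: norm -> comparison -> PANAS -> store the event."""
    norm = generate_norm(memory, new_event) if memory else "No prior experience."
    appraisal = compare_to_norm(norm, new_event)
    panas_scores = administer_panas(appraisal)
    memory.append(new_event)  # the event becomes part of future norms
    return panas_scores

# Example: feed a short time series of emotionally charged events.
memory: list[str] = []
for event in ["You were praised by your manager.", "You then lost an important file."]:
    print(perceive(memory, event))
```

In this sketch the norm is rebuilt from the full memory at every step; a retrieval step that selects only relevant memories would be a natural refinement, but how relevance is scored is not specified in the abstract.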
Related papers
- BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis [62.60458710368311]
This paper presents BattleAgent, an emulation system that combines a Large Vision-Language Model with a Multi-agent System.
It aims to simulate complex dynamic interactions among multiple agents, as well as between agents and their environments.
It emulates both the decision-making processes of leaders and the viewpoints of ordinary participants, such as soldiers.
arXiv Detail & Related papers (2024-04-23T21:37:22Z) - Generative agents in the streets: Exploring the use of Large Language Models (LLMs) in collecting urban perceptions [0.0]
This study explores the current advancements in Generative agents powered by large language models (LLMs).
The experiment employs Generative agents to interact with the urban environments using street view images to plan their journey toward specific goals.
Since LLMs lack embodiment, access to the visual realm, and a sense of motion or direction, we designed movement and visual modules that help agents gain an overall understanding of their surroundings.
arXiv Detail & Related papers (2023-12-20T15:45:54Z) - Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can serve as agents that simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z) - Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench [83.41621219298489]
We propose to evaluate the empathy ability of Large Language Models (LLMs).
We collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study.
We conduct a human evaluation involving more than 1,200 subjects worldwide.
arXiv Detail & Related papers (2023-08-07T15:18:30Z) - Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents--computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z) - e-Genia3 An AgentSpeak extension for empathic agents [0.0]
e-Genia3 is an extension of AgentSpeak that supports the development of empathic agents.
e-Genia3 modifies the agent's reasoning processes to select plans according to the analyzed event and the affective state and personality of the agent.
arXiv Detail & Related papers (2022-08-01T10:53:25Z) - Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z) - Emotion Carrier Recognition from Personal Narratives [74.24768079275222]
Personal Narratives (PNs) are recollections of facts, events, and thoughts from one's own experience.
We propose a novel task for Narrative Understanding: Emotion Carrier Recognition (ECR).
arXiv Detail & Related papers (2020-08-17T17:16:08Z) - Conceptual Metaphors Impact Perceptions of Human-AI Collaboration [29.737986509769808]
We find that metaphors that signal low competence lead to better evaluations of the agent than metaphors that signal high competence.
A second study confirms that intention to adopt decreases rapidly as the competence projected by the metaphor increases.
These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it can quickly correct with a lower competence metaphor.
arXiv Detail & Related papers (2020-08-05T18:39:56Z) - A Proposal for Intelligent Agents with Episodic Memory [0.9236074230806579]
We argue that an agent would benefit from an episodic memory.
This memory encodes the agent's experience in such a way that the agent can relive the experience.
We propose an architecture combining ANNs and standard computer-science techniques to support the storage and retrieval of episodic memories.
arXiv Detail & Related papers (2020-05-07T00:26:42Z)