Humanoid Agents: Platform for Simulating Human-like Generative Agents
- URL: http://arxiv.org/abs/2310.05418v1
- Date: Mon, 9 Oct 2023 05:30:42 GMT
- Title: Humanoid Agents: Platform for Simulating Human-like Generative Agents
- Authors: Zhilin Wang, Yu Ying Chiu, Yu Cheung Chiu
- Abstract summary: We propose a system that guides Generative Agents to behave more like humans by introducing three elements of System 1 processing.
Humanoid Agents are able to use these dynamic elements to adapt their daily activities and conversations with other agents.
Our system is designed to be extensible to various settings, three of which we demonstrate, as well as to other elements influencing human behavior.
- Score: 3.259634592022173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Just as computational simulations of atoms, molecules and cells have shaped
the way we study the sciences, true-to-life simulations of human-like agents
can be valuable tools for studying human behavior. We propose Humanoid Agents,
a system that guides Generative Agents to behave more like humans by
introducing three elements of System 1 processing: Basic needs (e.g. hunger,
health and energy), Emotion and Closeness in Relationships. Humanoid Agents are
able to use these dynamic elements to adapt their daily activities and
conversations with other agents, as supported with empirical experiments. Our
system is designed to be extensible to various settings, three of which we
demonstrate, as well as to other elements influencing human behavior (e.g.
empathy, moral values and cultural background). Our platform also includes a
Unity WebGL game interface for visualization and an interactive analytics
dashboard to show agent statuses over time. Our platform is available on
https://www.humanoidagents.com/ and code is on
https://github.com/HumanoidAgents/HumanoidAgents
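
To make the abstract's dynamic-state idea concrete, the sketch below shows how basic needs might decay over simulated time and steer an agent's next activity. This is a minimal illustration only: the class, field, decay-rate, and activity names are assumptions made for this example and are not taken from the paper or the released code.

```python
# Hypothetical sketch of System 1 state (basic needs, emotion, closeness)
# driving an agent's next activity. Names and values are illustrative and
# NOT taken from the HumanoidAgents codebase or the paper itself.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class HumanoidAgentState:
    # Basic needs on a 0-10 scale; lower values are more urgent.
    basic_needs: Dict[str, float] = field(
        default_factory=lambda: {"fullness": 10.0, "health": 10.0, "energy": 10.0}
    )
    emotion: str = "neutral"
    # Closeness to each other agent, also on a 0-10 scale.
    closeness: Dict[str, float] = field(default_factory=dict)

    # Assumed per-hour decay rates; real values would be tuned or learned.
    DECAY_PER_HOUR = {"fullness": 0.8, "energy": 0.5, "health": 0.1}

    def decay(self, hours: float = 1.0) -> None:
        """Needs drift downward as simulated time passes, pushing the agent to act."""
        for need, rate in self.DECAY_PER_HOUR.items():
            self.basic_needs[need] = max(0.0, self.basic_needs[need] - rate * hours)

    def most_pressing_need(self) -> str:
        return min(self.basic_needs, key=self.basic_needs.get)

    def choose_activity(self) -> str:
        """Map the most depleted need to a plausible daily activity."""
        return {
            "fullness": "cook and eat a meal",
            "energy": "take a nap",
            "health": "go for a walk",
        }[self.most_pressing_need()]


if __name__ == "__main__":
    state = HumanoidAgentState()
    state.decay(hours=6.0)              # simulate part of a day passing
    print(state.most_pressing_need())   # -> "fullness" (decays fastest here)
    print(state.choose_activity())      # -> "cook and eat a meal"
```

In the actual platform, an LLM conditions on this kind of status (together with the agent's plan and memories) when generating activities and dialogue; the dictionary-based activity lookup above is only a stand-in for that step.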
Related papers
- HumanPlus: Humanoid Shadowing and Imitation from Humans [82.47551890765202]
We introduce a full-stack system for humanoids to learn motion and autonomous skills from human data.
We first train a low-level policy in simulation via reinforcement learning using existing 40-hour human motion datasets.
We then perform supervised behavior cloning to train skill policies using egocentric vision, allowing humanoids to complete different tasks autonomously.
arXiv Detail & Related papers (2024-06-15T00:41:34Z) - AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z) - Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - EMOTE: An Explainable architecture for Modelling the Other Through Empathy [26.85666453984719]
We design a simple architecture to model another agent's action-value function.
We learn an "Imagination Network" to transform the other agent's observed state.
This produces a human-interpretable "empathetic state" which, when presented to the learning agent, produces behaviours that mimic the other agent.
arXiv Detail & Related papers (2023-06-01T02:27:08Z) - Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents--computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z) - Improving Multimodal Interactive Agents with Reinforcement Learning from Human Feedback [16.268581985382433]
An important goal in artificial intelligence is to create agents that can both interact naturally with humans and learn from their feedback.
Here we demonstrate how to use reinforcement learning from human feedback to improve upon simulated, embodied agents.
arXiv Detail & Related papers (2022-11-21T16:00:31Z) - Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning [20.02604302565522]
A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language.
Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment.
We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time.
arXiv Detail & Related papers (2021-12-07T15:17:27Z) - Imitating Interactive Intelligence [24.95842455898523]
We study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment.
To build agents that can robustly interact with humans, we would ideally train them while they interact with humans.
We use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour.
arXiv Detail & Related papers (2020-12-10T13:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.