Generative agent-based modeling with actions grounded in physical,
social, or digital space using Concordia
- URL: http://arxiv.org/abs/2312.03664v2
- Date: Wed, 13 Dec 2023 20:13:25 GMT
- Title: Generative agent-based modeling with actions grounded in physical,
social, or digital space using Concordia
- Authors: Alexander Sasha Vezhnevets, John P. Agapiou, Avia Aharon, Ron Ziv,
Jayd Matyas, Edgar A. Duéñez-Guzmán, William A. Cunningham, Simon
Osindero, Danny Karmon, Joel Z. Leibo
- Abstract summary: Generative Agent-Based Models (GABMs) are not just classic Agent-Based Models (ABMs) where the agents talk to one another.
GABMs are constructed using an LLM to apply common sense to situations, act "reasonably", recall common semantic knowledge, produce API calls to control digital technologies like apps, and communicate both within the simulation and to researchers viewing it from the outside.
Here we present Concordia, a library to facilitate constructing and working with GABMs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agent-based modeling has been around for decades, and applied widely across
the social and natural sciences. The scope of this research method is now
poised to grow dramatically as it absorbs the new affordances provided by Large
Language Models (LLMs). Generative Agent-Based Models (GABMs) are not just
classic Agent-Based Models (ABMs) where the agents talk to one another. Rather,
GABMs are constructed using an LLM to apply common sense to situations, act
"reasonably", recall common semantic knowledge, produce API calls to control
digital technologies like apps, and communicate both within the simulation and
to researchers viewing it from the outside. Here we present Concordia, a
library to facilitate constructing and working with GABMs. Concordia makes it
easy to construct language-mediated simulations of physically- or
digitally-grounded environments. Concordia agents produce their behavior using
a flexible component system which mediates between two fundamental operations:
LLM calls and associative memory retrieval. A special agent called the Game
Master (GM), which was inspired by tabletop role-playing games, is responsible
for simulating the environment where the agents interact. Agents take actions
by describing what they want to do in natural language. The GM then translates
their actions into appropriate implementations. In a simulated physical world,
the GM checks the physical plausibility of agent actions and describes their
effects. In digital environments simulating technologies such as apps and
services, the GM may handle API calls to integrate with external tools such as
general AI assistants (e.g., Bard, ChatGPT), and digital apps (e.g., Calendar,
Email, Search, etc.). Concordia was designed to support a wide array of
applications both in scientific research and for evaluating performance of real
digital services by simulating users and/or generating synthetic data.
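The agent/Game-Master pattern the abstract describes can be illustrated with a minimal sketch. This is not Concordia's actual API; all class and function names here are hypothetical, and the stub LLM and keyword-overlap memory stand in for real LLM calls and embedding-based associative retrieval.

```python
# Hypothetical sketch of the GABM pattern described above: agents state
# intended actions in natural language, and a Game Master (GM) judges
# plausibility and narrates outcomes. Names are illustrative only.

class AssociativeMemory:
    """Toy keyword-overlap retrieval standing in for embedding search."""
    def __init__(self):
        self._events = []

    def add(self, text):
        self._events.append(text)

    def retrieve(self, query, k=3):
        words = set(query.lower().split())
        # Rank stored events by word overlap with the query.
        ranked = sorted(self._events,
                        key=lambda e: len(words & set(e.lower().split())),
                        reverse=True)
        return ranked[:k]


class Agent:
    """An agent whose behavior mediates LLM calls and memory retrieval."""
    def __init__(self, name, llm):
        self.name = name
        self.llm = llm
        self.memory = AssociativeMemory()

    def act(self, observation):
        self.memory.add(observation)
        context = " ".join(self.memory.retrieve(observation))
        # The agent describes what it wants to do in natural language.
        return self.llm(f"{self.name} remembers: {context}. "
                        f"Seeing '{observation}', {self.name} decides to:")


class GameMaster:
    """Translates agents' natural-language intentions into world effects."""
    def __init__(self, llm, agents):
        self.llm = llm
        self.agents = agents

    def step(self, observation):
        events = []
        for agent in self.agents:
            attempt = agent.act(observation)
            # The GM checks plausibility and describes the effects.
            outcome = self.llm(
                f"In the simulated world, {agent.name} attempts: {attempt}. "
                f"Is this plausible, and what happens?")
            events.append((agent.name, outcome))
        return events


def stub_llm(prompt):
    """Placeholder for a real LLM call."""
    return "OK: " + prompt[:30]


gm = GameMaster(stub_llm, [Agent("Alice", stub_llm)])
events = gm.step("a door creaks open")
```

In a digital environment, the GM's `step` would instead map the agent's stated intention onto API calls (e.g., to a calendar or email service) rather than a narrated physical outcome.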
Related papers
- Multi-Actor Generative Artificial Intelligence as a Game Engine [49.360775442760314]
Generative AI can be used in multi-actor environments with purposes ranging from social science modeling to interactive narrative and AI evaluation. We argue here that a good approach is to take inspiration from tabletop role-playing games (TTRPGs), where a Game Master (GM) is responsible for the environment and generates all parts of the story not directly determined by the voluntary actions of player characters.
arXiv Detail & Related papers (2025-07-10T22:31:09Z) - Generative Agents for Multi-Agent Autoformalization of Interaction Scenarios [3.5083201638203154]
This work introduces the Generative Agents for Multi-Agent Autoformalization (GAMA) framework. GAMA automates the formalization of interaction scenarios in simulations using agents augmented with large language models (LLMs). In experiments with 110 natural language descriptions across five 2x2 simultaneous-move games, GAMA achieves 100% syntactic and 76.5% semantic correctness.
arXiv Detail & Related papers (2024-12-11T22:37:45Z) - Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models [36.571597246832326]
Large Language Models (LLMs) have increasingly demonstrated the ability to facilitate the development of multi-agent systems.
This paper aims to integrate agents & world interaction into a single simulation where multiple agents can work together to solve a problem.
We implement two simulations: a physical studio apartment with two roommates, and another where agents collaborate to complete a programming task.
arXiv Detail & Related papers (2024-09-14T21:53:35Z) - AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z) - MineLand: Simulating Large-Scale Multi-Agent Interactions with Limited Multimodal Senses and Physical Needs [12.987019067098414]
We propose a multi-agent Minecraft simulator, MineLand, that bridges this gap by introducing three key features: large-scale scalability, limited multimodal senses, and physical needs.
Our simulator supports 64 or more agents. Agents have limited visual, auditory, and environmental awareness, forcing them to actively communicate and collaborate to fulfill physical needs like food and resources.
Our experiments demonstrate that the simulator, the corresponding benchmark, and the AI agent framework contribute to more ecological and nuanced collective behavior.
arXiv Detail & Related papers (2024-03-28T09:53:41Z) - Agents: An Open-source Framework for Autonomous Language Agents [98.91085725608917]
We consider language agents as a promising direction towards artificial general intelligence.
We release Agents, an open-source library with the goal of opening up these advances to a wider non-specialist audience.
arXiv Detail & Related papers (2023-09-14T17:18:25Z) - Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z) - Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents--computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z) - Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.