Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally
Inattentive Reinforcement Learning
- URL: http://arxiv.org/abs/2202.01691v1
- Date: Tue, 18 Jan 2022 20:54:00 GMT
- Title: Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally
Inattentive Reinforcement Learning
- Authors: Tong Mu, Stephan Zheng, Alexander Trott
- Abstract summary: We study more human-like RL agents which incorporate an established model of human irrationality, the Rational Inattention (RI) model.
RI models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
- Score: 85.86440477005523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent reinforcement learning (MARL) is a powerful framework for
studying emergent behavior in complex agent-based simulations. However, RL
agents are often assumed to be rational and behave optimally, which does not
fully reflect human behavior. Here, we study more human-like RL agents which
incorporate an established model of human irrationality, the Rational
Inattention (RI) model. RI models the cost of cognitive information processing
using mutual information. Our RIRL framework generalizes and is more flexible
than prior work by allowing for multi-timestep dynamics and information
channels with heterogeneous processing costs. We evaluate RIRL in
Principal-Agent (specifically manager-employee relations) problem settings of
varying complexity, where RI models information asymmetry (e.g., it may be costly
for the manager to observe certain information about the employees). We show
that using RIRL yields a rich spectrum of new equilibrium behaviors that differ
from those found under rational assumptions. For instance, some forms of a
Principal's inattention can increase Agent welfare due to increased
compensation, while other forms of inattention can decrease Agent welfare by
encouraging extra work effort. Additionally, new strategies emerge that are absent
under rationality assumptions, e.g., Agents are incentivized to increase work
effort. These results suggest RIRL is a powerful tool for building AI
agents that can mimic real human behavior.
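The core mechanism is a mutual-information cost on how strongly an agent's actions depend on what it observes: paying attention is costly, so an optimal agent may deliberately act on less information. The following is a rough single-step sketch of that idea only, not the paper's implementation (the discrete setting, the function names, and the trade-off weight `lam` are all assumptions; the actual RIRL framework also handles multi-timestep dynamics and heterogeneous information channels, which this omits):

```python
import numpy as np

def mutual_information(p_state, policy):
    """I(S; A) for a discrete policy, where policy[s, a] = P(a | s)."""
    p_joint = p_state[:, None] * policy              # P(s, a) = P(s) P(a | s)
    p_action = p_joint.sum(axis=0)                   # marginal P(a)
    denom = p_state[:, None] * p_action[None, :]     # P(s) P(a)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_joint > 0, p_joint / denom, 1.0)  # log(1) = 0 where P(s,a) = 0
    return float((p_joint * np.log(ratio)).sum())

def ri_objective(expected_reward, p_state, policy, lam=0.1):
    """Rationally inattentive objective: expected reward minus a
    mutual-information processing cost (lam is a hypothetical trade-off weight)."""
    return expected_reward - lam * mutual_information(p_state, policy)

# A policy that fully attends to the state pays the maximal cost (log 2 here);
# a policy that ignores the state pays nothing.
p_state = np.array([0.5, 0.5])
attentive = np.array([[1.0, 0.0], [0.0, 1.0]])   # action reveals the state
inattentive = np.array([[0.5, 0.5], [0.5, 0.5]]) # action ignores the state
print(mutual_information(p_state, attentive))    # ~0.693 (log 2 nats)
print(mutual_information(p_state, inattentive))  # 0.0
```

Under this kind of objective, whether an agent attends to a signal becomes a cost-benefit decision, which is what produces the equilibrium behaviors the abstract describes.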
Related papers
- On the limits of agency in agent-based models [13.130587222524305]
Agent-based modeling (ABM) seeks to understand the behavior of complex systems by simulating a collection of agents that act and interact within an environment.
Recent advancements in large language models (LLMs) present an opportunity to enhance ABMs.
We introduce AgentTorch -- a framework that scales ABMs to millions of agents while capturing high-resolution agent behavior using LLMs.
arXiv Detail & Related papers (2024-09-14T04:17:24Z)
- Simulating the Economic Impact of Rationality through Reinforcement Learning and Agent-Based Modelling [1.7546137756031712]
We leverage multi-agent reinforcement learning (RL) to expand the capabilities of agent-based models (ABMs).
We show that RL agents spontaneously learn three distinct strategies for maximising profits, with the optimal strategy depending on the level of market competition and rationality.
We also find that RL agents with independent policies, and without the ability to communicate with each other, spontaneously learn to segregate into different strategic groups, thus increasing market power and overall profits.
arXiv Detail & Related papers (2024-05-03T15:08:25Z)
- Learning and Calibrating Heterogeneous Bounded Rational Market Behaviour with Multi-Agent Reinforcement Learning [4.40301653518681]
Agent-based models (ABMs) have shown promise for modelling various real-world phenomena that are incompatible with traditional equilibrium analysis.
Recent developments in multi-agent reinforcement learning (MARL) offer a way to address this issue from a rationality perspective.
We propose a novel technique for representing heterogeneous processing-constrained agents within a MARL framework.
arXiv Detail & Related papers (2024-02-01T17:21:45Z)
- Decision-Making Among Bounded Rational Agents [5.24482648010213]
We introduce the concept of bounded rationality from an information-theoretic view into the game-theoretic framework.
This allows the robots to reason about other agents' sub-optimal behaviors and act accordingly under their computational constraints.
We demonstrate that the resulting framework allows the robots to reason about different levels of rational behavior in other agents and compute a reasonable strategy under their computational constraints (a minimal sketch of this information-theoretic formulation appears after this list).
arXiv Detail & Related papers (2022-10-17T00:29:24Z)
- Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning [72.23843557783533]
We show that deep reinforcement learning can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types.
Our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing.
We demonstrate our approach in real-business-cycle models, a representative family of dynamic general equilibrium (DGE) models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes.
arXiv Detail & Related papers (2022-01-03T17:00:17Z)
- Automated Machine Learning, Bounded Rationality, and Rational Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z)
- ERMAS: Becoming Robust to Reward Function Sim-to-Real Gaps in Multi-Agent Simulations [110.72725220033983]
Epsilon-Robust Multi-Agent Simulation (ERMAS) is a framework for learning AI policies that are robust to such multi-agent sim-to-real gaps.
In particular, ERMAS learns tax policies that are robust to changes in agent risk aversion, improving social welfare by up to 15% in complex spatiotemporal simulations.
arXiv Detail & Related papers (2021-06-10T04:32:20Z)
- What is Going on Inside Recurrent Meta Reinforcement Learning Agents? [63.58053355357644]
Recurrent meta reinforcement learning (meta-RL) agents are agents that employ a recurrent neural network (RNN) for the purpose of "learning a learning algorithm".
We shed light on the internal working mechanisms of these agents by reformulating the meta-RL problem using the Partially Observable Markov Decision Process (POMDP) framework.
arXiv Detail & Related papers (2021-04-29T20:34:39Z)
- Scalable Multi-Agent Inverse Reinforcement Learning via Actor-Attention-Critic [54.2180984002807]
Multi-agent adversarial inverse reinforcement learning (MA-AIRL) is a recent approach that applies single-agent AIRL to multi-agent problems.
We propose a multi-agent inverse RL algorithm that is more sample-efficient and scalable than previous works.
arXiv Detail & Related papers (2020-02-24T20:30:45Z)
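As referenced above, the information-theoretic view of bounded rationality (as in "Decision-Making Among Bounded Rational Agents") is commonly formalized as a KL-regularized decision problem: the agent maximizes expected utility minus a divergence cost from a default policy. The sketch below is an illustrative single-step version under that assumption, not taken from the paper; the names and the role of `beta` as a computational budget are assumptions:

```python
import numpy as np

def bounded_rational_response(utilities, prior, beta):
    """Soft best response maximizing E[U(a)] - (1/beta) * KL(pi || prior).
    Closed form: pi(a) proportional to prior(a) * exp(beta * U(a))."""
    logits = np.log(prior) + beta * utilities
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    return weights / weights.sum()

utilities = np.array([1.0, 0.9, 0.1])
prior = np.ones(3) / 3
# Small beta (tight budget): the agent barely deviates from its prior.
print(bounded_rational_response(utilities, prior, beta=0.1))
# Large beta (ample budget): the agent approaches the fully rational best response.
print(bounded_rational_response(utilities, prior, beta=50.0))
```

The inverse temperature beta interpolates between a zero-effort agent (act from the prior) and a fully rational one (act greedily), which is how such frameworks let robots model graded sub-optimality in others.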
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.