Learning Macroeconomic Policies based on Microfoundations: A Stackelberg Mean Field Game Approach
- URL: http://arxiv.org/abs/2403.12093v3
- Date: Thu, 17 Oct 2024 08:08:54 GMT
- Title: Learning Macroeconomic Policies based on Microfoundations: A Stackelberg Mean Field Game Approach
- Authors: Qirui Mi, Zhiyu Zhao, Siyu Xia, Yan Song, Jun Wang, Haifeng Zhang,
- Abstract summary: This paper introduces a Stackelberg Mean Field Game (SMFG) approach that models macroeconomic policymaking based on microfoundations.
This approach treats large-scale micro-agents as a population and optimizes macroeconomic policies by learning the dynamic response of that population.
- Score: 13.92769744834052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Lucas critique emphasizes the importance of considering microfoundations, i.e., how micro-agents (households) respond to policy changes, in macroeconomic policymaking. However, the vast scale and complex dynamics of micro-agents make microfoundations difficult to predict. This paper therefore introduces a Stackelberg Mean Field Game (SMFG) approach that models macroeconomic policymaking on microfoundations, with the government as the leader and micro-agents as dynamic followers. The approach treats large-scale micro-agents as a population and optimizes macroeconomic policies by learning the dynamic response of this micro-population. Our experimental results indicate that the SMFG approach outperforms real-world macroeconomic policies as well as existing AI-based and economic methods: the learned macroeconomic policy achieves the highest performance while guiding large-scale micro-agents toward maximal social welfare. Additionally, when the approach is extended to real-world scenarios, households that do not adopt the SMFG policy experience lower utility and wealth than adopters, which increases the attractiveness of our policy. In summary, this paper contributes to the field of AI for economics by offering an effective tool for modeling and solving macroeconomic policymaking problems.
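As a rough illustration of the leader-follower structure described in the abstract, the sketch below has a government (leader) choose a flat tax rate while a representative mean-field household (follower) supplies labor via its closed-form best response; the leader then climbs a finite-difference welfare gradient. All functional forms, parameters, and the 1.5 weight on redistributed revenue are illustrative assumptions, not the paper's actual model or algorithm.

```python
import numpy as np

def household_best_response(tax_rate, wage=1.0):
    # Follower: choose labor l to maximize (1 - tax) * wage * l - 0.5 * l**2;
    # the first-order condition gives l* = (1 - tax) * wage in closed form.
    return (1.0 - tax_rate) * wage

def social_welfare(tax_rate, wage=1.0):
    # Leader objective: household utility plus redistributed revenue, weighted
    # by an assumed marginal value of public funds (1.5, purely illustrative).
    labor = household_best_response(tax_rate, wage)
    utility = (1.0 - tax_rate) * wage * labor - 0.5 * labor**2
    revenue = tax_rate * wage * labor
    return utility + 1.5 * revenue

tax, eps, lr = 0.5, 1e-4, 0.05
for _ in range(200):
    # The leader anticipates the follower: each welfare evaluation re-solves
    # the household's best response before the finite-difference step.
    grad = (social_welfare(tax + eps) - social_welfare(tax - eps)) / (2 * eps)
    tax = float(np.clip(tax + lr * grad, 0.0, 1.0))

print(f"learned tax rate: {tax:.3f}")  # converges to 0.25 under these assumptions
```

Under these toy assumptions the leader converges to an interior tax rate that trades off labor distortion against the assumed value of redistribution; the paper's SMFG solves the analogous problem with heterogeneous, learning micro-agents at scale.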
Related papers
- STEER-ME: Assessing the Microeconomic Reasoning of Large Language Models [8.60556939977361]
We develop a benchmark for evaluating the microeconomic reasoning of large language models (LLMs).
We focus on the logic of supply and demand, with each element grounded in up to 10 domains, 5 perspectives, and 3 types.
We demonstrate the usefulness of our benchmark via a case study on 27 LLMs, ranging from small open-source models to the current state of the art.
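To make the supply-and-demand logic concrete, here is a hypothetical example of the kind of computation such questions exercise; the scenario and parameters are invented, not items from the benchmark.

```python
def equilibrium(a=100.0, b=2.0, c=10.0, d=1.0):
    # Inverse demand P = a - b*Q meets inverse supply P = c + d*Q.
    q = (a - c) / (b + d)   # quantity where the two quoted prices coincide
    p = a - b * q
    return p, q

price, quantity = equilibrium()
print(f"equilibrium price={price:.2f}, quantity={quantity:.2f}")  # 40.00, 30.00
```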
arXiv Detail & Related papers (2025-02-18T18:42:09Z) - A Multi-agent Market Model Can Explain the Impact of AI Traders in Financial Markets -- A New Microfoundations of GARCH model [3.655221783356311]
We propose a multi-agent market model to derive the microfoundations of the GARCH model, incorporating three types of agents: noise traders, fundamental traders, and AI traders.
We validate this model through multi-agent simulations, confirming its ability to reproduce the stylized facts of financial markets.
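The sketch below is a toy version of the mechanism the abstract points to: aggregate demand from noise traders, fundamentalists, and trend-following ("AI") traders drives the price, and herding toward recently successful styles produces GARCH-like volatility clustering. All dynamics and parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
p = np.zeros(T)   # log price; the fundamental value is normalized to 0
w = 0.5           # current share of trend-following ("AI") traders
for t in range(1, T):
    r_prev = p[t - 1] - p[t - 2] if t >= 2 else 0.0
    demand = (
        0.02 * rng.standard_normal()     # noise traders
        - 0.10 * (1 - w) * p[t - 1]      # fundamental traders revert to value
        + 0.80 * w * r_prev              # trend followers chase momentum
    )
    p[t] = p[t - 1] + demand
    # Herding: large recent moves recruit trend followers, who decay back
    # toward fundamentals in quiet periods; this switching generates bursts
    # of volatility rather than constant-variance returns.
    w = float(np.clip(0.95 * w + 0.5 * abs(r_prev), 0.05, 0.95))

r = np.diff(p)
a = np.abs(r) - np.abs(r).mean()
acf1 = (a[1:] * a[:-1]).sum() / (a * a).sum()
print(f"lag-1 autocorrelation of |returns|: {acf1:.2f}")  # > 0: clustering
```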
arXiv Detail & Related papers (2024-09-19T07:14:13Z) - Evaluating Real-World Robot Manipulation Policies in Simulation [91.55267186958892]
Control and visual disparities between real and simulated environments are key challenges for reliable simulated evaluation.
We propose approaches for mitigating these gaps without needing to craft full-fidelity digital twins of real-world environments.
We create SIMPLER, a collection of simulated environments for manipulation policy evaluation on common real robot setups.
arXiv Detail & Related papers (2024-05-09T17:30:16Z) - Simulating the Economic Impact of Rationality through Reinforcement Learning and Agent-Based Modelling [1.7546137756031712]
We leverage multi-agent reinforcement learning (RL) to expand the capabilities of agent-based models (ABMs).
We show that RL agents spontaneously learn three distinct strategies for maximising profits, with the optimal strategy depending on the level of market competition and rationality.
We also find that RL agents with independent policies, and without the ability to communicate with each other, spontaneously learn to segregate into different strategic groups, thus increasing market power and overall profits.
arXiv Detail & Related papers (2024-05-03T15:08:25Z) - Finding Regularized Competitive Equilibria of Heterogeneous Agent Macroeconomic Models with Reinforcement Learning [151.03738099494765]
We study a heterogeneous agent macroeconomic model with an infinite number of households and firms competing in a labor market.
We propose a data-driven reinforcement learning framework that finds the regularized competitive equilibrium of the model.
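In frameworks of this kind, "regularized" typically means entropy-regularized: each household's best response is a softmax over action values rather than a hard argmax, which smooths the equilibrium. Below is a generic sketch of soft value iteration under that assumption; it is an illustration of the idea, not the paper's exact formulation.

```python
import numpy as np
from scipy.special import logsumexp

n_states, n_actions, gamma, tau = 4, 3, 0.95, 0.1   # tau: entropy temperature
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # rewards

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * P @ V                 # expected action values
    V = tau * logsumexp(Q / tau, axis=1)  # soft (log-sum-exp) Bellman backup

policy = np.exp((Q - V[:, None]) / tau)   # softmax best response, rows sum to 1
print(policy.round(3))
```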
arXiv Detail & Related papers (2023-02-24T17:16:27Z) - Latent State Marginalization as a Low-cost Approach for Improving Exploration [79.12247903178934]
We propose the adoption of latent variable policies within the MaxEnt framework.
We show that latent variable policies naturally emerge under the use of world models with a latent belief state.
We experimentally validate our method on continuous control tasks, showing that effective marginalization can lead to better exploration and more robust training.
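For a small categorical latent, the marginalization is an exact log-sum-exp: log pi(a|s) = logsumexp_z [log p(z|s) + log pi(a|s,z)]. Here is a minimal sketch under that discrete-latent assumption; the paper's setting and implementation may differ.

```python
import numpy as np
from scipy.special import logsumexp

def marginal_log_prob(log_pz, log_pa_given_z):
    # log_pz: (K,) log p(z|s); log_pa_given_z: (K,) log pi(a|s,z), fixed (s, a).
    return logsumexp(log_pz + log_pa_given_z)

log_pz = np.log([0.7, 0.3])   # mixture weights over two latent modes
log_pa = np.log([0.1, 0.9])   # probability of the action under each mode
print(np.exp(marginal_log_prob(log_pz, log_pa)))  # 0.7*0.1 + 0.3*0.9 = 0.34
```

Feeding this marginal log-probability into the MaxEnt entropy bonus can avoid the extra variance of estimating the entropy by sampling the latent.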
arXiv Detail & Related papers (2022-10-03T15:09:12Z) - Weak Supervision in Analysis of News: Application to Economic Policy Uncertainty [0.0]
Our work studies the potential of textual data, in particular news articles, for measuring economic policy uncertainty (EPU).
Economic policy uncertainty is defined as the public's inability to predict the outcomes of their decisions under new policies and future economic fundamentals.
Our work proposes a machine learning based solution involving weak supervision to classify news articles with regards to economic policy uncertainty.
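Weak supervision in this setting plausibly means programmatic labeling functions whose noisy votes are aggregated into training labels. The sketch below illustrates that pattern with made-up keyword rules and a majority vote; it is not the authors' actual pipeline.

```python
UNCERTAIN, CERTAIN, ABSTAIN = 1, 0, -1

def lf_uncertainty_terms(text):
    terms = ("uncertain", "unclear", "unpredictable")
    return UNCERTAIN if any(t in text.lower() for t in terms) else ABSTAIN

def lf_policy_context(text):
    terms = ("regulation", "tariff", "fiscal", "monetary")
    return UNCERTAIN if any(t in text.lower() for t in terms) else ABSTAIN

def lf_resolution_terms(text):
    return CERTAIN if "finalized" in text.lower() else ABSTAIN

def weak_label(text):
    votes = [lf(text) for lf in (lf_uncertainty_terms, lf_policy_context,
                                 lf_resolution_terms)]
    votes = [v for v in votes if v != ABSTAIN]
    # Majority vote over the non-abstaining labeling functions.
    return max(set(votes), key=votes.count) if votes else ABSTAIN

print(weak_label("Markets remain uncertain about the new tariff policy."))  # 1
```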
arXiv Detail & Related papers (2022-08-10T09:08:29Z) - Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist [67.08543240320756]
We show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning and data-driven simulations.
We find that log-linear policies trained with RL significantly improve social welfare, as measured by both public health and economic outcomes, relative to past policy outcomes.
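A log-linear policy is a softmax of a linear function of state features, which keeps the learned weights directly interpretable. A minimal sketch of that parameterization follows; the features and actions are invented for illustration.

```python
import numpy as np

def log_linear_policy(features, theta):
    # pi(a|s) proportional to exp(theta_a . features): each weight is directly
    # readable as a feature's effect on the log-odds of an action.
    logits = theta @ features
    logits -= logits.max()                   # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

theta = np.zeros((3, 4))                     # 3 stringency levels, 4 features
state = np.array([0.2, 0.5, 0.1, 1.0])       # e.g., normalized health/economy signals
print(log_linear_policy(state, theta))       # uniform before any training
```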
arXiv Detail & Related papers (2021-08-06T01:30:41Z) - The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning [126.37520136341094]
We show that machine-learning-based economic simulation is a powerful policy and mechanism design framework.
The AI Economist is a two-level, deep RL framework that trains both agents and a social planner who co-adapt.
In simple one-step economies, the AI Economist recovers the optimal tax policy of economic theory.
arXiv Detail & Related papers (2021-08-05T17:42:35Z) - ERMAS: Becoming Robust to Reward Function Sim-to-Real Gaps in Multi-Agent Simulations [110.72725220033983]
Epsilon-Robust Multi-Agent Simulation (ERMAS) is a framework for learning AI policies that are robust to reward-function sim-to-real gaps in multi-agent simulations.
In particular, ERMAS learns tax policies that are robust to changes in agent risk aversion, improving social welfare by up to 15% in complex spatiotemporal simulations.
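The epsilon-robust idea can be read as a max-min problem: choose the policy with the best worst-case welfare over perturbations of an agent parameter inside an epsilon ball. The sketch below grid-searches that objective on a stand-in economy; the functional forms are assumptions for illustration, not ERMAS itself.

```python
import numpy as np

def welfare(tax, risk_aversion):
    # Illustrative stand-in: more risk-averse agents supply less labor.
    labor = max(0.0, (1.0 - tax) - 0.5 * risk_aversion)
    return (1.0 - tax) * labor - 0.5 * labor**2 + 1.5 * tax * labor

nominal_ra, eps = 0.4, 0.2
taxes = np.linspace(0.0, 0.9, 50)
ball = np.linspace(nominal_ra - eps, nominal_ra + eps, 21)   # epsilon ball

# Robust choice: maximize the minimum welfare over the perturbation ball.
worst_case = [min(welfare(t, ra) for ra in ball) for t in taxes]
robust_tax = taxes[int(np.argmax(worst_case))]
print(f"robust tax rate: {robust_tax:.2f}")
```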
arXiv Detail & Related papers (2021-06-10T04:32:20Z) - MPC-based Reinforcement Learning for Economic Problems with Application to Battery Storage [0.0]
We focus on policy approximations based on Model Predictive Control (MPC).
We observe that the policy gradient method can struggle to produce meaningful steps in the policy parameters when the policy has a (nearly) bang-bang structure.
We propose a homotopy strategy based on the interior-point method, which relaxes the policy during learning.
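A rough sketch of the homotopy idea: a smooth, interior-point-style relaxation parameter mu makes the (nearly) bang-bang policy differentiable, and annealing mu toward zero recovers the switching behavior while keeping gradient steps informative. The objective and policy form below are invented for illustration, not the paper's MPC formulation.

```python
import numpy as np

def relaxed_policy(x, theta, mu):
    # Smooth relaxation of a bang-bang switch at x = theta; as mu -> 0 the
    # sigmoid approaches the hard switch but stays differentiable in theta.
    return 1.0 / (1.0 + np.exp(-(x - theta) / mu))

theta, mu, lr = 0.0, 1.0, 0.01
rng = np.random.default_rng(0)
for _ in range(3000):
    x = rng.uniform(-1.0, 1.0)
    u = relaxed_policy(x, theta, mu)
    # Toy objective: acting (u = 1) pays off only when x > 0.3, e.g. charge
    # the battery only when the price signal is favorable.
    reward_grad = 1.0 if x > 0.3 else -1.0
    du_dtheta = -u * (1.0 - u) / mu       # sigmoid derivative w.r.t. theta
    theta += lr * reward_grad * du_dtheta
    mu = max(0.05, mu * 0.999)            # anneal the relaxation toward 0

print(f"learned switching threshold: {theta:.2f}")  # should land near 0.30
```

With mu held small from the start, the gradient is nonzero only in a vanishing neighborhood of the switch, which is exactly the failure mode the annealed relaxation avoids.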
arXiv Detail & Related papers (2021-04-06T10:37:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.