DeepHAM: A Global Solution Method for Heterogeneous Agent Models with
Aggregate Shocks
- URL: http://arxiv.org/abs/2112.14377v1
- Date: Wed, 29 Dec 2021 03:09:19 GMT
- Title: DeepHAM: A Global Solution Method for Heterogeneous Agent Models with
Aggregate Shocks
- Authors: Jiequn Han, Yucheng Yang, Weinan E
- Abstract summary: We propose an efficient, reliable, and interpretable global solution method, $\textit{Deep learning-based algorithm for Heterogeneous Agent Models, DeepHAM}$.
- Score: 9.088303226909277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an efficient, reliable, and interpretable global solution method,
$\textit{Deep learning-based algorithm for Heterogeneous Agent Models,
DeepHAM}$, for solving high dimensional heterogeneous agent models with
aggregate shocks. The state distribution is approximately represented by a set
of optimal generalized moments. Deep neural networks are used to approximate
the value and policy functions, and the objective is optimized over directly
simulated paths. Besides being an accurate global solver, this method has three
additional features. First, it is computationally efficient for solving complex
heterogeneous agent models, and it does not suffer from the curse of
dimensionality. Second, it provides a general and interpretable representation
of the distribution over individual states; and this is important for
addressing the classical question of whether and how heterogeneity matters in
macroeconomics. Third, it solves the constrained efficiency problem as easily
as the competitive equilibrium, and this opens up new possibilities for
studying optimal monetary and fiscal policies in heterogeneous agent models
with aggregate shocks.
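To make the abstract's ingredients concrete, here is a minimal, hypothetical sketch in Python of the simulation step: a cross-section of agents is summarized by a few fixed generalized moments (DeepHAM learns optimal ones), a toy logistic policy stands in for the neural networks, and the objective is a Monte-Carlo average over a directly simulated path. All function names, the utility, and the wealth dynamics below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_moments(wealth, powers=(1, 2)):
    """Summarize the cross-sectional distribution by a few moments
    (a stand-in for DeepHAM's learned 'optimal generalized moments')."""
    return np.array([np.mean(wealth ** p) for p in powers])

def policy(own_wealth, moments, shock, theta):
    """Toy policy: logistic savings rate from a linear index of
    (own state, distribution moments, aggregate shock)."""
    x = np.concatenate([[own_wealth], moments, [shock]])
    z = np.clip(x @ theta, -30.0, 30.0)   # clip to avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))       # savings rate in (0, 1)

def simulate_objective(theta, n_agents=200, horizon=50, beta=0.95):
    """Monte-Carlo estimate of average lifetime utility along one path."""
    wealth = rng.uniform(0.5, 1.5, n_agents)
    total = 0.0
    for t in range(horizon):
        shock = rng.normal()                       # aggregate shock
        m = generalized_moments(wealth)
        rates = np.array([policy(w, m, shock, theta) for w in wealth])
        consumption = (1.0 - rates) * wealth
        total += beta ** t * np.mean(np.log(consumption + 1e-8))
        income = np.exp(0.1 * shock) * rng.uniform(0.8, 1.2, n_agents)
        wealth = rates * wealth * 1.03 + income    # save at gross return 1.03
    return total

theta = rng.normal(size=4)  # weights on [own wealth, moment 1, moment 2, shock]
print(simulate_objective(theta))
```

In the actual method this simulated objective would be differentiated and optimized over the neural-network parameters; here it is only evaluated for a fixed random policy.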
Related papers
- Global Solutions to Master Equations for Continuous Time Heterogeneous Agent Macroeconomic Models [2.133330089821556]
We approximate the agent distribution so that equilibrium in the economy can be characterized by a non-linear partial differential equation.
We represent the value function using a neural network and train it to solve the differential equation using deep learning tools.
The main advantage of this technique is that it allows us to find global solutions to high dimensional, non-linear problems.
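The residual-minimization idea behind this technique can be illustrated on a toy ODE, with a polynomial ansatz standing in for the neural network: pick collocation points, write the differential-equation residual in terms of the ansatz coefficients, and fit by least squares. This is a hedged sketch of the training principle only, not the paper's master-equation solver.

```python
import numpy as np

# Toy version of the residual-minimization idea: instead of a neural
# network, fit a polynomial ansatz u(x) = 1 + sum_k c_k x^k to the ODE
#   u'(x) + u(x) = 0,  u(0) = 1   (exact solution: exp(-x)).
K = 8
xs = np.linspace(0.0, 2.0, 50)          # collocation points
# The residual is linear in the coefficients c_k:
#   sum_k c_k * (k x^{k-1} + x^k) = -1
A = np.stack([k * xs ** (k - 1) + xs ** k for k in range(1, K + 1)], axis=1)
b = -np.ones_like(xs)
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u = 1.0 + sum(c[k - 1] * xs ** k for k in range(1, K + 1))
err = np.max(np.abs(u - np.exp(-xs)))
print(err)   # small approximation error against exp(-x)
```

With a neural network the residual is no longer linear in the parameters, so the least-squares solve is replaced by stochastic gradient descent on the squared residual; the loss being minimized is the same.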
arXiv Detail & Related papers (2024-06-19T17:42:53Z)
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z)
- Scalable Decentralized Algorithms for Online Personalized Mean Estimation [12.002609934938224]
This study focuses on a simplified version of the overarching problem, where each agent collects samples from a real-valued distribution over time to estimate its mean.
We introduce two collaborative mean estimation algorithms: one draws inspiration from belief propagation, while the other employs a consensus-based approach.
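A minimal sketch of the consensus-based approach, under the simplifying assumptions that all agents draw from the same distribution and communicate on a ring network (the paper's algorithms handle heterogeneous means and adaptive collaboration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each agent keeps a running estimate of its mean, updated with fresh
# samples, then averaged with its two ring neighbors each round.
n_agents, n_rounds = 10, 200
true_mean = 3.0
estimates = rng.normal(true_mean, 1.0, n_agents)  # one initial sample each

for _ in range(n_rounds):
    new_sample = rng.normal(true_mean, 1.0, n_agents)
    estimates = 0.95 * estimates + 0.05 * new_sample      # local update
    left, right = np.roll(estimates, 1), np.roll(estimates, -1)
    estimates = (estimates + left + right) / 3.0          # neighbor consensus

print(np.abs(estimates - true_mean).max())   # all agents close to 3.0
```

The consensus step lets each agent's estimate benefit from every other agent's samples, shrinking the variance well below what any agent could achieve alone.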
arXiv Detail & Related papers (2024-02-20T08:30:46Z)
- Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z)
- On the Model-Misspecification in Reinforcement Learning [9.864462523050843]
We present a unified theoretical framework for addressing model misspecification in reinforcement learning.
We show that value-based and model-based methods can achieve robustness under local misspecification error bounds.
We also propose an algorithmic framework that can achieve the same order of regret bound without prior knowledge of $\zeta$.
arXiv Detail & Related papers (2023-06-19T04:31:59Z)
- Sample Complexity of Robust Reinforcement Learning with a Generative Model [0.0]
We propose a model-based reinforcement learning (RL) algorithm for learning an $\epsilon$-optimal robust policy.
We consider three different forms of uncertainty sets, characterized by the total variation distance, chi-square divergence, and KL divergence.
In addition to the sample complexity results, we also present a formal analytical argument on the benefit of using robust policies.
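For the KL-divergence uncertainty set, the worst-case expectation admits a well-known convex dual that can be evaluated numerically. The sketch below is a generic illustration of that duality (here by grid search over the dual variable), not the paper's algorithm.

```python
import numpy as np

def kl_worst_case_value(values, probs, delta, lams=None):
    """Worst-case expectation of `values` over all distributions within
    KL divergence `delta` of the nominal `probs`, via the dual form
        inf_{q : KL(q||p) <= delta} E_q[V]
          = sup_{lam > 0} -lam*log(E_p[exp(-V/lam)]) - lam*delta,
    evaluated by grid search over the dual variable lam."""
    if lams is None:
        lams = np.logspace(-3, 3, 2000)
    vmin = values.min()
    shifted = values - vmin                 # shift to avoid overflow in exp
    vals = [vmin - lam * np.log(np.dot(probs, np.exp(-shifted / lam)))
            - lam * delta for lam in lams]
    return max(vals)

V = np.array([1.0, 2.0, 3.0])
p = np.array([1/3, 1/3, 1/3])
nominal = float(p @ V)                      # 2.0
robust = kl_worst_case_value(V, p, delta=0.1)
print(nominal, robust)                      # robust value is strictly lower
```

Analogous dual reformulations exist for the total-variation and chi-square uncertainty sets, which is what makes the robust Bellman update computationally tractable.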
arXiv Detail & Related papers (2021-12-02T18:55:51Z)
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
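A closed-form special case helps build intuition for posterior fusion: multiplying independent Gaussian posteriors (consistent with a product/mean-field factorization and, as a simplifying assumption here, a flat prior) yields a precision-weighted average. This is an illustrative special case, not the paper's general algorithm.

```python
import numpy as np

def fuse_gaussians(mus, sigma2s):
    """Fuse independent Gaussian posteriors N(mu_k, sigma2_k) by
    multiplying their densities: precisions add, and the fused mean
    is the precision-weighted average of the individual means."""
    precisions = 1.0 / np.asarray(sigma2s, dtype=float)
    var = 1.0 / precisions.sum()
    mu = var * (precisions * np.asarray(mus, dtype=float)).sum()
    return mu, var

# Two equally confident posteriors at 0.0 and 2.0 fuse to their midpoint,
# with variance halved.
mu, var = fuse_gaussians([0.0, 2.0], [1.0, 1.0])
print(mu, var)  # 1.0 0.5
```

The fused posterior is more concentrated than either input, reflecting that it pools evidence from both datasets.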
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model for the true minimizer of the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors: the data variability at each agent, the model variability across agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
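The setting can be sketched with a scalar quadratic loss: the true minimizer follows a random walk, a random subset of agents takes one local gradient step per round, and the server averages the participants. The step size, drift scale, and participation rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n_agents, n_rounds, lr = 20, 500, 0.2
w_true = 0.0                      # true minimizer, drifts as a random walk
model = 0.0                       # shared (aggregated) model

errs = []
for t in range(n_rounds):
    w_true += rng.normal(0.0, 0.01)            # non-stationary target
    active = rng.random(n_agents) < 0.5        # random subset participates
    locals_ = np.full(n_agents, model)
    # one local gradient step per active agent on its noisy quadratic loss
    data = w_true + rng.normal(0.0, 0.1, n_agents)   # each agent's sample
    grads = 2.0 * (locals_ - data)
    locals_[active] -= lr * grads[active]
    if active.any():                           # server averages participants
        model = locals_[active].mean()
    errs.append(abs(model - w_true))

print(np.mean(errs[100:]))   # steady-state tracking error stays small
```

The steady-state error reflects exactly the three factors named above: per-agent sample noise, disagreement across the sampled subset, and a drift term that a larger learning rate tracks more tightly.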
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
- Polynomial-Time Exact MAP Inference on Discrete Models with Global Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning which ensures a run-time for solving our target problem independently of the form of a corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.