Solving Heterogeneous Agent Models with Physics-informed Neural Networks
- URL: http://arxiv.org/abs/2511.20283v1
- Date: Tue, 25 Nov 2025 13:11:03 GMT
- Title: Solving Heterogeneous Agent Models with Physics-informed Neural Networks
- Authors: Marta Grzeskiewicz
- Abstract summary: This paper introduces the ABH-PINN solver, an approach based on Physics-Informed Neural Networks (PINNs). The solver embeds the Hamilton-Jacobi-Bellman and Kolmogorov Forward equations directly into the neural network training objective. Preliminary results show that the PINN-based approach obtains economically valid results matching established finite-difference solvers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding household behaviour is essential for modelling macroeconomic dynamics and designing effective policy. While heterogeneous agent models offer a more realistic alternative to representative agent frameworks, their implementation poses significant computational challenges, particularly in continuous time. The Aiyagari-Bewley-Huggett (ABH) framework, recast as a system of partial differential equations, typically relies on grid-based solvers that suffer from the curse of dimensionality, high computational cost, and numerical inaccuracies. This paper introduces the ABH-PINN solver, an approach based on Physics-Informed Neural Networks (PINNs), which embeds the Hamilton-Jacobi-Bellman and Kolmogorov Forward equations directly into the neural network training objective. By replacing grid-based approximation with mesh-free, differentiable function learning, the ABH-PINN solver benefits from the advantages of PINNs of improved scalability, smoother solutions, and computational efficiency. Preliminary results show that the PINN-based approach is able to obtain economically valid results matching the established finite-difference solvers.
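The abstract's central idea, embedding the HJB and Kolmogorov Forward residuals directly in the training objective, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the networks, log utility, first-order condition, and residual forms are simplified placeholders chosen for the sketch, and central finite differences stand in for the automatic differentiation a real PINN solver would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLPs standing in for the value function v(a, z) and the density g(a, z).
def init_mlp(sizes, rng):
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)

v_params = init_mlp([2, 16, 16, 1], rng)
g_params = init_mlp([2, 16, 16, 1], rng)

def d_da(f, x, h=1e-4):
    # Central finite difference in the asset dimension a = x[:, 0]
    # (a real PINN would use automatic differentiation instead).
    xp, xm = x.copy(), x.copy()
    xp[:, 0] += h
    xm[:, 0] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

rho, r = 0.05, 0.03  # illustrative discount and interest rates

def hjb_residual(x):
    # Simplified HJB residual with log utility: the first-order condition
    # u'(c) = v_a gives c = 1 / v_a; the true ABH system has richer terms.
    v = mlp(v_params, x)
    v_a = np.maximum(d_da(lambda y: mlp(v_params, y), x), 1e-3)
    c = 1.0 / v_a
    a, z = x[:, 0], x[:, 1]
    return rho * v - np.log(c) - v_a * (z + r * a - c)

def kf_residual(x):
    # Simplified stationary Kolmogorov Forward residual d/da[s(a,z) g(a,z)],
    # where s is the savings drift implied by the HJB consumption policy.
    def flux(y):
        v_a = np.maximum(d_da(lambda u: mlp(v_params, u), y), 1e-3)
        s = y[:, 1] + r * y[:, 0] - 1.0 / v_a
        return s * mlp(g_params, y)
    return d_da(flux, x)

def pinn_loss(x):
    # Composite mesh-free objective: both PDE residuals enter one loss,
    # minimised jointly over the weights of the v and g networks.
    return np.mean(hjb_residual(x) ** 2) + np.mean(kf_residual(x) ** 2)

# Collocation points sampled from the (asset, income) state space: no grid.
x = np.column_stack([rng.uniform(0.1, 5.0, 256), rng.uniform(0.5, 1.5, 256)])
print(pinn_loss(x))
```

Because the residuals are evaluated at randomly sampled collocation points rather than on a fixed mesh, the same objective extends to higher-dimensional state spaces without the grid growth that afflicts finite-difference solvers.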
Related papers
- Multi-Fidelity Physics-Informed Neural Networks with Bayesian Uncertainty Quantification and Adaptive Residual Learning for Efficient Solution of Parametric Partial Differential Equations [0.0]
MF-BPINN is a novel multi-fidelity framework for solving partial differential equations. We introduce an adaptive residual network with learnable gating mechanisms. We also develop a rigorous Bayesian framework employing Hamiltonian Monte Carlo.
arXiv Detail & Related papers (2026-02-01T12:01:31Z)
- Efficient reformulations of ReLU deep neural networks for surrogate modelling in power system optimisation [0.9612977347324178]
Decarbonisation of distributed power systems is driving an increasing reliance on distributed energy resources. Their complex and nonlinear interactions are difficult to capture in optimisation. This paper proposes a reformulation for a class of convexified ReLU deep neural networks (DNNLPs). The proposed reformulation is benchmarked against state-of-the-art alternatives.
arXiv Detail & Related papers (2026-01-21T05:40:27Z)
- DBAW-PIKAN: Dynamic Balance Adaptive Weight Kolmogorov-Arnold Neural Network for Solving Partial Differential Equations [11.087203453701568]
Physics-informed neural networks (PINNs) have led to significant advancements in scientific computing. However, PINNs encounter persistent and severe challenges related to stiffness in gradient flow and spectral bias. This paper proposes a Dynamic Balancing Adaptive Weighting Physics-Informed Kolmogorov-Arnold Network (DBAW-PIKAN).
arXiv Detail & Related papers (2025-12-25T06:47:14Z)
- Hephaestus: Mixture Generative Modeling with Energy Guidance for Large-scale QoS Degradation [44.97875113025023]
We study the Quality of Service Degradation (QoSD) problem, in which an adversary perturbs edge weights to degrade network performance. No prior model directly tackles the QoSD problem under nonlinear edge-weight functions. This work proposes PIMMA, a self-reinforcing framework that synthesizes feasible solutions in latent space.
arXiv Detail & Related papers (2025-10-19T22:48:35Z)
- Mask-PINNs: Mitigating Internal Covariate Shift in Physics-Informed Neural Networks [1.2667864219315372]
PINNs have emerged as a powerful framework for solving partial differential equations. We propose Mask-PINNs, which use a learnable mask function to regulate feature distributions. Our results show consistent improvements in prediction accuracy, convergence stability, and robustness.
arXiv Detail & Related papers (2025-05-09T15:38:52Z)
- Finite Element Neural Network Interpolation. Part I: Interpretable and Adaptive Discretization for Solving PDEs [44.99833362998488]
We present a sparse neural network architecture extending previous work on Embedded Finite Element Neural Networks (EFENN). Due to their mesh-based structure, EFENN require significantly fewer trainable parameters than fully connected neural networks. Our FENNI framework, within the EFENN setting, brings improvements to the HiDeNN approach.
arXiv Detail & Related papers (2024-12-07T18:31:17Z)
- Adaptive Training of Grid-Dependent Physics-Informed Kolmogorov-Arnold Networks [4.216184112447278]
Physics-Informed Neural Networks (PINNs) have emerged as a robust framework for solving Partial Differential Equations (PDEs).
We present a fast JAX-based implementation of grid-dependent Physics-Informed Kolmogorov-Arnold Networks (PIKANs) for solving PDEs.
We demonstrate that the adaptive features significantly enhance solution accuracy, decreasing the L2 error relative to the reference solution by up to 43.02%.
arXiv Detail & Related papers (2024-07-24T19:55:08Z)
- Burgers' PINNs with Implicit Euler Transfer Learning [0.0]
The Burgers equation is a well-established test case in the computational modeling of several phenomena.
We present the application of Physics-Informed Neural Networks (PINNs) with an implicit Euler transfer learning approach to solve the Burgers equation.
arXiv Detail & Related papers (2023-10-23T20:15:45Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- AttNS: Attention-Inspired Numerical Solving For Limited Data Scenarios [51.94807626839365]
We propose the attention-inspired numerical solver (AttNS) to solve differential equations under limited data. AttNS is inspired by the effectiveness of attention modules in Residual Neural Networks (ResNets) in enhancing model generalization and robustness.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.