Bayesian calibration of differentiable agent-based models
- URL: http://arxiv.org/abs/2305.15340v1
- Date: Wed, 24 May 2023 16:52:32 GMT
- Title: Bayesian calibration of differentiable agent-based models
- Authors: Arnau Quera-Bofarull, Ayush Chopra, Anisoara Calinescu, Michael
Wooldridge, Joel Dyer
- Abstract summary: We discuss how generalised variational inference procedures may be employed to provide misspecification-robust Bayesian parameter inferences.
We demonstrate with experiments on a differentiable ABM of the COVID-19 pandemic that our approach can result in accurate inferences.
- Score: 3.629865579485447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agent-based modelling (ABMing) is a powerful and intuitive approach to
modelling complex systems; however, the intractability of ABMs' likelihood
functions and the non-differentiability of the mathematical operations
comprising these models present a challenge to their use in the real world.
These difficulties have in turn generated research on approximate Bayesian
inference methods for ABMs and on constructing differentiable approximations to
arbitrary ABMs, but little work has been directed towards designing approximate
Bayesian inference techniques for the specific case of differentiable ABMs. In
this work, we aim to address this gap and discuss how generalised variational
inference procedures may be employed to provide misspecification-robust
Bayesian parameter inferences for differentiable ABMs. We demonstrate with
experiments on a differentiable ABM of the COVID-19 pandemic that our approach
can result in accurate inferences, and discuss avenues for future work.
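As an illustrative sketch of the generalised variational inference recipe described in the abstract (not the paper's actual model, loss, or code), one can replace the intractable ABM likelihood with a simulation-based loss and fit a Gaussian variational posterior by reparameterised gradient descent. The exponential-growth "simulator", the squared-error loss, and all constants below are assumptions chosen to keep the toy self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "simulator" standing in for an ABM: x_t = exp(theta * t).
t = np.arange(6, dtype=float)

def simulate(theta):
    return np.exp(theta * t)

theta_true = 0.2
y = simulate(theta_true)  # synthetic observed trajectory

# Generalised Bayes swaps the log-likelihood for a loss; here, squared error
# between simulated and observed trajectories.
def dloss(theta):
    x = simulate(theta)
    return np.sum(2.0 * (x - y) * x * t)  # analytic d/dtheta of sum((x - y)^2)

# Gaussian variational family q(theta) = N(mu, sigma^2); standard normal prior.
mu, log_sigma = 0.0, np.log(0.1)
lr, n_mc = 1e-3, 32

for _ in range(2000):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(n_mc)
    g = np.array([dloss(mu + sigma * e) for e in eps])  # reparameterisation
    # Monte Carlo gradient of E_q[loss] plus analytic KL(q || N(0,1)) gradient.
    mu -= lr * (g.mean() + mu)
    log_sigma -= lr * ((g * sigma * eps).mean() + sigma**2 - 1.0)

print(round(mu, 3), round(np.exp(log_sigma), 3))  # mu concentrates near 0.2
```

Because the simulator is differentiable, the loss gradient flows through the model output, which is what distinguishes this setting from generic likelihood-free inference.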
Related papers
- Unifying and extending Diffusion Models through PDEs for solving Inverse Problems [3.1225172236361165]
Diffusion models have emerged as powerful generative tools with applications in computer vision and scientific machine learning (SciML)
Traditionally, these models have been derived using principles of variational inference, denoising, statistical signal processing, and differential equations.
In this study we derive diffusion models using ideas from linear partial differential equations and demonstrate that this approach has several benefits.
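For context, the standard bridge between diffusion models and linear PDEs (stated generally here, not necessarily the specific derivation of this paper) is that the marginal density of the forward noising SDE obeys a linear Fokker-Planck equation, and sampling runs the reverse-time SDE driven by the score:

```latex
% Forward SDE of a diffusion model:
%   dx = f(x, t)\, dt + g(t)\, dW
% Its marginal density p_t(x) satisfies the linear Fokker-Planck PDE:
\frac{\partial p_t(x)}{\partial t}
  = -\nabla \cdot \bigl(f(x, t)\, p_t(x)\bigr)
  + \tfrac{1}{2}\, g(t)^2 \, \Delta p_t(x).
% Sampling integrates the reverse-time SDE, which requires the score:
dx = \bigl[f(x, t) - g(t)^2 \nabla_x \log p_t(x)\bigr]\, dt + g(t)\, d\bar{W}.
```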
arXiv Detail & Related papers (2025-04-10T04:07:36Z) - Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Learning Latent Space Dynamics with Model-Form Uncertainties: A Stochastic Reduced-Order Modeling Approach [0.0]
This paper presents a probabilistic approach to represent and quantify model-form uncertainties in the reduced-order modeling of complex systems.
The proposed method captures these uncertainties by expanding the approximation space through the randomization of the projection matrix.
The efficacy of the approach is assessed on canonical problems in fluid mechanics by identifying and quantifying the impact of model-form uncertainties on the inferred operators.
arXiv Detail & Related papers (2024-08-30T19:25:28Z) - MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP)
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z) - Variational Inference of Parameters in Opinion Dynamics Models [9.51311391391997]
This work uses variational inference to estimate the parameters of an opinion dynamics ABM.
We transform the inference process into an optimization problem suitable for automatic differentiation.
Our approach estimates both macroscopic parameters (bounded confidence intervals and backfire thresholds) and microscopic ones (200 categorical, agent-level roles) more accurately than simulation-based and MCMC methods.
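A minimal sketch of turning such an inference problem into differentiable optimization (a toy, not the paper's model): soften the hard confidence bound of a one-step Deffuant-style opinion update with a sigmoid so the simulated opinions become differentiable in the bound eps, then recover eps by gradient descent on a trajectory-matching objective. The one-step setup and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# One-step Deffuant-style opinion update with a sigmoid-relaxed confidence
# bound, so the output is differentiable in the bound eps.
N, tau, rate = 50, 0.1, 0.5
x0 = rng.uniform(0.0, 1.0, N)  # initial opinions

def step(eps):
    d = x0[None, :] - x0[:, None]                      # d[i, j] = x_j - x_i
    w = 1.0 / (1.0 + np.exp((np.abs(d) - eps) / tau))  # soft threshold
    return x0 + rate * (w * d).mean(axis=1)

def dstep_deps(eps):                                   # analytic d(step)/d(eps)
    d = x0[None, :] - x0[:, None]
    w = 1.0 / (1.0 + np.exp((np.abs(d) - eps) / tau))
    return rate * ((w * (1.0 - w) / tau) * d).mean(axis=1)

eps_true = 0.3
y = step(eps_true)  # "observed" opinions after one step

# Inference as optimization: gradient descent on the calibration objective.
eps = 0.1
for _ in range(5000):
    r = step(eps) - y
    grad = 2.0 * np.sum(r * dstep_deps(eps))
    eps -= 0.1 * np.clip(grad, -0.5, 0.5)

print(round(eps, 3))  # the estimate moves toward eps_true = 0.3
```

The same relaxation idea extends to a variational posterior over eps; the point here is only that the smoothed simulator admits exact gradients where the hard-threshold model does not.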
arXiv Detail & Related papers (2024-03-08T14:45:18Z) - Bayesian Model Selection via Mean-Field Variational Approximation [10.433170683584994]
We study the non-asymptotic properties of mean-field (MF) inference under the Bayesian framework.
We show a Bernstein von-Mises (BvM) theorem for the variational distribution from MF under possible model mis-specification.
arXiv Detail & Related papers (2023-12-17T04:48:25Z) - Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts
in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs)
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z) - Cross Feature Selection to Eliminate Spurious Interactions and Single
Feature Dominance in Explainable Boosting Machines [0.0]
Interpretability is essential for legal, ethical, and practical reasons.
High-performance models can suffer from spurious interactions with redundant features and single-feature dominance.
In this paper, we explore novel approaches to address these issues by utilizing alternate cross-feature selection, ensemble features, and model configuration alteration techniques.
arXiv Detail & Related papers (2023-07-17T13:47:41Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE)
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Surrogate Likelihoods for Variational Annealed Importance Sampling [11.144915453864854]
We introduce a surrogate likelihood that can be learned jointly with other variational parameters.
We show that our method performs well in practice and that it is well-suited for black-box inference in probabilistic programming frameworks.
arXiv Detail & Related papers (2021-12-22T19:49:45Z) - Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian
Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.