Bayesian Renormalization
- URL: http://arxiv.org/abs/2305.10491v2
- Date: Sat, 27 May 2023 18:54:26 GMT
- Title: Bayesian Renormalization
- Authors: David S. Berman, Marc S. Klinger and Alexander G. Stapleton
- Abstract summary: We present a fully information theoretic approach to renormalization inspired by Bayesian statistical inference.
The main insight of Bayesian Renormalization is that the Fisher metric defines a correlation length that plays the role of an emergent RG scale.
We provide insight into how the Bayesian Renormalization scheme relates to existing methods for data compression and data generation.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this note we present a fully information theoretic approach to
renormalization inspired by Bayesian statistical inference, which we refer to
as Bayesian Renormalization. The main insight of Bayesian Renormalization is
that the Fisher metric defines a correlation length that plays the role of an
emergent RG scale quantifying the distinguishability between nearby points in
the space of probability distributions. This RG scale can be interpreted as a
proxy for the maximum number of unique observations that can be made about a
given system during a statistical inference experiment. The role of the
Bayesian Renormalization scheme is subsequently to prepare an effective model
for a given system up to a precision which is bounded by the aforementioned
scale. In applications of Bayesian Renormalization to physical systems, the
emergent information theoretic scale is naturally identified with the maximum
energy that can be probed by current experimental apparatus, and thus Bayesian
Renormalization coincides with ordinary renormalization. However, Bayesian
Renormalization is sufficiently general to apply even in circumstances in which
an immediate physical scale is absent, and thus provides an ideal approach to
renormalization in data science contexts. To this end, we provide insight into
how the Bayesian Renormalization scheme relates to existing methods for data
compression and data generation such as the information bottleneck and the
diffusion learning paradigm.
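The core mechanism of the abstract, that the Fisher metric measures how distinguishable nearby probability distributions are and thereby sets an emergent resolution scale, can be illustrated with a minimal sketch. This is not the authors' implementation: the one-dimensional Gaussian family, the helper names `fisher_metric_gaussian` and `distinguishable`, and the rough "n · ds² ≥ 1" distinguishability threshold are assumptions made here purely for illustration.

```python
import numpy as np

def fisher_metric_gaussian(mu, sigma):
    # Fisher information metric of N(mu, sigma^2) in coordinates (mu, sigma);
    # closed form diag(1/sigma^2, 2/sigma^2). (mu does not appear explicitly.)
    return np.diag([1.0 / sigma**2, 2.0 / sigma**2])

def distinguishable(theta_a, theta_b, n_observations):
    # Heuristic (assumed here, not taken from the paper): after n observations,
    # two nearby models can be told apart roughly when n * ds^2 >= 1, where
    # ds^2 is the squared Fisher distance in the local quadratic approximation.
    d_theta = np.asarray(theta_b, dtype=float) - np.asarray(theta_a, dtype=float)
    g = fisher_metric_gaussian(*theta_a)
    ds2 = float(d_theta @ g @ d_theta)
    return n_observations * ds2 >= 1.0

# Few observations: the two models below cannot be resolved, so a
# coarse-grained ("renormalized") effective model may merge them.
print(distinguishable((0.0, 1.0), (0.05, 1.0), n_observations=10))      # False
# Many observations: the same pair becomes distinguishable and must be kept distinct.
print(distinguishable((0.0, 1.0), (0.05, 1.0), n_observations=10_000))  # True
```

In the physical setting described in the abstract, raising the number of accessible observations plays the role of raising the probe energy; lowering it coarse-grains away directions in model space that the experiment cannot resolve.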
Related papers
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z)
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noise added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- Generalized Laplace Approximation [23.185126261153236]
We introduce a unified theoretical framework to attribute Bayesian inconsistency to model misspecification and inadequate priors.
We propose the generalized Laplace approximation, which involves a simple adjustment to the Hessian matrix of the regularized loss function.
We assess the performance and properties of the generalized Laplace approximation on state-of-the-art neural networks and real-world datasets.
arXiv Detail & Related papers (2024-05-22T11:11:42Z)
- The Inverse of Exact Renormalization Group Flows as Statistical Inference [0.0]
We build on the view of the Exact Renormalization Group (ERG) as an instantiation of Optimal Transport.
We provide a new information theoretic perspective for understanding the ERG through the intermediary of Bayesian Statistical Inference.
arXiv Detail & Related papers (2022-12-21T21:38:34Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Reliable amortized variational inference with physics-based latent distribution correction [0.4588028371034407]
A neural network is trained to approximate the posterior distribution over existing pairs of model and data.
The accuracy of this approach relies on the availability of high-fidelity training data.
We show that our correction step improves the robustness of amortized variational inference with respect to changes in number of source experiments, noise variance, and shifts in the prior distribution.
arXiv Detail & Related papers (2022-07-24T02:38:54Z)
- Generalised Bayesian Inference for Discrete Intractable Likelihood [9.331721990371769]
This paper develops a novel generalised Bayesian inference procedure suitable for discrete intractable likelihood.
Inspired by recent methodological advances for continuous data, the main idea is to update beliefs about model parameters using a discrete Fisher divergence.
The result is a generalised posterior that can be sampled from using standard computational tools, such as Markov chain Monte Carlo.
arXiv Detail & Related papers (2022-06-16T19:36:17Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are empirically central to preventing overfitting.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
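The deep-ensemble claim in the last entry above can be illustrated with a minimal sketch (an illustration under our own assumptions, not that paper's code): each independently trained network is treated as one approximate sample from the posterior over weights, and the Bayesian model average is estimated by averaging the members' predictive distributions.

```python
import numpy as np

def ensemble_predictive(member_probs):
    # member_probs: shape (n_members, n_inputs, n_classes), each member's
    # softmax outputs. Averaging them is a crude Monte Carlo estimate of the
    # Bayesian model average over network weights.
    return np.mean(member_probs, axis=0)

# Hypothetical example: three members disagree on one hard input, and the
# averaged predictive distribution spreads probability mass instead of
# overcommitting to a single confident answer.
probs = np.array([
    [[0.9, 0.1]],
    [[0.6, 0.4]],
    [[0.2, 0.8]],
])
print(ensemble_predictive(probs))  # approximately [[0.567, 0.433]]
```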
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.