Tight Bounds for Jensen's Gap with Applications to Variational Inference
- URL: http://arxiv.org/abs/2502.03988v2
- Date: Sun, 09 Nov 2025 13:35:09 GMT
- Title: Tight Bounds for Jensen's Gap with Applications to Variational Inference
- Authors: Marcin Mazur, Tadeusz Dziarmaga, Piotr Kościelniak, Łukasz Struski,
- Abstract summary: We propose new, general bounds for Jensen's gap that accommodate a broad range of assumptions on both the function and the random variable. We provide both analytical and empirical evidence for the performance of our method.
- Score: 3.6620310550061403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since its original formulation, Jensen's inequality has played a fundamental role across mathematics, statistics, and machine learning, with its probabilistic version highlighting the nonnegativity of the so-called Jensen's gap, i.e., the difference between the expectation of a convex function and the function at the expectation. Of particular importance is the case when the function is logarithmic, as this setting underpins many applications in variational inference, where the term variational gap is often used interchangeably. Recent research has focused on estimating the size of Jensen's gap and establishing tight lower and upper bounds under various assumptions on the underlying function and distribution, driven by practical challenges such as the intractability of log-likelihood in graphical models like variational autoencoders (VAEs). In this paper, we propose new, general bounds for Jensen's gap that accommodate a broad range of assumptions on both the function and the random variable, with special attention to exponential and logarithmic cases. We provide both analytical and empirical evidence for the performance of our method. Furthermore, we relate our bounds to the PAC-Bayes framework, providing new insights into generalization performance in probabilistic models.
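As an illustrative aside (not part of the paper), the gap is easy to see numerically in the logarithmic case that drives variational inference: for the concave logarithm, Jensen's inequality gives E[log X] <= log E[X], and the gap log E[X] - E[log X] is exactly the variational gap when X is an importance weight. The minimal Python sketch below estimates it by Monte Carlo for a log-normal variable, chosen only because the gap then has the closed form sigma^2/2; the example is ours, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ LogNormal(mu, sigma): E[X] and E[log X] have closed forms,
# so the Monte Carlo estimate of the gap can be checked exactly.
mu, sigma = 0.0, 1.0
x = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

# Jensen's gap for the (concave) logarithm: log E[X] - E[log X] >= 0.
mc_gap = np.log(x.mean()) - np.log(x).mean()

# Closed form: log E[X] = mu + sigma**2 / 2 and E[log X] = mu,
# so the gap equals sigma**2 / 2 = 0.5 here.
exact_gap = sigma**2 / 2

print(f"Monte Carlo gap: {mc_gap:.4f} (exact: {exact_gap:.4f})")
```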
Related papers
- Comparing Generative Models with the New Physics Learning Machine [0.0]
In large-scale and high-dimensional regimes, machine learning offers a set of tools to push beyond the limitations of standard statistical techniques. We put this claim to the test by comparing a proposal from the high-energy physics literature, the New Physics Learning Machine, to perform a classification-based two-sample test. We highlight the efficiency tradeoffs of the method and the computational costs that come from adopting learning-based approaches.
arXiv Detail & Related papers (2025-08-04T10:42:52Z) - Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling. Yet their widespread adoption poses challenges regarding data attribution and interpretability. We develop an influence functions framework to address these challenges.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, the theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z) - Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z) - Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Divergences induced by dual subtractive and divisive normalizations of exponential families and their convex deformations [7.070726553564701]
We show that skewed Bhattacharyya distances between probability densities of an exponential family amount to skewed Jensen divergences induced by the cumulant function (a short derivation of this identity is sketched after this list).
We then show how comparative convexity with respect to a pair of quasi-arithmetic means allows one to deform both convex functions and their arguments.
arXiv Detail & Related papers (2023-12-20T08:59:05Z) - An analysis of Universal Differential Equations for data-driven discovery of Ordinary Differential Equations [7.48176340790825]
We test the Universal Differential Equation (UDE) framework in the context of Ordinary Differential Equation (ODE) discovery.
We highlight some of the issues that arise when combining data-driven approaches with numerical solvers.
We believe our analysis makes a significant contribution to investigating the capabilities and limitations of physics-informed machine learning frameworks.
arXiv Detail & Related papers (2023-06-17T12:26:50Z) - On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and the agreement for a pair of hypotheses can both be derived in closed form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z) - The Representation Jensen-Shannon Divergence [0.0]
Quantifying the difference between probability distributions is crucial in machine learning.
This work proposes the representation Jensen-Shannon divergence (RJSD), a novel measure inspired by the traditional Jensen-Shannon divergence.
Our results demonstrate RJSD's superiority in two-sample testing, distribution shift detection, and unsupervised domain adaptation.
arXiv Detail & Related papers (2023-05-25T19:44:36Z) - Demystifying Disagreement-on-the-Line in High Dimensions [34.103373453782744]
We develop a theoretical foundation for analyzing disagreement in high-dimensional random features regression.
Experiments on CIFAR-10-C, Tiny ImageNet-C, and Camelyon17 are consistent with our theory and support the universality of the theoretical findings.
arXiv Detail & Related papers (2023-01-31T02:31:18Z) - Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z) - A Unifying Framework for Some Directed Distances in Statistics [0.0]
Density-based directed distances -- particularly known as divergences -- are widely used in statistics.
We provide a general framework which covers in particular both the density-based and distribution-function-based divergence approaches.
We deduce new concepts of dependence between random variables, as alternatives to the celebrated mutual information.
arXiv Detail & Related papers (2022-03-02T04:24:13Z) - Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension [25.711297863946193]
We develop a theory for the study of fluctuations in an ensemble of generalised linear models trained on different, but correlated, features.
We provide a complete description of the joint distribution of the empirical risk minimiser for generic convex loss and regularisation in the high-dimensional limit.
arXiv Detail & Related papers (2022-01-31T17:44:58Z) - Loss function based second-order Jensen inequality and its application to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
arXiv Detail & Related papers (2021-06-09T12:13:51Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can pick up spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z) - Marginal likelihood computation for model selection and hypothesis testing: an extensive review [66.37504201165159]
This article provides a comprehensive study of the state-of-the-art of the topic.
We highlight limitations, benefits, connections and differences among the different techniques.
Problems and possible solutions with the use of improper priors are also described.
arXiv Detail & Related papers (2020-05-17T18:31:58Z) - A Survey on Causal Inference [64.45536158710014]
Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy, and economics.
Various causal effect estimation methods for observational data have emerged.
arXiv Detail & Related papers (2020-02-05T21:35:29Z)
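The identity referenced in the entry on divergences induced by dual normalizations of exponential families follows from a standard computation. The sketch below uses generic exponential-family notation (h, t, F, theta, alpha) introduced here for illustration, not the cited paper's own development.

```latex
% Skewed Bhattacharyya distance as a skewed Jensen divergence
% (standard exponential-family computation; notation introduced here).
% For p_\theta(x) = h(x)\exp(\langle\theta, t(x)\rangle - F(\theta))
% with cumulant function F and \alpha \in (0,1):
\[
\int p_{\theta_1}^{\alpha}(x)\, p_{\theta_2}^{1-\alpha}(x)\, dx
  = \exp\bigl(F(\alpha\theta_1 + (1-\alpha)\theta_2)
      - \alpha F(\theta_1) - (1-\alpha) F(\theta_2)\bigr),
\]
since the integrand is an unnormalized member of the same family. Hence
\[
B_\alpha(p_{\theta_1} : p_{\theta_2})
  := -\log \int p_{\theta_1}^{\alpha} p_{\theta_2}^{1-\alpha}\, dx
  = \alpha F(\theta_1) + (1-\alpha) F(\theta_2)
    - F\bigl(\alpha\theta_1 + (1-\alpha)\theta_2\bigr)
  = J_{F,\alpha}(\theta_1 : \theta_2),
\]
i.e., the skewed Jensen divergence induced by $F$, which is nonnegative
by the convexity of $F$ and Jensen's inequality.
```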
This list is automatically generated from the titles and abstracts of the papers on this site.