Novel Change of Measure Inequalities with Applications to PAC-Bayesian
Bounds and Monte Carlo Estimation
- URL: http://arxiv.org/abs/2002.10678v2
- Date: Thu, 25 Jun 2020 10:11:04 GMT
- Title: Novel Change of Measure Inequalities with Applications to PAC-Bayesian
Bounds and Monte Carlo Estimation
- Authors: Yuki Ohnishi and Jean Honorio
- Abstract summary: We show how the variational representation of $f$-divergences leads to novel change of measure inequalities.
We also present a multiplicative change of measure inequality for $\alpha$-divergences and a generalized version of the Hammersley-Chapman-Robbins inequality.
- Score: 29.919144859026016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce several novel change of measure inequalities for two families of
divergences: $f$-divergences and $\alpha$-divergences. We show how the
variational representation for $f$-divergences leads to novel change of measure
inequalities. We also present a multiplicative change of measure inequality for
$\alpha$-divergences and a generalized version of Hammersley-Chapman-Robbins
inequality. Finally, we present several applications of our change of measure
inequalities, including PAC-Bayesian bounds for various classes of losses and
non-asymptotic intervals for Monte Carlo estimates.
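As a concrete reference point for the abstract above, the classical Fenchel-Young route from an $f$-divergence to a change of measure inequality reads as follows; the paper refines and extends this template, so its exact statements and constants may differ. Assuming $Q \ll P$ and finite expectations, for any measurable $h$,

$$\mathbb{E}_{Q}[h(X)] \;\le\; D_f(Q \,\|\, P) \;+\; \mathbb{E}_{P}\big[f^{*}(h(X))\big],$$

where $f^{*}$ is the convex conjugate of $f$; taking the supremum over $h$ recovers the variational representation of $D_f$. For the KL divergence, the analogous Donsker-Varadhan form $\mathbb{E}_{Q}[h] \le \mathrm{KL}(Q \,\|\, P) + \log \mathbb{E}_{P}[e^{h}]$ is the change of measure behind standard PAC-Bayesian bounds. The snippet below is a minimal Monte Carlo sanity check of that KL form for two univariate Gaussians; it is a generic illustration only, not the paper's estimator or interval construction, and the test function and distributions are arbitrary choices.

```python
# Hypothetical illustration (not the paper's construction): check the KL change of
# measure bound  E_Q[h] <= KL(Q||P) + log E_P[exp(h)]  by Monte Carlo, using two
# univariate Gaussians and a bounded test function h.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

mu_p, s_p = 0.0, 1.0   # reference measure P = N(0, 1)
mu_q, s_q = 0.5, 0.8   # target measure    Q = N(0.5, 0.8^2)

def h(x):
    return np.tanh(x)  # any bounded measurable test function works here

# closed-form KL(Q || P) for univariate Gaussians
kl = np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2) - 0.5

x_q = rng.normal(mu_q, s_q, n)
x_p = rng.normal(mu_p, s_p, n)

lhs = h(x_q).mean()                       # Monte Carlo estimate of E_Q[h]
rhs = kl + np.log(np.exp(h(x_p)).mean())  # KL(Q||P) + log E_P[exp(h)]
print(f"E_Q[h] ~ {lhs:.4f}  <=  KL + log E_P[e^h] ~ {rhs:.4f}")
```

In applications such as PAC-Bayesian analysis, $h$ is typically a scaled loss term and the right-hand side is controlled uniformly over posteriors $Q$, which is how inequalities of this type turn into generalization bounds and non-asymptotic intervals for Monte Carlo estimates.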
Related papers
- Variance-Dependent Regret Bounds for Non-stationary Linear Bandits [52.872628573907434]
We propose algorithms that utilize the variance of the reward distribution as well as $B_K$, and show that they can achieve tighter regret upper bounds.
We introduce two novel algorithms: Restarted Weighted$\text{OFUL}^+$ and Restarted $\text{SAVE}^+$.
Notably, when the total variance $V_K$ is much smaller than $K$, our algorithms outperform previous state-of-the-art results on non-stationary linear bandits under different settings.
arXiv Detail & Related papers (2024-03-15T23:36:55Z) - Log-majorization and matrix norm inequalities with application to quantum information [2.7195102129095003]
We show an extension of Araki's log-majorization and apply it to the $\alpha$-$z$-Rényi divergence in quantum information.
The paper includes an appendix to correct the proof of the author's old result on the equality case in the norm inequality for the weighted geometric mean.
arXiv Detail & Related papers (2024-02-25T11:33:46Z) - Tighter monogamy inequalities of multiqubit entanglement [3.9318191265352196]
Multipartite entanglement holds great importance in quantum information processing.
We provide two new monogamy inequalities based on the $\beta$-th power of concurrence and negativity.
arXiv Detail & Related papers (2023-12-15T03:00:06Z) - SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum
Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems, linear convergence to a solution can be achieved with this SARAH-based method.
arXiv Detail & Related papers (2022-10-12T08:04:48Z) - Smooth Monotone Stochastic Variational Inequalities and Saddle Point
Problems: A Survey [119.11852898082967]
This paper is a survey of methods for solving smooth (strongly) monotone variational inequalities.
To begin with, we give the foundation from which the methods eventually evolved.
Then we review methods for the general formulation, and look at the finite sum setup.
arXiv Detail & Related papers (2022-08-29T13:39:30Z) - Information Processing Equalities and the Information-Risk Bridge [10.451984251615512]
We introduce two new classes of measures of information for statistical experiments.
We derive a simple geometrical relationship between measures of information and the Bayes risk of a statistical decision problem.
arXiv Detail & Related papers (2022-07-25T08:54:36Z) - Split-kl and PAC-Bayes-split-kl Inequalities [15.63537071742102]
We introduce a split-kl inequality, which combines the power of the kl inequality with the ability to exploit low variance.
For Bernoulli random variables the kl inequality is tighter than the Empirical Bernstein inequality, whereas for random variables taking values inside a bounded interval the Empirical Bernstein inequality is tighter than the kl inequality.
We discuss an application of the split-kl inequality to bounding excess losses.
arXiv Detail & Related papers (2022-06-01T18:42:02Z) - On change of measure inequalities for $f$-divergences [5.799808780731661]
The strategy relies on combining the Legendre transform of $f$-divergences with the Young-Fenchel inequality.
We derive new PAC-Bayesian generalisation bounds whose complexity term involves $f$-divergences.
We instantiate our results for the most popular $f$-divergences (a generic instantiation of this route for the $\chi^2$-divergence is sketched after this list).
arXiv Detail & Related papers (2022-02-11T11:53:28Z) - A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z) - Domain Adaptation: Learning Bounds and Algorithms [80.85426994513541]
We introduce a novel distance between distributions, the discrepancy distance, which is tailored to adaptation problems with arbitrary loss functions.
We derive novel generalization bounds for domain adaptation for a wide family of loss functions.
We also present a series of novel adaptation bounds for large classes of regularization-based algorithms.
arXiv Detail & Related papers (2009-02-19T18:42:16Z)