From Generalisation Error to Transportation-cost Inequalities and Back
- URL: http://arxiv.org/abs/2202.03956v1
- Date: Tue, 8 Feb 2022 16:07:08 GMT
- Title: From Generalisation Error to Transportation-cost Inequalities and Back
- Authors: Amedeo Roberto Esposito, Michael Gastpar
- Abstract summary: We connect the problem of bounding the expected generalisation error with transportation-cost inequalities.
We go beyond Kullback-Leibler Divergences/Mutual Information and sub-Gaussian measures.
- Score: 23.498743847708408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we connect the problem of bounding the expected generalisation
error with transportation-cost inequalities. Exposing the underlying pattern
behind both approaches, we are able to generalise them and go beyond
Kullback-Leibler Divergences/Mutual Information and sub-Gaussian measures. In
particular, we are able to provide a result showing the equivalence between two
families of inequalities: one involving functionals and one involving measures.
This result generalises the one proposed by Bobkov and Götze that connects
transportation-cost inequalities with concentration of measure. Moreover, it
allows us to recover all standard generalisation error bounds involving mutual
information and to introduce new, more general bounds that involve arbitrary
divergence measures.
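As a pointer to the objects the abstract refers to, the following is a minimal sketch, in our own notation rather than quoted from the paper, of the classical Bobkov-Götze equivalence between the $T_1$ transportation-cost inequality and sub-Gaussian concentration of Lipschitz functions, together with the standard mutual-information generalisation error bound that results of this kind recover:
$$
W_1(\nu,\mu)\le\sqrt{2c\,D(\nu\|\mu)}\ \ \forall\,\nu
\quad\Longleftrightarrow\quad
\mathbb{E}_{\mu}\!\left[e^{\lambda(f-\mathbb{E}_{\mu}f)}\right]\le e^{c\lambda^{2}/2}
\ \ \forall\,\lambda\in\mathbb{R},\ \forall\,f\ \text{1-Lipschitz}.
$$
If the loss $\ell(w,Z)$ is assumed $\sigma$-sub-Gaussian under $Z\sim\mu$ for every hypothesis $w$, and $W$ is the output of a learning algorithm trained on $S=(Z_1,\dots,Z_n)$ drawn i.i.d. from $\mu$, the standard bound reads
$$
\bigl|\mathbb{E}\!\left[L_\mu(W)-L_S(W)\right]\bigr|
\le\sqrt{\frac{2\sigma^{2}\,I(W;S)}{n}},
\qquad
L_\mu(w)=\mathbb{E}_{Z\sim\mu}[\ell(w,Z)],\quad
L_S(w)=\frac{1}{n}\sum_{i=1}^{n}\ell(w,Z_i).
$$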
Related papers
- A unified framework for information-theoretic generalization bounds [8.04975023021212]
This paper presents a general methodology for deriving information-theoretic generalization bounds for learning algorithms.
The main technical tool is a probabilistic decorrelation lemma based on a change of measure and a relaxation of Young's inequality in $L_{\psi_p}$ Orlicz spaces.
arXiv Detail & Related papers (2023-05-18T15:36:20Z) - On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z) - Chained Generalisation Bounds [26.043342234937747]
We derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.
We establish a duality between generalisation bounds based on the regularity of the loss function, and their chained counterparts.
arXiv Detail & Related papers (2022-03-02T09:34:36Z) - Tighter Expected Generalization Error Bounds via Convexity of
Information Measures [8.359770027722275]
Generalization error bounds are essential to understanding machine learning algorithms.
This paper presents novel expected generalization error upper bounds based on the average joint distribution between the output hypothesis and each input training sample.
arXiv Detail & Related papers (2022-02-24T15:36:09Z) - Generalization Bounds via Convex Analysis [12.411844611718958]
We show that it is possible to replace the mutual information by any strongly convex function of the joint input-output distribution.
Examples include bounds stated in terms of $p$-norm divergences and the Wasserstein-2 distance.
arXiv Detail & Related papers (2022-02-10T12:30:45Z) - Rényi divergence inequalities via interpolation, with applications to
generalised entropic uncertainty relations [91.3755431537592]
We investigate quantum Rényi entropic quantities, specifically those derived from the 'sandwiched' divergence.
We present Rényi mutual information decomposition rules, a new approach to the Rényi conditional entropy tripartite chain rules, and a more general bipartite comparison.
arXiv Detail & Related papers (2021-06-19T04:06:23Z) - Invariance Principle Meets Information Bottleneck for
Out-of-Distribution Generalization [77.24152933825238]
We show that for linear classification tasks we need stronger restrictions on the distribution shifts; otherwise, OOD generalization is impossible.
We prove that a form of the information bottleneck constraint along with invariance helps address key failures when invariant features capture all the information about the label and also retains the existing success when they do not.
arXiv Detail & Related papers (2021-06-11T20:42:27Z) - Dimension Free Generalization Bounds for Non Linear Metric Learning [61.193693608166114]
We provide uniform generalization bounds for two regimes: the sparse regime and a non-sparse regime.
We show that by relying on a different, new property of the solutions, it is still possible to provide dimension free generalization guarantees.
arXiv Detail & Related papers (2021-02-07T14:47:00Z) - In Search of Robust Measures of Generalization [79.75709926309703]
We develop bounds on generalization error, optimization error, and excess risk.
When evaluated empirically, most of these bounds are numerically vacuous.
We argue that generalization measures should instead be evaluated within the framework of distributional robustness.
arXiv Detail & Related papers (2020-10-22T17:54:25Z) - Relative Deviation Margin Bounds [55.22251993239944]
We give two types of learning bounds, both distribution-dependent and valid for general families, in terms of the Rademacher complexity.
We derive distribution-dependent generalization bounds for unbounded loss functions under the assumption of a finite moment.
arXiv Detail & Related papers (2020-06-26T12:37:17Z) - Robust Generalization via $\alpha$-Mutual Information [24.40306100502023]
We provide bounds connecting two probability measures of the same event using Rényi $\alpha$-Divergences and Sibson's $\alpha$-Mutual Information (see the sketch after this list).
Results have broad applications, from bounding the generalization error of learning algorithms to the more general framework of adaptive data analysis.
arXiv Detail & Related papers (2020-01-14T11:28:30Z)
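For the last entry above, a short reference sketch of the information measures involved; these are the standard definitions of the Rényi divergence of order $\alpha$ and Sibson's $\alpha$-mutual information, written in our notation rather than quoted from that paper:
$$
D_\alpha(P\|Q)=\frac{1}{\alpha-1}\log\mathbb{E}_{Q}\!\left[\left(\frac{dP}{dQ}\right)^{\alpha}\right],
\qquad
I_\alpha(X;Y)=\min_{Q_Y} D_\alpha\!\left(P_{XY}\,\middle\|\,P_X\otimes Q_Y\right),
$$
for $\alpha>1$; letting $\alpha\to 1$ recovers the Kullback-Leibler divergence and the usual mutual information, while $\alpha\to\infty$ yields maximal leakage.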