A note on generalization bounds for losses with finite moments
- URL: http://arxiv.org/abs/2403.16681v1
- Date: Mon, 25 Mar 2024 12:15:55 GMT
- Title: A note on generalization bounds for losses with finite moments
- Authors: Borja Rodríguez-Gálvez, Omar Rivasplata, Ragnar Thobaben, Mikael Skoglund
- Abstract summary: The paper derives a high-probability PAC-Bayes bound for losses with a bounded variance.
It extends results to guarantees in expectation and single-draw PAC-Bayes.
- Score: 28.102352176005514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the truncation method from Alquier [1] to derive high-probability PAC-Bayes bounds for unbounded losses with heavy tails. Assuming that the $p$-th moment is bounded, the resulting bounds interpolate between a slow rate $1 / \sqrt{n}$ when $p=2$, and a fast rate $1 / n$ when $p \to \infty$ and the loss is essentially bounded. Moreover, the paper derives a high-probability PAC-Bayes bound for losses with a bounded variance. This bound has an exponentially better dependence on the confidence parameter and the dependency measure than previous bounds in the literature. Finally, the paper extends all results to guarantees in expectation and single-draw PAC-Bayes. To do so, it obtains analogues of the PAC-Bayes fast-rate bound for bounded losses from [2] in these settings.
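For orientation, the schematic below shows the shape such moment-based bounds take. This is an illustrative form consistent with the stated endpoints, not the paper's exact statement; the constants, the truncation threshold, and the moment bound $m_p$ are spelled out in the paper.
```latex
% Illustrative shape of a PAC-Bayes bound under a bounded p-th moment
% (assumption: E[|ell|^p] <= m_p for some p >= 2; constants omitted):
\[
  \mathbb{E}_{h \sim \rho}\bigl[L(h)\bigr]
  \;\lesssim\;
  \mathbb{E}_{h \sim \rho}\bigl[\hat{L}_n(h)\bigr]
  + \left(\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\frac{1}{\delta}}{n}\right)^{\!(p-1)/p}
\]
% p = 2 recovers the slow rate 1/sqrt(n); p -> infinity recovers the
% fast rate 1/n for essentially bounded losses.
```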
Related papers
- Convergence Rate Analysis of LION [54.28350823319057]
LION converges at a rate of $\mathcal{O}(\sqrt{d}K^{-1/4})$, measured by a gradient Karush-Kuhn-Tucker (KKT) stationarity criterion, where $d$ is the problem dimension and $K$ is the number of iterations.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
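For context, LION's update (from Chen et al., 2023, the optimizer this analysis targets) is a sign-based step on an interpolated momentum, which is where the $\sqrt{d}$ dependence naturally enters: every coordinate moves by exactly $\pm\eta$. A minimal NumPy sketch with illustrative hyperparameter values:
```python
import numpy as np

def lion_step(theta, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One LION step (Chen et al., 2023): sign of an interpolated momentum
    plus decoupled weight decay. Hyperparameter values are illustrative."""
    c = beta1 * m + (1.0 - beta1) * grad             # interpolation used for the update direction
    theta = theta - lr * (np.sign(c) + wd * theta)   # sign update + decoupled weight decay
    m = beta2 * m + (1.0 - beta2) * grad             # momentum tracked with a second coefficient
    return theta, m
```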
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling [73.5602474095954]
We study the non-asymptotic performance of stochastic approximation schemes with delayed updates under Markovian sampling.
Our theoretical findings shed light on the finite-time effects of delays for a broad class of algorithms.
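A generic delayed-update scheme of the kind studied is sketched below; the operator `g`, the sample stream, and the constant delay `tau` are illustrative placeholders, not the paper's exact setting:
```python
import numpy as np

def delayed_sa(g, x_stream, theta0, alpha=0.01, tau=5, steps=1000):
    """Generic stochastic approximation with a fixed update delay tau:
    the update at time t uses the iterate from time t - tau. A sketch of
    the setting studied (g, x_stream, and the constant delay are illustrative)."""
    history = [np.asarray(theta0, dtype=float)]
    for t in range(steps):
        stale = history[max(0, t - tau)]   # delayed iterate
        x_t = next(x_stream)               # Markovian sample
        theta = history[-1] + alpha * g(stale, x_t)
        history.append(theta)
    return history[-1]
```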
arXiv Detail & Related papers (2024-02-19T03:08:02Z) - Better-than-KL PAC-Bayes Bounds [23.87003743389573]
We show that it is possible to achieve a strictly tighter bound using a novel, better-than-KL divergence.
Our result is first-of-its-kind in that existing PAC-Bayes bounds with non-KL divergences are not known to be strictly better than KL.
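For reference, the classical KL-based baseline these results compete with is a McAllester-style bound for losses in $[0,1]$, stated here in a standard form (not the paper's):
```latex
% Classical KL-based PAC-Bayes bound (losses in [0,1]); with probability
% at least 1 - delta over the sample, simultaneously for all posteriors rho:
\[
  \mathbb{E}_{h \sim \rho}\bigl[L(h)\bigr]
  \;\le\;
  \mathbb{E}_{h \sim \rho}\bigl[\hat{L}_n(h)\bigr]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \log\frac{2\sqrt{n}}{\delta}}{2n}}
\]
```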
arXiv Detail & Related papers (2024-02-14T14:33:39Z) - PAC-Bayes-Chernoff bounds for unbounded losses [9.987130158432755]
We introduce a new PAC-Bayes oracle bound for unbounded losses that extends Cramér-Chernoff bounds to the PAC-Bayesian setting.
Our approach naturally leverages properties of Cramér-Chernoff bounds, such as exact optimization of the free parameter in many PAC-Bayes bounds.
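The classical Cramér-Chernoff device being lifted to the PAC-Bayes setting is, in its standard scalar form:
```latex
% Cramer-Chernoff bound for an i.i.d. sample mean; psi^* is the Legendre
% transform of the centered cumulant generating function:
\[
  \mathbb{P}\bigl(\bar{X}_n - \mu \ge t\bigr)
  \;\le\; \exp\bigl(-n\,\psi^*(t)\bigr),
  \qquad
  \psi^*(t) = \sup_{\lambda \ge 0}\bigl(\lambda t - \log \mathbb{E}\,e^{\lambda (X - \mu)}\bigr)
\]
% Optimizing over lambda exactly is the property exploited at the
% PAC-Bayes level (exact optimization of the free parameter).
```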
arXiv Detail & Related papers (2024-01-02T10:58:54Z) - More PAC-Bayes bounds: From bounded losses, to losses with general tail behaviors, to anytime validity [27.87324770020133]
We present new high-probability PAC-Bayes bounds for different types of losses.
For losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly for all parameter values.
For losses with more general tail behaviors, we introduce two new parameter-free bounds.
arXiv Detail & Related papers (2023-06-21T12:13:46Z) - A unified recipe for deriving (time-uniform) PAC-Bayes bounds [31.921092049934654]
We present a unified framework for deriving PAC-Bayesian generalization bounds.
Our bounds are anytime-valid (i.e., time-uniform), meaning that they hold at all stopping times.
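Concretely, anytime validity means a single confidence budget covers every sample size (and hence every stopping time) simultaneously:
```latex
% Time-uniform (anytime-valid) guarantee: one delta for all n at once,
% where B_n is the bound evaluated at sample size n:
\[
  \mathbb{P}\Bigl(\forall n \ge 1:\;
    \mathbb{E}_{h \sim \rho_n}\bigl[L(h)\bigr] \le B_n(\rho_n, \delta)\Bigr)
  \;\ge\; 1 - \delta
\]
```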
arXiv Detail & Related papers (2023-02-07T12:11:59Z) - PAC-Bayes Generalisation Bounds for Heavy-Tailed Losses through
Supermartingales [5.799808780731661]
We contribute PAC-Bayes generalisation bounds for heavy-tailed losses.
Our key technical contribution is exploiting an extension of Markov's inequality for supermartingales (stated below).
Our proof technique unifies and extends different PAC-Bayesian frameworks.
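The time-uniform extension of Markov's inequality for supermartingales is Ville's inequality:
```latex
% Ville's inequality: for a nonnegative supermartingale (M_t) with M_0 = 1,
\[
  \mathbb{P}\Bigl(\exists\, t \ge 0:\; M_t \ge \tfrac{1}{\delta}\Bigr) \;\le\; \delta
\]
% Applied to an appropriate exponential supermartingale of the losses,
% this yields bounds that hold at all times simultaneously.
```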
arXiv Detail & Related papers (2022-10-03T13:38:23Z) - Stochastic Shortest Path: Minimax, Parameter-Free and Towards
Horizon-Free Regret [144.6358229217845]
We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state.
We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus.
We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_\star \sqrt{SAK})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions, and $B_\star$ bounds the expected cumulative cost of the optimal policy.
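The optimism-via-bonus idea can be sketched generically as below; the $1/\sqrt{n}$ bonus form and the parameter names are illustrative stand-ins, not EB-SSP's exact (variance-aware) construction:
```python
import numpy as np

def bonus_adjusted_costs(emp_costs, counts, B=1.0, c=1.0):
    """Generic optimism-via-bonus construction of the kind EB-SSP uses:
    perturb empirical costs downward by a count-based exploration bonus.
    The simple 1/sqrt(n) bonus here is illustrative, not the paper's
    exact Bernstein-style construction."""
    bonus = c * B / np.sqrt(np.maximum(counts, 1))  # shrinks as (s, a) visit counts grow
    return np.maximum(emp_costs - bonus, 0.0)       # optimistic (lower) cost estimates
```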
arXiv Detail & Related papers (2021-04-22T17:20:48Z) - Fast Rates for the Regret of Offline Reinforcement Learning [69.23654172273085]
We study the regret of reinforcement learning from offline data generated by a fixed behavior policy in an infinite-horizon discounted Markov decision process (MDP).
We show that, given any estimate of the optimal quality function $Q^*$, the regret of the policy it defines converges at a rate given by the exponentiation of the $Q^*$-estimate's pointwise convergence rate.
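For reference, the classical baseline relates pointwise $Q^*$-estimation error to regret only linearly; the paper's exponentiated rate is a strictly faster dependence:
```latex
% Standard fact: if pi-hat is greedy with respect to Q-hat and
% ||Q-hat - Q*||_inf <= eps, then for every state s,
\[
  V^*(s) - V^{\hat{\pi}}(s) \;\le\; \frac{2\varepsilon}{1 - \gamma}
\]
% where gamma is the discount factor; the regret thus inherits the
% Q-estimate's rate linearly in the worst case.
```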
arXiv Detail & Related papers (2021-01-31T16:17:56Z) - PAC$^m$-Bayes: Narrowing the Empirical Risk Gap in the Misspecified
Bayesian Regime [75.19403612525811]
This work develops a multi-sample loss which can close the gap by spanning a trade-off between the two risks.
Empirical study demonstrates improvement to the predictive distribution.
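The multi-sample loss has the following shape (a sketch assuming the IWAE-style construction, where $m=1$ recovers a Gibbs-style log-loss and $m \to \infty$ approaches the posterior-predictive log-loss; see the paper for the exact estimator):
```python
import numpy as np
from scipy.special import logsumexp

def multisample_log_loss(log_probs):
    """Multi-sample log-loss of the PAC^m flavor: -log of the average
    predictive probability over m posterior samples. `log_probs` has
    shape (m,): log p(y | x, theta_j) for posterior draws theta_j.
    Sketch of the construction, not the paper's exact estimator."""
    m = log_probs.shape[0]
    return -(logsumexp(log_probs) - np.log(m))
```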
arXiv Detail & Related papers (2020-10-19T16:08:34Z) - Relative Deviation Margin Bounds [55.22251993239944]
We give two types of learning bounds, both distribution-dependent and valid for general families, in terms of the Rademacher complexity.
We derive distribution-dependent generalization bounds for unbounded loss functions under the assumption of a finite moment.
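Schematically, relative-deviation bounds have the Bernstein-like shape below (illustrative; the paper's statements are in terms of Rademacher complexity and margins):
```latex
% Relative-deviation shape: the deviation term scales with the empirical
% risk itself, so small empirical risk yields the fast O(C_n) rate:
\[
  L(h) \;\lesssim\; \hat{L}_n(h) + \sqrt{\hat{L}_n(h)\, C_n} + C_n,
  \qquad
  C_n = \frac{\text{complexity term} + \log\frac{1}{\delta}}{n}
\]
```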
arXiv Detail & Related papers (2020-06-26T12:37:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.