PAC-Bayesian Bound for the Conditional Value at Risk
- URL: http://arxiv.org/abs/2006.14763v1
- Date: Fri, 26 Jun 2020 02:55:24 GMT
- Title: PAC-Bayesian Bound for the Conditional Value at Risk
- Authors: Zakaria Mhammedi, Benjamin Guedj, Robert C. Williamson
- Abstract summary: Conditional Value at Risk (CVaR) is a family of "coherent risk measures" which generalize the traditional mathematical expectation.
This paper presents a generalization bound for learning algorithms that minimize the CVaR of the empirical loss.
- Score: 20.94565887795792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional Value at Risk (CVaR) is a family of "coherent risk measures"
which generalize the traditional mathematical expectation. Widely used in
mathematical finance, it is garnering increasing interest in machine learning,
e.g., as an alternate approach to regularization, and as a means for ensuring
fairness. This paper presents a generalization bound for learning algorithms
that minimize the CVaR of the empirical loss. The bound is of PAC-Bayesian type
and is guaranteed to be small when the empirical CVaR is small. We achieve this
by reducing the problem of estimating CVaR to that of merely estimating an
expectation. This then enables us, as a by-product, to obtain concentration
inequalities for CVaR even when the random variable in question is unbounded.
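The reduction of CVaR estimation to estimating an expectation rests on the Rockafellar-Uryasev variational identity $\mathrm{CVaR}_\alpha(Z) = \inf_c \{\, c + \mathbb{E}[(Z - c)_+]/\alpha \,\}$, whose inner minimum is attained at the $(1-\alpha)$-quantile (the Value at Risk). The following is a minimal numerical sketch of this identity (function names are illustrative, not from the paper; here $\alpha$ denotes the tail fraction):

```python
import numpy as np

def cvar_empirical(losses, alpha):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of losses."""
    losses = np.sort(losses)[::-1]                 # descending order
    k = max(1, int(np.ceil(alpha * len(losses))))  # size of the tail
    return losses[:k].mean()

def cvar_variational(losses, alpha):
    """Rockafellar-Uryasev form: min_c { c + E[(L - c)_+] / alpha },
    evaluated at the empirical (1 - alpha)-quantile (the VaR)."""
    c = np.quantile(losses, 1 - alpha)
    return c + np.maximum(losses - c, 0).mean() / alpha

rng = np.random.default_rng(0)
losses = rng.exponential(size=10_000)  # heavy-ish tail, all nonnegative
a = 0.05
# The two estimators agree up to small finite-sample discrepancies.
print(cvar_empirical(losses, a), cvar_variational(losses, a))
```

For Exp(1) losses the true value is $1 + \ln(1/\alpha) \approx 4.0$ at $\alpha = 0.05$; both estimators land close to it, which is exactly the "CVaR as an expectation" viewpoint the paper exploits.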
Related papers
- Misclassification excess risk bounds for PAC-Bayesian classification via convexified loss [0.0]
PAC-Bayesian bounds are a valuable tool for designing new learning algorithms in machine learning.
In this paper, we show how to leverage relative bounds in expectation rather than relying on PAC-Bayesian generalization bounds.
arXiv Detail & Related papers (2024-08-16T11:41:06Z)
- Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We characterize the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high dimensional data.
arXiv Detail & Related papers (2024-08-08T17:27:29Z) - Provably Efficient CVaR RL in Low-rank MDPs [58.58570425202862]
We study risk-sensitive Reinforcement Learning (RL).
We propose a novel Upper Confidence Bound (UCB) bonus-driven algorithm to balance the interplay between exploration, exploitation, and representation learning in CVaR RL.
We prove a sample-complexity bound for learning an $\epsilon$-optimal CVaR policy, where $H$ is the length of each episode, $A$ is the size of the action space, and $d$ is the dimension of the representations.
arXiv Detail & Related papers (2023-11-20T17:44:40Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Risk-Constrained Thompson Sampling for CVaR Bandits [82.47796318548306]
We consider a popular risk measure in quantitative finance known as the Conditional Value at Risk (CVaR).
We explore the performance of a Thompson-Sampling-based algorithm, CVaR-TS, under this risk measure.
arXiv Detail & Related papers (2020-11-16T15:53:22Z)
- Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification [54.22421582955454]
We provide the first optimal minimax guarantees on the excess risk for adversarially robust classification.
Results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes a similar notion for standard linear classification to the adversarial setting.
arXiv Detail & Related papers (2020-06-29T21:06:52Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Learning with CVaR-based feedback under potentially heavy tails [8.572654816871873]
We study learning algorithms that seek to minimize the conditional value-at-risk (CVaR).
We first study a general-purpose estimator of CVaR for potentially heavy-tailed random variables.
We then derive a new learning algorithm which robustly chooses among candidates produced by gradient-driven sub-processes.
arXiv Detail & Related papers (2020-06-03T01:08:29Z)
- Statistical Learning with Conditional Value at Risk [35.4968603057034]
We propose a risk-averse statistical learning framework wherein the performance of a learning algorithm is evaluated by the conditional value-at-risk (CVaR) of losses rather than the expected loss.
arXiv Detail & Related papers (2020-02-14T00:58:34Z)
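As a concrete illustration of such a risk-averse framework (a sketch under simplifying assumptions, not the algorithm of any paper above), one can minimize the empirical CVaR of per-sample squared losses for a linear model by exactly minimizing the Rockafellar-Uryasev auxiliary scalar $c$ (the empirical VaR) at each step and taking subgradient steps on the weights:

```python
import numpy as np

# Risk-averse ERM sketch: minimize the empirical CVaR of per-sample
# squared losses via  CVaR_alpha(loss) = min_c { c + mean((loss - c)_+)/alpha }.
# Only the worst alpha-fraction of samples contributes to the w-subgradient.

rng = np.random.default_rng(1)
n, d = 500, 3
w_true = np.array([1.0, -2.0, 0.5])   # hypothetical ground-truth weights
X = rng.normal(size=(n, d))
y = X @ w_true + rng.normal(scale=0.1, size=n)

alpha, lr = 0.1, 0.02
w = np.zeros(d)

def empirical_cvar(loss, alpha):
    c = np.quantile(loss, 1 - alpha)  # inner minimizer over c (the VaR)
    return c + np.maximum(loss - c, 0).mean() / alpha

cvar_start = empirical_cvar((X @ w - y) ** 2, alpha)
for _ in range(3000):
    r = X @ w - y
    loss = r ** 2
    c = np.quantile(loss, 1 - alpha)          # exact inner minimization over c
    tail = (loss >= c).astype(float)          # worst alpha-fraction of samples
    # subgradient of the CVaR objective w.r.t. w (tail samples only)
    grad_w = (2 * r * tail) @ X / (alpha * n)
    w -= lr * grad_w
cvar_end = empirical_cvar((X @ w - y) ** 2, alpha)
```

The objective is convex in $w$ (a CVaR of convex losses), so plain subgradient descent drives the tail risk down; compared with ordinary least squares, the updates concentrate on the hardest examples.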
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.