Strong Duality Relations in Nonconvex Risk-Constrained Learning
- URL: http://arxiv.org/abs/2312.01110v1
- Date: Sat, 2 Dec 2023 11:21:00 GMT
- Title: Strong Duality Relations in Nonconvex Risk-Constrained Learning
- Authors: Dionysis Kalogerias, Spyridon Pougkakiotis
- Abstract summary: J. J. Uhl's convexity theorem for general infinite-dimensional Banach spaces is an extension of A. A. Lyapunov's convexity theorem.
We show that constrained classification and regression can be treated under a unifying lens, while dispensing with certain restrictive assumptions enforced in the current literature, yielding a new state of the art.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We establish strong duality relations for functional two-step compositional
risk-constrained learning problems with multiple nonconvex loss functions
and/or learning constraints, regardless of nonconvexity and under a minimal set
of technical assumptions. Our results in particular imply zero duality gaps
within the class of problems under study, both extending and improving on the
state of the art in (risk-neutral) constrained learning. More specifically, we
consider risk objectives/constraints which involve real-valued convex and
positively homogeneous risk measures admitting dual representations with
bounded risk envelopes, generalizing expectations and including popular
examples, such as the conditional value-at-risk (CVaR), the mean-absolute
deviation (MAD), and more generally all real-valued coherent risk measures on
integrable losses as special cases. Our results are based on recent advances in
risk-constrained nonconvex programming in infinite dimensions, which rely on a
remarkable new application of J. J. Uhl's convexity theorem, which is an
extension of A. A. Lyapunov's convexity theorem for general, infinite
dimensional Banach spaces. By specializing to the risk-neutral setting, we
demonstrate, for the first time, that constrained classification and regression
can be treated under a unifying lens, while dispensing with certain restrictive
assumptions enforced in the current literature, yielding a new state-of-the-art
strong duality framework for nonconvex constrained learning.
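The dual representations invoked above are concrete for CVaR: the Rockafellar-Uryasev formula gives CVaR_alpha(Z) = min_t { t + E[(Z - t)_+] / (1 - alpha) }, while the dual representation CVaR_alpha(Z) = sup { E_Q[Z] : dQ/dP <= 1/(1 - alpha) } reweights probability mass toward the worst outcomes, the bound on dQ/dP being the "bounded risk envelope" of the abstract. A minimal numpy sketch of this standard duality (an illustration, not code from the paper) checks that the two sides coincide on a sample:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(size=10_000)    # stand-in loss sample
alpha = 0.9                            # CVaR level; (1 - alpha) * n is an integer here

# Primal: Rockafellar-Uryasev variational formula. For an empirical
# distribution, the alpha-quantile (the VaR) attains the minimum over t.
t = np.quantile(losses, alpha)
cvar_primal = t + np.mean(np.maximum(losses - t, 0.0)) / (1 - alpha)

# Dual: sup of E_Q[losses] over densities dQ/dP capped at 1 / (1 - alpha).
# With (1 - alpha) * n integral, the optimal Q spreads its mass uniformly
# over the worst (1 - alpha) fraction of the sample.
k = int((1 - alpha) * losses.size)
cvar_dual = np.mean(np.sort(losses)[-k:])

print(cvar_primal, cvar_dual)          # agree up to floating-point error
```

The paper's contribution is that this zero-gap phenomenon survives at the level of the full learning problem, with nonconvex losses and functional hypothesis classes, which is what makes primal-dual (Lagrangian) training of risk-constrained models principled rather than merely heuristic.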
Related papers
- Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models [9.65010022854885]
We show that adversarial risk is equivalent to the risk induced by a distributional adversarial attack under certain smoothness conditions.
To evaluate the generalization performance of the adversarial estimator, we study the adversarial excess risk.
arXiv Detail & Related papers (2023-09-02T00:51:19Z)
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds [92.82627080794491]
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Supervised Learning with General Risk Functionals [28.918233583859134]
Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class.
We establish the first uniform convergence results for estimating the CDF of the loss distribution, yielding guarantees that hold simultaneously both over all Hölder risk functionals and over all hypotheses.
arXiv Detail & Related papers (2022-06-27T22:11:05Z)
- Mitigating multiple descents: A model-agnostic framework for risk monotonization [84.6382406922369]
We develop a general framework for risk monotonization based on cross-validation.
We propose two data-driven methodologies, namely zero- and one-step, that are akin to bagging and boosting.
arXiv Detail & Related papers (2022-05-25T17:41:40Z)
- Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The sensitivity of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- A Full Characterization of Excess Risk via Empirical Risk Landscape [8.797852602680445]
In this paper, we provide a unified analysis of the risk of the model trained by a proper algorithm with both smooth convex and non-convex loss functions.
arXiv Detail & Related papers (2020-12-04T08:24:50Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents; a sketch of this construction appears after this list.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Theoretical Analysis of Divide-and-Conquer ERM: Beyond Square Loss and RKHS [20.663792705336483]
We study the risk performance of distributed empirical risk minimization (ERM) for general loss functions and hypothesis spaces.
First, we derive two tight risk bounds under certain basic assumptions on the hypothesis space, as well as the smoothness, Lipschitz continuity, and strong convexity of the loss function.
Second, we develop a more general risk bound for distributed ERM without the restriction of strong convexity.
arXiv Detail & Related papers (2020-03-09T01:50:19Z)
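The optimized certainty equivalents (OCE) from the risk-sensitive learning entry above give a compact sample-based recipe: OCE(Z) = inf_lam { lam + E[phi(Z - lam)] } for a convex disutility phi with phi(0) = 0. Choosing phi(x) = x_+/(1 - alpha) recovers CVaR, and phi(x) = exp(x) - 1 recovers the entropic risk log E[exp(Z)]. A minimal sketch under these standard definitions (my illustration, not code from any listed paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def oce(losses, phi):
    # Optimized certainty equivalent: inf over lam of  lam + E[phi(Z - lam)].
    obj = lambda lam: lam + np.mean(phi(losses - lam))
    res = minimize_scalar(obj, bounds=(losses.min(), losses.max()),
                          method="bounded")   # 1-D convex problem
    return res.fun

rng = np.random.default_rng(1)
z = rng.normal(1.0, 0.5, size=50_000)

alpha = 0.9
cvar_phi = lambda x: np.maximum(x, 0.0) / (1 - alpha)  # phi for CVaR
entropic_phi = np.expm1                                # phi for entropic risk

# CVaR_alpha recovered as an OCE vs. the worst-(1-alpha)-tail average:
print(oce(z, cvar_phi), np.mean(np.sort(z)[-int((1 - alpha) * z.size):]))
# Entropic risk recovered as an OCE vs. its closed form log E[exp(Z)]:
print(oce(z, entropic_phi), np.log(np.mean(np.exp(z))))
```

Of the two, only CVaR is positively homogeneous and hence coherent, placing it in the class treated by the headline paper; the entropic risk is an OCE but not coherent.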