On the Connection between $L_p$ and Risk Consistency and its Implications on Regularized Kernel Methods
- URL: http://arxiv.org/abs/2303.15210v1
- Date: Mon, 27 Mar 2023 13:51:56 GMT
- Title: On the Connection between $L_p$ and Risk Consistency and its Implications on Regularized Kernel Methods
- Authors: Hannes Köhler
- Abstract summary: The first aim of this paper is to establish the close connection between risk consistency and $L_p$-consistency for a considerably wider class of loss functions.
The attempt to transfer this connection to shifted loss functions surprisingly reveals that this shift does not reduce the assumptions needed on the underlying probability measure to the same extent as it does for many other results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a predictor's quality is often assessed by means of its risk, it is
natural to regard risk consistency as a desirable property of learning methods,
and many such methods have indeed been shown to be risk consistent. The first
aim of this paper is to establish the close connection between risk consistency
and $L_p$-consistency for a considerably wider class of loss functions than has
been done before. The attempt to transfer this connection to shifted loss
functions surprisingly reveals that this shift does not reduce the assumptions
needed on the underlying probability measure to the same extent as it does for
many other results. The results are applied to regularized kernel methods such
as support vector machines.
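For orientation, the two notions can be sketched as follows (standard definitions in generic notation; the paper's precise assumptions on the loss $L$ and the measure $P$ are its own and are not reproduced here):

```latex
% Risk of a predictor f under loss L and data distribution P:
\mathcal{R}_{L,P}(f) = \mathbb{E}_{(X,Y)\sim P}\bigl[L\bigl(X,Y,f(X)\bigr)\bigr],
\qquad
\mathcal{R}^{*}_{L,P} = \inf_{f \text{ measurable}} \mathcal{R}_{L,P}(f).
% A method producing predictors f_n from n samples is risk consistent if
\mathcal{R}_{L,P}(f_n) \;\xrightarrow{\;\mathbb{P}\;}\; \mathcal{R}^{*}_{L,P}
\quad (n \to \infty),
% and L_p-consistent if, for a Bayes predictor f^{*}_{L,P},
\bigl\| f_n - f^{*}_{L,P} \bigr\|_{L_p(P_X)} \;\xrightarrow{\;\mathbb{P}\;}\; 0 .
```

The paper relates these two modes of convergence for a broad class of losses and examines how the picture changes under the shift $L^{\star}(x,y,t) = L(x,y,t) - L(x,y,0)$, a device commonly used to weaken moment assumptions on $P$.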
Related papers
- Regularization for Adversarial Robust Learning [18.46110328123008]
We develop a novel approach to adversarial training that integrates $\phi$-divergence regularization into the distributionally robust risk function.
This regularization brings a notable improvement in computation compared with the original formulation.
We validate our proposed method in supervised learning, reinforcement learning, and contextual learning and showcase its state-of-the-art performance against various adversarial attacks.
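As generic background (not necessarily the paper's exact formulation), $\phi$-divergence-regularized distributionally robust risks typically take the form

```latex
\min_{\theta}\; \sup_{Q \ll P}\;
\Bigl\{ \mathbb{E}_{Q}\bigl[\ell(\theta;Z)\bigr]
        - \lambda\, D_{\phi}(Q \,\|\, P) \Bigr\},
\qquad
D_{\phi}(Q \,\|\, P) = \int \phi\!\Bigl(\frac{dQ}{dP}\Bigr)\, dP,
```

where the penalty $\lambda D_{\phi}$ keeps the adversarial distribution $Q$ from drifting far from the data distribution $P$, which is what makes the inner problem computationally milder than an unregularized worst case.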
arXiv Detail & Related papers (2024-08-19T03:15:41Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- Uncertainty Quantification in Anomaly Detection with Cross-Conformal $p$-Values [0.0]
This work introduces a novel framework for anomaly detection, termed cross-conformal anomaly detection.
We show that the derived methods for calculating cross-conformal $p$-values strike a practical compromise between statistical efficiency (full-conformal) and computational efficiency (split-conformal) for uncertainty-quantified anomaly detection.
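For background, the split-conformal endpoint of that tradeoff can be sketched in a few lines of Python (a generic illustration, not the paper's implementation; the score convention, higher = more anomalous, is an assumption):

```python
import numpy as np

def conformal_p_values(cal_scores: np.ndarray, test_scores: np.ndarray) -> np.ndarray:
    """Split-conformal p-values for anomaly detection.

    cal_scores:  nonconformity scores of a clean calibration set
    test_scores: scores of the points to be tested (higher = more anomalous)
    """
    n = len(cal_scores)
    # p(x) = (1 + #{i : s_i >= s(x)}) / (n + 1); small p-values flag anomalies.
    counts = (cal_scores[None, :] >= test_scores[:, None]).sum(axis=1)
    return (1.0 + counts) / (n + 1.0)

# Example: flag test points at a 5% significance level.
rng = np.random.default_rng(0)
cal = rng.normal(size=1000)        # scores from nominal data
test = np.array([0.1, 2.5, 4.0])   # candidate anomalies
print(conformal_p_values(cal, test) <= 0.05)
```

Cross-conformal methods aggregate such p-values across folds, trading some of the computational cheapness of this split variant for statistical efficiency closer to the full-conformal one.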
arXiv Detail & Related papers (2024-02-26T08:22:40Z)
- Beyond Expectations: Learning with Stochastic Dominance Made Practical [88.06211893690964]
Stochastic dominance models risk-averse preferences for decision making with uncertain outcomes.
Despite being theoretically appealing, stochastic dominance has seen scarce application in machine learning.
We first generalize the dominance concept to enable feasible comparisons between arbitrary pairs of random variables.
We then develop a simple and efficient approach for finding the optimal solution in terms of dominance.
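For reference, the classical first-order dominance relation the paper starts from reads (the paper's generalization to arbitrary pairs is its own contribution and is not reproduced here):

```latex
X \succeq_{\mathrm{FSD}} Y
\;\Longleftrightarrow\;
F_X(t) \,\le\, F_Y(t) \quad \text{for all } t \in \mathbb{R},
```

i.e. $X$ (read as a reward) puts no more probability mass below any threshold than $Y$ does. The relation is only a partial order, which is why comparing arbitrary pairs requires a generalized notion.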
arXiv Detail & Related papers (2024-02-05T03:21:23Z)
- Domain Generalization without Excess Empirical Risk [83.26052467843725]
A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty.
We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization.
We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk.
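Schematically, with generic placeholder symbols ($\hat{R}$ for the empirical risk, $\Omega$ for the surrogate penalty; neither is the paper's notation), the proposed shift is

```latex
\min_{\theta}\; \hat{R}(\theta) + \lambda\,\Omega(\theta)
\quad\leadsto\quad
\min_{\theta}\; \Omega(\theta)
\quad \text{s.t.} \quad
\theta \in \arg\min_{\theta'}\, \hat{R}(\theta'),
```

so a badly scaled or erroneous penalty can no longer trade off against, and thereby inflate, the empirical risk.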
arXiv Detail & Related papers (2023-08-30T08:46:46Z)
- Tailoring to the Tails: Risk Measures for Fine-Grained Tail Sensitivity [10.482805367361818]
Expected risk minimization (ERM) is at the core of machine learning systems.
We propose a general approach to construct risk measures which exhibit a desired tail sensitivity.
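A canonical tail-sensitive risk measure, given here as background rather than as the paper's construction, is the conditional value-at-risk in its Rockafellar-Uryasev form:

```latex
\mathrm{CVaR}_{\alpha}(L)
= \inf_{t \in \mathbb{R}}
  \Bigl\{\, t + \frac{1}{1-\alpha}\,\mathbb{E}\bigl[(L-t)_{+}\bigr] \,\Bigr\},
```

which averages the worst $(1-\alpha)$-fraction of losses; varying $\alpha$ is the crudest dial for tail sensitivity, while the paper aims at finer-grained control.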
arXiv Detail & Related papers (2022-08-05T09:51:18Z)
- Supervised Learning with General Risk Functionals [28.918233583859134]
Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class.
We establish the first uniform convergence results for estimating the CDF of the loss distribution, yielding guarantees that hold simultaneously both over all Hölder risk functionals and over all hypotheses.
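For a single fixed hypothesis, uniform CDF estimation is classical: the Dvoretzky-Kiefer-Wolfowitz inequality (with Massart's constant) gives

```latex
\mathbb{P}\Bigl( \sup_{t \in \mathbb{R}}
  \bigl| \hat{F}_n(t) - F(t) \bigr| > \varepsilon \Bigr)
\;\le\; 2\, e^{-2 n \varepsilon^{2}} .
```

The harder part, and the paper's contribution, is making such a guarantee hold simultaneously over all hypotheses in the class and all Hölder risk functionals applied to the estimated CDF.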
arXiv Detail & Related papers (2022-06-27T22:11:05Z)
- Risk averse non-stationary multi-armed bandits [0.0]
This paper tackles the risk-averse multi-armed bandit problem when incurred losses are non-stationary.
Two estimation methods are proposed for the risk-averse objective function in the presence of non-stationary losses.
Such estimates can then be embedded into classic arm selection methods such as epsilon-greedy policies.
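To make the embedding concrete, here is a minimal epsilon-greedy sketch over sliding-window loss estimates (the window and the plain window mean are illustrative assumptions; the paper's estimators are risk-averse and tailored to non-stationarity):

```python
import random
from collections import deque

def select_arm(histories: list[deque], epsilon: float = 0.1) -> int:
    """epsilon-greedy over per-arm loss estimates from sliding windows.

    histories[a] holds the most recent losses of arm a (a bounded deque),
    which discards stale samples under non-stationarity.
    """
    if random.random() < epsilon or any(len(h) == 0 for h in histories):
        return random.randrange(len(histories))  # explore
    # Exploit: pick the arm with the smallest estimated loss.
    # A risk-averse variant would replace the window mean below with,
    # e.g., an empirical CVaR computed over the same window.
    estimates = [sum(h) / len(h) for h in histories]
    return min(range(len(histories)), key=estimates.__getitem__)

# Usage: 3 arms, windows of the last 50 observed losses each.
histories = [deque(maxlen=50) for _ in range(3)]
arm = select_arm(histories)     # pure exploration while windows are empty
histories[arm].append(0.7)      # record the incurred loss
```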
arXiv Detail & Related papers (2021-09-28T18:34:54Z)
- Learning from Similarity-Confidence Data [94.94650350944377]
We investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data.
We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate.
arXiv Detail & Related papers (2021-02-13T07:31:16Z)
- Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds Globally Optimal Policy [95.98698822755227]
We make the first attempt to study risk-sensitive deep reinforcement learning in the average-reward setting with a variance risk criterion.
We propose an actor-critic algorithm that iteratively and efficiently updates the policy, the Lagrange multiplier, and the Fenchel dual variable.
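The Fenchel dual variable for a variance criterion typically arises from the standard variational identity below (stated as background; that this is the exact form used in the paper is an assumption):

```latex
\operatorname{Var}(X)
= \mathbb{E}\bigl[X^{2}\bigr] - \bigl(\mathbb{E}[X]\bigr)^{2}
= \min_{y \in \mathbb{R}} \mathbb{E}\bigl[(X-y)^{2}\bigr],
```

with the minimum attained at $y = \mathbb{E}[X]$. Replacing the squared expectation by a scalar optimization over $y$ turns the variance constraint into a saddle-point problem on which the policy, the Lagrange multiplier, and the dual variable can be updated jointly.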
arXiv Detail & Related papers (2020-12-28T05:02:26Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
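Optimized certainty equivalents (OCE) have a standard variational form, recalled here for context (conditions on the function $\phi$ follow Ben-Tal and Teboulle; the paper's exact setup may differ):

```latex
\mathrm{OCE}_{\phi}(X)
= \inf_{\lambda \in \mathbb{R}}
  \bigl\{\, \lambda + \mathbb{E}\bigl[\phi(X - \lambda)\bigr] \,\bigr\},
```

where $X$ is the loss. The family recovers $\mathrm{CVaR}_{\alpha}$ for $\phi(t) = (1-\alpha)^{-1} t_{+}$, so bounds stated for OCEs cover a broad range of risk-averse and risk-seeking criteria at once.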
arXiv Detail & Related papers (2020-06-15T05:25:02Z)