Lower Bounds on Adversarial Robustness for Multiclass Classification with General Loss Functions
- URL: http://arxiv.org/abs/2510.01969v1
- Date: Thu, 02 Oct 2025 12:42:36 GMT
- Title: Lower Bounds on Adversarial Robustness for Multiclass Classification with General Loss Functions
- Authors: Camilo Andrés García Trillos, Nicolás García Trillos
- Abstract summary: We consider adversarially robust classification in a multiclass setting under arbitrary loss functions. We derive dual and barycentric reformulations of the corresponding learner-agnostic robust risk problem.
- Score: 4.562056072136493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider adversarially robust classification in a multiclass setting under arbitrary loss functions and derive dual and barycentric reformulations of the corresponding learner-agnostic robust risk minimization problem. We provide explicit characterizations for important cases such as the cross-entropy loss, loss functions with a power form, and the quadratic loss, extending in this way available results for the 0-1 loss. These reformulations enable efficient computation of sharp lower bounds for adversarial risks and facilitate the design of robust classifiers beyond the 0-1 loss setting. Our paper uncovers interesting connections between adversarial robustness, $\alpha$-fair packing problems, and generalized barycenter problems for arbitrary positive measures where Kullback-Leibler and Tsallis entropies are used as penalties. Our theoretical results are accompanied by illustrative numerical experiments in which we obtain tighter lower bounds for adversarial risks with the cross-entropy loss function.
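The Kullback-Leibler and Tsallis entropic penalties mentioned in the abstract have standard closed forms. The sketch below is illustrative only: the function names are our own, and the generalized-KL form for unnormalized positive measures (with the extra mass terms) is an assumption about the setting, not taken from the paper.

```python
import numpy as np

def kl_divergence(p, q):
    """Generalized Kullback-Leibler divergence KL(p || q) for positive measures:
    sum(p * log(p/q)) - sum(p) + sum(q). For probability vectors the last two
    terms cancel and this reduces to the usual KL divergence."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)) - np.sum(p) + np.sum(q))

def tsallis_entropy(p, alpha):
    """Tsallis entropy S_alpha(p) = (1 - sum(p_i^alpha)) / (alpha - 1),
    which recovers the Shannon entropy in the limit alpha -> 1."""
    p = np.asarray(p, float)
    return float((1.0 - np.sum(p ** alpha)) / (alpha - 1.0))
```

For example, the uniform distribution on two points has Tsallis entropy 0.5 at alpha = 2, and its KL divergence to itself is 0.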
Related papers
- Fair Regression under Demographic Parity: A Unified Framework [12.36726423996741]
Our framework is applicable to a broad spectrum of regression tasks. We derive a novel characterization of the fair risk minimizer. We illustrate the method's versatility through detailed discussions.
arXiv Detail & Related papers (2026-01-15T17:41:28Z) - Fundamental Novel Consistency Theory: $H$-Consistency Bounds [19.493449206135296]
In machine learning, the loss functions optimized during training often differ from the target loss that defines task performance. We present an in-depth study of the target loss estimation error relative to the surrogate loss estimation error. Our analysis leads to $H$-consistency bounds, which are guarantees accounting for the hypothesis set $H$.
arXiv Detail & Related papers (2025-12-28T11:02:20Z) - Variation-Bounded Loss for Noise-Tolerant Learning [105.20373602308284]
We introduce the Variation Ratio as a novel property related to the robustness of loss functions. We propose a new family of robust loss functions, termed Variation-Bounded Loss (VBL), which is characterized by a bounded variation ratio.
arXiv Detail & Related papers (2025-11-15T10:15:29Z) - Rectifying Regression in Reinforcement Learning [51.28909745713678]
We show that mean absolute error is a better prediction objective than the traditional mean squared error for controlling the learned policy's suboptimality gap. We present results showing that different loss functions are better aligned with these different regression objectives.
arXiv Detail & Related papers (2025-10-01T13:32:07Z) - Risk-Averse Reinforcement Learning with Itakura-Saito Loss [63.620958078179356]
Risk-averse agents choose policies that minimize risk, occasionally sacrificing expected value. We introduce a numerically stable and mathematically sound loss function based on the Itakura-Saito divergence for learning state-value and action-value functions. In the experimental section, we explore multiple scenarios, some with known analytical solutions, and show that the considered loss function outperforms the alternatives.
arXiv Detail & Related papers (2025-05-22T17:18:07Z) - Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
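The Itakura-Saito divergence named in the entry above has a standard closed form. A minimal sketch follows; the function name and vectorized formulation are our own illustration, not taken from that paper.

```python
import numpy as np

def itakura_saito(x, y):
    """Itakura-Saito divergence D_IS(x || y) = sum(x/y - log(x/y) - 1),
    defined for entrywise-positive inputs; it is nonnegative and
    equals zero exactly when x == y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = x / y
    return float(np.sum(r - np.log(r) - 1.0))
```

Unlike the squared error, this divergence is scale-sensitive: it penalizes underestimates and overestimates of positive quantities asymmetrically.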
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex. Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs. We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We present a robust online optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number of rounds k, unknown to the learner.
arXiv Detail & Related papers (2024-08-12T17:08:31Z) - Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models [9.65010022854885]
We show that adversarial risk is equivalent to the risk induced by a distributional adversarial attack under certain smoothness conditions.
To evaluate the generalization performance of the adversarial estimator, we study the adversarial excess risk.
arXiv Detail & Related papers (2023-09-02T00:51:19Z) - Expressive Losses for Verified Robustness via Convex Combinations [67.54357965665676]
We study the relationship between the over-approximation coefficient and performance profiles across different expressive losses.
We show that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-05-23T12:20:29Z) - General Loss Functions Lead to (Approximate) Interpolation in High Dimensions [5.653716495767272]
We provide a unified framework that applies to a general family of convex losses across binary and multiclass settings. Specifically, we show that the implicit bias approximates (but is not exactly equal to) the minimum-norm interpolation in high dimensions.
arXiv Detail & Related papers (2023-03-13T21:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.