Variation-Bounded Loss for Noise-Tolerant Learning
- URL: http://arxiv.org/abs/2511.12143v1
- Date: Sat, 15 Nov 2025 10:15:29 GMT
- Title: Variation-Bounded Loss for Noise-Tolerant Learning
- Authors: Jialiang Wang, Xiong Zhou, Xianming Liu, Gangfeng Hu, Deming Zhai, Junjun Jiang, Haoliang Li
- Abstract summary: We introduce the Variation Ratio as a novel property related to the robustness of loss functions. We propose a new family of robust loss functions, termed Variation-Bounded Loss (VBL), which is characterized by a bounded variation ratio.
- Score: 105.20373602308284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mitigating the negative impact of noisy labels has been a perennial issue in supervised learning. Robust loss functions have emerged as a prevalent solution to this problem. In this work, we introduce the Variation Ratio as a novel property related to the robustness of loss functions, and propose a new family of robust loss functions, termed Variation-Bounded Loss (VBL), which is characterized by a bounded variation ratio. We provide theoretical analyses of the variation ratio, proving that a smaller variation ratio leads to better robustness. Furthermore, we show that the variation ratio provides a feasible way to relax the symmetric condition and offers a more concise path to the asymmetric condition. Based on the variation ratio, we reformulate several commonly used loss functions into a variation-bounded form for practical applications. Experiments on various datasets demonstrate the effectiveness and flexibility of our approach.
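The abstract does not spell out how the variation ratio is defined, so no code here can claim to implement VBL itself. As background, though, the symmetric condition that the abstract says the variation ratio relaxes is easy to check numerically: a loss is symmetric when its sum over all label classes is constant in the prediction (Ghosh et al., 2017). A minimal numpy sketch, illustration only and not the paper's method, showing that MAE satisfies this while cross entropy does not:

```python
import numpy as np

def mae_loss(p, k):
    """MAE on the probability simplex: sum_j |p_j - e_k,j| = 2 * (1 - p_k)."""
    return 2.0 * (1.0 - p[k])

def ce_loss(p, k):
    """Cross entropy: -log p_k."""
    return -np.log(p[k])

rng = np.random.default_rng(0)
K = 5
for name, loss in [("MAE", mae_loss), ("CE", ce_loss)]:
    # Symmetric condition: sum_k loss(p, k) should be a constant
    # independent of the prediction p for a noise-tolerant loss.
    sums = []
    for _ in range(3):
        logits = rng.normal(size=K)
        p = np.exp(logits) / np.exp(logits).sum()
        sums.append(sum(loss(p, k) for k in range(K)))
    print(name, "sum over classes:", np.round(sums, 4))
# MAE prints the constant 2K - 2 = 8.0 every time; the CE sum varies with p,
# so CE is not symmetric and is more sensitive to label noise.
```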
Related papers
- Fair Regression under Demographic Parity: A Unified Framework [12.36726423996741]
Our framework is applicable to a broad spectrum of regression tasks. We derive a novel characterization of the fair risk minimizer. We illustrate the method's versatility through detailed discussions.
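The summary is terse; one well-known concrete route to demographic parity in regression is post-processing predictions so that every group shares the same output distribution. A hypothetical quantile-matching sketch in that spirit (the `dp_postprocess` helper and its pooling rule are assumptions for illustration, not the paper's estimator):

```python
import numpy as np

def dp_postprocess(scores, groups):
    """Map each group's scores onto the pooled score distribution via
    quantile matching, so the output distribution is (approximately)
    identical across groups -- the demographic-parity requirement."""
    out = np.empty_like(scores, dtype=float)
    pooled = np.sort(scores)
    for g in np.unique(groups):
        idx = groups == g
        # Rank each score within its group, then read off the pooled quantile.
        ranks = scores[idx].argsort().argsort() / max(idx.sum() - 1, 1)
        out[idx] = np.quantile(pooled, ranks)
    return out

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, size=2000)
scores = rng.normal(loc=groups * 1.5, scale=1.0)   # group-dependent predictions
fair = dp_postprocess(scores, groups)
for g in (0, 1):   # group means diverge before post-processing, agree after
    print(g, round(scores[groups == g].mean(), 2), round(fair[groups == g].mean(), 2))
```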
arXiv Detail & Related papers (2026-01-15T17:41:28Z)
- Variational bagging: a robust approach for Bayesian uncertainty quantification [3.1932150827796675]
We introduce a variational bagging approach that integrates a bagging procedure with variational Bayes. We establish strong theoretical guarantees, including posterior contraction rates for general models. We illustrate our variational bagging method in numerical studies through applications to parametric models, finite mixture models, deep neural networks, and variational autoencoders (VAEs).
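A toy rendering of the idea under strong assumptions: Gaussian data with known unit noise variance, so the per-bag "variational" posterior is conjugate and exact, and the per-bag posteriors are aggregated as an equal-weight mixture. The prior variance, bag count, and aggregation rule are all illustrative choices; the paper's general recipe is richer. This only shows the bootstrap-then-aggregate skeleton:

```python
import numpy as np

def variational_bagging_mean(x, n_bags=50, prior_var=100.0, seed=0):
    """Bag a Gaussian posterior for the mean of N(mu, 1) data over bootstrap
    resamples, then combine the per-bag posteriors as a Gaussian mixture."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mus, vars_ = [], []
    for _ in range(n_bags):
        xb = rng.choice(x, size=n, replace=True)     # bootstrap resample
        post_var = 1.0 / (n + 1.0 / prior_var)       # conjugate update
        mus.append(post_var * xb.sum())
        vars_.append(post_var)
    mus, vars_ = np.array(mus), np.array(vars_)
    mix_mean = mus.mean()
    mix_var = (vars_ + mus**2).mean() - mix_mean**2  # mixture variance
    return mix_mean, np.sqrt(mix_var)

x = np.random.default_rng(2).normal(3.0, 1.0, size=100)
print(variational_bagging_mean(x))  # mean near 3.0; std widened by bootstrap spread
```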
arXiv Detail & Related papers (2025-11-25T18:24:17Z)
- Closing the Performance Gap in Biometric Cryptosystems: A Deeper Analysis on Unlinkable Fuzzy Vaults [3.092212810857262]
We identify unstable error correction capabilities, which are caused by variable feature set sizes and their influence on similarity thresholds. We propose a novel feature quantization method based on equal frequent intervals. The proposed approach significantly reduces the performance gap introduced by template protection.
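Quantization by "equal frequent intervals" reads as quantile-based binning: choose bin edges so each interval captures roughly the same number of training samples, rather than equal-width bins. A sketch under that reading (the vault-specific details, such as similarity thresholds, are omitted):

```python
import numpy as np

def equal_frequency_bins(values, n_bins):
    """Bin edges chosen so each interval holds roughly the same number
    of samples, i.e. quantile-based rather than equal-width bins."""
    return np.quantile(values, np.linspace(0, 1, n_bins + 1))

def quantize(values, edges):
    # Map each value to its interval index in [0, n_bins - 1].
    return np.clip(np.searchsorted(edges, values, side="right") - 1,
                   0, len(edges) - 2)

rng = np.random.default_rng(3)
feats = rng.lognormal(size=10_000)            # skewed feature distribution
edges = equal_frequency_bins(feats, n_bins=8)
codes = quantize(feats, edges)
print(np.bincount(codes))  # counts are near-uniform despite the skew
```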
arXiv Detail & Related papers (2025-06-27T15:57:58Z)
- Risk-Averse Reinforcement Learning with Itakura-Saito Loss [63.620958078179356]
Risk-averse agents choose policies that minimize risk, occasionally sacrificing expected value. We introduce a numerically stable and mathematically sound loss function based on the Itakura-Saito divergence for learning state-value and action-value functions. In the experimental section, we explore multiple scenarios, some with known analytical solutions, and show that the considered loss function outperforms the alternatives.
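For reference, the Itakura-Saito divergence is $d_{IS}(t \| p) = t/p - \log(t/p) - 1$, defined for positive arguments and zero exactly when the two agree. A sketch that implements it and minimizes it on a toy positive-valued regression problem (not the paper's value-learning setup):

```python
import numpy as np

def itakura_saito(pred, target, eps=1e-8):
    """Itakura-Saito divergence d_IS(target || pred) = t/p - log(t/p) - 1."""
    r = (target + eps) / (pred + eps)
    return r - np.log(r) - 1.0

rng = np.random.default_rng(4)
target = rng.uniform(0.5, 2.0, size=256)
theta = np.full_like(target, 5.0)              # positive initial predictions
print("initial IS loss:", round(itakura_saito(theta, target).mean(), 4))
for _ in range(1000):
    grad = (1.0 - target / theta) / theta      # d/dtheta [t/theta - log(t/theta) - 1]
    theta -= 0.2 * grad                        # plain gradient descent
print("final IS loss:  ", round(itakura_saito(theta, target).mean(), 4))  # near 0
```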
arXiv Detail & Related papers (2025-05-22T17:18:07Z)
- Conditional Temporal Neural Processes with Covariance Loss [19.805881561847492]
We introduce a novel loss function, Covariance Loss, which is conceptually equivalent to conditional neural processes. We conduct extensive sets of experiments on real-world datasets with state-of-the-art models.
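The summary leaves the loss unspecified; one plausible reading is a penalty that matches the second-order structure of predictions to that of targets. The sketch below is an assumed covariance-matching illustration, not the paper's exact formulation:

```python
import numpy as np

def covariance_loss(pred, target):
    """Illustrative covariance-matching penalty: compare the empirical
    covariance of predictions and targets, encouraging the model to
    reproduce dependencies between outputs, not just pointwise values."""
    pc = pred - pred.mean(axis=0)
    tc = target - target.mean(axis=0)
    cov_p = pc.T @ pc / len(pred)
    cov_t = tc.T @ tc / len(target)
    return np.mean((cov_p - cov_t) ** 2)

rng = np.random.default_rng(5)
t = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=512)
p_indep = rng.normal(size=(512, 2))        # right marginals, wrong dependence
print(round(covariance_loss(p_indep, t), 3))   # large: correlation not captured
print(round(covariance_loss(t, t), 3))         # 0.0: structure matched
```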
arXiv Detail & Related papers (2025-04-01T13:51:44Z)
- STATE: A Robust ATE Estimator of Heavy-Tailed Metrics for Variance Reduction in Online Controlled Experiments [22.32661807469984]
We develop a novel framework that integrates the Student's t-distribution with machine learning tools to fit heavy-tailed metrics.
By adopting a variational EM method to optimize the log-likelihood function, we can infer a robust solution that largely eliminates the negative impact of outliers.
Both simulations on synthetic data and long-term empirical results on the Meituan experiment platform demonstrate the effectiveness of our method.
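The t-distribution's robustness comes from its Gaussian scale-mixture representation: an EM fit automatically downweights outliers. A classic-EM sketch in that spirit, with fixed degrees of freedom and location/scale only (the paper's full variational-EM framework with machine-learned regressors is richer):

```python
import numpy as np

def fit_t_location(x, nu=3.0, n_iter=100):
    """EM for the location/scale of a Student-t with fixed dof `nu`.
    Outliers receive small latent weights w_i, making the fit robust."""
    mu, s2 = np.median(x), x.var()
    for _ in range(n_iter):
        w = (nu + 1.0) / (nu + (x - mu) ** 2 / s2)   # E-step: latent weights
        mu = np.sum(w * x) / np.sum(w)               # M-step: weighted mean
        s2 = np.sum(w * (x - mu) ** 2) / len(x)      # M-step: scale
    return mu, np.sqrt(s2)

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1.0, 990), rng.normal(80.0, 1.0, 10)])
print(round(x.mean(), 2), round(fit_t_location(x)[0], 2))  # ~0.8 vs ~0.0
```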
arXiv Detail & Related papers (2024-07-23T09:35:59Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising model robustness.
We introduce three robustness indicators and conduct experiments across diverse robustness datasets.
Our approach markedly enhances robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC), which can be applied to either risk-seeking or risk-averse policy optimization.
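In tabular form, an uncertainty Bellman equation is linear in the unknown uncertainties and can be solved in closed form. A schematic sketch (the transition model, local-variance terms, and the exact UBE form here are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def solve_ube(P_mean, local_var, gamma=0.9):
    """Tabular uncertainty Bellman equation U = local_var + gamma^2 * P U,
    solved exactly. `local_var` is the per-state epistemic variance
    contribution; the fixed point propagates it along the dynamics."""
    n = len(local_var)
    return np.linalg.solve(np.eye(n) - gamma**2 * P_mean, local_var)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # mean transition model
u = np.array([0.5, 2.0])                 # local uncertainty per state
print(solve_ube(P, u))                   # propagated value uncertainty
```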
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Convergence analysis of equilibrium methods for inverse problems [2.812291464467386]
We introduce implicit non-variational (INV) regularization, where approximate solutions are defined as solutions of $A^*(A x - y^\delta) + \alpha R(x) = 0$ for some regularization operator $R$. When the regularization operator is the gradient of a functional, INV reduces to classical variational regularization.
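With the simplest choice $R(x) = x$ (the gradient of $\tfrac{1}{2}\|x\|^2$), the INV equation becomes linear and coincides with classical Tikhonov regularization, which makes for a compact sketch:

```python
import numpy as np

def inv_regularized_solution(A, y_delta, alpha):
    """Solve A^T (A x - y_delta) + alpha * x = 0, i.e. the INV equation
    with R(x) = x, which reduces to Tikhonov regularization."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

rng = np.random.default_rng(7)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
y_delta = A @ x_true + 0.05 * rng.normal(size=50)      # noisy data
x_hat = inv_regularized_solution(A, y_delta, alpha=0.1)
print(round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```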
arXiv Detail & Related papers (2023-06-02T10:22:33Z)
- Expressive Losses for Verified Robustness via Convex Combinations [67.54357965665676]
We study the relationship between the over-approximation coefficient and performance profiles across different expressive losses.
We show that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
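The convex-combination idea can be shown with a tiny interval-bound-propagation (IBP) network: blend a cheap lower-end loss with the verified worst-case loss via a coefficient beta. In this sketch the clean loss stands in for the attacked loss the paper actually combines, so it is an approximation of the setup, not a reproduction:

```python
import numpy as np

rng = np.random.default_rng(8)
W1, b1 = rng.normal(size=(16, 4)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)) * 0.5, np.zeros(3)

def logits(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def ce(z, t):
    # Numerically stable cross entropy: logsumexp(z) - z[t].
    return np.log(np.exp(z - z.max()).sum()) + z.max() - z[t]

def ibp_worst_logits(x, eps, t):
    """Propagate the interval [x - eps, x + eps] through the 2-layer ReLU
    net, then build the pessimistic logit vector: lower bound for the
    true class, upper bounds for all others."""
    c, r = W1 @ x + b1, eps * np.abs(W1).sum(axis=1)
    lo, hi = np.maximum(c - r, 0.0), np.maximum(c + r, 0.0)
    m, s = (lo + hi) / 2, (hi - lo) / 2
    z_lo = W2 @ m + b2 - np.abs(W2) @ s
    z_hi = W2 @ m + b2 + np.abs(W2) @ s
    z = z_hi.copy()
    z[t] = z_lo[t]
    return z

x, t, eps, beta = rng.normal(size=4), 0, 0.05, 0.5
loss_clean = ce(logits(x), t)                      # stand-in for the attacked loss
loss_verified = ce(ibp_worst_logits(x, eps, t), t)
loss_cc = (1 - beta) * loss_clean + beta * loss_verified
print(round(loss_clean, 3), round(loss_verified, 3), round(loss_cc, 3))
```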
arXiv Detail & Related papers (2023-05-23T12:20:29Z)
- Variational Voxel Pseudo Image Tracking [127.46919555100543]
Uncertainty estimation is an important task for critical problems, such as robotics and autonomous driving.
We propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking.
arXiv Detail & Related papers (2023-02-12T13:34:50Z)
- Correcting Confounding via Random Selection of Background Variables [15.206717158865022]
We propose a novel criterion for identifying causal relationships based on the stability of the regression coefficients of X on Y.
We prove, subject to a symmetry assumption for the background influence, that V converges to zero if and only if X contains no causal drivers.
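A sketch of the stability idea: regress Y on X together with many randomly chosen background subsets and take the variance of X's coefficient as the statistic (the V above). The regression direction and exact statistic here are assumptions for illustration; the paper's construction may differ in detail:

```python
import numpy as np

def coefficient_stability(x, y, background, n_draws=200, k=3, seed=0):
    """Variance of X's regression coefficient across random choices of
    background covariates; small when the coefficient is stable."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_draws):
        cols = rng.choice(background.shape[1], size=k, replace=False)
        Z = np.column_stack([x, background[:, cols], np.ones(len(x))])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        coefs.append(beta[0])                      # coefficient of X
    return np.var(coefs)

rng = np.random.default_rng(9)
n = 1000
bg = rng.normal(size=(n, 10))
x_causal = rng.normal(size=n)
y = 2.0 * x_causal + bg[:, 0] + 0.5 * rng.normal(size=n)
print(coefficient_stability(x_causal, y, bg))    # small: coefficient is stable
x_spur = y + bg @ rng.normal(size=10)            # X driven by Y plus background
print(coefficient_stability(x_spur, y, bg))      # typically larger: coefficient shifts
```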
arXiv Detail & Related papers (2022-02-04T14:27:10Z)
- An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay [72.23433407017558]
We show that any loss function evaluated with non-uniformly sampled data can be transformed into an equivalent loss function evaluated with uniformly sampled data.
Surprisingly, we find in some environments PER can be replaced entirely by this new loss function without impact on empirical performance.
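The expectation-level identity behind this is elementary: sampling index i with probability p_i and using loss l_i matches uniform sampling with the reweighted loss N * p_i * l_i. A quick numerical check (the priorities here are illustrative; the paper's result concerns the induced gradients more generally):

```python
import numpy as np

rng = np.random.default_rng(10)
N = 1000
td_errors = rng.exponential(size=N)            # per-transition losses l_i
p = td_errors / td_errors.sum()                # PER-style priorities

prioritized = np.sum(p * td_errors)            # E_{i~p}[l_i]
uniform_equiv = np.mean(N * p * td_errors)     # E_uniform[N * p_i * l_i]
print(round(prioritized, 6), round(uniform_equiv, 6))  # identical by construction

# Monte-Carlo check with actual sampling:
idx_p = rng.choice(N, size=200_000, p=p)       # prioritized sampling
idx_u = rng.integers(0, N, size=200_000)       # uniform sampling
print(round(td_errors[idx_p].mean(), 4),
      round((N * p * td_errors)[idx_u].mean(), 4))     # agree up to MC noise
```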
arXiv Detail & Related papers (2020-07-12T17:45:24Z)