Introducing Fractional Classification Loss for Robust Learning with Noisy Labels
- URL: http://arxiv.org/abs/2508.06346v1
- Date: Fri, 08 Aug 2025 14:20:52 GMT
- Title: Introducing Fractional Classification Loss for Robust Learning with Noisy Labels
- Authors: Mert Can Kurucu, Tufan Kumbasar, İbrahim Eksin, Müjde Güzelkaya
- Abstract summary: We introduce Fractional Classification Loss (FCL), an adaptive robust loss that automatically calibrates its robustness to label noise during training. FCL achieves state-of-the-art results without the need for manual hyperparameter tuning.
- Score: 2.312414367096445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust loss functions are crucial for training deep neural networks in the presence of label noise, yet existing approaches require extensive, dataset-specific hyperparameter tuning. In this work, we introduce Fractional Classification Loss (FCL), an adaptive robust loss that automatically calibrates its robustness to label noise during training. Built within the active-passive loss framework, FCL employs the fractional derivative of the Cross-Entropy (CE) loss as its active component and the Mean Absolute Error (MAE) as its passive loss component. With this formulation, we demonstrate that the fractional derivative order $\mu$ spans a family of loss functions that interpolate between MAE-like robustness and CE-like fast convergence. Furthermore, we integrate $\mu$ into the gradient-based optimization as a learnable parameter and automatically adjust it to optimize the trade-off between robustness and convergence speed. We reveal that FCL's unique property establishes a critical trade-off that enables the stable learning of $\mu$: lower log penalties on difficult or mislabeled examples improve robustness but impose higher penalties on easy or clean data, reducing model confidence in them. Consequently, FCL can dynamically reshape its loss landscape to achieve effective classification performance under label noise. Extensive experiments on benchmark datasets show that FCL achieves state-of-the-art results without the need for manual hyperparameter tuning.
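The abstract describes a family of losses, indexed by a fractional order $\mu$, that interpolates between CE-like fast convergence and MAE-like robustness inside the active-passive framework. The paper's exact fractional-derivative construction is not reproduced here; as a minimal sketch, the generalized form $(1 - p^\mu)/\mu$ is used as a stand-in for the active component, since it exhibits the same endpoints ($\mu \to 0$ recovers CE, $\mu = 1$ gives an MAE-like term). The function name and the $\alpha$, $\beta$ weights are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def fcl_like_loss(p_true, mu, alpha=1.0, beta=1.0):
    """Hedged sketch of an adaptive active-passive loss.

    p_true : predicted probability of the true class.
    mu     : interpolation order (learnable in FCL). The paper derives its
             active term from a fractional derivative of CE, which is NOT
             reproduced here; (1 - p^mu)/mu is a stand-in with the same
             limiting behavior: mu -> 0 recovers CE (-log p), while
             mu = 1 yields an MAE-like 1 - p.
    """
    active = (1.0 - p_true**mu) / mu    # CE <-> MAE interpolation (active part)
    passive = 2.0 * (1.0 - p_true)      # MAE against a one-hot label (passive part)
    return alpha * active + beta * passive
```

In the paper, $\mu$ is treated as a learnable parameter and updated by gradient descent alongside the network weights; in this sketch it is simply an argument, so the interpolation endpoints can be checked directly.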
Related papers
- DropoutTS: Sample-Adaptive Dropout for Robust Time Series Forecasting [59.868414584142336]
DropoutTS is a model-agnostic plugin that shifts the paradigm from "what" to "how much" to learn. It maps noise to adaptive dropout rates, selectively suppressing spurious fluctuations while preserving fine-grained fidelity.
arXiv Detail & Related papers (2026-01-29T13:49:20Z) - Noise-Robust Tiny Object Localization with Flows [63.60972031108944]
We propose a noise-robust localization framework leveraging normalizing flows for flexible error modeling and uncertainty-guided optimization. Our method captures complex, non-Gaussian prediction distributions through flow-based error modeling, enabling robust learning under noisy supervision. An uncertainty-aware gradient modulation mechanism further suppresses learning from high-uncertainty, noise-prone samples, mitigating overfitting while stabilizing training.
arXiv Detail & Related papers (2026-01-02T09:16:55Z) - Winning the Pruning Gamble: A Unified Approach to Joint Sample and Token Pruning for Efficient Supervised Fine-Tuning [71.30276778807068]
We propose a unified framework that strategically coordinates sample pruning and token pruning. Q-Tuning achieves a +38% average improvement over the full-data SFT baseline using only 12.5% of the original training data.
arXiv Detail & Related papers (2025-09-28T13:27:38Z) - FedEFC: Federated Learning Using Enhanced Forward Correction Against Noisy Labels [5.885238773559016]
Federated Learning (FL) is a powerful framework for privacy-preserving distributed learning. Handling noisy labels in FL remains a major challenge due to heterogeneous data distributions and communication constraints. We propose FedEFC, a novel method designed to tackle the impact of noisy labels in FL.
arXiv Detail & Related papers (2025-04-08T02:14:50Z) - Active Negative Loss: A Robust Framework for Learning with Noisy Labels [26.853357479214004]
Noise-robust loss functions offer an effective solution for enhancing learning in the presence of label noise. We introduce a novel loss function class, termed Normalized Negative Loss Functions (NNLFs), which serve as passive loss functions within the APL framework. In non-symmetric noise scenarios, we propose an entropy-based regularization technique to mitigate vulnerability to label imbalance.
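This summary, like the FCL abstract above, builds on the active-passive loss (APL) framework: a normalized "active" loss combined with a "passive" loss. As a minimal sketch, the classic instantiation of normalized cross-entropy plus MAE is shown below; the paper's NNLF passive losses are not reproduced here, and the function name and weights are illustrative assumptions.

```python
import numpy as np

def apl_loss(probs, y, alpha=1.0, beta=1.0):
    """Sketch of the generic APL recipe: normalized active loss + passive loss.

    probs : 1-D array of predicted class probabilities (sums to 1).
    y     : integer index of the (possibly noisy) label.
    The active part is normalized CE: CE for the given label divided by the
    sum of CE over all possible labels, which bounds it in (0, 1).
    The passive part is MAE between probs and the one-hot label.
    """
    probs = np.clip(probs, 1e-7, 1.0)                   # avoid log(0)
    nce = -np.log(probs[y]) / -np.log(probs).sum()      # active: normalized CE
    mae = np.abs(np.eye(len(probs))[y] - probs).sum()   # passive: MAE
    return alpha * nce + beta * mae
```

The normalization is what makes the active term robust: it caps the penalty that any single (possibly mislabeled) example can contribute, while the passive term keeps gradients informative on confident predictions.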
arXiv Detail & Related papers (2024-12-03T11:00:15Z) - Advancing RVFL networks: Robust classification with the HawkEye loss function [0.0]
We propose incorporating the HawkEye loss (H-loss) function into the Random Vector Functional Link (RVFL) framework.
The H-loss function features desirable mathematical properties, including smoothness and boundedness, while simultaneously incorporating an insensitive zone.
The proposed H-RVFL model's effectiveness is validated through experiments on $40$ datasets from UCI and KEEL repositories.
arXiv Detail & Related papers (2024-10-01T08:48:05Z) - Training More Robust Classification Model via Discriminative Loss and Gaussian Noise Injection [7.535952418691443]
We introduce a loss function applied at the penultimate layer that explicitly enforces intra-class compactness. We also propose a class-wise feature alignment mechanism that brings noisy data clusters closer to their clean counterparts. Our approach significantly reinforces model robustness to various perturbations while maintaining high accuracy on clean data.
arXiv Detail & Related papers (2024-05-28T18:10:45Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
Four SOTA robust loss functions are integrated with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity [55.29408396918968]
We study a family of loss functions named label-distributionally robust (LDR) losses for multi-class classification.
Our contributions cover both consistency and robustness, establishing the top-$k$ consistency of LDR losses for multi-class classification.
We propose a new adaptive LDR loss that automatically adapts an individualized temperature parameter to the noise degree of each instance's class label.
arXiv Detail & Related papers (2021-12-30T00:27:30Z) - Learning Adaptive Loss for Robust Learning with Noisy Labels [59.06189240645958]
Robust loss functions are an important strategy for handling the robust learning issue.
We propose a meta-learning method capable of robust hyperparameter tuning.
Four kinds of SOTA robust loss functions are integrated with our algorithm, and experiments substantiate its general applicability and effectiveness.
arXiv Detail & Related papers (2020-02-16T00:53:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.