Ordinal Adaptive Correction: A Data-Centric Approach to Ordinal Image Classification with Noisy Labels
- URL: http://arxiv.org/abs/2509.02351v1
- Date: Tue, 02 Sep 2025 14:17:16 GMT
- Title: Ordinal Adaptive Correction: A Data-Centric Approach to Ordinal Image Classification with Noisy Labels
- Authors: Alireza Sedighi Moghaddam, Mohammad Reza Mohammadi
- Abstract summary: ORDinal Adaptive Correction (ORDAC) is proposed for adaptive correction of noisy labels. During training, ORDAC dynamically adjusts the mean and standard deviation of the label distribution for each sample. Results show that ORDAC and its extended versions lead to significant improvements in model performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Labeled data is a fundamental component in training supervised deep learning models for computer vision tasks. However, the labeling process, especially for ordinal image classification where class boundaries are often ambiguous, is prone to error and noise. Such label noise can significantly degrade the performance and reliability of machine learning models. This paper addresses the problem of detecting and correcting label noise in ordinal image classification tasks. To this end, a novel data-centric method called ORDinal Adaptive Correction (ORDAC) is proposed for adaptive correction of noisy labels. The proposed approach leverages the capabilities of Label Distribution Learning (LDL) to model the inherent ambiguity and uncertainty present in ordinal labels. During training, ORDAC dynamically adjusts the mean and standard deviation of the label distribution for each sample. Rather than discarding potentially noisy samples, this approach aims to correct them and make optimal use of the entire training dataset. The effectiveness of the proposed method is evaluated on benchmark datasets for age estimation (Adience) and disease severity detection (Diabetic Retinopathy) under various asymmetric Gaussian noise scenarios. Results show that ORDAC and its extended versions (ORDAC_C and ORDAC_R) lead to significant improvements in model performance. For instance, on the Adience dataset with 40% noise, ORDAC_R reduced the mean absolute error from 0.86 to 0.62 and increased the recall metric from 0.37 to 0.49. The method also demonstrated its effectiveness in correcting intrinsic noise present in the original datasets. This research indicates that adaptive label correction using label distributions is an effective strategy to enhance the robustness and accuracy of ordinal classification models in the presence of noisy data.
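The abstract describes the core mechanism only at a high level: each sample's ordinal label is modeled as a Gaussian distribution over the class indices, and during training the stored mean and standard deviation are adjusted toward what the model predicts, rather than the sample being discarded. The paper's exact update rule is not given in the abstract; the sketch below is a minimal illustration of that idea, where the function names and hyperparameters (`lr`, `min_std`, `max_std`) are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def gaussian_label_distribution(mean, std, num_classes):
    """Discretized Gaussian over ordinal class indices, normalized to sum to 1.

    This is the standard Label Distribution Learning (LDL) encoding of an
    ordinal label: probability mass concentrates on the labeled class and
    decays smoothly over neighboring classes.
    """
    classes = np.arange(num_classes)
    logits = -0.5 * ((classes - mean) / std) ** 2
    probs = np.exp(logits)
    return probs / probs.sum()

def adaptive_update(mean, std, predicted, lr=0.5, min_std=0.5, max_std=2.0):
    """Move a sample's stored label mean/std toward the model's prediction.

    `predicted` is the model's softmax output over the ordinal classes for
    this sample. Instead of discarding a suspect sample, its label
    distribution parameters are nudged toward the first and second moments
    of the prediction, with the std clipped to a plausible range.
    """
    classes = np.arange(len(predicted))
    pred_mean = float(np.dot(classes, predicted))
    pred_std = float(np.sqrt(np.dot((classes - pred_mean) ** 2, predicted)))
    new_mean = (1 - lr) * mean + lr * pred_mean
    new_std = float(np.clip((1 - lr) * std + lr * pred_std, min_std, max_std))
    return new_mean, new_std
```

For example, if a sample carries a noisy label of class 5 but the model consistently places its probability mass around class 2, the stored mean moves partway toward 2 each correction step, so the training target gradually recovers from the noise while the sample remains in the training set.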
Related papers
- Sharpness-aware Dynamic Anchor Selection for Generalized Category Discovery [61.694524826522205]
Given some labeled data of known classes, GCD aims to cluster unlabeled data that contain both known and unknown classes. Large pre-trained models have a preference for some specific visual patterns, resulting in encoding spurious correlations for unlabeled data. We propose a novel method, which contains two modules: Loss Sharpness Penalty (LSP) and Dynamic Anchor Selection (DAS).
arXiv Detail & Related papers (2025-12-15T02:24:06Z) - Optimal Labeler Assignment and Sampling for Active Learning in the Presence of Imperfect Labels [13.796089124499318]
We propose a novel AL framework to construct a robust classification model by minimizing noise levels. Our approach includes an assignment model that optimally assigns query points to labelers, aiming to minimize the maximum possible noise within each cycle. Our experiments demonstrate that our approach significantly improves classification performance compared to several benchmark methods.
arXiv Detail & Related papers (2025-12-14T23:06:37Z) - Set a Thief to Catch a Thief: Combating Label Noise through Noisy Meta Learning [6.68999525326685]
Learning from noisy labels (LNL) aims to train high-performance deep models using noisy datasets. We propose a novel noisy meta label correction framework STCT, which counterintuitively uses noisy data to correct label noise. STCT achieves 96.9% label correction and 95.2% classification performance on CIFAR-10 with 80% symmetric noise.
arXiv Detail & Related papers (2025-02-22T05:58:01Z) - Efficient Adaptive Label Refinement for Label Noise Learning [14.617885790129336]
We propose Adaptive Label Refinement (ALR) to avoid incorrect labels and thoroughly learn from clean samples. ALR is simple and efficient, requiring no prior knowledge of noise or auxiliary datasets. We validate ALR's effectiveness through experiments on benchmark datasets with artificial label noise (CIFAR-10/100) and real-world datasets with inherent noise (ANIMAL-10N, Clothing1M, WebVision).
arXiv Detail & Related papers (2025-02-01T09:58:08Z) - Fair-OBNC: Correcting Label Noise for Fairer Datasets [9.427445881721814]
Biases in the training data are sometimes related to label noise.
Models trained on such biased data may perpetuate or even aggravate the biases with respect to sensitive information.
We propose Fair-OBNC, a label noise correction method with fairness considerations.
arXiv Detail & Related papers (2024-10-08T17:18:18Z) - Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z) - Systematic analysis of the impact of label noise correction on ML Fairness [0.0]
We develop an empirical methodology to evaluate the effectiveness of label noise correction techniques in ensuring the fairness of models trained on biased datasets.
Our results suggest that the Hybrid Label Noise Correction method achieves the best trade-off between predictive performance and fairness.
arXiv Detail & Related papers (2023-06-28T08:08:14Z) - Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z) - Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z) - S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR-10/100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z) - Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
arXiv Detail & Related papers (2021-06-01T01:32:03Z) - A Self-Refinement Strategy for Noise Reduction in Grammatical Error Correction [54.569707226277735]
Existing approaches for grammatical error correction (GEC) rely on supervised learning with manually created GEC datasets.
There is a non-negligible amount of "noise" where errors were inappropriately edited or left uncorrected.
We propose a self-refinement method where the key idea is to denoise these datasets by leveraging the prediction consistency of existing models.
arXiv Detail & Related papers (2020-10-07T04:45:09Z) - Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.