Category-Adaptive Label Discovery and Noise Rejection for Multi-label
Image Recognition with Partial Positive Labels
- URL: http://arxiv.org/abs/2211.07846v1
- Date: Tue, 15 Nov 2022 02:11:20 GMT
- Title: Category-Adaptive Label Discovery and Noise Rejection for Multi-label
Image Recognition with Partial Positive Labels
- Authors: Tao Pu, Qianru Lao, Hefeng Wu, Tianshui Chen, Liang Lin
- Abstract summary: Training multi-label models with partial positive labels (MLR-PPL) attracts increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
- Score: 78.88007892742438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a promising solution for reducing annotation cost, training multi-label
models with partial positive labels (MLR-PPL), in which merely a few positive
labels are known while the others are missing, attracts increasing attention. Due to
the absence of any negative labels, previous works regard unknown labels as
negative and adopt traditional MLR algorithms. To reject noisy labels, recent
works regard large-loss samples as noise but ignore the semantic correlation
among different multi-label images. In this work, we propose to explore semantic
correlation among different images to facilitate the MLR-PPL task.
Specifically, we design a unified framework, Category-Adaptive Label Discovery
and Noise Rejection, that discovers unknown labels and rejects noisy labels for
each category in an adaptive manner. The framework consists of two
complementary modules: (1) the Category-Adaptive Label Discovery module first
measures the semantic similarity between positive samples and then complements
unknown labels with high similarities; (2) the Category-Adaptive Noise Rejection
module first computes sample weights based on semantic similarities across
different samples and then discards noisy labels with low weights. Besides, we
propose a novel category-adaptive threshold updating strategy that adaptively adjusts
the thresholds, avoiding the time-consuming manual tuning process. Extensive
experiments demonstrate that our proposed method consistently outperforms
current leading algorithms.
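The abstract does not spell out the exact similarity measure or update rule, but the two modules and the threshold updating can be sketched roughly as follows. This is a minimal illustration under assumed choices (global image features, per-category prototype vectors, cosine similarity, an EMA-style threshold update); the function names are illustrative and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def category_similarity(feats: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between image features [N, D] and per-category
    prototype vectors [C, D] -> [N, C] (assumed similarity measure)."""
    return F.normalize(feats, dim=-1) @ F.normalize(prototypes, dim=-1).t()

def discover_unknown_labels(sim, targets, disc_thresh):
    """Category-Adaptive Label Discovery (sketch): complement unknown labels
    whose similarity exceeds that category's discovery threshold.
    sim: [N, C]; targets: [N, C] with 1 = known positive, 0 = unknown;
    disc_thresh: [C] per-category thresholds."""
    pseudo_pos = (targets == 0) & (sim > disc_thresh)  # thresholds broadcast over the batch
    return torch.where(pseudo_pos, torch.ones_like(targets), targets)

def rejection_weights(sim, targets, rej_thresh):
    """Category-Adaptive Noise Rejection (sketch): zero out the weight of known
    positive labels whose similarity falls below the rejection threshold."""
    weights = torch.ones_like(sim)
    weights[(targets == 1) & (sim < rej_thresh)] = 0.0
    return weights

def update_thresholds(thresh, sim, targets, momentum=0.9):
    """Category-adaptive threshold updating (sketch): move each category's
    threshold toward the mean similarity of its currently known positives,
    instead of hand-tuning a single global threshold."""
    for c in range(thresh.numel()):
        pos = targets[:, c] == 1
        if pos.any():
            thresh[c] = momentum * thresh[c] + (1 - momentum) * sim[pos, c].mean()
    return thresh
```

Under these assumptions, training would combine the complemented targets and the rejection weights in a weighted multi-label loss, e.g. `F.binary_cross_entropy_with_logits(logits, pseudo_targets, weight=weights)`.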
Related papers
- Multi-Label Noise Transition Matrix Estimation with Label Correlations:
Theory and Algorithm [73.94839250910977]
Noisy multi-label learning has garnered increasing attention due to the challenges posed by collecting large-scale accurate labels.
The introduction of transition matrices can help model multi-label noise and enable the development of statistically consistent algorithms.
We propose a novel estimator that leverages label correlations without the need for anchor points or precise fitting of noisy class posteriors.
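For context, the transition-matrix mechanism this summary refers to can be illustrated with standard forward loss correction; the paper's own estimator (which exploits label correlations and needs no anchor points) is not reproduced here, and the per-class binary formulation and function name below are assumptions.

```python
import torch

def forward_corrected_bce(logits, noisy_targets, T):
    """Forward loss correction with per-class 2x2 noise transition matrices.

    logits:        [N, C] raw scores for C labels
    noisy_targets: [N, C] observed (possibly noisy) binary labels
    T:             [C, 2, 2] with T[c, i, j] = P(noisy label = j | true label = i)
    """
    p_clean = torch.sigmoid(logits)                              # P(true label = 1)
    # Probability that the *noisy* label is 1 under the noise model.
    p_noisy = (1 - p_clean) * T[:, 0, 1] + p_clean * T[:, 1, 1]
    p_noisy = p_noisy.clamp(1e-6, 1 - 1e-6)
    loss = -(noisy_targets * p_noisy.log() + (1 - noisy_targets) * (1 - p_noisy).log())
    return loss.mean()
```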
arXiv Detail & Related papers (2023-09-22T08:35:38Z) - Partial Label Supervision for Agnostic Generative Noisy Label Learning [18.29334728940232]
Noisy label learning has been tackled with both discriminative and generative approaches.
We propose a novel framework for generative noisy label learning that addresses these challenges.
arXiv Detail & Related papers (2023-08-02T14:48:25Z) - Bridging the Gap between Model Explanations in Partially Annotated
Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z) - PASS: Peer-Agreement based Sample Selection for training with Noisy Labels [16.283722126438125]
The prevalence of noisy-label samples poses a significant challenge in deep learning, inducing overfitting effects.
Current methodologies often rely on the small-loss hypothesis or feature-based selection to separate noisy- and clean-label samples.
We propose a new noisy-label detection method, termed Peer-Agreement based Sample Selection (PASS), to address this problem.
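The small-loss hypothesis mentioned above is easy to illustrate; the sketch below shows that generic baseline recipe, not the proposed peer-agreement (PASS) method itself, and `keep_ratio` is an assumed hyperparameter (e.g. one minus an estimated noise rate).

```python
import torch

def small_loss_selection(losses: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Generic small-loss selection: treat the keep_ratio fraction of samples
    with the smallest per-sample loss as (probably) clean.

    losses:     [N] per-sample training losses
    keep_ratio: fraction of the batch assumed clean
    Returns a boolean mask over the batch marking the selected samples.
    """
    k = max(1, int(keep_ratio * losses.numel()))
    idx = torch.argsort(losses)[:k]                 # indices of the k smallest losses
    mask = torch.zeros_like(losses, dtype=torch.bool)
    mask[idx] = True
    return mask
```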
arXiv Detail & Related papers (2023-03-20T00:35:33Z) - Neighborhood Collective Estimation for Noisy Label Identification and
Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
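A rough sketch of the neighborhood idea, under assumptions not stated in the summary (cosine similarity over penultimate-layer features, a fixed k, and reliability measured as the neighbors' average probability on the observed label); it is not the authors' exact estimator.

```python
import torch
import torch.nn.functional as F

def neighborhood_reliability(feats, probs, labels, k=10):
    """Estimate label reliability from feature-space nearest neighbors (sketch).

    feats:  [N, D] penultimate-layer features
    probs:  [N, K] softmax predictions over K classes
    labels: [N]    observed (possibly noisy) class indices (long tensor)
    Returns [N] scores: the average probability the k nearest neighbors assign
    to each sample's observed label; low scores suggest a noisy label.
    """
    sim = F.normalize(feats, dim=-1) @ F.normalize(feats, dim=-1).t()   # [N, N]
    sim.fill_diagonal_(-float("inf"))               # exclude the sample itself
    nn_idx = sim.topk(k, dim=1).indices             # [N, k] neighbor indices
    neighbor_probs = probs[nn_idx]                  # [N, k, K]
    return neighbor_probs.mean(dim=1).gather(1, labels.view(-1, 1)).squeeze(1)
```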
arXiv Detail & Related papers (2022-08-05T14:47:22Z) - On the Effects of Different Types of Label Noise in Multi-Label Remote
Sensing Image Classification [1.6758573326215689]
The development of accurate methods for multi-label classification (MLC) of remote sensing (RS) images is one of the most important research topics in RS.
The use of deep neural networks, which require a large number of reliable training images annotated by multiple land-cover class labels (multi-labels), has become popular in RS.
In this paper, we investigate three different noise-robust computer vision (CV) single-label classification (SLC) methods and adapt them to be robust to multi-label noise scenarios in RS.
arXiv Detail & Related papers (2022-07-28T09:38:30Z) - Joint Class-Affinity Loss Correction for Robust Medical Image
Segmentation with Noisy Labels [22.721870430220598]
Noisy labels prevent medical image segmentation algorithms from learning precise semantic correlations.
We present a novel perspective on noise mitigation that incorporates both pixel-wise and pair-wise manners.
We propose a robust Joint Class-Affinity (JCAS) framework to combat label noise issues in medical image segmentation.
arXiv Detail & Related papers (2022-06-16T08:19:33Z) - Dual-Perspective Semantic-Aware Representation Blending for Multi-Label
Image Recognition with Partial Labels [70.36722026729859]
We propose a dual-perspective semantic-aware representation blending (DSRB) that blends multi-granularity category-specific semantic representation across different images.
The proposed DSRB consistently outperforms current state-of-the-art algorithms on all label proportion settings.
arXiv Detail & Related papers (2022-05-26T00:33:44Z) - A Theory-Driven Self-Labeling Refinement Method for Contrastive
Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
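The positives-from-crops setup described above corresponds to the standard NT-Xent contrastive loss, sketched below; the paper's self-labeling refinement of these assignments is not reproduced, and the temperature value is an assumed default.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard contrastive (NT-Xent) loss over two augmented crops per image.
    z1, z2: [N, D] embeddings of two crops of the same N images; crop pairs are
    treated as positives, all other crops in the batch as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # [2N, D]
    sim = z @ z.t() / temperature                         # [2N, 2N] similarity logits
    n = z1.size(0)
    sim.fill_diagonal_(-1e9)                              # mask out self-pairs
    # The positive for sample i is its other crop: i+n (first half) or i-n (second half).
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, pos)
```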
arXiv Detail & Related papers (2021-06-28T14:24:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.