Meta Objective Guided Disambiguation for Partial Label Learning
- URL: http://arxiv.org/abs/2208.12459v2
- Date: Fri, 22 Dec 2023 07:42:59 GMT
- Title: Meta Objective Guided Disambiguation for Partial Label Learning
- Authors: Bo-Shi Zou, Ming-Kun Xie, Sheng-Jun Huang
- Abstract summary: We propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD).
MoGD aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set.
The proposed method can be easily implemented with various deep networks and the ordinary SGD optimizer.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial label learning (PLL) is a typical weakly supervised learning
framework, where each training instance is associated with a candidate label
set, among which only one label is valid. To solve PLL problems, typical
methods try to disambiguate the candidate sets by either using prior
knowledge, such as structure information of training data, or refining model
outputs in a self-training manner. Unfortunately, these methods often fail to
obtain a favorable performance due to the lack of prior information or
unreliable predictions in the early stage of model training. In this paper, we
propose a novel framework for partial label learning with meta objective guided
disambiguation (MoGD), which aims to recover the ground-truth label from
the candidate label set by solving a meta objective on a small validation set.
Specifically, to alleviate the negative impact of false positive labels, we
re-weight each candidate label based on the meta loss on the validation set.
Then, the classifier is trained by minimizing the weighted cross entropy loss.
The proposed method can be easily implemented by using various deep networks
with the ordinary SGD optimizer. Theoretically, we prove the convergence
property of the meta objective and derive estimation error bounds for the
proposed method. Extensive experiments on various benchmark datasets and
real-world PLL datasets demonstrate that the proposed method can achieve
competitive performance compared with state-of-the-art methods.
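The abstract sketches a concrete loop: weight each candidate label by how much it helps a meta objective on a small clean validation set, then train the classifier with the resulting weighted cross entropy. Below is a minimal PyTorch sketch of that idea using a one-step lookahead in the style of learning-to-reweight methods; the functional MLP, the single virtual SGD step, and names like `meta_reweight_step` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mlp_forward(params, x):
    """Tiny functional MLP so a hypothetically updated parameter set can be evaluated."""
    w1, b1, w2, b2 = params
    return F.linear(torch.relu(F.linear(x, w1, b1)), w2, b2)

def meta_reweight_step(params, x, cand_mask, x_val, y_val, inner_lr=0.1):
    """One-step lookahead (assumption): weight each candidate label by how much
    up-weighting it would reduce the validation (meta) loss.

    cand_mask: (B, C) binary matrix marking each instance's candidate labels.
    """
    eps = torch.zeros_like(cand_mask, dtype=torch.float32, requires_grad=True)
    logp = F.log_softmax(mlp_forward(params, x), dim=1)
    inner_loss = (eps * (-logp) * cand_mask).sum()
    # Differentiable virtual SGD step: params_new is a function of eps.
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    params_new = [p - inner_lr * g for p, g in zip(params, grads)]
    # Meta objective: ordinary cross entropy on the small clean validation set.
    val_loss = F.cross_entropy(mlp_forward(params_new, x_val), y_val)
    eps_grad = torch.autograd.grad(val_loss, eps)[0]
    w = torch.clamp(-eps_grad, min=0.0) * cand_mask  # keep only helpful candidates
    return (w / w.sum(dim=1, keepdim=True).clamp(min=1e-12)).detach()

def train_step(params, optimizer, x, cand_mask, x_val, y_val):
    w = meta_reweight_step(params, x, cand_mask, x_val, y_val)
    logp = F.log_softmax(mlp_forward(params, x), dim=1)
    loss = -(w * logp).sum(dim=1).mean()  # weighted cross entropy over candidates
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `params` would be a list of leaf tensors with `requires_grad=True` passed straight to `torch.optim.SGD(params, lr=...)`, consistent with the abstract's claim that ordinary SGD suffices; if no candidate in a row receives positive meta weight, that instance simply drops out of the batch.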
Related papers
Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach (2024-03-01)
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class. Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.

Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning (2023-12-18)
In partial label learning (PLL), each instance is associated with a set of candidate labels, among which only one is the ground truth. To help these mislabeled samples "appeal," we propose the first appeal-based framework.

Robust Representation Learning for Unreliable Partial Label Learning (2023-08-31)
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground truth. When the candidate sets themselves may be unreliable, the problem becomes Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels. We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to effectively fortify the model against unreliable partial labels; a generic contrastive-loss sketch follows.
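The summary names unreliability-robust contrastive learning without spelling out the loss. For orientation only, here is a generic SimCLR-style InfoNCE term over two augmented views; it is a stand-in for whatever URRL actually optimizes, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Generic InfoNCE between two augmented views of the same batch.

    z1, z2: (B, D) embeddings; row i of z1 and row i of z2 are views of
    the same instance, so the positives sit on the diagonal.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```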
Partial-Label Regression (2023-06-15)
Partial-label learning is a weakly supervised setting that allows each training example to be annotated with a set of candidate labels. Previous studies on partial-label learning focused only on the classification setting, where candidate labels are all discrete. In this paper, we make the first attempt to investigate partial-label regression, where each training example is annotated with a set of real-valued candidate labels; two naive baseline losses are sketched below.
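The abstract only defines the setting (real-valued candidate labels), so the following shows two naive losses one might start from: averaging the squared error over all candidates, and fitting only the closest candidate. Both are assumptions for illustration, not the paper's proposed estimators.

```python
import torch

def candidate_avg_mse(preds, cand_vals, cand_mask):
    """Average squared error over each example's real-valued candidate labels.

    preds: (B,) predictions; cand_vals: (B, K) candidates padded to length K;
    cand_mask: (B, K) with 1 for real candidates and 0 for padding.
    """
    se = (preds.unsqueeze(1) - cand_vals) ** 2
    return ((se * cand_mask).sum(1) / cand_mask.sum(1).clamp(min=1)).mean()

def candidate_min_mse(preds, cand_vals, cand_mask):
    """Identification-style variant: only the closest candidate incurs loss."""
    se = (preds.unsqueeze(1) - cand_vals) ** 2
    se = se.masked_fill(cand_mask == 0, float("inf"))
    return se.min(dim=1).values.mean()
```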
A Deep Model for Partial Multi-Label Image Classification with Curriculum Based Disambiguation (2022-07-06)
We study the partial multi-label (PML) image classification problem. Existing PML methods typically design a disambiguation strategy to filter out noisy labels. We propose a deep model for PML that enhances representation and discrimination ability; one plausible curriculum filter is sketched below.
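The summary mentions curriculum-based disambiguation without details; a common reading is an easy-to-hard schedule that first trusts only high-confidence candidate labels and relaxes the threshold as training proceeds. The schedule below is hypothetical, not the paper's method.

```python
import torch

def curriculum_select(probs, cand_mask, epoch, max_epochs, t_hi=0.9, t_lo=0.5):
    """Hypothetical easy-to-hard filter for partial multi-label candidates.

    probs: (B, C) per-label sigmoid outputs; cand_mask: (B, C) candidate
    indicator. Early epochs keep only very confident candidates; the
    threshold decays linearly so harder labels are admitted later.
    """
    t = t_hi - (t_hi - t_lo) * min(epoch / max_epochs, 1.0)
    return (probs >= t).float() * cand_mask  # trusted candidate labels this epoch
```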
Few-shot Learning via Dependency Maximization and Instance Discriminant Analysis (2021-09-07)
We study the few-shot learning problem, where a model learns to recognize new objects with extremely few labeled examples per category. We propose a simple approach that exploits the unlabeled data accompanying the few-shot task to improve few-shot performance.
Progressive Identification of True Labels for Partial-Label Learning (2020-02-19)
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels, among which only one is the true label. Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data. This paper proposes a novel classifier-training framework that is flexible in both the model and the optimization algorithm; a minimal progressive re-weighting sketch follows.
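The summary describes progressively identifying the true label inside each candidate set; a minimal self-training version of that idea re-normalizes the model's own softmax over the candidate set to refresh per-label weights each epoch, starting from uniform weights. This is a sketch of the general recipe, not necessarily the paper's exact update rule.

```python
import torch
import torch.nn.functional as F

def update_label_weights(logits, cand_mask):
    """Concentrate weight on the candidates the current model finds plausible."""
    probs = F.softmax(logits, dim=1) * cand_mask  # zero out non-candidates
    return (probs / probs.sum(dim=1, keepdim=True).clamp(min=1e-12)).detach()

def weighted_ce(logits, weights):
    """Cross entropy against the (soft) progressively identified labels."""
    return -(weights * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Typical loop (assumption): initialize weights = cand_mask / cand_mask.sum(1, keepdim=True),
# train with weighted_ce, and call update_label_weights after each epoch.
```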