Exploiting Counter-Examples for Active Learning with Partial Labels
- URL: http://arxiv.org/abs/2307.07413v1
- Date: Fri, 14 Jul 2023 15:41:53 GMT
- Title: Exploiting Counter-Examples for Active Learning with Partial Labels
- Authors: Fei Zhang, Yunjie Ye, Lei Feng, Zhongwen Rao, Jieming Zhu, Marcus
Kalander, Chen Gong, Jianye Hao, Bo Han
- Abstract summary: This paper studies a new problem, \emph{active learning with partial labels} (ALPL).
In this setting, an oracle annotates the query samples with partial labels, relieving the oracle of the demanding accurate labeling process.
We propose a simple but effective WorseNet to directly learn from this pattern.
- Score: 45.665996618836516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies a new problem, \emph{active learning with partial labels}
(ALPL). In this setting, an oracle annotates the query samples with partial
labels, relieving the oracle of the demanding accurate labeling process. To
address ALPL, we first build an intuitive baseline that can be seamlessly
incorporated into existing AL frameworks. Though effective, this baseline is
still susceptible to \emph{overfitting} and falls short in selecting
representative partial-label-based samples during the query process. Drawing
inspiration from human inference in cognitive science, where accurate
inferences can be explicitly derived from \emph{counter-examples} (CEs), our
objective is to leverage this human-like learning pattern to tackle
\emph{overfitting} while enhancing the process of selecting representative
samples in ALPL. Specifically, we construct CEs by reversing the partial labels
for each instance, and then we propose a simple but effective WorseNet to
directly learn from this complementary pattern. By leveraging the distribution
gap between WorseNet and the predictor, this adversarial evaluation scheme
enhances both the performance of the predictor itself and the sample
selection process, allowing the predictor to capture more accurate patterns in
the data. Experimental results on five real-world datasets and four benchmark
datasets show that our proposed method achieves comprehensive improvements over
ten representative AL frameworks, highlighting the superiority of WorseNet. The
source code will be available at \url{https://github.com/Ferenas/APLL}.
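The abstract describes the mechanism only at a high level, so the following is a minimal illustrative sketch rather than the paper's released implementation: it assumes candidate label sets are stored as binary masks over K classes, that WorseNet is trained against a uniform target over the reversed (complementary) labels, and that the predictor-vs-WorseNet distribution gap is measured with a KL divergence. The names `reverse_partial_labels`, `worsenet_loss`, and `adversarial_gap` are hypothetical.

```python
import torch
import torch.nn.functional as F

def reverse_partial_labels(partial_mask: torch.Tensor) -> torch.Tensor:
    """Construct counter-examples (CEs) by reversing the partial labels:
    classes outside the candidate set become the CE candidate set.
    partial_mask: (N, K) float tensor, 1 = class is in the candidate set."""
    return 1.0 - partial_mask

def worsenet_loss(worse_logits: torch.Tensor, ce_mask: torch.Tensor) -> torch.Tensor:
    """Assumed WorseNet objective: fit a uniform distribution over the
    reversed label set, so WorseNet learns the 'complementary pattern'
    (classes the instance is known not to belong to)."""
    target = ce_mask / ce_mask.sum(dim=1, keepdim=True)
    return F.kl_div(F.log_softmax(worse_logits, dim=1), target, reduction="batchmean")

def adversarial_gap(pred_logits: torch.Tensor, worse_logits: torch.Tensor) -> torch.Tensor:
    """Assumed per-sample score: KL divergence between the predictor's and
    WorseNet's output distributions. How this gap is ranked or thresholded
    for querying and evaluation is not specified in this digest."""
    eps = 1e-8
    p = F.softmax(pred_logits, dim=1)
    q = F.softmax(worse_logits, dim=1)
    return (p * ((p + eps) / (q + eps)).log()).sum(dim=1)

# Toy usage: 4 samples, 5 classes, candidate sets of size 1 or 2.
if __name__ == "__main__":
    partial_mask = torch.zeros(4, 5)
    partial_mask[torch.arange(4), torch.randint(0, 5, (4,))] = 1.0
    partial_mask[torch.arange(4), torch.randint(0, 5, (4,))] = 1.0
    ce_mask = reverse_partial_labels(partial_mask)
    pred_logits, worse_logits = torch.randn(4, 5), torch.randn(4, 5)
    print(worsenet_loss(worse_logits, ce_mask).item())
    print(adversarial_gap(pred_logits, worse_logits))
```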
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Neighbour Consistency Guided Pseudo-Label Refinement for Unsupervised Person Re-Identification [80.98291772215154]
Unsupervised person re-identification (ReID) aims at learning discriminative identity features for person retrieval without any annotations.
Recent advances accomplish this task by leveraging clustering-based pseudo labels.
We propose a Neighbour Consistency guided Pseudo Label Refinement framework.
arXiv Detail & Related papers (2022-11-30T09:39:57Z)
- Active Learning by Feature Mixing [52.16150629234465]
We propose a novel method for batch active learning called ALFA-Mix.
We identify unlabelled instances with sufficiently distinct features by seeking inconsistencies in predictions.
We show that inconsistencies in these predictions help discover features that the model is unable to recognise in the unlabelled instances.
arXiv Detail & Related papers (2022-03-14T12:20:54Z)
- PAL: Pretext-based Active Learning [2.869739951301252]
We propose an active learning technique for deep neural networks that is more robust to mislabeling than the previously proposed techniques.
We use a separate network to score the unlabeled samples for selection.
The resultant technique also produces competitive accuracy in the absence of label noise.
arXiv Detail & Related papers (2020-10-29T21:16:37Z)
- Coping with Label Shift via Distributionally Robust Optimisation [72.80971421083937]
We propose a model that minimises an objective based on distributionally robust optimisation (DRO).
We then design and analyse a gradient descent-proximal mirror ascent algorithm tailored for large-scale problems to optimise the proposed objective.
arXiv Detail & Related papers (2020-10-23T08:33:04Z)
- SoQal: Selective Oracle Questioning for Consistency Based Active Learning of Cardiac Signals [17.58391771585294]
Clinical settings are often characterized by abundant unlabelled data and limited labelled data.
One way to mitigate this burden is via active learning (AL) which involves the (a) acquisition and (b) annotation of informative unlabelled instances.
We show that BALC can outperform state-of-the-art acquisition functions such as BALD, and SoQal outperforms baseline methods even in the presence of a noisy oracle.
arXiv Detail & Related papers (2020-04-20T18:20:03Z)
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels.
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.