Active Learning with Label Comparisons
- URL: http://arxiv.org/abs/2204.04670v1
- Date: Sun, 10 Apr 2022 12:13:46 GMT
- Title: Active Learning with Label Comparisons
- Authors: Gal Yona, Shay Moran, Gal Elidan, Amir Globerson
- Abstract summary: We show that finding the best of $k$ labels can be done with $k-1$ active queries.
A key element in our analysis is the "label neighborhood graph" of the true distribution.
- Score: 41.82179028046654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supervised learning typically relies on manual annotation of the true labels.
When there are many potential classes, searching for the best one can be
prohibitive for a human annotator. On the other hand, comparing two candidate
labels is often much easier. We focus on this type of pairwise supervision and
ask how it can be used effectively in learning, and in particular in active
learning. We obtain several insightful results in this context. In principle,
finding the best of $k$ labels can be done with $k-1$ active queries. We show
that there is a natural class where this approach is sub-optimal, and that
there is a more comparison-efficient active learning scheme. A key element in
our analysis is the "label neighborhood graph" of the true distribution, which
has an edge between two classes if they share a decision boundary. We also show
that in the PAC setting, pairwise comparisons cannot provide improved sample
complexity in the worst case. We complement our theoretical results with
experiments, clearly demonstrating the effect of the neighborhood graph on
sample complexity.
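The $k-1$ query bound mentioned in the abstract can be illustrated by a simple sequential tournament: keep a running champion and compare it against each remaining candidate, one query per candidate. A minimal sketch, where the comparison oracle is a hypothetical stand-in for a human annotator (the `quality` scores below are illustrative, not from the paper):

```python
def best_label(labels, prefer):
    """Find the best of k labels using exactly k-1 pairwise queries.

    `prefer(a, b)` is a comparison oracle (e.g. a human annotator)
    that returns the better of the two candidate labels.
    """
    champion = labels[0]
    queries = 0
    for challenger in labels[1:]:
        champion = prefer(champion, challenger)  # one active query
        queries += 1
    return champion, queries

# Hypothetical numeric "quality" standing in for annotator judgment:
labels = ["cat", "lynx", "tiger", "leopard"]
quality = {"cat": 1, "lynx": 2, "tiger": 4, "leopard": 3}
oracle = lambda a, b: a if quality[a] >= quality[b] else b

best, n = best_label(labels, oracle)
# best == "tiger", n == 3  (= k - 1 comparisons for k = 4 labels)
```

The paper's point is that this baseline, while query-optimal in the worst case, can be beaten by schemes that exploit the structure of the label neighborhood graph.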
Related papers
- One-bit Supervision for Image Classification: Problem, Solution, and
Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that using full-bit, semi-supervised supervision.
arXiv Detail & Related papers (2023-11-26T07:39:00Z) - On the Informativeness of Supervision Signals [31.418827619510036]
We use information theory to compare how a number of commonly-used supervision signals contribute to representation-learning performance.
Our framework provides theoretical justification for using hard labels in the big-data regime, but richer supervision signals for few-shot learning and out-of-distribution generalization.
arXiv Detail & Related papers (2022-11-02T18:02:31Z) - Better Few-Shot Relation Extraction with Label Prompt Dropout [7.939146925759088]
We present a novel approach called label prompt dropout, which randomly removes label descriptions in the learning process.
Our experiments show that our approach is able to lead to improved class representations, yielding significantly better results on the few-shot relation extraction task.
arXiv Detail & Related papers (2022-10-25T03:03:09Z) - Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach, called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z) - Teaching an Active Learner with Contrastive Examples [35.926575235046634]
We study the problem of active learning with the added twist that the learner is assisted by a helpful teacher.
We investigate an efficient teaching algorithm that adaptively picks contrastive examples.
We derive strong performance guarantees for our algorithm based on two problem-dependent parameters.
arXiv Detail & Related papers (2021-10-28T05:00:55Z) - Fast learning from label proportions with small bags [0.0]
In learning from label proportions (LLP), the instances are grouped into bags, and the task is to learn an instance classifier given relative class proportions in training bags.
In this work, we focus on the case of small bags, which allows designing more efficient algorithms by explicitly considering all consistent label combinations.
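What "all consistent label combinations" means for a small bag can be sketched directly: enumerate every assignment of labels to the bag's instances whose per-class counts match the given proportions. A minimal illustration, with names chosen for exposition rather than taken from the paper:

```python
from itertools import product

def consistent_labelings(bag_size, class_counts):
    """Enumerate all label assignments of a small bag whose per-class
    counts match `class_counts` (the consistent label combinations).

    class_counts: dict mapping class -> number of instances of that class.
    """
    classes = sorted(class_counts)
    return [
        assignment
        for assignment in product(classes, repeat=bag_size)
        if all(assignment.count(c) == class_counts[c] for c in classes)
    ]

# A bag of 3 instances with proportions 2/3 class 0 and 1/3 class 1
# has exactly 3 consistent labelings:
labelings = consistent_labelings(3, {0: 2, 1: 1})
# -> [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

Brute-force enumeration like this is only feasible because the bags are small, which is exactly the regime the paper targets.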
arXiv Detail & Related papers (2021-10-07T13:11:18Z) - A Theory-Driven Self-Labeling Refinement Method for Contrastive
Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z) - Are Fewer Labels Possible for Few-shot Learning? [81.89996465197392]
Few-shot learning is challenging due to its very limited data and labels.
Recent studies in big transfer (BiT) show that few-shot learning can greatly benefit from pretraining on large scale labeled dataset in a different domain.
We propose eigen-finetuning to enable fewer-shot learning by leveraging the co-evolution of clustering and eigen-samples in the finetuning.
arXiv Detail & Related papers (2020-12-10T18:59:29Z) - Efficient PAC Learning from the Crowd with Pairwise Comparison [7.594050968868919]
We study the problem of PAC learning threshold functions from the crowd, where the annotators can provide (noisy) labels or pairwise comparison tags.
We design a label-efficient algorithm that interleaves learning and annotation, which leads to a constant overhead of our algorithm.
arXiv Detail & Related papers (2020-11-02T16:37:55Z) - Pointwise Binary Classification with Pairwise Confidence Comparisons [97.79518780631457]
We propose pairwise comparison (Pcomp) classification, where we have only pairs of unlabeled data that we know one is more likely to be positive than the other.
We link Pcomp classification to noisy-label learning to develop a progressive unbiased risk estimator (URE) and improve it by imposing consistency regularization.
arXiv Detail & Related papers (2020-10-05T09:23:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.