One-bit Supervision for Image Classification
- URL: http://arxiv.org/abs/2009.06168v3
- Date: Tue, 11 May 2021 06:35:01 GMT
- Title: One-bit Supervision for Image Classification
- Authors: Hengtong Hu, Lingxi Xie, Zewei Du, Richang Hong, Qi Tian
- Abstract summary: One-bit supervision is a novel setting of learning from incomplete annotations.
We propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm.
- Score: 121.87598671087494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents one-bit supervision, a novel setting of learning from
incomplete annotations, in the scenario of image classification. Instead of
training a model upon the accurate label of each sample, our setting requires
the model to query with a predicted label for each sample and learn from the
answer whether the guess is correct. This provides one bit (yes or no) of
information, and more importantly, annotating each sample becomes much easier
than finding the accurate label from many candidate classes. There are two keys
to training a model upon one-bit supervision: improving the guess accuracy and
making use of incorrect guesses. For these purposes, we propose a multi-stage
training paradigm which incorporates negative label suppression into an
off-the-shelf semi-supervised learning algorithm. On three popular image
classification benchmarks, our approach achieves higher efficiency in
utilizing the limited amount of annotations.
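The following minimal PyTorch sketch illustrates the setting described above: the model's guess is confirmed or refuted by a one-bit answer, confirmed guesses become positive labels, and refuted guesses are exploited through negative label suppression. Function names and the exact loss form are illustrative assumptions, not the authors' released code; in the paper these steps are interleaved with rounds of off-the-shelf semi-supervised training.

```python
import torch
import torch.nn.functional as F

def one_bit_query(model, x, annotator_labels):
    """Ask the annotator whether the model's guess for each sample is correct.

    Each answer carries one bit: "yes" turns the guess into a positive
    label, while "no" only tells us the guessed class can be ruled out.
    The annotator is simulated here with held-out ground-truth labels.
    """
    with torch.no_grad():
        guesses = model(x).argmax(dim=1)
    return guesses, guesses.eq(annotator_labels)

def one_bit_loss(logits, guesses, correct):
    """Cross-entropy on confirmed guesses plus negative label suppression.

    For a refuted guess we know only that the guessed class is wrong, so
    we penalize the probability mass on that class via -log(1 - p_refuted)
    instead of discarding the sample.
    """
    loss = logits.new_zeros(())
    if correct.any():
        loss = loss + F.cross_entropy(logits[correct], guesses[correct])
    wrong = ~correct
    if wrong.any():
        probs = F.softmax(logits[wrong], dim=1)
        p_refuted = probs.gather(1, guesses[wrong].unsqueeze(1)).squeeze(1)
        loss = loss + (-(1.0 - p_refuted).clamp_min(1e-6).log()).mean()
    return loss
```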
Related papers
- Pre-Trained Vision-Language Models as Partial Annotators [40.89255396643592]
Pre-trained vision-language models learn from massive data to build unified representations of images and natural language.
In this paper, we investigate a novel "pre-trained annotating - weakly-supervised learning" paradigm for applying pre-trained models, and experiment on image classification tasks.
arXiv Detail & Related papers (2024-05-23T17:17:27Z)
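The paper's exact pipeline is not reproduced here; as a hedged illustration, the sketch below shows one way a CLIP-style vision-language model's zero-shot scores could be turned into candidate label sets ("partial annotations") for a downstream weakly-supervised learner. The `keep_top_p` rule and the 100x logit scale are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def partial_labels_from_vlm(image_emb, class_text_emb, keep_top_p=0.95):
    """Turn VLM zero-shot similarities into candidate label sets.

    For each image, classes are kept (highest probability first) until
    their cumulative softmax mass reaches keep_top_p; the resulting
    candidate set is a partial label that a weakly-supervised learner
    can later disambiguate. The 100x scale mimics CLIP-style temperature.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    class_text_emb = F.normalize(class_text_emb, dim=-1)
    probs = (100.0 * image_emb @ class_text_emb.T).softmax(dim=-1)
    sorted_p, order = probs.sort(dim=-1, descending=True)
    # Keep a class while the cumulative mass before it is under the cap,
    # so the top-1 class is always retained.
    keep = sorted_p.cumsum(dim=-1) - sorted_p < keep_top_p
    return [order[i, keep[i]].tolist() for i in range(probs.size(0))]
```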
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the promise of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- One-bit Supervision for Image Classification: Problem, Solution, and Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that using full-bit, semi-supervised supervision.
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
- An analysis of over-sampling labeled data in semi-supervised learning with FixMatch [66.34968300128631]
Most semi-supervised learning methods over-sample labeled data when constructing training mini-batches.
This paper studies whether this common practice improves learning and how.
We compare it to an alternative setting where each mini-batch is uniformly sampled from all the training data, labeled or not.
arXiv Detail & Related papers (2022-01-03T12:22:26Z)
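To make the two compared settings concrete, here is a minimal Python sketch of both mini-batch constructions; the 64/448 split mirrors FixMatch's common 7:1 unlabeled-to-labeled ratio, and the sampling details are assumptions rather than the paper's exact protocol.

```python
import random

def oversampled_batches(labeled, unlabeled, n_lab=64, n_unlab=448):
    """Common practice: each mini-batch gets a fixed labeled quota
    (64 labeled + 448 unlabeled, a 7x ratio), so the small labeled set
    is revisited far more often than the unlabeled pool. Sampling is
    with replacement for simplicity."""
    while True:
        yield (random.choices(labeled, k=n_lab),
               random.choices(unlabeled, k=n_unlab))

def uniform_batches(labeled, unlabeled, batch_size=512):
    """Alternative studied in the paper: every mini-batch is drawn
    uniformly from the full pool, so labeled examples appear only in
    proportion to their share of the training data."""
    pool = [(x, True) for x in labeled] + [(x, False) for x in unlabeled]
    while True:
        batch = random.sample(pool, batch_size)
        yield ([x for x, is_lab in batch if is_lab],
               [x for x, is_lab in batch if not is_lab])
```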
- Barely-Supervised Learning: Semi-Supervised Learning with very few labeled images [16.905389887406894]
We analyze in depth the behavior of a state-of-the-art semi-supervised method, FixMatch, which relies on a weakly-augmented version of an image to obtain a supervision signal.
We show that it frequently fails in barely-supervised scenarios, due to a lack of training signal when no pseudo-label can be predicted with high confidence.
We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels.
arXiv Detail & Related papers (2021-12-22T16:29:10Z)
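A hedged PyTorch sketch of the idea follows; `rot_head` (a small 4-way rotation classifier, which would share the backbone in practice), the rotation-prediction task, and the 0.95 confidence threshold are illustrative stand-ins rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def barely_supervised_loss(model, rot_head, weak, strong, threshold=0.95):
    """FixMatch-style pseudo-labeling with a self-supervised fallback.

    Confident predictions on the weakly-augmented view become pseudo-labels
    for the strongly-augmented view. Samples below the confidence threshold,
    which would otherwise yield no gradient, instead train a 4-way rotation
    predictor -- one possible self-supervised signal; the paper's exact
    choice may differ.
    """
    with torch.no_grad():
        conf, pseudo = F.softmax(model(weak), dim=1).max(dim=1)
    mask = conf >= threshold
    loss = weak.new_zeros(())
    if mask.any():
        loss = loss + F.cross_entropy(model(strong[mask]), pseudo[mask])
    rest = ~mask
    if rest.any():
        # Rotate each unconfident image by a random multiple of 90 degrees
        # and ask the auxiliary head to predict which rotation was applied.
        k = torch.randint(0, 4, (int(rest.sum()),), device=weak.device)
        rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                               for img, r in zip(weak[rest], k)])
        loss = loss + F.cross_entropy(rot_head(rotated), k)
    return loss
```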
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k-image subset of ImageNet100 show that it can be annotated to 80% top-1 accuracy with an average of 0.35 annotations per image.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)