One-bit Supervision for Image Classification: Problem, Solution, and Beyond
- URL: http://arxiv.org/abs/2311.15225v1
- Date: Sun, 26 Nov 2023 07:39:00 GMT
- Title: One-bit Supervision for Image Classification: Problem, Solution, and Beyond
- Authors: Hengtong Hu, Lingxi Xie, Xinyue Huo, Richang Hong, Qi Tian
- Abstract summary: This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.
- Score: 114.95815360508395
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents one-bit supervision, a novel setting of learning with
fewer labels, for image classification. Instead of training the model with the
accurate label of each sample, our setting requires the model to interact with
the system by predicting the class label of each sample and learning from the
answer whether the guess is correct, which provides one bit (yes or no) of
information. An intriguing property of this setting is that the annotation
burden is largely alleviated compared with offering the accurate label.
There are two keys to one-bit supervision: (i) improving the guess
accuracy and (ii) making good use of the incorrect guesses. To achieve these
goals, we propose a multi-stage training paradigm and incorporate negative
label suppression into an off-the-shelf semi-supervised learning algorithm.
Theoretical analysis shows that one-bit annotation is more efficient than
full-bit annotation in most cases and gives the conditions for combining our
approach with active learning. Inspired by this, we further integrate the
one-bit supervision framework into the self-supervised learning algorithm which
yields an even more efficient training schedule. Unlike training from
scratch, when self-supervised learning is used for initialization, both hard
example mining and class balance are verified to be effective in boosting
learning performance. However, these two frameworks still need full-bit labels
in the initial stage. To cast off this burden, we utilize unsupervised domain
adaptation to train the initial model and conduct pure one-bit annotations on
the target dataset. In multiple benchmarks, the learning efficiency of the
proposed approach surpasses that of full-bit, semi-supervised supervision.
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - Enhancing Hyperspectral Image Prediction with Contrastive Learning in Low-Label Regime [0.810304644344495]
Self-supervised contrastive learning is an effective approach for addressing the challenge of limited labelled data.
We evaluate the method's performance for both the single-label and multi-label classification tasks.
arXiv Detail & Related papers (2024-10-10T10:20:16Z) - Self-Training: A Survey [5.772546394254112]
Semi-supervised algorithms aim to learn prediction functions from a small set of labeled observations and a large set of unlabeled observations.
Among the existing techniques, self-training methods have undoubtedly attracted greater attention in recent years.
We present self-training methods for binary and multi-class classification, as well as their variants and two related approaches.
arXiv Detail & Related papers (2022-02-24T11:40:44Z) - Barely-Supervised Learning: Semi-Supervised Learning with very few
labeled images [16.905389887406894]
We analyze in depth the behavior of a state-of-the-art semi-supervised method, FixMatch, which relies on a weakly-augmented version of an image to obtain its supervision signal.
We show that it frequently fails in barely-supervised scenarios, due to a lack of training signal when no pseudo-label can be predicted with high confidence.
We propose a method that leverages self-supervised learning to provide a training signal in the absence of confident pseudo-labels (see the thresholding sketch after this list).
arXiv Detail & Related papers (2021-12-22T16:29:10Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Training image classifiers using Semi-Weak Label Data [26.04162590798731]
In Multiple Instance Learning (MIL), weak labels are provided at the bag level, with only presence/absence information known.
This paper introduces a novel semi-weak label learning paradigm as a middle ground to mitigate the problem.
We propose a two-stage framework to address the problem of learning from semi-weak labels.
arXiv Detail & Related papers (2021-03-19T03:06:07Z) - One-bit Supervision for Image Classification [121.87598671087494]
One-bit supervision is a novel setting of learning from incomplete annotations.
We propose a multi-stage training paradigm which incorporates negative label suppression into an off-the-shelf semi-supervised learning algorithm.
arXiv Detail & Related papers (2020-09-14T03:06:23Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation, improving accuracy via knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - UniT: Unified Knowledge Transfer for Any-shot Object Detection and
Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)