Training image classifiers using Semi-Weak Label Data
- URL: http://arxiv.org/abs/2103.10608v1
- Date: Fri, 19 Mar 2021 03:06:07 GMT
- Title: Training image classifiers using Semi-Weak Label Data
- Authors: Anxiang Zhang, Ankit Shah, Bhiksha Raj
- Abstract summary: In Multiple Instance Learning (MIL), weak labels are provided at the bag level with only presence/absence information known.
This paper introduces a novel semi-weak label learning paradigm as a middle ground to mitigate the problem.
We propose a two-stage framework to address the problem of learning from semi-weak labels.
- Score: 26.04162590798731
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In Multiple Instance Learning (MIL), weak labels are provided at the bag
level with only presence/absence information known. However, there is a
considerable gap in performance in comparison to a fully supervised model,
limiting the practical applicability of MIL approaches. Thus, this paper
introduces a novel semi-weak label learning paradigm as a middle ground to
mitigate the problem. We define semi-weak label data as data where we know the
presence or absence of a given class and the exact count of each class as
opposed to knowing the label proportions. We then propose a two-stage framework
to address the problem of learning from semi-weak labels. It leverages the fact
that counting information is non-negative and discrete. Experiments are
conducted on generated samples from CIFAR-10. We compare our model with a
fully-supervised setting baseline, a weakly-supervised setting baseline and
a learning from label proportions (LLP) baseline. Our framework not only
outperforms both baseline models for the MIL-based weakly supervised setting
and the LLP setting, but also gives results comparable to the fully supervised
model. Further, we conduct thorough ablation studies analyzing performance
across datasets and its variation with batch size, losses, architectural
changes, bag size, and regularization.
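To make the semi-weak label setting concrete, here is a minimal, hypothetical sketch (illustrative names, not the authors' released code): bags of CIFAR-10-sized instances carry only per-class counts, and a Poisson-style loss on the classifier's expected counts exploits the fact that counting information is non-negative and discrete.

```python
# Hypothetical sketch of semi-weak label supervision; not the paper's code.
import torch


def make_semi_weak_bags(images, labels, bag_size=16, num_bags=100,
                        num_classes=10):
    """Group instances into bags labelled only with per-class counts."""
    bags, counts = [], []
    for _ in range(num_bags):
        idx = torch.randint(len(images), (bag_size,))
        bags.append(images[idx])
        counts.append(torch.bincount(labels[idx], minlength=num_classes))
    return torch.stack(bags), torch.stack(counts).float()


def count_loss(instance_logits, bag_counts):
    """Poisson negative log-likelihood on expected per-class counts.

    Summing instance posteriors gives the expected count per class; the
    Poisson NLL (constant term dropped) treats counts as non-negative
    and discrete.
    """
    probs = instance_logits.softmax(dim=-1)       # (bag_size, num_classes)
    expected = probs.sum(dim=0).clamp_min(1e-6)   # expected count per class
    return (expected - bag_counts * expected.log()).sum()


# Toy usage with random tensors standing in for CIFAR-10 images.
images, labels = torch.randn(500, 3, 32, 32), torch.randint(0, 10, (500,))
bags, counts = make_semi_weak_bags(images, labels)
logits = torch.randn(16, 10, requires_grad=True)
count_loss(logits, counts[0]).backward()
```

The paper's two-stage framework is more involved than this; the sketch only shows how bag-level counts can supervise instance-level posteriors.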
Related papers
- One-bit Supervision for Image Classification: Problem, Solution, and Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
On multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision (a toy sketch of the one-bit query follows below).
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
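To illustrate the one-bit interaction, here is a hedged toy sketch (our reading, with illustrative names): the learner guesses a class and receives a yes/no answer; a rejected guess becomes a negative label whose probability is suppressed.

```python
# Toy sketch of one-bit supervision; illustrative, not the paper's code.
import torch


def one_bit_query(model, image, true_label):
    """One bit of supervision: 'is the model's top guess correct?'"""
    with torch.no_grad():
        guess = model(image.unsqueeze(0)).argmax(dim=-1).item()
    return guess, guess == true_label


def negative_label_loss(logits, rejected_class):
    """A simple form of negative label suppression: when the answer is
    'no', minimize the probability assigned to the rejected class."""
    p_rej = logits.softmax(dim=-1)[:, rejected_class]
    return -torch.log1p(-p_rej.clamp(max=1 - 1e-6)).mean()


model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
image, label = torch.randn(3, 32, 32), 7
guess, correct = one_bit_query(model, image, label)
if not correct:
    loss = negative_label_loss(model(image.unsqueeze(0)), guess)
```

The paper integrates this signal into an off-the-shelf semi-supervised algorithm within a multi-stage training paradigm; the loss above only shows the suppression idea.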
- Easy Learning from Label Proportions [17.71834385754893]
EasyLLP is a flexible and simple-to-implement debiasing approach based on aggregate labels.
Our technique allows us to accurately estimate the expected loss of an arbitrary model at an individual level (a hedged sketch of the debiased soft-label idea follows below).
arXiv Detail & Related papers (2023-02-06T20:41:38Z)
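A hedged sketch of the debiasing idea as we read it (the surrogate-label form below is our paraphrase, not a verified reproduction of EasyLLP): each instance in a bag receives a soft label built from the bag's label proportion and the global class prior, giving an unbiased per-instance loss estimate in expectation over bags.

```python
# Hedged paraphrase of an LLP debiasing estimator; consult the paper for
# the exact EasyLLP construction.
import torch


def debiased_soft_label(bag_proportion, bag_size, class_prior):
    """Surrogate instance label y = p + k * (alpha_bag - p), unbiased for
    the instance label in expectation over randomly drawn bags."""
    return class_prior + bag_size * (bag_proportion - class_prior)


def llp_instance_loss(logits, bag_proportion, bag_size, class_prior):
    """Binary per-instance loss evaluated with the surrogate label.
    Note the surrogate can leave [0, 1]; it is an unbiased estimate,
    not a probability."""
    y = debiased_soft_label(bag_proportion, bag_size, class_prior)
    p1 = logits.sigmoid().clamp(1e-6, 1 - 1e-6)
    return (-(y * p1.log() + (1 - y) * (1 - p1).log())).mean()


# Toy usage: one bag of 16 instances with 30% positives, prior 25%.
logits = torch.randn(16, requires_grad=True)
llp_instance_loss(logits, 0.3, 16, 0.25).backward()
```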
- A semi-supervised Teacher-Student framework for surgical tool detection and localization [2.41710192205034]
We introduce a semi-supervised learning (SSL) framework for the surgical tool detection paradigm.
In the proposed work, we train a model on labeled data, which initialises Teacher-Student joint learning.
Our results on the m2cai16-tool-locations dataset indicate the superiority of our approach across different supervised data settings (a generic Teacher-Student sketch follows below).
arXiv Detail & Related papers (2022-08-21T17:21:31Z)
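A generic Teacher-Student SSL sketch (classification-style for brevity; the paper applies the pattern to tool detection, and the EMA/thresholding choices here are our assumptions):

```python
# Generic EMA Teacher-Student pseudo-labelling sketch; illustrative only.
import copy
import torch
import torch.nn.functional as F


def ema_update(teacher, student, decay=0.999):
    """Teacher weights track an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1 - decay)


def pseudo_label_loss(student_logits, teacher_logits, threshold=0.9):
    """Train the student on confident teacher predictions (unlabeled data)."""
    conf, pseudo = teacher_logits.softmax(dim=-1).max(dim=-1)
    mask = conf >= threshold
    if not mask.any():
        return student_logits.sum() * 0.0   # no confident pseudo-labels
    return F.cross_entropy(student_logits[mask], pseudo[mask])


student = torch.nn.Linear(128, 12)          # 12 = e.g. tool classes (toy)
teacher = copy.deepcopy(student)
x = torch.randn(32, 128)
loss = pseudo_label_loss(student(x), teacher(x).detach())
loss.backward()
ema_update(teacher, student)
```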
- Label-Noise Learning with Intrinsically Long-Tailed Data [65.41318436799993]
We propose a learning framework for label-noise learning with intrinsically long-tailed data.
Specifically, we propose two-stage bi-dimensional sample selection (TABASCO) to better separate clean samples from noisy samples (the standard small-loss criterion such methods build on is sketched below).
arXiv Detail & Related papers (2022-08-21T07:47:05Z)
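TABASCO's bi-dimensional selection is more elaborate than we can reproduce here; below is the standard one-dimensional small-loss criterion that noisy-label methods commonly build on (a plainly named stand-in, not TABASCO itself).

```python
# Standard small-loss selection for noisy labels; a stand-in, not TABASCO.
import torch
import torch.nn.functional as F


def small_loss_selection(logits, noisy_labels, keep_ratio=0.7):
    """Samples the network fits with low loss are more likely clean."""
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")
    k = max(1, int(keep_ratio * len(losses)))
    return losses.argsort()[:k]        # indices of presumed-clean samples


logits = torch.randn(64, 10)
noisy_labels = torch.randint(0, 10, (64,))
clean_idx = small_loss_selection(logits, noisy_labels)
```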
- Self-Adaptive Label Augmentation for Semi-supervised Few-shot Classification [121.63992191386502]
Few-shot classification aims to learn a model that can generalize well to new tasks when only a few labeled samples are available.
We propose Self-Adaptive Label Augmentation (SALA), a semi-supervised few-shot classification method that assigns an appropriate label to each unlabeled sample.
A major novelty of SALA is the task-adaptive metric, which can learn the metric adaptively for different tasks in an end-to-end fashion (a simplified sketch follows below).
arXiv Detail & Related papers (2022-06-16T13:14:03Z)
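A simplified sketch of a task-adaptive metric for labelling unlabeled samples (a learnable temperature over prototype distances stands in for SALA's metric; this is our assumption, not the paper's design):

```python
# Simplified task-adaptive metric; a stand-in for SALA's design.
import torch


class AdaptiveMetricLabeler(torch.nn.Module):
    """Pseudo-label query embeddings by distance to class prototypes,
    with a temperature learned end-to-end as a minimal 'adaptive metric'."""

    def __init__(self):
        super().__init__()
        self.log_tau = torch.nn.Parameter(torch.zeros(()))

    def forward(self, support_emb, support_y, query_emb, num_classes):
        # Class prototypes: mean embedding of each class's labeled samples.
        protos = torch.stack([support_emb[support_y == c].mean(dim=0)
                              for c in range(num_classes)])
        logits = -torch.cdist(query_emb, protos) * self.log_tau.exp()
        return logits.softmax(dim=-1)      # soft pseudo-labels


labeler = AdaptiveMetricLabeler()
support = torch.randn(5 * 4, 64)                    # 5-way, 4-shot episode
support_y = torch.arange(5).repeat_interleave(4)
queries = torch.randn(15, 64)
pseudo = labeler(support, support_y, queries, num_classes=5)
```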
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method, named Learning to Bootstrap (L2B).
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples, through meta-learning (a stripped-down version of the weighted objective is sketched below).
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
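A stripped-down version of the weighted bootstrapping objective (the per-sample weights are plain inputs here; in L2B they would be set by a meta-learning loop we omit):

```python
# Stripped-down bootstrapping objective in the spirit of L2B; the
# meta-learned weights are supplied manually in this sketch.
import torch
import torch.nn.functional as F


def bootstrap_loss(logits, observed_y, pseudo_y, w_obs, w_pseudo):
    """Per-sample weighted mix of losses on observed (possibly noisy)
    labels and on the model's own pseudo-labels."""
    loss_obs = F.cross_entropy(logits, observed_y, reduction="none")
    loss_pse = F.cross_entropy(logits, pseudo_y, reduction="none")
    return (w_obs * loss_obs + w_pseudo * loss_pse).mean()


logits = torch.randn(32, 10, requires_grad=True)
observed = torch.randint(0, 10, (32,))
pseudo = logits.argmax(dim=-1).detach()    # the model's own predictions
w_obs, w_pse = torch.full((32,), 0.7), torch.full((32,), 0.3)
bootstrap_loss(logits, observed, pseudo, w_obs, w_pse).backward()
```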
- From Consensus to Disagreement: Multi-Teacher Distillation for Semi-Supervised Relation Extraction [10.513626483108126]
Semi-supervised relation extraction (SSRE) has been proven to be a promising way to address label scarcity by annotating unlabeled samples as additional training data.
However, the difference set, which contains rich information about unlabeled data, has been long neglected by prior studies.
We develop a simple and general multi-teacher distillation framework, which can be easily integrated into any existing SSRE method (the consensus/difference split is sketched below).
arXiv Detail & Related papers (2021-12-02T08:20:23Z)
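To illustrate the consensus/difference split (a minimal sketch; the paper's distillation of the difference set is richer than this partition):

```python
# Minimal consensus/difference partition over multiple teachers' predictions.
import torch


def consensus_and_difference(teacher_logits_list):
    """Samples where all teachers agree form the consensus set; the rest
    form the difference set the paper argues is usually neglected."""
    preds = torch.stack([t.argmax(dim=-1) for t in teacher_logits_list])
    agree = (preds == preds[0]).all(dim=0)
    return (agree.nonzero(as_tuple=True)[0],      # consensus indices
            (~agree).nonzero(as_tuple=True)[0])   # difference indices


teachers = [torch.randn(100, 7) for _ in range(3)]  # 3 teachers, 7 relations
consensus_idx, difference_idx = consensus_and_difference(teachers)
```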
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework, based on structured prediction and on the concept of infimum loss, to deal with partial labelling (the loss itself is sketched below).
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
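The infimum loss is simple to state: for a candidate set S containing the true label, the loss is the minimum of the pointwise loss over S. A minimal sketch (names ours):

```python
# Infimum loss for partial labelling: minimum loss over the candidate set.
import torch
import torch.nn.functional as F


def infimum_loss(logits, candidate_sets):
    """logits: (n, num_classes); candidate_sets: list of 1-D LongTensors,
    one candidate-label set per sample, each containing the true label."""
    losses = []
    for logit, cands in zip(logits, candidate_sets):
        per_label = F.cross_entropy(
            logit.unsqueeze(0).expand(len(cands), -1), cands,
            reduction="none")
        losses.append(per_label.min())     # infimum over the candidate set
    return torch.stack(losses).mean()


logits = torch.randn(3, 5, requires_grad=True)
sets = [torch.tensor([0, 2]), torch.tensor([1]), torch.tensor([2, 3, 4])]
infimum_loss(logits, sets).backward()
```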
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels (a simplified labelling schedule is sketched below).
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
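A heavily simplified sketch of incremental labelling (our simplification: classes not yet introduced collapse into one pseudo-class and are revealed on a schedule; LILAC's adaptive compensation phase is omitted):

```python
# Simplified incremental-label schedule; LILAC's second (adaptive
# compensation) phase is omitted.
import torch
import torch.nn.functional as F


def incremental_targets(labels, num_seen, pseudo_class):
    """Collapse classes not yet introduced (index >= num_seen) into a
    single pseudo-class; num_seen grows as training progresses."""
    targets = labels.clone()
    targets[labels >= num_seen] = pseudo_class
    return targets


labels = torch.randint(0, 10, (32,))
logits = torch.randn(32, 11)              # 10 real classes + 1 pseudo-class
for epoch in range(5):
    num_seen = min(10, 2 + 2 * epoch)     # reveal two new labels per epoch
    loss = F.cross_entropy(logits, incremental_targets(labels, num_seen, 10))
```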
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.