Fast learning from label proportions with small bags
- URL: http://arxiv.org/abs/2110.03426v2
- Date: Fri, 8 Oct 2021 08:34:15 GMT
- Title: Fast learning from label proportions with small bags
- Authors: Denis Baručić (1), Jan Kybic (1) ((1) Czech Technical University in Prague, Czech Republic)
- Abstract summary: In learning from label proportions (LLP), the instances are grouped into bags, and the task is to learn an instance classifier given relative class proportions in training bags.
In this work, we focus on the case of small bags, which allows designing more efficient algorithms by explicitly considering all consistent label combinations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In learning from label proportions (LLP), the instances are grouped into
bags, and the task is to learn an instance classifier given relative class
proportions in training bags. LLP is useful when obtaining individual instance
labels is impossible or costly.
In this work, we focus on the case of small bags, which allows designing more
efficient algorithms by explicitly considering all consistent label
combinations. In particular, we propose an EM algorithm alternating between
optimizing a general neural network instance classifier and incorporating
bag-level annotations. In comparison to existing deep LLP methods, our approach
converges faster to a comparable or better solution. Several experiments were
performed on two different datasets.
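The key idea above — that small bags make it feasible to enumerate every label combination consistent with a bag's class proportions — can be pictured with a short sketch. The Python snippet below is a minimal illustration under assumed notation (binary classes, a generic probabilistic instance classifier), not the authors' implementation: an EM-style E-step that turns a bag's known class counts into soft instance labels by marginalizing over all consistent labelings.

```python
import itertools
import numpy as np

def consistent_labelings(bag_size, n_positive):
    """All 0/1 label vectors of length `bag_size` with exactly `n_positive` ones."""
    labelings = []
    for pos_idx in itertools.combinations(range(bag_size), n_positive):
        y = np.zeros(bag_size, dtype=int)
        y[list(pos_idx)] = 1
        labelings.append(y)
    return np.array(labelings)             # shape: (num_labelings, bag_size)

def e_step_soft_labels(p_pos, n_positive):
    """Soft instance labels for one bag.

    p_pos      : current classifier's P(y_i = 1 | x_i) for each instance in the bag
    n_positive : number of positives in the bag (from the known label proportion)
    """
    p_pos = np.asarray(p_pos)
    labelings = consistent_labelings(len(p_pos), n_positive)
    # Likelihood of each consistent labeling under the current classifier.
    probs = np.where(labelings == 1, p_pos, 1.0 - p_pos).prod(axis=1)
    weights = probs / probs.sum()           # posterior over consistent labelings
    # Marginal probability that each instance is positive.
    return weights @ labelings

# Example: a bag of 4 instances known to contain 2 positives.
soft = e_step_soft_labels([0.9, 0.2, 0.6, 0.1], n_positive=2)
print(soft)  # soft targets for training the instance classifier in the M-step
```

An M-step would then train the neural network instance classifier on these soft targets; because the bags are small, the enumeration over consistent combinations stays cheap.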
Related papers
- PAC Learning Linear Thresholds from Label Proportions [13.58949814915442]
Learning from label proportions (LLP) is a generalization of supervised learning.
We show that it is possible to efficiently learn LTFs using LTFs when given access to random bags of some label proportion.
We include an experimental evaluation of our learning algorithms along with a comparison with those of [Saket'21, Saket'22] and random LTFs.
arXiv Detail & Related papers (2023-10-16T05:59:34Z)
- Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation [18.57840057487926]
Learning from Label Proportions (LLP) is a learning problem where only aggregate level labels are available for groups of instances, called bags, during training.
This setting arises in domains like advertising and medicine due to privacy considerations.
We propose a novel algorithmic framework for this problem that iteratively performs two main steps.
arXiv Detail & Related papers (2023-10-12T06:09:26Z)
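Read loosely, the "two main steps" of the belief-propagation paper above are pseudo-labelling instances so that each bag matches its known label proportion, and retraining a supervised learner on those pseudo-labels. The sketch below is a simplified stand-in, not the paper's method: a top-k assignment replaces belief propagation and a nearest-centroid classifier replaces the supervised learner; all names are illustrative.

```python
import numpy as np

def pseudo_label_bag(scores, n_pos):
    """Assign the positive label to the n_pos highest-scoring instances in a bag."""
    y = np.zeros(len(scores), dtype=int)
    y[np.argsort(-scores)[:n_pos]] = 1
    return y

def fit_centroids(X, y):
    """A deliberately simple supervised learner: one centroid per class (assumes both classes occur)."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def score_positive(X, centroids):
    """Higher score = closer to the positive centroid than to the negative one."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d[:, 0] - d[:, 1]

def bootstrap_llp(bags, bag_pos_counts, n_iters=5):
    """Alternate pseudo-labelling (step 1) and supervised training (step 2)."""
    X = np.concatenate(bags)
    scores = np.random.default_rng(0).normal(size=len(X))   # arbitrary initial scores
    for _ in range(n_iters):
        # Step 1: per-bag pseudo-labels consistent with the known proportions.
        y, start = np.zeros(len(X), dtype=int), 0
        for bag, k in zip(bags, bag_pos_counts):
            y[start:start + len(bag)] = pseudo_label_bag(scores[start:start + len(bag)], k)
            start += len(bag)
        # Step 2: retrain the supervised learner on the pseudo-labels.
        centroids = fit_centroids(X, y)
        scores = score_positive(X, centroids)
    return centroids, y
```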
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly-supervised 3D instance segmentation method that only requires categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our designed approach can also assist existing methods for learning 3D instance segmentation at reduced annotation cost.
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need [18.832471712088353]
We propose the first instance-level weakly supervised contrastive learning algorithm under the MIL setting.
We also propose an accurate pseudo label generation method through prototype learning.
arXiv Detail & Related papers (2023-07-05T12:44:52Z)
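One common reading of "pseudo label generation through prototype learning", as mentioned in the paper above, is to take class-wise mean embeddings as prototypes and assign each instance the label of its nearest prototype. A minimal sketch under that assumption (not the paper's exact method; names are illustrative):

```python
import numpy as np

def class_prototypes(embeddings, labels, n_classes):
    """Prototype = mean embedding of the instances currently assigned to each class."""
    return np.stack([embeddings[labels == c].mean(axis=0) for c in range(n_classes)])

def prototype_pseudo_labels(embeddings, prototypes):
    """Pseudo label = index of the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Example with random data: 2 classes, 8 labelled seed instances, 100 unlabelled instances.
rng = np.random.default_rng(0)
seed_emb, seed_lab = rng.normal(size=(8, 16)), np.array([0, 1] * 4)
protos = class_prototypes(seed_emb, seed_lab, n_classes=2)
pseudo = prototype_pseudo_labels(rng.normal(size=(100, 16)), protos)
```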
- Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, the concerned objects can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm, assigning a bag's augmented candidate label set to each of its instances and aggregating instance-level labels into bag-level labels.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning.
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
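The "disambiguated attention embedding" above aggregates a bag's instances into a single bag-level representation. The sketch below shows plain attention pooling in the spirit of standard MIL attention, not necessarily DEMIPL's exact formulation; parameter names are assumptions.

```python
import numpy as np

def attention_pool(instance_emb, w, V):
    """Aggregate instance embeddings (n, d) into one bag embedding (d,).

    Attention score for instance i: w^T tanh(V h_i), normalised with a softmax.
    """
    scores = np.tanh(instance_emb @ V.T) @ w           # (n,)
    scores = scores - scores.max()                     # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()      # attention weights, sum to 1
    return alpha @ instance_emb                        # weighted sum of instances

# Example: a bag of 5 instances with 16-dim embeddings and 8 attention units.
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 16))
V, w = rng.normal(size=(8, 16)), rng.normal(size=8)
bag_embedding = attention_pool(h, w, V)                # shape (16,)
```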
- Multi-Instance Partial-Label Learning: Towards Exploiting Dual Inexact Supervision [53.530957567507365]
In some real-world tasks, each training sample is associated with a candidate label set that contains one ground-truth label and some false positive labels.
In this paper, we formalize such problems as multi-instance partial-label learning (MIPL).
Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems.
arXiv Detail & Related papers (2022-12-18T03:28:51Z)
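To make the "dual inexact supervision" above concrete: each MIPL training sample is a bag of unlabeled instances paired with a candidate label set of which exactly one member is the unknown ground truth. A minimal data-structure sketch with assumed names:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MIPLBag:
    """One MIPL training sample: supervision is inexact at both the instance and the label level."""
    instances: np.ndarray        # shape (n_instances, n_features); instance labels are unknown
    candidate_labels: frozenset  # contains exactly one (unknown) ground-truth label

# Example: a bag of 3 instances whose true label is one of {2, 5, 7}.
bag = MIPLBag(instances=np.zeros((3, 16)), candidate_labels=frozenset({2, 5, 7}))
```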
- Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning [22.07044031105496]
Learning representations for individual instances when only bag-level labels are available is a challenge in multiple instance learning (MIL).
We propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR).
It improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels.
arXiv Detail & Related papers (2022-10-17T21:43:32Z)
- Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive amounts of accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z)
- Active Learning in Incomplete Label Multiple Instance Multiple Label Learning [17.5720245903743]
We propose a novel bag-class pair based approach for active learning in the MIML setting.
Our approach is based on a discriminative graphical model with efficient and exact inference.
arXiv Detail & Related papers (2021-07-22T17:01:28Z)
- How to distribute data across tasks for meta-learning? [59.608652082495624]
We show that the optimal number of data points per task depends on the budget, but it converges to a unique constant value for large budgets.
Our results suggest a simple and efficient procedure for data collection.
arXiv Detail & Related papers (2021-03-15T15:38:47Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision settings.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.