Structured Prediction with Partial Labelling through the Infimum Loss
- URL: http://arxiv.org/abs/2003.00920v2
- Date: Wed, 9 Sep 2020 14:34:17 GMT
- Title: Structured Prediction with Partial Labelling through the Infimum Loss
- Authors: Vivien Cabannes, Alessandro Rudi, Francis Bach
- Abstract summary: The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
- Score: 85.4940853372503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Annotating datasets is one of the main costs of modern supervised learning.
The goal of weak supervision is to enable models to learn using only forms of
labelling which are cheaper to collect, such as partial labelling. This is a type of
incomplete annotation where, for each datapoint, supervision is cast as a set
of labels containing the real one. The problem of supervised learning with
partial labelling has been studied for specific instances such as
classification, multi-label, ranking or segmentation, but a general framework
is still missing. This paper provides a unified framework based on structured
prediction and on the concept of infimum loss to deal with partial labelling
over a wide family of learning problems and loss functions. The framework leads
naturally to explicit algorithms that can be easily implemented, and for which
we prove statistical consistency and learning rates. Experiments confirm the
superiority of the proposed approach over commonly used baselines.
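To make the framework's central object concrete, here is a minimal sketch of the infimum loss: for a prediction z and a partial label S (a set guaranteed to contain the true label), the loss is L(z, S) = inf_{y in S} l(z, y). This is illustrated below for multiclass classification with the 0-1 loss; the function names are illustrative, not the authors' implementation.

```python
def infimum_loss(pred, candidate_set, base_loss):
    """Infimum loss: the best-case base loss over the candidate set.

    For a prediction z and a partial label S containing the true
    label, L(z, S) = inf_{y in S} l(z, y).
    """
    return min(base_loss(pred, y) for y in candidate_set)

# Example with the 0-1 loss for multiclass classification: a
# prediction incurs no loss as soon as it matches any candidate.
zero_one = lambda z, y: float(z != y)
print(infimum_loss(2, {0, 2}, zero_one))  # 0.0 -- 2 is a candidate
print(infimum_loss(1, {0, 2}, zero_one))  # 1.0 -- 1 is not
```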
Related papers
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- One-bit Supervision for Image Classification: Problem, Solution, and Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that using full-bit, semi-supervised supervision.
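As we read the one-bit setting from this summary, the annotator only answers whether the model's guessed class is correct, and denied classes can be suppressed as negatives. A toy sketch under that assumption (the names and the random guesser are hypothetical):

```python
import random

def one_bit_round(model_guess, true_label):
    """One annotation round: the annotator only reveals whether the
    model's guessed class is correct -- a single bit of supervision."""
    return model_guess == true_label

# Hypothetical unlabeled pool: (example id, hidden true class).
pool = [(i, random.randrange(10)) for i in range(5)]
confirmed, suppressed = {}, {}

for idx, true_y in pool:
    guess = random.randrange(10)  # stand-in for the model's prediction
    if one_bit_round(guess, true_y):
        confirmed[idx] = guess    # "yes" answers act as full labels
    else:
        # "no" answers rule out one class (negative label suppression)
        suppressed.setdefault(idx, set()).add(guess)

print(confirmed, suppressed)
```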
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
- A Unified Positive-Unlabeled Learning Framework for Document-Level Relation Extraction with Different Levels of Labeling [5.367772036988716]
Document-level relation extraction (RE) aims to identify relations between entities across multiple sentences.
We propose a unified positive-unlabeled learning framework based on a shift and squared ranking loss.
Our method achieves an improvement of about 14 F1 points relative to the previous baseline with incomplete labeling.
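The summary does not detail the shift and squared ranking loss itself; for background only, the classic unbiased positive-unlabeled risk estimator that PU frameworks build on can be sketched as follows, with all data and names illustrative:

```python
import numpy as np

def unbiased_pu_risk(loss_pos, loss_neg_on_pos, loss_neg_on_unl, prior):
    """Classic unbiased PU risk estimator:
    R(f) = pi * E_P[l(f(x),+1)] + E_U[l(f(x),-1)] - pi * E_P[l(f(x),-1)],
    with pi the (assumed known) positive class prior."""
    return (prior * loss_pos.mean()
            + loss_neg_on_unl.mean()
            - prior * loss_neg_on_pos.mean())

# Toy scores under the sigmoid loss l(z,+1)=1-sig(z), l(z,-1)=sig(z).
rng = np.random.default_rng(0)
scores_p = rng.normal(1.0, 1.0, 100)  # scores on labelled positives
scores_u = rng.normal(0.0, 1.0, 100)  # scores on unlabelled points
sig = lambda s: 1.0 / (1.0 + np.exp(-s))
risk = unbiased_pu_risk(1.0 - sig(scores_p), sig(scores_p),
                        sig(scores_u), prior=0.3)
print(risk)
```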
arXiv Detail & Related papers (2022-10-17T02:54:49Z)
- Query-Adaptive Predictive Inference with Partial Labels [0.0]
We propose a new methodology to construct predictive sets using only partially labeled data on top of black-box predictive models.
Our experiments highlight the validity of our predictive set construction as well as the attractiveness of a more flexible user-dependent loss framework.
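The construction is not spelled out here; as generic background, predictive sets are commonly formed by thresholding model scores at a level calibrated to a target coverage. A split-conformal-style sketch under that assumption (not this paper's query-adaptive method):

```python
import numpy as np

def predictive_set(scores, threshold):
    """All labels whose score clears the calibrated threshold."""
    return {y for y, s in enumerate(scores) if s >= threshold}

# Calibrate on held-out true-label scores so that roughly 90% of
# future true labels clear the threshold (illustrative numbers only).
rng = np.random.default_rng(0)
calib_true_scores = rng.uniform(0.2, 1.0, 200)
threshold = np.quantile(calib_true_scores, 0.10)

test_scores = np.array([0.05, 0.4, 0.9, 0.3])  # scores for 4 classes
print(predictive_set(test_scores, threshold))
```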
arXiv Detail & Related papers (2022-06-15T01:48:42Z)
- Learning from Label Proportions by Learning with Label Noise [30.7933303912474]
Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags.
We provide a theoretically grounded approach to LLP based on a reduction to learning with label noise.
Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures.
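For context on the LLP setup (not the paper's reduction to label noise), a common baseline matches each bag's mean predicted positive probability to its known label proportion. A minimal sketch with illustrative names:

```python
import numpy as np

def proportion_matching_loss(probs, bag_ids, bag_proportions):
    """Squared error between each bag's mean predicted positive
    probability and its known label proportion (a common LLP
    baseline, not the paper's label-noise reduction)."""
    loss = 0.0
    for b, target in bag_proportions.items():
        in_bag = probs[bag_ids == b]
        loss += (in_bag.mean() - target) ** 2
    return loss / len(bag_proportions)

# Toy data: 6 points in 2 bags with known positive proportions.
probs = np.array([0.9, 0.8, 0.2, 0.1, 0.7, 0.3])
bag_ids = np.array([0, 0, 0, 1, 1, 1])
print(proportion_matching_loss(probs, bag_ids, {0: 2/3, 1: 1/3}))
```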
arXiv Detail & Related papers (2022-03-04T18:52:21Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
- A Flexible Class of Dependence-aware Multi-Label Loss Functions [4.265467042008983]
This paper introduces a new class of loss functions for multi-label classification.
It overcomes disadvantages of commonly used losses such as Hamming and subset 0/1.
The assessment of multi-label predictions in terms of these losses is illustrated in an empirical study.
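To see the disadvantages the paper targets, compare the two standard losses it names: Hamming charges per-label errors independently, while subset 0/1 gives no credit for partially correct predictions, and neither models dependence between labels. A small sketch of the standard definitions (not the paper's new loss class):

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    """Fraction of the individual labels that disagree."""
    return np.mean(y_true != y_pred)

def subset_zero_one_loss(y_true, y_pred):
    """1 unless the whole label vector is predicted exactly."""
    return float(not np.array_equal(y_true, y_pred))

# A 5-label example with one wrong label: Hamming stays mild (0.2)
# while subset 0/1 already charges the maximal penalty (1.0).
y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0])
print(hamming_loss(y_true, y_pred))          # 0.2
print(subset_zero_one_loss(y_true, y_pred))  # 1.0
```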
arXiv Detail & Related papers (2020-11-02T07:42:15Z)
- Weakly-Supervised Salient Object Detection via Scribble Annotations [54.40518383782725]
We propose a weakly-supervised salient object detection model to learn saliency from scribble labels.
We present a new metric, termed saliency structure measure, to measure the structure alignment of the predicted saliency maps.
Our method not only outperforms existing weakly-supervised/unsupervised methods, but also is on par with several fully-supervised state-of-the-art models.
arXiv Detail & Related papers (2020-03-17T12:59:50Z)