Learning with Multiple Complementary Labels
- URL: http://arxiv.org/abs/1912.12927v4
- Date: Sat, 6 Aug 2022 10:47:03 GMT
- Title: Learning with Multiple Complementary Labels
- Authors: Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama
- Abstract summary: A complementary label (CL) simply indicates an incorrect class of an example, yet learning with CLs still yields multi-class classifiers that can predict the correct class. We propose a novel problem setting that allows multiple complementary labels (MCLs) for each example, together with two ways of learning with MCLs.
- Score: 94.8064553345801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A complementary label (CL) simply indicates an incorrect class of an example, but learning with CLs results in multi-class classifiers that can predict the correct class. Unfortunately, the existing problem setting allows only a single CL for each example, which notably limits its potential, since labelers can often easily identify multiple CLs (MCLs) for one example. In this paper, we propose a novel problem setting that allows MCLs for each example, and two ways for learning with MCLs. In the first way, we design two wrappers that decompose MCLs into many single CLs, so that any method for learning with CLs can be used. However, the supervision information that MCLs hold is conceptually diluted after decomposition. Thus, in the second way, we derive an unbiased risk estimator; minimizing it processes each set of MCLs as a whole and comes with an estimation error bound. We further improve the second way by minimizing properly chosen upper bounds. Experiments show that the former way works well for learning with MCLs, but the latter is even better.
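To make the two strategies concrete, the following is a minimal PyTorch sketch, not the authors' implementation: decompose_mcls, single_cl_unbiased_loss, and mcl_set_loss are invented names; the single-CL loss follows the well-known unbiased form for uniformly drawn single complementary labels from prior work (Ishida et al., 2019); and the whole-set loss is only a simple surrogate, not the paper's unbiased risk estimator.

```python
# Illustrative sketch of the two ways of learning with MCLs described above.
# Assumes a k-class problem where each example carries a set of complementary
# labels (classes known to be WRONG). Names are invented for illustration.

import torch
import torch.nn.functional as F

def decompose_mcls(xs, mcl_sets):
    """First way (wrapper idea): split each (x, {c_1, ..., c_s}) into s
    single-CL examples (x, c_1), ..., (x, c_s), so that any existing
    method for learning with single CLs applies to the expanded data."""
    xs_out, cls_out = [], []
    for x, Y_bar in zip(xs, mcl_sets):
        for c in Y_bar:
            xs_out.append(x)
            cls_out.append(c)
    return torch.stack(xs_out), torch.tensor(cls_out)

def single_cl_unbiased_loss(logits, cl, k):
    """Unbiased single-CL loss in the style of Ishida et al. (2019),
    assuming each CL is drawn uniformly from the k-1 incorrect classes:
        loss(x, cl) = -(k - 1) * l(f(x), cl) + sum_j l(f(x), j),
    with l instantiated here as cross-entropy (negative log-softmax)."""
    ell = -F.log_softmax(logits, dim=1)                # l(f(x), j) for every class j
    picked = ell.gather(1, cl.view(-1, 1)).squeeze(1)  # l(f(x), cl)
    return (-(k - 1) * picked + ell.sum(dim=1)).mean()

def mcl_set_loss(logits, mcl_mask):
    """Second way (simple surrogate only, NOT the paper's unbiased risk
    estimator): treat each MCL set as a whole by maximizing the probability
    mass on the classes that have not been ruled out."""
    probs = F.softmax(logits, dim=1)
    p_keep = (probs * (~mcl_mask)).sum(dim=1)          # mass outside the MCL set
    return -(p_keep + 1e-12).log().mean()

# Usage: k = 5 classes, batch of 2 with MCL sets {0, 3} and {1}.
logits = torch.randn(2, 5)
mcl_mask = torch.tensor([[1, 0, 0, 1, 0],
                         [0, 1, 0, 0, 0]], dtype=torch.bool)
print(mcl_set_loss(logits, mcl_mask))
```

The decomposition wrapper combined with the single-CL loss corresponds to the first way; swapping in a set-level loss (here only a rough surrogate for the paper's unbiased risk estimator) corresponds to the second.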
Related papers
- Many-Shot In-Context Learning [58.395589302800566]
Large language models (LLMs) excel at few-shot in-context learning (ICL).
We observe significant performance gains across a wide variety of generative and discriminative tasks.
Unlike few-shot learning, many-shot learning is effective at overriding pretraining biases.
arXiv Detail & Related papers (2024-04-17T02:49:26Z)
- CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models [23.398619576886375]
Continual learning (CL) aims to help deep neural networks learn new knowledge while retaining what has been learned.
Our work proposes Continual LeArning with Probabilistic finetuning (CLAP) - a probabilistic modeling framework over visual-guided text features per task.
arXiv Detail & Related papers (2024-03-28T04:15:58Z)
- RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z)
- On the Generalization of Multi-modal Contrastive Learning [21.849681446573257]
We study how MMCL extracts useful visual representation from multi-modal pairs.
We show that text pairs induce more semantically consistent and diverse positive pairs, which, according to our analysis, provably benefit downstream generalization.
Inspired by this finding, we propose CLIP-guided resampling methods to significantly improve the downstream performance of SSCL on ImageNet.
arXiv Detail & Related papers (2023-06-07T09:13:56Z)
- Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels [53.68653940062605]
We introduce a novel task, Partial Labeling and Long-Tailed Multi-Label Classification (PLT-MLC).
We find that most existing LT-MLC and PL-MLC approaches fail to solve this degraded MLC setting.
We propose an end-to-end learning framework: COrrection → ModificatIon → balanCe (COMIC).
arXiv Detail & Related papers (2023-04-20T20:05:08Z)
- Beyond Supervised Continual Learning: a Review [69.9674326582747]
Continual Learning (CL) is a flavor of machine learning where the usual assumption of stationary data distribution is relaxed or omitted.
Changes in the data distribution can cause the so-called catastrophic forgetting (CF) effect: an abrupt loss of previous knowledge.
This article reviews literature that studies CL in other settings, such as learning with reduced supervision, fully unsupervised learning, and reinforcement learning.
arXiv Detail & Related papers (2022-08-30T14:44:41Z)
- Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)