Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework
- URL: http://arxiv.org/abs/2204.13207v1
- Date: Wed, 27 Apr 2022 21:41:44 GMT
- Title: Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework
- Authors: Shu Zhang and Ran Xu and Caiming Xiong and Chetan Ramaiah
- Abstract summary: We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy-preserving losses, which jointly apply a hierarchical penalty to the contrastive loss and enforce the hierarchy constraint.
- Score: 75.79736930414715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current contrastive learning frameworks focus on leveraging a
single supervisory signal to learn representations, which limits their
efficacy on unseen data and downstream tasks. In this paper, we present a
hierarchical multi-label representation learning framework that can leverage
all available labels and preserve the hierarchical relationship between
classes. We introduce novel hierarchy-preserving losses, which jointly apply
a hierarchical penalty to the contrastive loss and enforce the hierarchy
constraint. The loss function is data driven and automatically adapts to
arbitrary multi-label structures. Experiments on several datasets show that
our relationship-preserving embedding performs well on a variety of tasks and
outperforms the baseline supervised and self-supervised approaches. Code is
available at https://github.com/salesforce/hierarchicalContrastiveLearning.
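The repository above contains the authors' implementation; the following is only a minimal PyTorch sketch of the central idea, namely applying a supervised contrastive loss at every level of the label hierarchy and combining the levels with penalty weights. The function name, the label encoding (one coarse-to-fine label path per sample), and the uniform default weights are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of a hierarchy-penalized supervised contrastive loss,
# assuming labels[:, l] holds each sample's class at hierarchy level l.
# The per-level weights stand in for the paper's data-driven penalty.
import torch

def hierarchical_supcon_loss(features, labels, temperature=0.1,
                             level_weights=None):
    """features: (N, D) L2-normalized embeddings; labels: (N, L) label paths,
    column 0 coarsest. Positives at level l share the label in column l."""
    n, num_levels = labels.shape
    if level_weights is None:
        level_weights = [1.0 / num_levels] * num_levels  # assumed uniform
    sim = features @ features.T / temperature            # pairwise similarity
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    exp_logits = torch.exp(logits).masked_fill(self_mask, 0.0)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True))
    loss = 0.0
    for level in range(num_levels):
        # positives at this level share the label in column `level`
        pos = labels[:, level].unsqueeze(0) == labels[:, level].unsqueeze(1)
        pos = pos & ~self_mask
        pos_count = pos.sum(dim=1).clamp(min=1)
        mean_log_prob_pos = (log_prob * pos).sum(dim=1) / pos_count
        loss = loss - level_weights[level] * mean_log_prob_pos.mean()
    return loss
```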
Related papers
- Harnessing Superclasses for Learning from Hierarchical Databases [1.835004446596942]
In many large-scale classification problems, classes are organized in a known hierarchy, typically represented as a tree.
We introduce a loss for this type of supervised hierarchical classification.
Our approach does not entail any significant additional computational cost compared with the standard cross-entropy loss.
arXiv Detail & Related papers (2024-11-25T14:39:52Z)
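The summary above does not spell the loss out, so as a hedged illustration of the general idea only: given a known class tree, leaf probabilities can be pooled into superclass probabilities and supervised at both levels for essentially the cost of one extra aggregation. The names `superclass_loss` and `leaf_to_super` are hypothetical, not the paper's.

```python
# A minimal sketch of superclass-aware training under an assumed fixed tree:
# leaf probabilities are summed within each superclass and both levels are
# supervised. This is one common construction, not necessarily the paper's.
import torch
import torch.nn.functional as F

def superclass_loss(logits, leaf_targets, leaf_to_super, num_super, alpha=0.5):
    """logits: (N, C) over leaf classes; leaf_targets: (N,) leaf labels;
    leaf_to_super: (C,) long tensor mapping each leaf to its superclass."""
    leaf_loss = F.cross_entropy(logits, leaf_targets)
    probs = logits.softmax(dim=1)                        # (N, C)
    # sum leaf probabilities within each superclass
    super_probs = torch.zeros(logits.size(0), num_super, device=logits.device)
    super_probs.index_add_(1, leaf_to_super, probs)
    super_targets = leaf_to_super[leaf_targets]
    super_loss = F.nll_loss(torch.log(super_probs + 1e-12), super_targets)
    return leaf_loss + alpha * super_loss
```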
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
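A hedged sketch of the binary-decomposition half of that idea: treat each of the C labels as its own binary problem and match the student's per-label probabilities to the teacher's. The temperature and BCE formulation are common distillation choices, not necessarily the paper's exact losses, and the feature-level part is omitted.

```python
# A minimal sketch of logit-level multi-label distillation: one binary
# target per label, taken from the teacher's sigmoid outputs.
import torch
import torch.nn.functional as F

def multilabel_distill_loss(student_logits, teacher_logits, temperature=2.0):
    """Both inputs: (N, C) raw logits from multi-label (sigmoid) heads."""
    with torch.no_grad():
        teacher_probs = torch.sigmoid(teacher_logits / temperature)
    # one binary cross-entropy per label, averaged over labels and batch
    return F.binary_cross_entropy_with_logits(
        student_logits / temperature, teacher_probs)
```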
- SEAL: Simultaneous Label Hierarchy Exploration And Learning [9.701914280306118]
We propose a new framework that explores the label hierarchy by augmenting the observed labels with latent labels that follow a prior hierarchical structure.
Our approach uses a 1-Wasserstein metric over the tree metric space as an objective function, which enables us to simultaneously learn a data-driven label hierarchy and perform (semi-supervised) learning.
arXiv Detail & Related papers (2023-04-26T08:31:59Z)
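The 1-Wasserstein distance over a tree metric has a well-known closed form: for each edge, the transported mass equals the probability-mass imbalance of the subtree below it. A minimal sketch under an assumed node encoding (topological order, children after parents); this is not SEAL's implementation.

```python
# Closed-form 1-Wasserstein distance on a tree metric. Assumed encoding:
# node 0 is the root (parent[0] == 0), children have larger indices than
# their parents, and edge_weight[v] weights the edge (v, parent[v]).
import torch

def tree_wasserstein(p, q, parent, edge_weight):
    """p, q: (V,) distributions over tree nodes; parent: list[int]."""
    imbalance = p - q
    # accumulate subtree mass bottom-up (reverse topological order)
    subtree = imbalance.clone()
    for v in range(len(parent) - 1, 0, -1):
        subtree[parent[v]] += subtree[v]
    # each non-root edge moves |subtree imbalance| units of mass
    dist = 0.0
    for v in range(1, len(parent)):
        dist += edge_weight[v] * subtree[v].abs()
    return dist
```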
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We break through the view-level weights inherent in existing methods and propose a quality-aware sub-network to dynamically assign quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
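A hedged sketch of quality-aware view fusion under assumptions: a small scoring head assigns one score per observed view of each sample, and a masked softmax turns the scores into fusion weights so missing views get zero weight. The module name and architecture are illustrative stand-ins for RANK's sub-network.

```python
# A minimal sketch of per-sample, per-view quality scoring and fusion.
# Assumes every sample has at least one observed view.
import torch
import torch.nn as nn

class QualityAwareFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, dim // 2),
                                    nn.ReLU(),
                                    nn.Linear(dim // 2, 1))

    def forward(self, views, present):
        """views: (N, V, D) per-view embeddings; present: (N, V) bool mask
        marking which views are observed for each sample."""
        scores = self.scorer(views).squeeze(-1)            # (N, V)
        scores = scores.masked_fill(~present, float('-inf'))
        weights = scores.softmax(dim=1)                    # missing views -> 0
        return (weights.unsqueeze(-1) * views).sum(dim=1)  # (N, D)
```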
- Multi-Label Continual Learning using Augmented Graph Convolutional Network [7.115602040521868]
Multi-Label Continual Learning (MLCL) builds a class-incremental framework in a sequential multi-label image recognition data stream.
The study proposes an Augmented Graph Convolutional Network (AGCN++) that can construct the cross-task label relationships in MLCL.
The proposed method is evaluated using two multi-label image benchmarks.
arXiv Detail & Related papers (2022-11-27T08:40:19Z)
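A minimal sketch of one graph-convolution step over a label-relationship graph, the building block AGCN-style models use to propagate information between label embeddings; the symmetric normalization and single layer are standard GCN choices, not the paper's exact architecture.

```python
# One standard GCN layer applied to label embeddings, where the adjacency
# encodes label relationships (e.g. co-occurrence statistics).
import torch
import torch.nn as nn

class LabelGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, label_emb, adj):
        """label_emb: (C, D) label embeddings; adj: (C, C) nonnegative
        label-relationship matrix."""
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        norm = a_hat.sum(dim=1).rsqrt()                          # D^{-1/2}
        a_norm = norm.unsqueeze(1) * a_hat * norm.unsqueeze(0)
        return torch.relu(self.linear(a_norm @ label_emb))
```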
- Label Hierarchy Transition: Delving into Class Hierarchies to Enhance Deep Classifiers [40.993137740456014]
We propose a unified probabilistic framework based on deep learning to address the challenges of hierarchical classification.
The proposed framework can be readily adapted to any existing deep network with only minor modifications.
We extend our proposed LHT framework to the skin lesion diagnosis task and validate its great potential in computer-aided diagnosis.
arXiv Detail & Related papers (2021-12-04T14:58:36Z)
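A hedged reading of the transition idea: fine-level class probabilities are mapped to coarse-level ones through a learnable row-stochastic matrix, so both hierarchy levels can be supervised jointly and the module bolts onto an existing classifier head with only minor changes. The parameterization below is illustrative, not the paper's exact formulation.

```python
# A minimal sketch of a learnable transition between adjacent hierarchy
# levels; softmax over rows keeps the transition matrix stochastic.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelTransition(nn.Module):
    def __init__(self, num_fine, num_coarse):
        super().__init__()
        # zeros -> uniform transition at initialization
        self.weight = nn.Parameter(torch.zeros(num_fine, num_coarse))

    def forward(self, fine_logits, fine_targets, coarse_targets):
        fine_probs = fine_logits.softmax(dim=1)            # (N, F)
        trans = self.weight.softmax(dim=1)                 # rows sum to 1
        coarse_probs = fine_probs @ trans                  # (N, K)
        return (F.cross_entropy(fine_logits, fine_targets)
                + F.nll_loss(torch.log(coarse_probs + 1e-12), coarse_targets))
```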
- MATCH: Metadata-Aware Text Classification in A Large Hierarchy [60.59183151617578]
MATCH is an end-to-end framework that leverages both metadata and hierarchy information.
We propose different ways to regularize the parameters and output probability of each child label by its parents.
Experiments on two massive text datasets with large-scale label hierarchies demonstrate the effectiveness of MATCH.
arXiv Detail & Related papers (2021-02-15T05:23:08Z)
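The output-probability half of that regularization can be sketched as a hinge penalty on any child label predicted more confidently than its parent; the squared-hinge form is an assumption, and MATCH's parameter-level regularization is omitted here.

```python
# A minimal sketch of "child probability should not exceed parent
# probability" as a soft penalty over all hierarchy edges.
import torch

def parent_child_penalty(probs, child_idx, parent_idx):
    """probs: (N, C) per-label probabilities; child_idx/parent_idx: (E,)
    long tensors listing every (child, parent) edge in the hierarchy."""
    violation = probs[:, child_idx] - probs[:, parent_idx]   # (N, E)
    return violation.clamp(min=0).pow(2).mean()
```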
- Exploring the Hierarchy in Relation Labels for Scene Graph Generation [75.88758055269948]
Experiments show that the proposed simple yet effective method can improve several state-of-the-art baselines by a large margin (up to 33% relative gain) in terms of Recall@50.
arXiv Detail & Related papers (2020-09-12T17:36:53Z)
- Hierarchical Class-Based Curriculum Loss [18.941207332233805]
Most real-world data have dependencies between labels, which can be captured by using a hierarchy.
We propose a loss function, the hierarchical curriculum loss, with two properties: (i) it satisfies hierarchical constraints present in the label space, and (ii) it assigns non-uniform weights to labels based on their levels in the hierarchy.
arXiv Detail & Related papers (2020-06-05T18:48:57Z)
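A minimal sketch of property (ii) above, assuming per-label binary losses scaled by a depth-dependent weight so coarse labels dominate early learning; the exponential decay is an illustrative choice, not the paper's schedule, and the constraint-satisfaction part (i) is not shown.

```python
# Level-weighted multi-label loss: each label's binary loss is scaled by
# decay**depth, so labels near the root of the hierarchy weigh more.
import torch
import torch.nn.functional as F

def level_weighted_bce(logits, targets, label_depth, decay=0.5):
    """logits, targets: (N, C); label_depth: (C,) depth of each label
    in the hierarchy (children of the root have depth 0)."""
    per_label = F.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction='none')        # (N, C)
    weights = decay ** label_depth.float()                 # (C,)
    return (per_label * weights).mean()
```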
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
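The infimum loss is simple to state concretely: each datapoint carries a candidate set known to contain the true label, and the model is charged only for the best label in that set. A minimal sketch with cross-entropy as the base loss; the paper's framework covers general structured losses.

```python
# Infimum (minimum) loss over each datapoint's candidate label set,
# assuming every candidate set is nonempty.
import torch

def infimum_loss(logits, candidate_mask):
    """logits: (N, C); candidate_mask: (N, C) bool, True for labels in each
    datapoint's candidate set."""
    log_probs = logits.log_softmax(dim=1)                  # per-label CE = -log p
    per_label = -log_probs.masked_fill(~candidate_mask, float('inf'))
    return per_label.min(dim=1).values.mean()              # infimum over the set
```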