Hierarchical Class-Based Curriculum Loss
- URL: http://arxiv.org/abs/2006.03629v1
- Date: Fri, 5 Jun 2020 18:48:57 GMT
- Title: Hierarchical Class-Based Curriculum Loss
- Authors: Palash Goyal and Shalini Ghosh
- Abstract summary: Most real-world data have dependencies between labels, which can be captured by using a hierarchy.
We propose a loss function, hierarchical curriculum loss, with two properties: (i) it satisfies hierarchical constraints present in the label space, and (ii) it provides non-uniform weights to labels based on their levels in the hierarchy.
- Score: 18.941207332233805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classification algorithms in machine learning often assume a flat label
space. However, most real-world data have dependencies between the labels,
which can often be captured by using a hierarchy. Utilizing this relation can
help develop a model capable of satisfying the dependencies and improving model
accuracy and interpretability. Further, as different levels in the hierarchy
correspond to different granularities, penalizing each label equally can be
detrimental to model learning. In this paper, we propose a loss function,
hierarchical curriculum loss, with two properties: (i) it satisfies hierarchical
constraints present in the label space, and (ii) it provides non-uniform weights
to labels based on their levels in the hierarchy, learned implicitly by the
training paradigm. We theoretically show that the proposed loss function is a
tighter bound on the 0-1 loss than any other loss satisfying the hierarchical
constraints. We test our loss function on real-world image datasets and show
that it significantly outperforms multiple baselines.
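The abstract does not spell out the formulation, but the two stated properties can be illustrated with a minimal PyTorch sketch: property (i) by propagating scores down the label tree so a child never outscores its parent, and property (ii) by a curriculum-style weight that keeps only currently easy labels. The toy hierarchy, the 1.0 loss threshold, and all names below are illustrative assumptions, not the authors' implementation.

```python
import torch

# Toy 5-label hierarchy: parent[i] is the parent index of label i (-1 = root).
# Parents are indexed before their children, so one forward pass suffices.
parent = [-1, 0, 0, 1, 1]
targets = torch.tensor([1., 1., 0., 1., 0.])   # multi-label ground truth
logits = torch.randn(5, requires_grad=True)
probs = torch.sigmoid(logits)

# (i) Hierarchical constraint: a child's score never exceeds its parent's,
# enforced here by propagating the minimum down the tree.
scores = []
for i, p in enumerate(parent):
    s = probs[i] if p < 0 else torch.minimum(probs[i], scores[p])
    scores.append(s)
scores = torch.stack(scores)

# Per-label binary cross-entropy on the constrained scores.
per_label = -(targets * torch.log(scores + 1e-8)
              + (1 - targets) * torch.log(1 - scores + 1e-8))

# (ii) Curriculum-style non-uniform weighting: keep only labels whose current
# loss is below a threshold, so easy labels dominate early in training.
keep = (per_label.detach() < 1.0).float()
loss = (keep * per_label).sum() / keep.sum().clamp(min=1.0)
loss.backward()
```

Under this kind of weighting, coarse levels (which tend to be easier) plausibly receive effective weight earlier in training, consistent with the level-dependent behaviour the abstract describes.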
Related papers
- Losses over Labels: Weakly Supervised Learning via Direct Loss Construction [71.11337906077483]
Programmable weak supervision is a growing paradigm within machine learning.
We propose Losses over Labels (LoL), which creates losses directly from the labeling functions without going through the intermediate step of a label.
We show that LoL improves upon existing weak supervision methods on several benchmark text and image classification tasks.
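A minimal sketch of this direct-loss idea, assuming the usual weak-supervision setup in which labeling functions vote for a class or abstain (this is a generic construction, not the paper's exact one): add one loss term per firing labeling function instead of first aggregating votes into a pseudo-label.

```python
import torch
import torch.nn.functional as F

def lol_style_loss(logits, lf_votes):
    """logits: (batch, classes); lf_votes: (batch, num_lfs), -1 = abstain."""
    loss = logits.new_zeros(())
    count = 0
    for j in range(lf_votes.shape[1]):
        mask = lf_votes[:, j] >= 0            # examples where LF j fired
        if mask.any():
            # one cross-entropy term per labeling function, no pseudo-label step
            loss = loss + F.cross_entropy(logits[mask], lf_votes[mask, j])
            count += 1
    return loss / max(count, 1)

logits = torch.randn(4, 3, requires_grad=True)
votes = torch.tensor([[0, -1], [1, 1], [-1, 2], [0, 0]])
lol_style_loss(logits, votes).backward()
```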
arXiv Detail & Related papers (2022-12-13T22:29:14Z)
- Learning Hierarchy Aware Features for Reducing Mistake Severity [3.704832909610283]
We propose a novel approach for learning hierarchy-aware features (HAF).
HAF is a training-time approach that reduces the severity of mistakes while maintaining top-1 error, thereby addressing the shortcoming of cross-entropy loss, which treats all mistakes as equal.
We evaluate HAF on three hierarchical datasets and achieve state-of-the-art results on the iNaturalist-19 and CIFAR-100 datasets.
arXiv Detail & Related papers (2022-07-26T04:24:47Z)
- Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework [75.79736930414715]
We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy preserving losses, which jointly apply a hierarchical penalty to the contrastive loss, and enforce the hierarchy constraint.
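As a rough sketch of what a hierarchy-preserving contrastive penalty can look like (a generic construction, not the paper's exact losses): run a supervised contrastive term once per hierarchy level and weight finer levels more heavily, so pairs sharing only a coarse ancestor are pulled together more weakly. The weight lam and the temperature tau below are arbitrary illustrative choices.

```python
import torch
import torch.nn.functional as F

def supcon(z, labels, tau=0.1):
    """Supervised contrastive loss over one level of labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, -1e9), dim=1, keepdim=True)
    return -log_prob[pos].mean()

def hierarchical_supcon(z, level_labels, lam=2.0):
    # level_labels: label tensors, coarsest first; weight grows with depth.
    return sum(lam**k * supcon(z, y) for k, y in enumerate(level_labels))

z = torch.randn(8, 16, requires_grad=True)
coarse = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
fine = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
hierarchical_supcon(z, [coarse, fine]).backward()
```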
arXiv Detail & Related papers (2022-04-27T21:41:44Z)
- Label Hierarchy Transition: Delving into Class Hierarchies to Enhance Deep Classifiers [40.993137740456014]
We propose a unified probabilistic framework based on deep learning to address the challenges of hierarchical classification.
The proposed framework can be readily adapted to any existing deep network with only minor modifications.
We extend our proposed LHT framework to the skin lesion diagnosis task and validate its great potential in computer-aided diagnosis.
arXiv Detail & Related papers (2021-12-04T14:58:36Z)
- Rank-based loss for learning hierarchical representations [7.421724671710886]
In machine learning, the family of methods that uses this 'extra' hierarchical information is called hierarchical classification.
Here we focus on how to integrate the hierarchical information of a problem to learn embeddings representative of the hierarchical relationships.
We show that a rank-based loss is suitable for learning hierarchical representations of the data.
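One plausible reading of a rank-based objective (an assumption for illustration, not the paper's exact loss): pairs with a smaller hierarchy rank should also have a smaller embedding distance, with violations penalized by a margin hinge.

```python
import torch

def rank_based_loss(emb, hier_rank, margin=0.1):
    d = torch.cdist(emb, emb)                         # pairwise distances
    iu = torch.triu_indices(len(emb), len(emb), offset=1)
    d = d[iu[0], iu[1]]                               # one value per pair
    r = hier_rank[iu[0], iu[1]]
    # for pairs (a, b) with rank(a) < rank(b), want d_a + margin < d_b
    lower = r[:, None] < r[None, :]
    return torch.relu(d[:, None] - d[None, :] + margin)[lower].mean()

emb = torch.randn(4, 8, requires_grad=True)
# hier_rank[i, j]: 0 = same fine class, 1 = same parent only, 2 = otherwise
rank = torch.tensor([[0, 0, 1, 2],
                     [0, 0, 1, 2],
                     [1, 1, 0, 2],
                     [2, 2, 2, 0]])
rank_based_loss(emb, rank).backward()
```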
arXiv Detail & Related papers (2021-10-11T10:32:45Z)
- Hierarchical Proxy-based Loss for Deep Metric Learning [32.10423536428467]
Proxy-based metric learning losses are superior to pair-based losses due to their fast convergence and low training complexity.
We present a framework that leverages the implicit hierarchy among classes by imposing a hierarchical structure on the proxies.
Results demonstrate that our hierarchical proxy-based loss framework improves the performance of existing proxy-based losses.
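A minimal sketch of imposing a hierarchy on proxies (again an illustrative construction, not the paper's method): a Proxy-NCA-style attraction term over fine-class proxies, plus a regularizer keeping each fine proxy near its coarse parent proxy. The temperature 0.1 and weight alpha are arbitrary.

```python
import torch
import torch.nn.functional as F

n_fine, n_coarse, dim = 6, 2, 16
fine_proxies = torch.nn.Parameter(torch.randn(n_fine, dim))
coarse_proxies = torch.nn.Parameter(torch.randn(n_coarse, dim))
fine_to_coarse = torch.tensor([0, 0, 0, 1, 1, 1])   # parent of each fine class

def hierarchical_proxy_loss(emb, labels, alpha=0.1):
    emb = F.normalize(emb, dim=1)
    p = F.normalize(fine_proxies, dim=1)
    logits = emb @ p.t()                          # cosine similarity to proxies
    nca = F.cross_entropy(logits / 0.1, labels)   # Proxy-NCA-style attraction
    # hierarchy term: each fine proxy stays close to its coarse parent proxy
    parents = F.normalize(coarse_proxies, dim=1)[fine_to_coarse]
    reg = (1 - (p * parents).sum(dim=1)).mean()
    return nca + alpha * reg

emb = torch.randn(4, dim, requires_grad=True)
labels = torch.tensor([0, 2, 3, 5])
hierarchical_proxy_loss(emb, labels).backward()
```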
arXiv Detail & Related papers (2021-03-25T00:38:33Z)
- Learning by Minimizing the Sum of Ranked Range [58.24935359348289]
We introduce the sum of ranked range (SoRR) as a general approach to form learning objectives.
A ranked range is a consecutive sequence of sorted values of a set of real numbers.
We explore two applications in machine learning of the minimization of the SoRR framework, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification.
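The ranked range is concrete enough to sketch directly from the definition above: sort the values in non-increasing order and sum those ranked from k+1 through m. The AoRR aggregate loss averages this range of individual losses, ignoring the k largest values (possible outliers) and everything below rank m; the k and m below are arbitrary illustrative choices.

```python
import torch

def sorr(values, k, m):
    """Sum of ranked range: sum of the (k+1)-th through m-th largest values."""
    sorted_vals, _ = torch.sort(values, descending=True)
    return sorted_vals[k:m].sum()

losses = torch.tensor([9.0, 0.5, 3.0, 1.0, 7.0])
print(sorr(losses, k=1, m=4))        # 7.0 + 3.0 + 1.0 = 11.0
aorr = sorr(losses, k=1, m=4) / (4 - 1)   # AoRR-style average of the range
```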
arXiv Detail & Related papers (2020-10-05T01:58:32Z)
- Exploring the Hierarchy in Relation Labels for Scene Graph Generation [75.88758055269948]
Experiments show that the proposed simple yet effective method can improve several state-of-the-art baselines by a large margin (up to 33% relative gain) in terms of Recall@50.
arXiv Detail & Related papers (2020-09-12T17:36:53Z)
- Self-Learning with Rectification Strategy for Human Parsing [73.06197841003048]
We propose a trainable graph reasoning method to correct two typical errors in the pseudo-labels.
The reconstructed features have a stronger ability to represent the topological structure of the human body.
Our method outperforms other state-of-the-art methods in supervised human parsing tasks.
arXiv Detail & Related papers (2020-04-17T03:51:30Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
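The infimum-loss idea summarized above admits a compact sketch: each datapoint comes with a candidate set known to contain the true label, and the model pays only the smallest loss over that set. The function below illustrates the general recipe, not the paper's structured-prediction formulation.

```python
import torch
import torch.nn.functional as F

def infimum_loss(logits, candidate_sets):
    """logits: (batch, classes); candidate_sets: list of candidate label lists."""
    losses = []
    for i, cands in enumerate(candidate_sets):
        per_cand = torch.stack([
            F.cross_entropy(logits[i:i + 1], torch.tensor([c])) for c in cands
        ])
        losses.append(per_cand.min())        # infimum over the candidate set
    return torch.stack(losses).mean()

logits = torch.randn(3, 4, requires_grad=True)
candidates = [[0, 1], [2], [1, 3]]           # each set contains the true label
infimum_loss(logits, candidates).backward()
```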
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.