Harnessing Superclasses for Learning from Hierarchical Databases
- URL: http://arxiv.org/abs/2411.16438v1
- Date: Mon, 25 Nov 2024 14:39:52 GMT
- Title: Harnessing Superclasses for Learning from Hierarchical Databases
- Authors: Nicolas Urbani, Sylvain Rousseau, Yves Grandvalet, Leonardo Tanzi,
- Abstract summary: In many large-scale classification problems, classes are organized in a known hierarchy, typically represented as a tree.
We introduce a loss for this type of supervised hierarchical classification.
Our approach does not entail any significant additional computational cost compared with the cross-entropy loss.
- Score: 1.835004446596942
- Abstract: In many large-scale classification problems, classes are organized in a known hierarchy, typically represented as a tree expressing the inclusion of classes in superclasses. We introduce a loss for this type of supervised hierarchical classification. It utilizes the knowledge of the hierarchy to assign each example not only to a class but also to all encompassing superclasses. Applicable to any feedforward architecture with a softmax output layer, this loss is a proper scoring rule, in that its expectation is minimized by the true posterior class probabilities. This property allows us to simultaneously pursue consistent classification objectives between superclasses and fine-grained classes, and eliminates the need for a performance trade-off between different granularities. We conduct an experimental study on three reference benchmarks, in which we vary the size of the training sets to cover a diverse set of learning scenarios. Our approach does not entail any significant additional computational cost compared with the cross-entropy loss. It improves accuracy and reduces the number of coarse errors, that is, predictions whose labels are distant from the ground-truth labels in the tree.
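The abstract does not spell out the loss itself, but a minimal PyTorch sketch of a hierarchy-aware cross-entropy of this kind could look as follows; the `ancestor_masks` encoding and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' exact loss): hierarchy-aware
# cross-entropy that also supervises every ancestor superclass.
# `ancestor_masks[k]` is an assumed (n_super_k, n_leaves) 0/1 matrix
# mapping each leaf to its superclass at level k of the tree.
import torch
import torch.nn.functional as F

def hierarchical_nll(logits, leaf_targets, ancestor_masks, level_weights=None):
    """logits: (batch, n_leaves); leaf_targets: (batch,) leaf indices."""
    p_leaf = F.softmax(logits, dim=1)                    # leaf posteriors
    loss = F.nll_loss(torch.log(p_leaf + 1e-12), leaf_targets)
    weights = level_weights or [1.0] * len(ancestor_masks)
    for w, mask in zip(weights, ancestor_masks):
        p_super = p_leaf @ mask.T                        # sum leaves per superclass
        t_super = mask[:, leaf_targets].argmax(dim=0)    # ancestor of each target
        loss = loss + w * F.nll_loss(torch.log(p_super + 1e-12), t_super)
    return loss
```

Each superclass probability is just a sum of leaf softmax probabilities, so the overhead over plain cross-entropy is a few matrix products, consistent with the computational-cost claim above.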
Related papers
- Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
Achieving optimal average accuracy can come at the cost of significantly hurting individual class accuracy, by as much as 20% on ImageNet.
We present a framework for understanding how data augmentation (DA) interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
arXiv Detail & Related papers (2023-12-07T18:37:43Z)
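A hypothetical sketch of what a class-conditional augmentation policy might look like, assuming per-class validation accuracy has already identified the hurt classes; `HURT_CLASSES` and both pipelines are placeholders, not the paper's recipe.

```python
# Hypothetical sketch: classes that DA hurts (e.g., identified by a
# per-class validation accuracy drop) receive a milder transform.
import torch
import torchvision.transforms as T

strong_aug = T.Compose([T.RandomResizedCrop(224), T.RandomHorizontalFlip(),
                        T.ColorJitter(0.4, 0.4, 0.4), T.ToTensor()])
mild_aug = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
HURT_CLASSES = {3, 17}  # placeholder ids of negatively affected classes

class ClassConditionalDataset(torch.utils.data.Dataset):
    """Wraps a dataset yielding (PIL image, label) pairs."""
    def __init__(self, base):
        self.base = base
    def __len__(self):
        return len(self.base)
    def __getitem__(self, i):
        img, y = self.base[i]
        aug = mild_aug if y in HURT_CLASSES else strong_aug
        return aug(img), y
```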
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often recast as classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
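A hedged sketch of the general recipe behind such methods: quantize a long-tailed continuous target at several granularities so that coarse heads see better-balanced bins. The bin counts are arbitrary and the paper's actual adjustment rule is not reproduced here.

```python
# Hedged sketch of the general recipe (not the paper's HCA rule):
# quantize a long-tailed continuous target at several granularities;
# coarse bins are better balanced, fine bins are more precise.
import numpy as np

def make_bins(y, n_bins):
    edges = np.linspace(y.min(), y.max(), n_bins + 1)
    return np.digitize(y, edges[1:-1]), edges    # bin ids in 0..n_bins-1

y = np.random.exponential(scale=10.0, size=1000)  # imbalanced targets
coarse, coarse_edges = make_bins(y, 5)            # head of the hierarchy
fine, fine_edges = make_bins(y, 50)               # leaves
# One classifier head per level would be trained on these labels; an
# adjustment step would then reconcile fine predictions with coarse ones.
```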
- Enhancing Classification with Hierarchical Scalable Query on Fusion Transformer [0.4129225533930965]
This paper proposes a method to boost fine-grained classification through a hierarchical approach via learnable independent query embeddings.
We exploit the idea of hierarchy to learn query embeddings that are scalable across all levels.
Our method outperforms existing methods, with an improvement of 11% on fine-grained classification.
arXiv Detail & Related papers (2023-02-28T11:00:55Z)
- Hierarchical classification at multiple operating points [1.520694326234112]
We present an efficient algorithm to produce operating characteristic curves for any method that assigns a score to every class in the hierarchy.
We propose two novel loss functions and show that a soft variant of the structured hinge loss is able to significantly outperform the flat baseline.
arXiv Detail & Related papers (2022-10-19T23:36:16Z)
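A minimal sketch of one way such curves can be generated, assuming a score per node: predict the deepest node on the best root-to-leaf path that clears a confidence threshold, then sweep the threshold. The node names are invented for illustration.

```python
# Minimal sketch, with invented node names: sweeping the threshold
# trades specificity (depth in the tree) against correctness.
def predict_at_threshold(node_scores, path_to_best_leaf, tau):
    pred = path_to_best_leaf[0]              # the root always qualifies
    for node in path_to_best_leaf[1:]:
        if node_scores[node] < tau:
            break                            # stop at the first weak node
        pred = node
    return pred

scores = {"root": 1.0, "animal": 0.9, "dog": 0.55, "beagle": 0.3}
for tau in (0.2, 0.5, 0.8):
    print(tau, predict_at_threshold(scores, ["root", "animal", "dog", "beagle"], tau))
```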
- Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework [75.79736930414715]
We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy-preserving losses that jointly apply a hierarchical penalty to the contrastive loss and enforce the hierarchy constraint.
arXiv Detail & Related papers (2022-04-27T21:41:44Z)
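A hedged sketch of a hierarchy-weighted supervised contrastive objective in this spirit, assuming the deepest shared level of each pair is precomputed; it is not the paper's exact loss.

```python
# Hedged sketch: pairs sharing a fine label pull together strongly;
# pairs sharing only a coarse ancestor pull together weakly.
import torch
import torch.nn.functional as F

def hier_contrastive(z, shared_level, level_weights, temp=0.1):
    """z: (n, d) L2-normalized embeddings; shared_level[i, j]: deepest
    level labels i and j share (0 = none); both assumed precomputed."""
    n = z.size(0)
    sim = z @ z.T / temp
    log_p = F.log_softmax(sim - torch.eye(n) * 1e9, dim=1)  # mask self-pairs
    w = level_weights[shared_level] * (1 - torch.eye(n))    # pair weights
    return -(w * log_p).sum() / w.sum().clamp(min=1e-12)
```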
- Rank-based loss for learning hierarchical representations [7.421724671710886]
In machine learning, the family of methods that uses the 'extra' hierarchical information is called hierarchical classification.
Here we focus on how to integrate the hierarchical information of a problem to learn embeddings representative of the hierarchical relationships.
We show that the rank-based loss is suitable for learning hierarchical representations of the data.
arXiv Detail & Related papers (2021-10-11T10:32:45Z)
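A hedged sketch of the rank-ordering idea: for each anchor, an example closer in the label tree should also be closer in embedding space. The brute-force loops are for clarity only and this is not the paper's formulation.

```python
# Hedged sketch: enforce, pairwise with a margin, that tree-closer
# examples stay embedding-closer than tree-farther ones.
import torch
import torch.nn.functional as F

def rank_based_loss(d_embed, d_tree, margin=0.1):
    """d_embed, d_tree: (n, n) pairwise embedding and tree distances."""
    n = d_embed.size(0)
    loss, count = 0.0, 0
    for a in range(n):
        for i in range(n):
            for j in range(n):
                if d_tree[a, i] < d_tree[a, j]:  # i should stay closer than j
                    loss = loss + F.relu(d_embed[a, i] - d_embed[a, j] + margin)
                    count += 1
    return loss / max(count, 1)
```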
- Multi-Class Classification from Single-Class Data with Confidences [90.48669386745361]
We propose an empirical risk minimization framework that is loss-/model-/optimizer-independent.
We show that our method can be Bayes-consistent with a simple modification even if the provided confidences are highly noisy.
arXiv Detail & Related papers (2021-06-16T15:38:13Z)
- No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems [20.253644336965042]
In real-world classification tasks, each class often comprises multiple finer-grained "subclasses".
As the subclass labels are frequently unavailable, models trained using only the coarser-grained class labels often exhibit highly variable performance across different subclasses.
We propose GEORGE, a method to both measure and mitigate hidden stratification even when subclass labels are unknown.
arXiv Detail & Related papers (2020-11-25T18:50:32Z)
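A minimal sketch of the measurement stage as described above: cluster each class's features to obtain subclass pseudo-labels, on which a robust (e.g., worst-group) objective could then be trained. The feature source and cluster count are assumptions.

```python
# Minimal sketch: per-class clustering to surface hidden subclasses.
import numpy as np
from sklearn.cluster import KMeans

def estimate_subclasses(features, labels, k_per_class=2):
    """features: (n, d) penultimate-layer activations of a trained model."""
    pseudo = np.zeros(len(labels), dtype=int)
    offset = 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        km = KMeans(n_clusters=k_per_class, n_init=10).fit(features[idx])
        pseudo[idx] = km.labels_ + offset    # globally unique subclass ids
        offset += k_per_class
    return pseudo
```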
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Forest R-CNN: Large-Vocabulary Long-Tailed Object Detection and Instance Segmentation [75.93960390191262]
We exploit prior knowledge of the relations among object categories to cluster fine-grained classes into coarser parent classes.
We propose a simple yet effective resampling method, NMS Resampling, to re-balance the data distribution.
Our method, termed Forest R-CNN, can serve as a plug-and-play module applicable to most object recognition models.
arXiv Detail & Related papers (2020-08-13T03:52:37Z)
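A hedged sketch of one plausible reading of frequency-aware resampling at the proposal level: rarer classes keep more candidate boxes via a looser NMS IoU threshold. The frequency tiers and threshold values are invented, not Forest R-CNN's actual settings.

```python
# Hedged sketch: map class frequency to a per-class NMS IoU threshold.
def nms_threshold(class_freq, tiers=((0.01, 0.7), (0.1, 0.6))):
    """class_freq: fraction of training instances belonging to the class."""
    for max_freq, thresh in tiers:
        if class_freq <= max_freq:
            return thresh        # rare/medium classes: looser suppression
    return 0.5                   # frequent classes: standard threshold

print(nms_threshold(0.003), nms_threshold(0.05), nms_threshold(0.4))
```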
- Hierarchical Class-Based Curriculum Loss [18.941207332233805]
Most real-world data have dependencies between labels, which can be captured by using a hierarchy.
We propose a loss function, hierarchical curriculum loss, with two properties: (i) it satisfies hierarchical constraints present in the label space, and (ii) it assigns non-uniform weights to labels based on their levels in the hierarchy.
arXiv Detail & Related papers (2020-06-05T18:48:57Z)
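A minimal sketch of the two stated properties, assuming a simple parent-pointer tree encoding: positive labels are closed under ancestors (the hierarchy constraint) and each node's term is weighted by its depth (non-uniform level weights). This is illustrative, not the paper's curriculum loss.

```python
# Minimal sketch of hierarchy-closed targets with depth-based weights.
import torch
import torch.nn.functional as F

def level_weighted_loss(logits, leaf, parents, level, level_weight):
    """logits: (n_nodes,); parents[i]: parent id or -1 at the root;
    level: (n_nodes,) depths; level_weight: (max_depth + 1,) weights."""
    target = torch.zeros_like(logits)
    node = leaf
    while node != -1:             # mark the leaf and all of its ancestors
        target[node] = 1.0
        node = parents[node]
    w = level_weight[level]       # per-node weight taken from its depth
    per_node = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (w * per_node).sum() / w.sum()
```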