Test-Time Amendment with a Coarse Classifier for Fine-Grained
Classification
- URL: http://arxiv.org/abs/2302.00368v2
- Date: Mon, 30 Oct 2023 09:21:32 GMT
- Title: Test-Time Amendment with a Coarse Classifier for Fine-Grained
Classification
- Authors: Kanishk Jain, Shyamgopal Karthik, Vineet Gandhi
- Abstract summary: We present a novel approach for Post-Hoc Correction called Hierarchical Ensembles (HiE).
HiE utilizes the label hierarchy to improve fine-grained classification at test time using coarse-grained predictions.
Our approach brings notable gains in top-1 accuracy while significantly decreasing the severity of mistakes as training data for the fine-grained classes decreases.
- Score: 10.719054378755981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the problem of reducing mistake severity for fine-grained
classification. Fine-grained classification can be challenging, mainly due to
the requirement of domain expertise for accurate annotation. However, humans
are particularly adept at performing coarse classification as it requires
relatively low levels of expertise. To this end, we present a novel approach
for Post-Hoc Correction called Hierarchical Ensembles (HiE) that utilizes label
hierarchy to improve the performance of fine-grained classification at
test time using the coarse-grained predictions. By requiring only the parents
of leaf nodes, our method significantly reduces average mistake severity while
improving top-1 accuracy on the iNaturalist-19 and tieredImageNet-H datasets,
achieving a new state-of-the-art on both benchmarks. We also investigate the
efficacy of our approach in the semi-supervised setting. Our approach brings
notable gains in top-1 accuracy while significantly decreasing the severity of
mistakes as training data for the fine-grained classes decreases. The
simplicity and post-hoc nature of HiE make it practical to use with any
off-the-shelf trained model to further improve its predictions.
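Because HiE is post-hoc, the amendment can be expressed in a few lines. Below is a minimal NumPy sketch, assuming the correction reweights each leaf probability by the coarse classifier's probability of its parent and then renormalizes; the helper name `hie_amend` is illustrative and the paper's exact combination rule may differ.

```python
import numpy as np

def hie_amend(p_fine, p_coarse, parent):
    """Test-time amendment of fine-grained probabilities with a coarse
    classifier (sketch of the HiE idea, not the paper's exact rule).

    p_fine  : (N, F) softmax probabilities over fine (leaf) classes
    p_coarse: (N, C) softmax probabilities over coarse (parent) classes
    parent  : (F,) index of each leaf's parent in the coarse label space
    """
    # Reweight each leaf probability by the coarse probability of its
    # parent, then renormalize so each row is a distribution again.
    amended = p_fine * p_coarse[:, parent]
    return amended / amended.sum(axis=1, keepdims=True)

# Toy example: 4 leaves under 2 parents (leaves 0,1 -> parent 0; 2,3 -> parent 1).
parent = np.array([0, 0, 1, 1])
p_fine = np.array([[0.40, 0.05, 0.45, 0.10]])   # fine model slightly prefers leaf 2
p_coarse = np.array([[0.90, 0.10]])             # coarse model is confident in parent 0
print(hie_amend(p_fine, p_coarse, parent))      # mass shifts toward leaves 0 and 1
```

In the toy example, the confident coarse prediction flips the top-1 decision from leaf 2 to leaf 0, a sibling within the correct coarse class, which is how a severe mistake gets traded for a milder one.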
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- A Saliency-based Clustering Framework for Identifying Aberrant Predictions [49.1574468325115]
We introduce the concept of aberrant predictions, emphasizing that the nature of classification errors is as critical as their frequency.
We propose a novel, efficient training methodology aimed at both reducing the misclassification rate and discerning aberrant predictions.
We apply this methodology to the less-explored domain of veterinary radiology, where the stakes are high but which has not been studied as extensively as human medicine.
arXiv Detail & Related papers (2023-11-11T01:53:59Z)
- Generating Unbiased Pseudo-labels via a Theoretically Guaranteed Chebyshev Constraint to Unify Semi-supervised Classification and Regression [57.17120203327993]
The threshold-to-pseudo-label process (T2L) in classification uses confidence to determine the quality of a label.
Regression likewise requires unbiased methods to generate high-quality labels.
We propose a theoretically guaranteed constraint for generating unbiased labels based on Chebyshev's inequality.
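As a concrete illustration of how Chebyshev's inequality can gate pseudo-labels, the sketch below scores each unlabeled sample by the bound P(|X - mu| >= tol) <= var / tol^2 over an ensemble of predictions; `chebyshev_accept`, `tol`, and `delta` are illustrative names, and the paper derives its own constraint.

```python
import numpy as np

def chebyshev_accept(preds, tol, delta):
    """Gate regression pseudo-labels with Chebyshev's inequality
    (illustration of the general idea only).

    preds: (M, N) predictions from M stochastic passes / ensemble members
    tol  : acceptable deviation of a pseudo-label from the true mean
    delta: acceptable failure probability

    Chebyshev: P(|X - mu| >= tol) <= var / tol**2, so accept samples
    whose bound var / tol**2 is at most delta.
    """
    mu = preds.mean(axis=0)
    var = preds.var(axis=0)
    accept = (var / tol**2) <= delta
    return mu, accept   # pseudo-label and its acceptance mask

rng = np.random.default_rng(0)
preds = rng.normal(loc=2.0, scale=[0.05, 0.8], size=(10, 2))  # tight vs. noisy sample
labels, mask = chebyshev_accept(preds, tol=0.5, delta=0.1)
print(labels, mask)   # the low-variance sample is accepted, the noisy one rejected
```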
arXiv Detail & Related papers (2023-11-03T08:39:35Z)
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
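A minimal sketch of the hierarchical idea, assuming the target range is quantized into fine bins grouped under coarse bins, a coarse classifier reweights the fine-bin distribution, and the regression value is decoded as an expectation over bin centers; HCA's actual adjustment is more involved.

```python
import numpy as np

def hierarchical_decode(p_fine, p_coarse, bin_to_coarse, bin_centers):
    """Sketch: amend fine-bin probabilities with a coarse-bin classifier,
    then decode a continuous value as the expectation over bin centers.
    (Illustrative only; HCA's adjustment scheme is its own.)

    p_fine       : (N, F) probabilities over fine target bins
    p_coarse     : (N, C) probabilities over coarse target bins
    bin_to_coarse: (F,) coarse-bin index of each fine bin
    bin_centers  : (F,) representative target value of each fine bin
    """
    adjusted = p_fine * p_coarse[:, bin_to_coarse]
    adjusted /= adjusted.sum(axis=1, keepdims=True)
    return adjusted @ bin_centers
```

The appeal of the coarse level is statistical: coarse bins pool many more samples each, so their estimates remain reliable in the sparsely populated tail of the target range.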
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
- Soft ascent-descent as a stable and flexible alternative to flooding [6.527016551650139]
We propose a softened, pointwise mechanism called SoftAD that downweights points on the borderline, limits the effects of outliers, and retains the ascent-descent effect of flooding.
We demonstrate how SoftAD can realize classification accuracy competitive with flooding while enjoying a much smaller loss generalization gap and model norm.
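For context, classic flooding (Ishida et al., 2020) keeps the batch-average loss hovering around a flood level b. The sketch below contrasts it with a hypothetical pointwise softening (log-cosh, whose gradient tanh vanishes near b and stays bounded for outliers) that mimics the properties the abstract describes; SoftAD's actual mechanism is defined in the paper.

```python
import numpy as np

def flooding(batch_loss, b):
    """Classic flooding: gradient ascent whenever the average
    training loss dips below the flood level b."""
    return np.abs(batch_loss.mean() - b) + b

def soft_ad(batch_loss, b, temperature=1.0):
    """A hypothetical pointwise softening in the spirit of SoftAD
    (illustrative stand-in, not the paper's definition)."""
    z = (batch_loss - b) / temperature
    # log-cosh behaves like |z| far from b but is smooth around it; its
    # per-point gradient is tanh(z): near zero on the borderline (those
    # points are downweighted), bounded in [-1, 1] for outliers, and it
    # still switches between descent above b and ascent below b.
    return (temperature * np.log(np.cosh(z))).mean() + b
```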
arXiv Detail & Related papers (2023-10-16T02:02:56Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space that retains the original top-1 class while discarding its strongest competitors.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
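A rough sketch of the shrinking step, assuming competitors are removed greedily from the strongest downward until the top-1 class clears a confidence threshold in the renormalized space; `shrink_class_space` and `tau` are illustrative names, and the paper's exact rule and loss differ.

```python
import numpy as np

def shrink_class_space(p_weak, tau=0.95):
    """Return the kept class indices for one uncertain sample
    (sketch of the ShrinkMatch idea, not the paper's exact rule).

    p_weak: (K,) probabilities from the weakly augmented view
    tau   : confidence threshold in the shrunk space
    """
    order = np.argsort(-p_weak)          # classes by descending probability
    top1, rest = order[0], order[1:]
    for k in range(len(rest) + 1):
        # Drop the k strongest competitors; at k == len(rest) only the
        # top-1 class remains, so the loop always terminates with a return.
        keep = np.concatenate(([top1], rest[k:]))
        if p_weak[top1] / p_weak[keep].sum() >= tau:
            return keep
```

The consistency term would then be a cross-entropy between the strongly augmented probabilities renormalized over `keep` and the top-1 class.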
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- Hierarchical Average Precision Training for Pertinent Image Retrieval [0.0]
This paper introduces a new hierarchical AP training method for pertinent image retrieval (HAPPIER).
HAPPIER is based on a new H-AP metric, which integrates the importance of errors and better evaluates rankings.
Experiments on 6 datasets show that HAPPIER significantly outperforms state-of-the-art methods for hierarchical retrieval.
arXiv Detail & Related papers (2022-07-05T07:55:18Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT, and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
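Prototype-based classification itself is simple to sketch: class prototypes are mean embeddings and prediction is nearest-prototype, so no classifier weights are fit and head classes cannot dominate a learned bias. The snippet below is the standard nearest-class-mean recipe the method builds on, not the paper's full training procedure.

```python
import numpy as np

def prototypes(embeddings, labels, num_classes):
    """Class prototypes = mean embedding per class; nothing extra to fit."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def predict(embeddings, protos):
    """Assign each sample to its nearest prototype (Euclidean distance),
    yielding balanced predictions regardless of class frequencies."""
    d = np.linalg.norm(embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)
```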
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Coarse2Fine: Fine-grained Text Classification on Coarsely-grained Annotated Data [22.81068960545234]
We introduce a new problem called coarse-to-fine grained classification, which aims to perform fine-grained classification on coarsely annotated data.
Instead of asking for new fine-grained human annotations, we opt to leverage label surface names as the only human guidance.
Our framework uses the fine-tuned generative models to sample pseudo-training data for training the classifier, and bootstraps on real unlabeled data for model refinement.
arXiv Detail & Related papers (2021-09-22T17:29:01Z)
- No Cost Likelihood Manipulation at Test Time for Making Better Mistakes in Deep Networks [17.55334996757232]
We use the Conditional Risk Minimization (CRM) framework for hierarchy-aware classification.
Given a cost matrix and a reliable estimate of likelihoods, CRM simply amends mistakes at inference time.
It significantly outperforms the state of the art and consistently obtains large reductions in the average hierarchical distance of top-k predictions.
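The amendment itself reduces to one matrix product: given class likelihoods and a hierarchical cost matrix, predict the class with minimum expected cost. A minimal sketch of this expected-cost reranking:

```python
import numpy as np

def crm_predict(probs, cost):
    """Conditional Risk Minimization at test time: pick the class with
    the lowest expected cost under the model's likelihoods, with no
    retraining (sketch of the CRM idea).

    probs: (N, K) class likelihoods from any trained classifier
    cost : (K, K) cost[j, k] = hierarchical distance between classes j and k
    """
    risk = probs @ cost.T        # risk[n, j] = expected cost of predicting j
    return risk.argmin(axis=1)

# Sanity check: with a 0/1 cost matrix (1 - identity), the expected risk is
# 1 - p_j, so CRM reduces to the usual argmax prediction.
probs = np.array([[0.2, 0.5, 0.3]])
cost01 = 1.0 - np.eye(3)
assert crm_predict(probs, cost01)[0] == probs.argmax()
```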
arXiv Detail & Related papers (2021-04-01T22:40:25Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is learning new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
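A hedged sketch of what a weighted-Euclidean preservation term can look like: penalize the drift of the new model's features from the old model's, with weights emphasizing the components that matter most for old classes. The function name, the weighting `w`, and the combination with binary cross-entropy below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_euclidean_reg(f_new, f_old, w):
    """Weighted-Euclidean preservation term (illustrative sketch):
    per-dimension weights w decide how strongly each feature component
    of the old model is preserved in the new one.

    f_new, f_old: (N, D) features from the new and frozen old networks
    w           : (D,) nonnegative importance weights
    """
    return (w * (f_new - f_old) ** 2).sum(axis=-1).mean()

# Hypothetical usage: total loss = BCE on new classes + lam * preservation.
# loss = bce_new_classes + lam * weighted_euclidean_reg(f_new, f_old, w)
```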
arXiv Detail & Related papers (2020-12-15T07:26:04Z)