Deep Imbalanced Regression via Hierarchical Classification Adjustment
- URL: http://arxiv.org/abs/2310.17154v1
- Date: Thu, 26 Oct 2023 04:54:39 GMT
- Title: Deep Imbalanced Regression via Hierarchical Classification Adjustment
- Authors: Haipeng Xiong, Angela Yao
- Abstract summary: Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
- Score: 50.19438850112964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regression tasks in computer vision, such as age estimation or counting, are
often formulated into classification by quantizing the target space into
classes. Yet real-world data is often imbalanced -- the majority of training
samples lie in a head range of target values, while a minority of samples span
a usually larger tail range. By selecting the class quantization, one can
adjust imbalanced regression targets into balanced classification outputs,
though there are trade-offs in balancing classification accuracy and
quantization error. To improve regression performance over the entire range of
data, we propose to construct hierarchical classifiers for solving imbalanced
regression tasks. The fine-grained classifiers limit the quantization error
while being modulated by the coarse predictions to ensure high accuracy.
Standard hierarchical classification approaches, however, when applied to the
regression problem, fail to ensure that predicted ranges remain consistent
across the hierarchy. As such, we propose a range-preserving distillation
process that can effectively learn a single classifier from the set of
hierarchical classifiers. Our novel hierarchical classification adjustment
(HCA) for imbalanced regression shows superior results on three diverse tasks:
age estimation, crowd counting and depth estimation. We will release the source
code upon acceptance.
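To make the setup concrete, the sketch below illustrates the general idea of hierarchical classification for regression: quantize the target range at several granularities, modulate fine-grained class probabilities by the coarse ones, and decode a continuous prediction as the expectation over bin centers. The level sizes, gating rule, and decoding scheme here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of hierarchical classification for regression: quantize
# the target range at several granularities and modulate fine-grained
# class probabilities by the coarse ones. Illustrative assumptions only;
# the level sizes, gating rule, and expected-value decoding are choices
# made for this sketch, not the paper's released code.
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    def __init__(self, feat_dim, t_min, t_max, levels=(4, 16, 64)):
        super().__init__()
        assert all(levels[i + 1] % levels[i] == 0 for i in range(len(levels) - 1))
        self.levels = levels
        # Uniform bin centers at the finest granularity.
        edges = torch.linspace(t_min, t_max, levels[-1] + 1)
        self.register_buffer("centers", 0.5 * (edges[:-1] + edges[1:]))
        # One linear classifier per granularity, coarse to fine.
        self.heads = nn.ModuleList(nn.Linear(feat_dim, k) for k in levels)

    def forward(self, feats):
        probs = None
        for k, head in zip(self.levels, self.heads):
            p = head(feats).softmax(dim=-1)  # (B, k)
            if probs is not None:
                # Each fine bin inherits its parent coarse bin's probability.
                p = p * probs.repeat_interleave(k // probs.shape[-1], dim=-1)
                p = p / p.sum(dim=-1, keepdim=True)
            probs = p
        # Decode a continuous prediction as the expectation over bin centers.
        return (probs * self.centers).sum(dim=-1)
```

Under such a setup, a target would naturally be supervised by a regression loss on the continuous output plus a cross-entropy loss at each granularity on its quantized bin index; the actual HCA objective and its range-preserving distillation are described in the paper.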
Related papers
- Generalization bounds for regression and classification on adaptive covering input domains [1.4141453107129398]
We focus on the generalization bound, which serves as an upper limit for the generalization error.
In the case of classification tasks, we treat the target function as a one-hot, piece-wise constant function and employ the 0/1 loss for error measurement.
arXiv Detail & Related papers (2024-07-29T05:40:08Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Generating Unbiased Pseudo-labels via a Theoretically Guaranteed Chebyshev Constraint to Unify Semi-supervised Classification and Regression [57.17120203327993]
The threshold-to-pseudo-label process (T2L) in classification uses confidence to determine the quality of labels.
By nature, regression also requires unbiased methods to generate high-quality labels.
We propose a theoretically guaranteed constraint for generating unbiased labels based on Chebyshev's inequality.
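For reference, the classical Chebyshev inequality that such a constraint can build on is shown below; the paper's specific bound, and how it is applied to pseudo-labels, may differ.

```latex
% Chebyshev's inequality for a random variable X with mean \mu and
% variance \sigma^2:
P\bigl(\lvert X - \mu \rvert \ge t\bigr) \le \frac{\sigma^2}{t^2}, \qquad t > 0.
% Requiring the bound to stay below a tolerance \delta yields a
% variance-based acceptance threshold for pseudo-labels:
\frac{\sigma^2}{t^2} \le \delta \iff t \ge \frac{\sigma}{\sqrt{\delta}}.
```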
arXiv Detail & Related papers (2023-11-03T08:39:35Z)
- No One Left Behind: Improving the Worst Categories in Long-Tailed Learning [29.89394406438639]
We argue that under such an evaluation setting, some categories are inevitably sacrificed.
We propose a simple plug-in method that is applicable to a wide range of methods.
arXiv Detail & Related papers (2023-03-07T03:24:54Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
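As a rough illustration of the general prototypical idea (class means in embedding space act as the classifier, so no extra parameters are fit), here is a hedged sketch; the paper's exact formulation may differ.

```python
# Hedged sketch of a prototype-based classifier: each class is represented
# by the mean embedding of its training samples, and prediction assigns
# each sample to the nearest prototype. Illustrative only.
import torch

def class_prototypes(embeddings, labels, num_classes):
    # One mean embedding per class; every class contributes exactly one
    # prototype regardless of how many samples it has, which is what keeps
    # the resulting predictions balanced under class imbalance.
    return torch.stack([embeddings[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def predict(embeddings, prototypes):
    # Nearest prototype under Euclidean distance.
    return torch.cdist(embeddings, prototypes).argmin(dim=-1)
```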
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To combine the strengths of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Flexible Model Aggregation for Quantile Regression [92.63075261170302]
Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions.
We investigate methods for aggregating any number of conditional quantile models.
All of the models we consider in this paper can be fit using modern deep learning toolkits.
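One simple aggregation baseline, assuming every model predicts the same quantile levels, is to average the models' quantile predictions and re-sort along the quantile axis to prevent quantile crossing; this is a generic scheme shown for illustration, not necessarily the paper's method.

```python
# Hedged sketch: average per-model quantile predictions, then sort along
# the quantile axis so the aggregate is monotone in the quantile level.
import numpy as np

def aggregate_quantiles(preds, weights=None):
    # preds: (num_models, num_samples, num_quantiles), shared quantile levels.
    avg = np.average(preds, axis=0, weights=weights)
    return np.sort(avg, axis=-1)  # enforce non-crossing quantiles
```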
arXiv Detail & Related papers (2021-02-26T23:21:16Z)
- Binary Classification: Counterbalancing Class Imbalance by Applying Regression Models in Combination with One-Sided Label Shifts [0.4970364068620607]
We introduce a novel method that addresses the issue of class imbalance.
We generate a set of negative and positive target labels, such that the corresponding regression task becomes balanced.
We evaluate our approach on a number of publicly available data sets and compare our proposed method to one of the most popular oversampling techniques.
arXiv Detail & Related papers (2020-11-30T13:24:47Z)
- Initial Classifier Weights Replay for Memoryless Class Incremental Learning [11.230170401360633]
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times.
We propose a different approach based on a vanilla fine-tuning backbone.
We conduct a thorough evaluation with four public datasets in a memoryless incremental learning setting.
arXiv Detail & Related papers (2020-08-31T16:18:12Z)