Rectify the Regression Bias in Long-Tailed Object Detection
- URL: http://arxiv.org/abs/2401.15885v2
- Date: Wed, 31 Jan 2024 12:41:05 GMT
- Title: Rectify the Regression Bias in Long-Tailed Object Detection
- Authors: Ke Zhu, Minghao Fu, Jie Shao, Tianyu Liu, Jianxin Wu
- Abstract summary: Long-tailed object detection faces great challenges because of its extremely imbalanced class distribution.
Recent methods mainly focus on the classification bias and its loss function design, while ignoring the subtle influence of the regression branch.
This paper shows that the regression bias exists and seriously degrades detection accuracy.
- Score: 29.34827806854778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-tailed object detection faces great challenges because of its extremely imbalanced class distribution. Recent methods mainly focus on the classification bias and its loss function design, while ignoring the subtle influence of the regression branch. This paper shows that the regression bias exists and seriously degrades detection accuracy. While existing methods fail to handle the regression bias, this paper hypothesizes that the class-specific regression head for rare classes is its main cause. Accordingly, three viable remedies that cater to the rare categories are proposed: adding a class-agnostic branch, clustering heads, and merging heads. The proposed methods bring consistent and significant improvements over existing long-tailed detection methods, especially for rare and common classes, and achieve state-of-the-art performance on the large-vocabulary LVIS dataset with different backbones and architectures. They also generalize well to more difficult evaluation metrics, relatively balanced datasets, and the mask branch. This is the first attempt to reveal and rectify the regression bias in long-tailed object detection.
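The three remedies share one idea: give rare classes access to regression signal that is not siloed in their own class-specific head. Below is a minimal PyTorch sketch of the first remedy, a class-agnostic branch added next to the usual class-specific box head; the module, fusion rule, and dimensions are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class BoxHeadWithAgnosticBranch(nn.Module):
    """Illustrative box head: class-specific deltas plus a shared
    class-agnostic branch, one of the three remedies described above."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        # Class-specific head: 4 box deltas (dx, dy, dw, dh) per class;
        # rare classes see very few regression targets for their slice.
        self.cls_specific = nn.Linear(in_dim, num_classes * 4)
        # Class-agnostic branch: one shared set of deltas, so rare classes
        # reuse regression signal learned from frequent ones.
        self.cls_agnostic = nn.Linear(in_dim, 4)

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        n = feats.size(0)
        specific = self.cls_specific(feats).view(n, -1, 4)
        specific = specific[torch.arange(n), labels]  # deltas of each RoI's class
        # Averaging is one simple fusion; the paper's exact rule may differ.
        return 0.5 * (specific + self.cls_agnostic(feats))

head = BoxHeadWithAgnosticBranch(in_dim=1024, num_classes=1203)  # LVIS v1 has 1203 classes
deltas = head(torch.randn(8, 1024), torch.randint(0, 1203, (8,)))  # -> (8, 4)
```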
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
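The summary above does not define the neighbor-silencing loss; the sketch below is a speculative reading only: a cross-entropy in which auxiliary classes that are visually similar neighbors of the ground-truth class are masked out of the softmax, so they stop competing with it. The function name and the mask construction are assumptions.

```python
import torch
import torch.nn.functional as F

def neighbor_silencing_ce(logits, target, neighbor_mask):
    """Hypothetical 'neighbor-silencing' cross-entropy over the joint set
    of real + auxiliary classes. neighbor_mask is True where a class is a
    visually similar neighbor of the sample's target (never the target
    itself); those logits are removed from the softmax competition."""
    silenced = logits.masked_fill(neighbor_mask, float("-inf"))
    return F.cross_entropy(silenced, target)
```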
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated as classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
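A small NumPy sketch of the quantize-then-classify setup the entry above starts from: the same continuous target is discretized at several granularities, yielding one classification label per hierarchy level. How HCA then adjusts fine-grained predictions with coarse ones is not described in the summary, so only the quantization step is shown.

```python
import numpy as np

def hierarchical_bins(y, y_min, y_max, levels=(4, 16, 64)):
    """Quantize continuous regression targets at several granularities,
    producing one classification label per hierarchy level."""
    y = np.asarray(y, dtype=float)
    labels = []
    for n_bins in levels:
        edges = np.linspace(y_min, y_max, n_bins + 1)
        # Interior edges map each target to its bin index 0..n_bins-1.
        labels.append(np.digitize(y, edges[1:-1]))
    return labels

coarse, mid, fine = hierarchical_bins([0.1, 0.5, 0.92], y_min=0.0, y_max=1.0)
print(coarse, mid, fine)  # one label per level, coarse to fine
```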
- Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition [37.62659619941791]
We study the problem of long-tailed visual recognition at the feature level.
Two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
Experiments conducted on benchmark datasets demonstrate the superior performance of the proposed method over the state-of-the-art ones.
arXiv Detail & Related papers (2023-05-18T02:06:06Z)
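The summary does not detail the paper's two Gaussian-form variants, so the sketch below shows only the standard class-prior logit adjustment that this line of work builds on (subtract tau * log(prior) from the logits). Treat it as background, not the paper's method.

```python
import torch

def logit_adjust(logits, class_counts, tau=1.0):
    """Standard post-hoc logit adjustment: subtracting tau * log(prior)
    removes the advantage head classes gain from their higher frequency."""
    prior = class_counts / class_counts.sum()
    return logits - tau * torch.log(prior)

counts = torch.tensor([1000.0, 100.0, 10.0])  # long-tailed class frequencies
adjusted = logit_adjust(torch.zeros(4, 3), counts)  # tail logits boosted most
```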
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and divides the head and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail Problems [102.95119281306893]
We present an early trial to explore adversarial training methods to optimize AUC.
We reformulate the AUC optimization problem as a saddle point problem, where the objective becomes an instance-wise function.
Our analysis differs from the existing studies since the algorithm is asked to generate adversarial examples by calculating the gradient of a min-max problem.
arXiv Detail & Related papers (2022-06-24T09:13:39Z)
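A hedged sketch of the two ingredients the entry above combines: a pairwise (instance-wise) surrogate for AUC and gradient-based adversarial examples generated against it. AdAUC's exact saddle-point reformulation differs; the names and the one-step FGSM attack are illustrative assumptions.

```python
import torch

def pairwise_auc_loss(scores_pos, scores_neg, margin=1.0):
    """Squared-hinge surrogate for AUC: penalize every positive/negative
    pair whose scores are not separated by the margin."""
    diff = scores_pos.unsqueeze(1) - scores_neg.unsqueeze(0)  # all (P, N) pairs
    return torch.clamp(margin - diff, min=0).pow(2).mean()

def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step adversarial examples crafted against the AUC surrogate."""
    x = x.clone().requires_grad_(True)
    s = model(x).squeeze(-1)  # one scalar score per instance
    pairwise_auc_loss(s[y == 1], s[y == 0]).backward()
    return (x + eps * x.grad.sign()).detach()
```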
- Certifying Data-Bias Robustness in Linear Regression [12.00314910031517]
We present a technique for certifying whether linear regression models are pointwise-robust to label bias in a training dataset.
We show how to solve this problem exactly for individual test points, and provide an approximate but more scalable method.
We also unearth gaps in bias-robustness, such as high levels of non-robustness for certain bias assumptions on some datasets.
arXiv Detail & Related papers (2022-06-07T20:47:07Z)
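Because an OLS prediction is linear in the training labels, pointwise certification against label bias admits a closed form, which suggests why an exact per-test-point solution is tractable. The sketch below assumes a simple bias model (at most k labels shifted by at most delta each); the paper's bias model and decision rule may differ.

```python
import numpy as np

def certify_pointwise(X, x_test, k=5, delta=0.5, tol=1.0):
    """The OLS prediction is h(x)^T y with h(x) = X (X^T X)^{-1} x_test,
    so the worst-case prediction change under 'at most k labels shifted
    by at most delta' is exactly delta times the sum of the k largest
    |h_i|. Note the labels y themselves do not enter the bound."""
    h = X @ np.linalg.solve(X.T @ X, x_test)
    worst_change = delta * np.sort(np.abs(h))[::-1][:k].sum()
    return worst_change <= tol  # robust iff the prediction moves at most tol

X = np.random.default_rng(0).normal(size=(100, 3))
print(certify_pointwise(X, x_test=np.ones(3)))
```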
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim to improve data efficiency for both classification and regression setups in deep learning.
To harness the strengths of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Learning to Rank Anomalies: Scalar Performance Criteria and Maximization of Two-Sample Rank Statistics [0.0]
We propose a data-driven scoring function defined on the feature space which reflects the degree of abnormality of the observations.
This scoring function is learnt through a well-designed binary classification problem.
We illustrate our methodology with preliminary encouraging numerical experiments.
arXiv Detail & Related papers (2021-09-20T14:45:56Z)
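The entry above learns a scoring function through "a well-designed binary classification problem"; the sketch below shows the generic recipe that phrase evokes (real data vs. synthetic background samples, classifier probability as abnormality score). The paper's construction, which maximizes a two-sample rank statistic, is more refined; everything here is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_anomaly_scorer(X):
    """Learn a scoring function via a synthetic binary task: real data
    (label 0) vs. uniform background samples (label 1). The classifier's
    probability of 'background' then serves as the abnormality score."""
    rng = np.random.default_rng(0)
    lo, hi = X.min(axis=0), X.max(axis=0)
    background = rng.uniform(lo, hi, size=X.shape)  # contrast distribution
    Z = np.vstack([X, background])
    labels = np.r_[np.zeros(len(X)), np.ones(len(background))]
    clf = RandomForestClassifier(n_estimators=100).fit(Z, labels)
    return lambda x: clf.predict_proba(x)[:, 1]  # higher = more abnormal

score = fit_anomaly_scorer(np.random.default_rng(1).normal(size=(500, 2)))
```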
- Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
arXiv Detail & Related papers (2021-04-07T11:34:04Z)
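"A new loss based on robustness theory" is not spelled out in the summary. As background, the sketch below gives the standard dual form of KL-constrained distributionally robust optimization, tau * log E[exp(loss / tau)], which upweights hard (typically tail) samples; the paper applies its robustness loss to per-class feature distributions, so this is context rather than the method itself.

```python
import math
import torch

def kl_dro_loss(per_sample_losses: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Dual of KL-ball DRO: tau * log( mean(exp(loss / tau)) ).
    Large tau recovers the plain average loss; small tau focuses the
    objective on the worst-off samples."""
    n = per_sample_losses.numel()
    return tau * (torch.logsumexp(per_sample_losses / tau, dim=0) - math.log(n))

losses = torch.tensor([0.1, 0.2, 3.0])  # one hard sample dominates
print(kl_dro_loss(losses, tau=0.5))
```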
- Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect [95.37587481952487]
Long-tailed classification is the key to deep learning at scale.
Existing methods are mainly based on re-weighting/re-sampling heuristics that lack a fundamental theory.
In this paper, we establish a causal inference framework, which not only unravels the whys of previous methods, but also derives a new principled solution.
arXiv Detail & Related papers (2020-09-28T00:32:11Z)
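A simplified sketch of the counterfactual inference idea from the entry above: track an exponential moving average of features during training (the "momentum" effect), and at test time remove the logit contribution of the feature's projection onto that average direction. The released method uses a multi-head cosine classifier; the plain linear layer, alpha, and the fusion here are simplifying assumptions.

```python
import torch

class TDEClassifier(torch.nn.Module):
    """Sketch of total-direct-effect (TDE) inference: subtract the 'bad'
    effect along the running mean feature direction at test time only."""

    def __init__(self, dim, num_classes, momentum=0.9, alpha=1.0):
        super().__init__()
        self.fc = torch.nn.Linear(dim, num_classes, bias=False)
        self.register_buffer("d", torch.zeros(dim))  # running feature mean
        self.momentum, self.alpha = momentum, alpha

    def forward(self, x):
        if self.training:
            # Update the moving-average feature direction; train as usual.
            self.d.mul_(self.momentum).add_((1 - self.momentum) * x.detach().mean(0))
            return self.fc(x)
        # Test time: remove the logits of the component along the mean direction.
        d_hat = self.d / self.d.norm().clamp_min(1e-12)
        proj = (x @ d_hat).unsqueeze(1) * d_hat  # component of x along d_hat
        return self.fc(x) - self.alpha * self.fc(proj)
```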
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)