Boosting Long-tailed Object Detection via Step-wise Learning on
Smooth-tail Data
- URL: http://arxiv.org/abs/2305.12833v1
- Date: Mon, 22 May 2023 08:53:50 GMT
- Title: Boosting Long-tailed Object Detection via Step-wise Learning on
Smooth-tail Data
- Authors: Na Dong and Yongqiang Zhang and Mingli Ding and Gim Hee Lee
- Abstract summary: We build smooth-tail data where the long-tailed distribution of categories decays smoothly to correct the bias towards head classes.
We fine-tune the class-agnostic modules of the pre-trained model on the head class dominant replay data.
We train a unified model on the tail class dominant replay data while transferring knowledge from the head class expert model to ensure accurate detection of all categories.
- Score: 60.64535309016623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world data tends to follow a long-tailed distribution, where the class
imbalance results in dominance of the head classes during training. In this
paper, we propose a frustratingly simple but effective step-wise learning
framework to gradually enhance the capability of the model in detecting all
categories of long-tailed datasets. Specifically, we build smooth-tail data
where the long-tailed distribution of categories decays smoothly to correct the
bias towards head classes. We pre-train a model on the whole long-tailed data
to preserve discriminability between all categories. We then fine-tune the
class-agnostic modules of the pre-trained model on the head class dominant
replay data to get a head class expert model with improved decision boundaries
from all categories. Finally, we train a unified model on the tail class
dominant replay data while transferring knowledge from the head class expert
model to ensure accurate detection of all categories. Extensive experiments on
long-tailed datasets LVIS v0.5 and LVIS v1.0 demonstrate the superior
performance of our method: with a ResNet-50 backbone we improve the overall AP
from 27.0% to 30.3%, and the AP on rare categories from 15.5% to 24.9%. Our
best model, using a ResNet-101 backbone, achieves 30.7% AP, surpassing all
existing detectors with the same backbone.
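As a rough illustration of the smooth-tail data construction, the following sketch (our own, not the authors' code; `images_per_class`, `head_classes`, and `exemplars_per_class` are assumed names) builds a head class dominant and a tail class dominant replay subset, each retaining a small exemplar pool from the other group so the category distribution decays smoothly rather than dropping to zero.

```python
import random

def build_replay_data(images_per_class, head_classes, exemplars_per_class=10):
    """Split a long-tailed dataset into the two replay subsets described above.

    images_per_class: dict mapping category id -> list of image ids.
    head_classes: set of category ids treated as head classes.
    """
    head_dominant, tail_dominant = [], []
    for cls, imgs in images_per_class.items():
        k = min(exemplars_per_class, len(imgs))
        if cls in head_classes:
            head_dominant.extend(imgs)                    # keep all head images
            tail_dominant.extend(random.sample(imgs, k))  # few head exemplars
        else:
            tail_dominant.extend(imgs)                    # keep all tail images
            head_dominant.extend(random.sample(imgs, k))  # few tail exemplars
    return head_dominant, tail_dominant
```

The three training steps then use these subsets in order: pre-train on the full long-tailed data, fine-tune the class-agnostic modules on the head class dominant subset, and train the unified model on the tail class dominant subset with knowledge transfer from the head class expert.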
Related papers
- Learning from Limited and Imperfect Data [6.30667368422346]
We develop practical algorithms for Deep Neural Networks that can learn from limited and imperfect data present in the real world.
These works are divided into four segments, each covering a scenario of learning from limited or imperfect data.
arXiv Detail & Related papers (2024-11-11T18:48:31Z)
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm to the challenge of preserving privacy in distributed machine learning.
Traditional approaches fail to address the class-wise bias that arises when the global data distribution is long-tailed.
The new method, FedLF, introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centered optimization, and feature decorrelation.
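For reference, a minimal sketch of the standard logit-adjustment idea that FedLF builds on (this shows the classic adjusted cross-entropy, not FedLF's exact adaptive rule; `class_counts` and `tau` are assumed inputs):

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    # Shift each logit by the log class prior so that rare classes are not
    # systematically under-predicted under a long-tailed label distribution.
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + tau * torch.log(prior + 1e-12), targets)
```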
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
Existing SSL-based methods suffer from severe training bias due to the class imbalance and long-tailed distribution of point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning from classifier learning in an alternating optimization manner to shift the biased decision boundary effectively.
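A schematic sketch of such decoupled, alternating optimization (our own simplification, assuming a `backbone`/`classifier` split with one optimizer per module):

```python
import torch.nn.functional as F

def train_epoch(backbone, classifier, loader, opt_feat, opt_cls, phase):
    # Alternate phases: update only the backbone ("features") or only the
    # classifier, so the biased decision boundary can be shifted separately.
    for x, y in loader:
        loss = F.cross_entropy(classifier(backbone(x)), y)
        opt_feat.zero_grad()
        opt_cls.zero_grad()
        loss.backward()
        (opt_feat if phase == "features" else opt_cls).step()
```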
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify the segmented object.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
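A schematic of the architecture as described (the names and pooling head are our assumptions, not the paper's code):

```python
import torch.nn as nn

class PartBasedClassifier(nn.Module):
    # Route classification through part segmentation: the label prediction is
    # computed from the predicted part masks rather than raw pixels.
    def __init__(self, segmenter, num_parts, num_classes):
        super().__init__()
        self.segmenter = segmenter  # any model returning (B, num_parts, H, W)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_parts, num_classes),  # the "tiny classifier"
        )

    def forward(self, x):
        part_masks = self.segmenter(x)
        return self.head(part_masks)
```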
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distributions.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
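A minimal sketch of a normalized ("cosine") classifier of the kind referred to in the second point (this is the standard formulation; the `scale` temperature is an assumed hyperparameter):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    # L2-normalize both features and class weights so that head classes cannot
    # dominate predictions through larger weight norms.
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats), F.normalize(self.weight))
```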
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
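A loose sketch in the spirit of such robustness-based representation losses (our own simplification, not the paper's exact formulation; the class `centroids` and per-class `margins` are assumed inputs):

```python
import torch

def centroid_robustness_loss(feats, targets, centroids, margins):
    # Pull each feature toward its class centroid and penalize features that
    # fall inside the robustness margin of any other class centroid.
    d = torch.cdist(feats, centroids)                   # (B, C) distances
    pos = d.gather(1, targets.unsqueeze(1)).squeeze(1)  # distance to own class
    neg = (margins.unsqueeze(0) - d).clamp(min=0)       # margin violations
    neg.scatter_(1, targets.unsqueeze(1), 0.0)          # ignore own class
    return (pos + neg.sum(dim=1)).mean()
```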
arXiv Detail & Related papers (2021-04-07T11:34:04Z)
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks.
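A schematic multi-expert head in the spirit of RIDE (the diversity loss and dynamic router are omitted; names are our own):

```python
import torch
import torch.nn as nn

class MultiExpertHead(nn.Module):
    # Several expert classifiers share one backbone feature; averaging their
    # logits reduces variance, as the summary above describes.
    def __init__(self, feat_dim, num_classes, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_experts)]
        )

    def forward(self, feats):
        return torch.stack([e(feats) for e in self.experts]).mean(dim=0)
```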
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
- Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective [98.70226503904402]
Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions models are trained on and the expectation that they perform well on all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
arXiv Detail & Related papers (2020-03-24T11:28:42Z)