Equalized Focal Loss for Dense Long-Tailed Object Detection
- URL: http://arxiv.org/abs/2201.02593v1
- Date: Fri, 7 Jan 2022 18:35:58 GMT
- Title: Equalized Focal Loss for Dense Long-Tailed Object Detection
- Authors: Bo Li, Yongqiang Yao, Jingru Tan, Gang Zhang, Fengwei Yu, Jianwei Lu,
Ye Luo
- Abstract summary: One-stage detectors are more prevalent in the industry because they have a simple and fast pipeline that is easy to deploy.
In this paper, we investigate whether one-stage detectors can perform well in the long-tailed scenario.
We propose the Equalized Focal Loss (EFL) that rebalances the loss contribution of positive and negative samples.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the recent success of long-tailed object detection, almost all
long-tailed object detectors are developed based on the two-stage paradigm. In
practice, one-stage detectors are more prevalent in the industry because they
have a simple and fast pipeline that is easy to deploy. However, in the
long-tailed scenario, this line of work has not been explored so far. In this
paper, we investigate whether one-stage detectors can perform well in this
case. We identify the primary obstacle preventing one-stage detectors from
achieving excellent performance: categories suffer from different degrees of
positive-negative imbalance under the long-tailed data distribution.
The conventional focal loss balances the training process with the same
modulating factor for all categories, thus failing to handle the long-tailed
problem. To address this issue, we propose the Equalized Focal Loss (EFL) that
rebalances the loss contribution of positive and negative samples of different
categories independently according to their imbalance degrees. Specifically,
EFL adopts a category-relevant modulating factor which can be adjusted
dynamically by the training status of different categories. Extensive
experiments conducted on the challenging LVIS v1 benchmark demonstrate the
effectiveness of our proposed method. With an end-to-end training pipeline, EFL
achieves 29.2% in terms of overall AP and obtains significant performance
improvements on rare categories, surpassing all existing state-of-the-art
methods. The code is available at https://github.com/ModelTC/EOD.
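The core idea in the abstract can be sketched numerically: conventional focal loss applies one shared modulating factor to every category, while EFL makes that factor category-relevant. Below is a minimal, hedged sketch of this contrast. The gradient-ratio input, the linear mapping `scale * (1 - grad_ratio)`, and the `gamma_j / gamma_b` re-weighting term are illustrative assumptions, not the paper's exact formulation or tuned hyperparameters; see the linked repository for the authors' implementation.

```python
import numpy as np

def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Conventional sigmoid focal loss: a single modulating factor
    gamma is shared by all categories."""
    p = 1.0 / (1.0 + np.exp(-logits))
    p_t = np.where(targets == 1, p, 1.0 - p)          # prob of the true label
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-8, None))

def equalized_focal_loss(logits, targets, grad_ratio,
                         alpha=0.25, gamma_b=2.0, scale=4.0):
    """Sketch of an EFL-style loss: each category j gets its own
    factor gamma_j = gamma_b + gamma_v[j], where gamma_v[j] grows as
    category j's positive/negative gradient ratio shrinks (i.e. as its
    imbalance worsens). `grad_ratio` (one value per category, assumed
    tracked during training and clipped to [0, 1]) and `scale` are
    illustrative stand-ins for the paper's training-status statistics."""
    gamma_v = scale * (1.0 - np.clip(grad_ratio, 0.0, 1.0))  # rarer -> larger
    gamma_j = gamma_b + gamma_v        # category-relevant modulating factor
    weight = gamma_j / gamma_b         # assumed re-weighting of rare categories
    p = 1.0 / (1.0 + np.exp(-logits))
    p_t = np.where(targets == 1, p, 1.0 - p)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return -weight * alpha_t * (1.0 - p_t) ** gamma_j \
        * np.log(np.clip(p_t, 1e-8, None))
```

One sanity check: when every category's gradient ratio is 1 (no imbalance), `gamma_v` vanishes and the sketch reduces exactly to the standard focal loss, matching the intuition that EFL only departs from focal loss for imbalanced categories.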
Related papers
- Balanced Classification: A Unified Framework for Long-Tailed Object
Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- The Equalization Losses: Gradient-Driven Training for Long-tailed Object Recognition [84.51875325962061]
We propose a gradient-driven training mechanism to tackle the long-tail problem.
We introduce a new family of gradient-driven loss functions, namely equalization losses.
Our method consistently outperforms the baseline models.
arXiv Detail & Related papers (2022-10-11T16:00:36Z)
- Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, where a teacher model generates hard pseudo-labels on unlabeled data to serve as supervisory signals.
We analyze the challenges these methods meet with the empirical experiment results.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z)
- You Only Need End-to-End Training for Long-Tailed Recognition [8.789819609485225]
Cross-entropy loss tends to produce highly correlated features on imbalanced data.
We propose two novel modules, Block-based Relatively Balanced Batch Sampler (B3RS) and Batch Embedded Training (BET).
Experimental results on the long-tailed classification benchmarks, CIFAR-LT and ImageNet-LT, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-12-11T11:44:09Z)
- Exploring Classification Equilibrium in Long-Tailed Object Detection [29.069986049436157]
We propose to use the mean classification score to indicate the classification accuracy for each category during training.
We balance the classification via an Equilibrium Loss (EBL) and a Memory-augmented Feature Sampling (MFS) method.
It improves the detection performance of tail classes by 15.6 AP, and outperforms the most recent long-tailed object detectors by more than 1 AP.
arXiv Detail & Related papers (2021-08-17T08:39:04Z)
- Adaptive Class Suppression Loss for Long-Tail Object Detection [49.7273558444966]
We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
arXiv Detail & Related papers (2021-04-02T05:12:31Z)
- Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax [88.11979569564427]
We provide the first systematic analysis of the underperformance of state-of-the-art models under the long-tail distribution.
We propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training.
Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors.
arXiv Detail & Related papers (2020-06-18T10:24:26Z)
- Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
arXiv Detail & Related papers (2020-03-11T09:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.