Equalization Loss v2: A New Gradient Balance Approach for Long-tailed
Object Detection
- URL: http://arxiv.org/abs/2012.08548v2
- Date: Thu, 1 Apr 2021 02:35:22 GMT
- Title: Equalization Loss v2: A New Gradient Balance Approach for Long-tailed
Object Detection
- Authors: Jingru Tan, Xin Lu, Gang Zhang, Changqing Yin, Quanquan Li
- Abstract summary: Recently proposed decoupled training methods have emerged as a dominant paradigm for long-tailed object detection.
End-to-end training methods, like equalization loss (EQL), still perform worse than decoupled training methods.
EQL v2 is a novel gradient-guided reweighing mechanism that re-balances the training process for each category independently and equally.
- Score: 12.408265499394089
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently proposed decoupled training methods have emerged as a dominant
paradigm for long-tailed object detection. But they require an extra fine-tuning
stage, and the disjointed optimization of representation and classifier may lead
to suboptimal results. Meanwhile, end-to-end training methods, like equalization
loss (EQL), still perform worse than decoupled training methods. In this paper,
we reveal that the main issue in long-tailed object detection is the imbalanced
gradients between positives and negatives, and find that EQL does not solve it
well. To address the problem of imbalanced gradients, we introduce a new
version of equalization loss, called equalization loss v2 (EQL v2), a novel
gradient-guided reweighing mechanism that re-balances the training process for
each category independently and equally. Extensive experiments are performed on
the challenging LVIS benchmark. EQL v2 outperforms the original EQL by about 4
points of overall AP, with 14-18 point improvements on the rare categories. More
importantly, it also surpasses decoupled training methods. Without further
tuning for the Open Images dataset, EQL v2 improves over EQL by 7.3 points AP,
showing strong generalization ability. Code has been released at
https://github.com/tztztztztz/eqlv2
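The per-category gradient balancing described above can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: per category, accumulate the magnitudes of positive and negative gradients, map their ratio through a smooth function, and use it to up-weight positives and down-weight negatives for under-trained (typically rare) categories. The mapping `_f` and the constants `gamma`, `mu`, and `alpha` are illustrative assumptions; see the released code for the exact formulation.

```python
import numpy as np

class GradientGuidedReweighter:
    """Sketch of a gradient-guided re-weighting scheme in the spirit of
    EQL v2: track accumulated positive/negative gradient magnitudes per
    category and rebalance each category independently. Constants are
    illustrative, not the paper's values."""

    def __init__(self, num_classes, gamma=12.0, mu=0.8, alpha=4.0):
        self.pos_grad = np.zeros(num_classes)  # accumulated |positive gradient|
        self.neg_grad = np.zeros(num_classes)  # accumulated |negative gradient|
        self.gamma, self.mu, self.alpha = gamma, mu, alpha

    def _f(self, g):
        # Smooth mapping of the pos/neg gradient ratio into (0, 1).
        return 1.0 / (1.0 + np.exp(-self.gamma * (g - self.mu)))

    def update(self, pos_batch_grad, neg_batch_grad):
        # Accumulate gradient magnitudes observed in the current batch.
        self.pos_grad += np.abs(pos_batch_grad)
        self.neg_grad += np.abs(neg_batch_grad)

    def weights(self):
        # Ratio of accumulated positive to negative gradients per category;
        # rare categories see few positives, so their ratio stays small.
        g = self.pos_grad / np.maximum(self.neg_grad, 1e-12)
        f = self._f(np.minimum(g, 1.0))
        pos_w = 1.0 + self.alpha * (1.0 - f)  # boost positives of weak classes
        neg_w = f                             # suppress their negatives
        return pos_w, neg_w
```

With two categories where the second receives ten times less positive gradient, its positive weight grows toward `1 + alpha` while its negative weight shrinks toward zero, which is the qualitative behavior the abstract describes.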
Related papers
- AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization [9.050431569438636]
Implicit Q-learning serves as a strong baseline for offline RL.
We introduce a different way to solve the implicit policy-finding problem (IPF) by formulating it as an optimization problem.
Compared with IQL and IDQL, we find our method keeps the simplicity of IQL while solving the implicit policy-finding problem.
arXiv Detail & Related papers (2024-05-28T14:01:03Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- The Equalization Losses: Gradient-Driven Training for Long-tailed Object Recognition [84.51875325962061]
We propose a gradient-driven training mechanism to tackle the long-tail problem.
We introduce a new family of gradient-driven loss functions, namely equalization losses.
Our method consistently outperforms the baseline models.
arXiv Detail & Related papers (2022-10-11T16:00:36Z)
- Equalized Focal Loss for Dense Long-Tailed Object Detection [17.89136305755172]
One-stage detectors are prevalent in industry because their simple, fast pipeline is easy to deploy.
In this paper, we investigate whether one-stage detectors can perform well in the long-tailed scenario.
We propose the Equalized Focal Loss (EFL), which rebalances the loss contribution of positive and negative samples.
arXiv Detail & Related papers (2022-01-07T18:35:58Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Contrast and Classify: Training Robust VQA Models [60.80627814762071]
We propose a novel training paradigm (ConClaT) that optimizes both cross-entropy and contrastive losses.
We find that optimizing both losses, either alternately or jointly, is key to effective training.
arXiv Detail & Related papers (2020-10-13T00:23:59Z)
- Single-partition adaptive Q-learning [0.0]
Single-partition adaptive Q-learning (SPAQL) is an algorithm for model-free episodic reinforcement learning.
Tests on episodes with a large number of time steps show that SPAQL scales without problems, unlike adaptive Q-learning (AQL).
We claim that SPAQL may have higher sample efficiency than AQL, making it a relevant contribution to the field of efficient model-free RL methods.
arXiv Detail & Related papers (2020-07-14T00:03:25Z)
- Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2-20 points on current benchmarks.
arXiv Detail & Related papers (2020-03-16T00:29:14Z)
- Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large-vocabulary, long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
arXiv Detail & Related papers (2020-03-11T09:14:53Z)
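The original equalization loss listed above rests on a simple idea: do not let the many samples of frequent classes suppress rare classes through negative gradients. A schematic weight term can be sketched as below; the frequency-threshold formulation and the value of `lambda_thresh` are assumptions for illustration (the paper's full loss includes additional terms not shown here).

```python
import numpy as np

def eql_weights(class_freq, label_onehot, lambda_thresh=1e-3):
    """Schematic of the equalization-loss idea: zero out the weight of the
    negative (suppressing) term for categories whose training frequency
    falls below a threshold, so rare categories are not discouraged by
    samples of other classes. `lambda_thresh` is illustrative.

    class_freq   -- per-category sample frequency, shape (C,)
    label_onehot -- one-hot ground-truth vector for this sample, shape (C,)
    returns      -- per-category loss weights in {0, 1}, shape (C,)
    """
    tail = (class_freq < lambda_thresh).astype(float)   # 1 for rare classes
    # Weight is 0 only when the class is rare AND this sample is a negative
    # for it; positives and frequent classes keep full weight.
    return 1.0 - tail * (1.0 - label_onehot)
```

For a sample of a frequent class, a rare class gets weight 0 (its negative gradient is ignored), while the rare class keeps weight 1 on its own positive samples.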
This list is automatically generated from the titles and abstracts of the papers in this site.