Seesaw Loss for Long-Tailed Instance Segmentation
- URL: http://arxiv.org/abs/2008.10032v4
- Date: Thu, 17 Jun 2021 15:13:10 GMT
- Title: Seesaw Loss for Long-Tailed Instance Segmentation
- Authors: Jiaqi Wang, Wenwei Zhang, Yuhang Zang, Yuhang Cao, Jiangmiao Pang, Tao
Gong, Kai Chen, Ziwei Liu, Chen Change Loy, Dahua Lin
- Abstract summary: We propose Seesaw Loss to dynamically re-balance gradients of positive and negative samples for each category.
The mitigation factor reduces punishments to tail categories w.r.t. the ratio of cumulative training instances between different categories.
The compensation factor increases the penalty of misclassified instances to avoid false positives of tail categories.
- Score: 131.86306953253816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance segmentation has witnessed remarkable progress on
class-balanced benchmarks. However, models fail to perform as accurately in
real-world scenarios, where the category distribution of objects naturally
comes with a long tail. Instances of head classes dominate a long-tailed
dataset, and they serve as negative samples for tail categories. The
overwhelming gradients of negative samples on tail classes lead to a biased
learning process for classifiers. Consequently, objects of tail categories are
more likely to be misclassified as background or as head categories. To tackle
this problem, we
propose Seesaw Loss to dynamically re-balance gradients of positive and
negative samples for each category, with two complementary factors, i.e.,
mitigation factor and compensation factor. The mitigation factor reduces
punishments to tail categories w.r.t. the ratio of cumulative training
instances between different categories. Meanwhile, the compensation factor
increases the penalty of misclassified instances to avoid false positives of
tail categories. We conduct extensive experiments on Seesaw Loss with
mainstream frameworks and different data sampling strategies. With a simple
end-to-end training pipeline, Seesaw Loss obtains significant gains over
Cross-Entropy Loss, and achieves state-of-the-art performance on LVIS dataset
without bells and whistles. Code is available at
https://github.com/open-mmlab/mmdetection.
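The two factors described in the abstract can be sketched as a per-sample loss: the mitigation factor shrinks the negative-sample penalty a rare class receives from a more frequent ground-truth class (w.r.t. the ratio of cumulative instance counts), while the compensation factor restores the penalty for classes that currently score higher than the ground truth. The NumPy formulation below is an illustrative sketch, not the mmdetection implementation; the function name and the default exponents (p = 0.8, q = 2) are assumptions based on the paper's description.

```python
import numpy as np

def seesaw_loss(logits, target, counts, p=0.8, q=2.0):
    """Illustrative single-sample Seesaw Loss (not the official implementation).

    logits: (C,) raw class scores for one instance
    target: index of the ground-truth class
    counts: (C,) cumulative number of training instances seen per class
    """
    logits = np.asarray(logits, dtype=float)
    counts = np.asarray(counts, dtype=float)

    # Current class probabilities (stable softmax), used by the compensation factor.
    exp_z = np.exp(logits - logits.max())
    probs = exp_z / exp_z.sum()

    S = np.ones(len(logits))  # per-class scaling of negative terms
    for j in range(len(logits)):
        if j == target:
            continue
        # Mitigation factor: reduce punishment of rarer classes,
        # scaled by the instance-count ratio N_j / N_target.
        if counts[j] < counts[target]:
            S[j] *= (counts[j] / counts[target]) ** p
        # Compensation factor: re-increase the penalty for classes that are
        # currently misclassified (scored above the ground truth).
        if probs[j] > probs[target]:
            S[j] *= (probs[j] / probs[target]) ** q

    # Seesaw softmax: negative terms are rescaled by S (S[target] stays 1).
    sigma_target = exp_z[target] / np.sum(S * exp_z)
    return -np.log(sigma_target)
```

With balanced counts and a correctly ranked target, all scale factors are 1 and the loss reduces to plain cross-entropy; when the ground truth is a head class, rarer negatives are down-weighted and the loss drops below cross-entropy.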
Related papers
- Balanced Classification: A Unified Framework for Long-Tailed Object
Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- The Equalization Losses: Gradient-Driven Training for Long-tailed Object
Recognition [84.51875325962061]
We propose a gradient-driven training mechanism to tackle the long-tail problem.
We introduce a new family of gradient-driven loss functions, namely equalization losses.
Our method consistently outperforms the baseline models.
arXiv Detail & Related papers (2022-10-11T16:00:36Z)
- Exploring Classification Equilibrium in Long-Tailed Object Detection [29.069986049436157]
We propose to use the mean classification score to indicate the classification accuracy for each category during training.
We balance the classification via an Equilibrium Loss (EBL) and a Memory-augmented Feature Sampling (MFS) method.
The method improves the detection performance of tail classes by 15.6 AP and outperforms the most recent long-tailed object detectors by more than 1 AP.
arXiv Detail & Related papers (2021-08-17T08:39:04Z)
- DropLoss for Long-Tail Instance Segmentation [56.162929199998075]
We develop DropLoss, a novel adaptive loss to compensate for the imbalance between rare and frequent categories.
We show state-of-the-art mAP across rare, common, and frequent categories on the LVIS dataset.
arXiv Detail & Related papers (2021-04-13T17:59:22Z)
- Adaptive Class Suppression Loss for Long-Tail Object Detection [49.7273558444966]
We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
arXiv Detail & Related papers (2021-04-02T05:12:31Z)
- The Devil is in Classification: A Simple Framework for Long-tail Object
Detection and Instance Segmentation [93.17367076148348]
We investigate performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
arXiv Detail & Related papers (2020-07-23T12:49:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.