DropLoss for Long-Tail Instance Segmentation
- URL: http://arxiv.org/abs/2104.06402v1
- Date: Tue, 13 Apr 2021 17:59:22 GMT
- Title: DropLoss for Long-Tail Instance Segmentation
- Authors: Ting-I Hsieh, Esther Robb, Hwann-Tzong Chen, Jia-Bin Huang
- Abstract summary: We develop DropLoss, a novel adaptive loss to compensate for the imbalance between rare and frequent categories.
We show state-of-the-art mAP across rare, common, and frequent categories on the LVIS dataset.
- Score: 56.162929199998075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-tailed class distributions are prevalent among the practical
applications of object detection and instance segmentation. Prior work in
long-tail instance segmentation addresses the imbalance of losses between rare
and frequent categories by reducing the penalty for a model incorrectly
predicting a rare class label. We demonstrate that the rare categories are
heavily suppressed by correct background predictions, which reduces the
probability for all foreground categories with equal weight. Due to the
relative infrequency of rare categories, this leads to an imbalance that biases
towards predicting more frequent categories. Based on this insight, we develop
DropLoss -- a novel adaptive loss to compensate for this imbalance without a
trade-off between rare and frequent categories. With this loss, we show
state-of-the-art mAP across rare, common, and frequent categories on the LVIS
dataset.
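As a concrete illustration, here is a minimal sketch of a DropLoss-style classification loss, assuming a sigmoid-based box-classification head where label C marks a background proposal. The function name, the inverse-frequency drop schedule, and the drop_scale parameter are illustrative assumptions rather than the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

def drop_loss(logits, labels, class_freq, drop_scale=1.0):
    """logits: (N, C) foreground-class logits; labels: (N,) in [0, C], where
    label C marks a background proposal; class_freq: (C,) instance counts."""
    C = logits.shape[1]
    fg = labels < C
    targets = torch.zeros_like(logits)
    targets[fg, labels[fg]] = 1.0

    # Rarer classes get a higher drop probability, so their logits are not
    # pushed down by the overwhelming number of background proposals.
    drop_prob = drop_scale / (1.0 + class_freq.float())  # assumed schedule
    drop_prob = drop_prob / drop_prob.max()

    w = torch.ones_like(logits)
    bg = ~fg
    # For background proposals, randomly drop the "discourage class j" terms.
    keep = torch.rand(int(bg.sum()), C, device=logits.device) >= drop_prob
    w[bg] = keep.float()

    loss = F.binary_cross_entropy_with_logits(logits, targets, weight=w,
                                              reduction="sum")
    return loss / max(int(fg.sum()), 1)
```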
Related papers
- Balanced Classification: A Unified Framework for Long-Tailed Object Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- Shift Happens: Adjusting Classifiers [2.8682942808330703]
Minimizing expected loss measured by a proper scoring rule, such as Brier score or log-loss (cross-entropy), is a common objective while training a probabilistic classifier.
We propose methods that transform all predictions to (re)equalize the average prediction and the class distribution.
We demonstrate experimentally that, when in practice the class distribution is known only approximately, there is often still a reduction in loss depending on the amount of shift and the precision to which the class distribution is known.
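One simple adjustment in this spirit (not necessarily the paper's exact procedure) rescales each predicted probability by the ratio of the target class prior to the model's average prediction, then renormalizes; iterating drives the average prediction toward the target prior.
```python
import numpy as np

def reweight_to_prior(probs, target_prior, iters=10):
    """probs: (N, K) predicted probabilities; target_prior: (K,) class
    distribution to match, possibly known only approximately."""
    p = np.asarray(probs, dtype=float)
    t = np.asarray(target_prior, dtype=float)
    for _ in range(iters):
        avg = p.mean(axis=0) + 1e-12       # current average prediction
        p = p * (t / avg)                  # shift toward the target prior
        p = p / p.sum(axis=1, keepdims=True)  # renormalize each row
    return p
```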
arXiv Detail & Related papers (2021-11-03T21:27:27Z)
- On Clustering Categories of Categorical Predictors in Generalized Linear Models [0.0]
We propose a method to reduce the complexity of Generalized Linear Models in the presence of categorical predictors.
The traditional one-hot encoding, where each category is represented by a dummy variable, can be wasteful, difficult to interpret, and prone to overfitting.
This paper addresses these challenges by finding a reduced representation of the categorical predictors by clustering their categories.
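A rough illustration of the general idea, not the paper's optimization formulation: fit a GLM on one-hot dummies, cluster the fitted per-category coefficients, and refit on the merged levels. All names and data below are synthetic.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, n_cats = 2000, 12
cats = rng.integers(0, n_cats, size=n)
X = np.eye(n_cats)[cats]                    # one-hot encoding
true_coef = np.repeat([-1.0, 0.0, 1.0], 4)  # categories fall into 3 groups
y = (X @ true_coef + rng.normal(0, 1, n)) > 0

full = LogisticRegression().fit(X, y)
# Cluster the per-category coefficients into a small number of groups.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    full.coef_.reshape(-1, 1))
X_reduced = np.eye(3)[groups[cats]]         # re-encode with merged categories
reduced = LogisticRegression().fit(X_reduced, y)
print(reduced.score(X_reduced, y), "vs", full.score(X, y))
```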
arXiv Detail & Related papers (2021-10-19T15:36:35Z)
- Investigate the Essence of Long-Tailed Recognition from a Unified Perspective [11.080317683184363]
Deep recognition models often suffer under long-tailed data distributions because sample numbers are heavily imbalanced across categories.
In this work, we demonstrate that long-tailed recognition is hurt by both imbalanced sample numbers and category similarity.
arXiv Detail & Related papers (2021-07-08T11:08:40Z)
- Adaptive Class Suppression Loss for Long-Tail Object Detection [49.7273558444966]
We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
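The summary does not spell out the suppression rule, so the following is only a hedged sketch of one adaptive scheme in this spirit: negative categories incur a penalty only when the network confidently confuses them with the ground truth. The function name and threshold value are assumptions.
```python
import torch
import torch.nn.functional as F

def acsl_style_loss(logits, labels, threshold=0.7):
    """logits: (N, C); labels: (N,) ground-truth class indices in [0, C)."""
    probs = torch.sigmoid(logits)
    targets = F.one_hot(labels, logits.shape[1]).float()
    # Weight 1 for the ground-truth class; for negative classes, keep the
    # term only if its predicted score exceeds the suppression threshold.
    w = targets + (1 - targets) * (probs.detach() >= threshold).float()
    return F.binary_cross_entropy_with_logits(
        logits, targets, weight=w, reduction="mean")
```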
arXiv Detail & Related papers (2021-04-02T05:12:31Z)
- Seesaw Loss for Long-Tailed Instance Segmentation [131.86306953253816]
We propose Seesaw Loss to dynamically re-balance gradients of positive and negative samples for each category.
The mitigation factor reduces penalties on tail categories according to the ratio of cumulative training instances between different categories.
The compensation factor increases the penalty of misclassified instances to avoid false positives of tail categories.
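A compact sketch of this mechanism, assuming a softmax classification head: the negative terms in the softmax are rescaled by the mitigation factor (a ratio of cumulative class counts) times the compensation factor (a ratio of predicted probabilities). The exponents p and q trade off the two factors; the values below are assumptions.
```python
import torch
import torch.nn.functional as F

def seesaw_loss(logits, labels, cum_counts, p=0.8, q=2.0):
    """logits: (N, C); labels: (N,); cum_counts: (C,) cumulative instance
    counts per category, accumulated online during training."""
    counts = cum_counts.float().clamp(min=1.0)
    n_i = counts[labels].unsqueeze(1)                   # (N, 1) GT-class counts
    mitigation = (counts / n_i).clamp(max=1.0).pow(p)   # reduce tail penalties
    probs = torch.softmax(logits.detach(), dim=1)
    p_i = probs.gather(1, labels.unsqueeze(1))          # (N, 1) GT probability
    compensation = (probs / p_i.clamp(min=1e-12)).clamp(min=1.0).pow(q)
    S = mitigation * compensation                       # (N, C) seesaw factors
    S.scatter_(1, labels.unsqueeze(1), 1.0)             # GT term left untouched
    # sigma_i = exp(z_i) / sum_j S_ij * exp(z_j)  ==  softmax over z + log(S)
    return F.cross_entropy(logits + S.log(), labels)
```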
arXiv Detail & Related papers (2020-08-23T12:44:45Z)
- Distribution-Balanced Loss for Multi-Label Classification in Long-Tailed Datasets [98.74153364118898]
We present a new loss function called Distribution-Balanced Loss for the multi-label recognition problems that exhibit long-tailed class distributions.
The Distribution-Balanced Loss tackles these issues through two key modifications to the standard binary cross-entropy loss.
Experiments on both Pascal VOC and COCO show that the models trained with this new loss function achieve significant performance gains.
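A simplified sketch of the two modifications for a sigmoid-based multi-label head. The condensed re-balancing weights omit the paper's smoothing function, and the negative-tolerant part omits class-specific margins; lambda_ is an illustrative setting.
```python
import torch
import torch.nn.functional as F

def db_loss(logits, targets, class_freq, lambda_=0.1):
    """logits, targets: (N, C) with multi-hot float targets; class_freq: (C,)."""
    # (1) Re-balanced weighting: down-weight positives that resampling would
    # oversample through label co-occurrence (condensed; no smoothing).
    pc = 1.0 / class_freq.float()             # class-level sampling prob
    pi = (targets * pc).sum(1, keepdim=True)  # instance-level sampling prob
    r = torch.where(targets > 0, pc / pi.clamp(min=1e-12),
                    torch.ones_like(logits))

    # (2) Negative-tolerant regularization: scale negative logits by lambda_
    # so the mass of easy negatives cannot overwhelm tail classes.
    pos = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none") * targets
    neg = F.binary_cross_entropy_with_logits(
        lambda_ * logits, targets, reduction="none") * (1 - targets) / lambda_
    return (r * (pos + neg)).mean()
```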
arXiv Detail & Related papers (2020-07-19T11:50:10Z)
- Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
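The weighting rule can be sketched directly: on foreground proposals, the discouraging (negative) terms of rare categories are masked out of a sigmoid cross-entropy, while background proposals keep all terms. The frequency threshold below is an illustrative value.
```python
import torch
import torch.nn.functional as F

def equalization_loss(logits, labels, class_freq, tail_thresh=1e-3):
    """logits: (N, C); labels: (N,) in [0, C] with C = background;
    class_freq: (C,) category frequencies in the training set."""
    N, C = logits.shape
    fg = labels < C
    targets = torch.zeros_like(logits)
    targets[fg, labels[fg]] = 1.0
    is_tail = (class_freq < tail_thresh).float()  # T(f_j): 1 for rare classes
    # w_j = 1 - E(r) * T(f_j) * (1 - y_j): drop negative terms of tail
    # classes on foreground proposals only.
    w = 1.0 - fg.float().unsqueeze(1) * is_tail * (1.0 - targets)
    return F.binary_cross_entropy_with_logits(
        logits, targets, weight=w, reduction="sum") / N
```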
arXiv Detail & Related papers (2020-03-11T09:14:53Z)