Adaptive Class Suppression Loss for Long-Tail Object Detection
- URL: http://arxiv.org/abs/2104.00885v1
- Date: Fri, 2 Apr 2021 05:12:31 GMT
- Title: Adaptive Class Suppression Loss for Long-Tail Object Detection
- Authors: Tong Wang, Yousong Zhu, Chaoyang Zhao, Wei Zeng, Jinqiao Wang and Ming Tang
- Abstract summary: We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
- Score: 49.7273558444966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address the problem of long-tail distribution for the large vocabulary
object detection task, existing methods usually divide the whole categories
into several groups and treat each group with different strategies. These
methods introduce two problems. One is the training inconsistency
between adjacent categories of similar sizes, and the other is that the learned
model lacks discrimination for tail categories which are semantically
similar to some of the head categories. In this paper, we devise a novel
Adaptive Class Suppression Loss (ACSL) to effectively tackle the above problems
and improve the detection performance of tail categories. Specifically, we
introduce a statistic-free perspective to analyze the long-tail distribution,
breaking the limitation of manual grouping. According to this perspective, our
ACSL adjusts the suppression gradients for each sample of each class
adaptively, ensuring the training consistency and boosting the discrimination
for rare categories. Extensive experiments on long-tail datasets LVIS and Open
Images show that our ACSL achieves 5.18% and 5.2% improvements with
ResNet50-FPN, and sets a new state of the art. Code and models are available at
https://github.com/CASIA-IVA-Lab/ACSL.
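To make the adaptive suppression idea concrete, the sketch below shows one way a sigmoid-based classification loss can suppress only those negative classes the model is already confident about, leaving uncertain (often rare) classes untouched. The threshold `tau`, the gating rule, and the normalisation are illustrative assumptions, not details taken from the abstract.

```python
import torch
import torch.nn.functional as F

def adaptive_class_suppression_loss(logits, targets, tau=0.7):
    """Sigmoid classification loss with adaptive suppression of negative classes.

    logits:  (N, C) raw classification scores for N proposals over C classes.
    targets: (N,)   ground-truth class index per proposal.
    tau:     assumed confidence threshold above which a negative class is
             treated as "confusing" and therefore suppressed.
    """
    probs = torch.sigmoid(logits)
    pos_mask = F.one_hot(targets, logits.size(1)).float()

    # Positives always receive a gradient. A negative class is suppressed only
    # when the model already scores it highly (i.e. it is being confused with
    # the true class); low-confidence negatives, often rare classes, get no
    # discouraging gradient at all.
    neg_gate = ((probs >= tau) & (pos_mask == 0)).float()
    weights = pos_mask + neg_gate

    bce = F.binary_cross_entropy_with_logits(logits, pos_mask, reduction="none")
    return (weights * bce).sum() / max(pos_mask.sum().item(), 1.0)

# toy usage: 8 proposals over an LVIS-sized vocabulary of 1203 classes
loss = adaptive_class_suppression_loss(torch.randn(8, 1203),
                                       torch.randint(0, 1203, (8,)))
```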
Related papers
- Balanced Classification: A Unified Framework for Long-Tailed Object Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- DiGeo: Discriminative Geometry-Aware Learning for Generalized Few-Shot Object Detection [39.937724871284665]
Generalized few-shot object detection aims to achieve precise detection on both base classes with abundant annotations and novel classes with limited training data.
Existing approaches enhance few-shot generalization with the sacrifice of base-class performance.
We propose a new training framework, DiGeo, to learn Geometry-aware features of inter-class separation and intra-class compactness.
arXiv Detail & Related papers (2023-03-16T22:37:09Z)
- PatchMix Augmentation to Identify Causal Features in Few-shot Learning [55.64873998196191]
Few-shot learning aims to transfer knowledge learned from base categories with sufficient labelled data to novel categories with scarce labelled information.
We propose a novel data augmentation strategy, dubbed PatchMix, that breaks this spurious dependency.
We show that such an augmentation mechanism, different from existing ones, is able to identify the causal features.
arXiv Detail & Related papers (2022-11-29T08:41:29Z)
- Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning [20.66927648806676]
We propose a novel framework for semi-supervised semantic segmentation, named adaptive equalization learning (AEL).
AEL balances the training of well-performing and poorly-performing categories, using a confidence bank to track category-wise performance.
AEL outperforms the state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks.
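As a rough illustration of what a category-wise confidence bank could look like, the sketch below keeps an EMA of per-category confidence and converts it into loss weights that favour poorly-performing categories; both the EMA update and the inverse-confidence weighting are assumptions for illustration, and the paper's full method is more involved than this.

```python
import torch

class ConfidenceBank:
    """EMA of per-category confidence (illustrative sketch, not the exact AEL bank)."""

    def __init__(self, num_classes, momentum=0.999):
        self.momentum = momentum
        self.conf = torch.full((num_classes,), 0.5)   # start from a neutral value

    @torch.no_grad()
    def update(self, probs, labels):
        # probs: (N, C) softmax outputs on labelled pixels; labels: (N,) ground truth
        for c in labels.unique():
            batch_conf = probs[labels == c, c].mean().item()
            self.conf[c] = self.momentum * self.conf[c] + (1 - self.momentum) * batch_conf

    def category_weights(self):
        # Poorly-performing (low-confidence) categories get larger loss weights.
        w = 1.0 / self.conf.clamp(min=1e-3)
        return w / w.mean()

# usage sketch on a 19-class segmentation problem (e.g. Cityscapes)
bank = ConfidenceBank(num_classes=19)
probs = torch.softmax(torch.randn(1024, 19), dim=1)
labels = torch.randint(0, 19, (1024,))
bank.update(probs, labels)
per_class_weight = bank.category_weights()   # e.g. plug into a weighted CE loss
```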
arXiv Detail & Related papers (2021-10-11T17:59:55Z)
- Exploring Classification Equilibrium in Long-Tailed Object Detection [29.069986049436157]
We propose to use the mean classification score to indicate the classification accuracy for each category during training.
We balance the classification via an Equilibrium Loss (EBL) and a Memory-augmented Feature Sampling (MFS) method.
It improves the detection performance of tail classes by 15.6 AP, and outperforms the most recent long-tailed object detectors by more than 1 AP.
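One simple way to turn such running mean scores into a balancing signal is a logit-adjustment-style handicap, sketched below; the momentum, the scale, and the additive adjustment are assumptions for illustration and differ from the paper's actual EBL and MFS formulations.

```python
import torch
import torch.nn.functional as F

class MeanScoreTracker:
    """Running mean classification score per category (sketch)."""

    def __init__(self, num_classes, momentum=0.99):
        self.momentum = momentum
        self.mean_score = torch.zeros(num_classes)

    @torch.no_grad()
    def update(self, probs, labels):
        # probs: (N, C) softmax scores; labels: (N,) ground-truth classes
        for c in labels.unique():
            s = probs[labels == c, c].mean().item()
            self.mean_score[c] = self.momentum * self.mean_score[c] + (1 - self.momentum) * s

def score_adjusted_logits(logits, tracker, scale=1.0):
    # Training-time handicap in the spirit of logit adjustment: classes that are
    # already classified well (high mean score) get a positive offset, so weak
    # classes must earn a genuinely larger margin to win the softmax.
    return logits + scale * tracker.mean_score.to(logits.device)

# usage sketch
tracker = MeanScoreTracker(num_classes=1203)
logits = torch.randn(16, 1203)
labels = torch.randint(0, 1203, (16,))
tracker.update(torch.softmax(logits, dim=1), labels)
loss = F.cross_entropy(score_adjusted_logits(logits, tracker), labels)
```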
arXiv Detail & Related papers (2021-08-17T08:39:04Z)
- Seesaw Loss for Long-Tailed Instance Segmentation [131.86306953253816]
We propose Seesaw Loss to dynamically re-balance gradients of positive and negative samples for each category.
The mitigation factor reduces the penalty on tail categories according to the ratio of cumulative training instances between categories.
The compensation factor increases the penalty on misclassified instances to avoid false positives for tail categories.
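Read literally, the two factors amount to a per-pair rescaling of the negative terms in a softmax cross-entropy; the sketch below follows that reading, with the exponents `p` and `q` and the exact normalisation chosen as illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def seesaw_style_loss(logits, targets, cum_counts, p=0.8, q=2.0, eps=1e-6):
    """Softmax cross-entropy with per-pair rescaling of negative logits (sketch).

    cum_counts: (C,) cumulative number of training instances per category.
    p, q: assumed exponents for the mitigation and compensation factors.
    """
    C = logits.size(1)
    probs = torch.softmax(logits.detach(), dim=1)
    onehot = F.one_hot(targets, C).float()

    # Mitigation: for a sample of class i, shrink the discouraging gradient on a
    # negative class j when j has fewer cumulative instances than i.
    counts = cum_counts.float().clamp(min=1.0)
    ratio = counts.unsqueeze(0) / counts[targets].unsqueeze(1)      # N_j / N_i, shape (N, C)
    mitigation = ratio.clamp(max=1.0).pow(p)

    # Compensation: if a negative class j currently outscores the true class i,
    # enlarge its penalty to suppress false positives on (often tail) class j.
    p_true = probs.gather(1, targets.unsqueeze(1))                  # (N, 1)
    compensation = (probs / p_true.clamp(min=eps)).clamp(min=1.0).pow(q)

    seesaw = mitigation * compensation
    seesaw = seesaw * (1 - onehot) + onehot                         # true class stays untouched
    adjusted = logits + torch.log(seesaw.clamp(min=eps))
    return F.cross_entropy(adjusted, targets)

# toy usage
loss = seesaw_style_loss(torch.randn(4, 1203),
                         torch.randint(0, 1203, (4,)),
                         cum_counts=torch.randint(1, 10_000, (1203,)))
```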
arXiv Detail & Related papers (2020-08-23T12:44:45Z)
- Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax [88.11979569564427]
We provide the first systematic analysis of the underperformance of state-of-the-art models on long-tailed distributions.
We propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training.
Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors.
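Group-wise training can be read as restricting the softmax to categories with comparable instance counts, so head classes no longer dominate the normaliser for tail classes. The sketch below illustrates that idea with assumed count thresholds and without the background handling a full detection head needs.

```python
import torch
import torch.nn.functional as F

def group_softmax_loss(logits, targets, class_counts, bins=(0, 100, 1000, 10_000)):
    """Group-wise softmax cross-entropy (sketch).

    Categories are bucketed by training-instance count (`bins` are assumed
    thresholds) and the softmax is computed only over the ground-truth class's
    group, so frequent classes do not overwhelm rare ones in one global softmax.
    """
    group_id = torch.bucketize(class_counts.float(),
                               torch.tensor(bins, dtype=torch.float)).to(logits.device)
    loss = logits.new_zeros(())
    for n in range(logits.size(0)):
        members = (group_id == group_id[targets[n]]).nonzero(as_tuple=True)[0]
        local_target = (members == targets[n]).nonzero(as_tuple=True)[0]
        loss = loss + F.cross_entropy(logits[n, members].unsqueeze(0), local_target)
    return loss / logits.size(0)

# toy usage
loss = group_softmax_loss(torch.randn(4, 1203),
                          torch.randint(0, 1203, (4,)),
                          class_counts=torch.randint(1, 20_000, (1203,)))
```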
arXiv Detail & Related papers (2020-06-18T10:24:26Z)
- Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
arXiv Detail & Related papers (2020-03-11T09:14:53Z)