Forest R-CNN: Large-Vocabulary Long-Tailed Object Detection and Instance
Segmentation
- URL: http://arxiv.org/abs/2008.05676v2
- Date: Wed, 3 Mar 2021 04:51:39 GMT
- Title: Forest R-CNN: Large-Vocabulary Long-Tailed Object Detection and Instance
Segmentation
- Authors: Jialian Wu, Liangchen Song, Tiancai Wang, Qian Zhang, Junsong Yuan
- Abstract summary: We exploit prior knowledge of the relations among object categories to cluster fine-grained classes into coarser parent classes.
We propose a simple yet effective resampling method, NMS Resampling, to re-balance the data distribution.
Our method, termed Forest R-CNN, can serve as a plug-and-play module for most object recognition models.
- Score: 75.93960390191262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the previous success of object analysis, detecting and
segmenting a large number of object categories with a long-tailed data
distribution remains a challenging problem and is less investigated. For a
large-vocabulary classifier, the chance of obtaining noisy logits is much
higher, which can easily lead to wrong recognition. In this paper, we exploit
prior knowledge of the relations among object categories to cluster
fine-grained classes into coarser parent classes, and construct a
classification tree that is responsible for parsing an object instance into a
fine-grained category via its parent class. In the classification tree, since
the number of parent-class nodes is significantly smaller, their logits are
less noisy and can be utilized to suppress the wrong/noisy logits present in
the fine-grained class nodes. Because there is no unique way to construct the
parent classes, we further build multiple trees to form a classification
forest, where each tree contributes its vote to the fine-grained
classification. To alleviate the imbalanced learning caused by the long-tail
phenomenon, we propose a simple yet effective resampling method, NMS
Resampling, to re-balance the data distribution. Our method, termed Forest
R-CNN, can serve as a plug-and-play module for most object recognition models
when recognizing more than 1000 categories. Extensive experiments are
performed on the large-vocabulary dataset LVIS. Compared with the Mask R-CNN
baseline, Forest R-CNN significantly boosts performance, with AP improvements
of 11.5% and 3.9% on the rare categories and overall categories, respectively.
Moreover, we achieve state-of-the-art results on the LVIS dataset. Code is
available at https://github.com/JialianW/Forest_RCNN.
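To make the abstract's two mechanisms concrete, here is a minimal sketch, not the authors' released code: the function names, the within-parent softmax suppression rule, and the NMS thresholds below are all our own illustrative assumptions. Each tree softmaxes the fine-grained logits within a parent and damps them by the parent's probability, the forest averages the per-tree votes, and NMS Resampling raises the NMS IoU threshold for rarer categories so more of their proposals survive training.

```python
# Illustrative sketch of the classification forest and NMS Resampling
# described in the abstract; names and constants are assumptions.
import numpy as np

def tree_scores(fine_logits, parent_logits, parent_of):
    """One tree's vote: suppress noisy fine-grained logits with the
    (less noisy) parent-class logits."""
    p = np.exp(parent_logits - parent_logits.max())
    parent_probs = p / p.sum()
    fine_probs = np.zeros_like(fine_logits, dtype=float)
    for parent in range(len(parent_logits)):
        child = np.flatnonzero(parent_of == parent)  # this parent's fine classes
        e = np.exp(fine_logits[child] - fine_logits[child].max())
        # Softmax within the parent, damped by the parent's probability:
        fine_probs[child] = parent_probs[parent] * e / e.sum()
    return fine_probs

def forest_scores(fine_logits, trees):
    """Average the votes of several trees; each tree is a
    (parent_logits, parent_of) pair built from a different clustering."""
    return np.mean([tree_scores(fine_logits, pl, po) for pl, po in trees], axis=0)

def nms_resampling_threshold(group):
    """NMS Resampling, discrete flavor: rarer categories get a higher NMS
    IoU threshold during training so more of their proposals survive,
    re-balancing the data distribution (thresholds are illustrative)."""
    return {"rare": 0.9, "common": 0.8, "frequent": 0.7}[group]
```

The intuition: a spuriously large fine-grained logit that sits under an unlikely parent is multiplied by a small parent probability, so it can no longer dominate the prediction.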
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
arXiv Detail & Related papers (2024-09-25T14:12:50Z)
- Early-Exit with Class Exclusion for Efficient Inference of Neural Networks [4.180653524441411]
We propose a class-based early-exit mechanism for dynamic inference in deep neural networks (DNNs).
We take advantage of the features learned in intermediate layers to exclude as many irrelevant classes as possible.
Experimental results demonstrate that the computational cost of DNN inference can be reduced significantly.
arXiv Detail & Related papers (2023-09-23T18:12:27Z)
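A hedged sketch of the class-exclusion idea above; `stages` (per-stage feature extractors), `heads` (auxiliary `nn.Linear` classifiers over all classes), a batch of one, and both thresholds are our own assumptions, not the paper's code.

```python
import torch

@torch.no_grad()
def classify_with_early_exit(x, stages, heads, keep=0.5, exit_conf=0.95):
    """After each stage, an auxiliary head scores the still-active classes,
    the least likely classes are excluded, and inference exits as soon as
    one class is confident enough. All thresholds are illustrative."""
    active = torch.arange(heads[0].out_features)      # start with every class
    for stage, head in zip(stages, heads):
        x = stage(x)
        probs = head(x.flatten(1))[0, active].softmax(-1)
        if probs.max() >= exit_conf:                  # confident: exit early
            return active[probs.argmax()].item()
        k = max(1, int(len(active) * keep))           # fraction of classes to keep
        top = probs.topk(k)
        active, probs = active[top.indices], top.values
    return active[probs.argmax()].item()              # best surviving class
```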
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method achieves similar performance on image classification for balanced datasets.
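The fixed-classifier idea can be sketched with the standard simplex-ETF construction W = sqrt(C/(C-1)) * U(I - (1/C)11^T), with U column-orthonormal; the function name and dimensions below are our own.

```python
import torch

def simplex_etf(feat_dim, num_classes):
    """Build a simplex equiangular tight frame: unit-norm class vectors with
    equal pairwise angles, used as a classifier that is never trained."""
    assert feat_dim >= num_classes  # QR needs at least num_classes dimensions
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))  # orthonormal columns
    center = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    return (num_classes / (num_classes - 1)) ** 0.5 * u @ center

# Usage sketch: logits = features @ simplex_etf(512, 10); only the backbone learns.
```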
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- A Topological Data Analysis Based Classifier [1.6668132748773563]
This paper proposes an algorithm that applies Topological Data Analysis directly to multi-class classification problems.
The proposed algorithm builds a filtered simplicial complex on the dataset.
On average, the proposed TDABC method outperformed KNN and weighted-KNN.
arXiv Detail & Related papers (2021-11-09T15:54:16Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
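A toy illustration of the CvS idea above, with our own majority-vote aggregation rule (the paper's exact rule may differ):

```python
import numpy as np

def label_from_segmentation(seg_map, background=0):
    """Derive an image-level class label from a predicted per-pixel
    segmentation map by majority vote over the non-background pixels."""
    pixels = seg_map[seg_map != background]
    if pixels.size == 0:
        return background                      # nothing was segmented
    classes, counts = np.unique(pixels, return_counts=True)
    return int(classes[counts.argmax()])
```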
- GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks [28.92347073786722]
Graph neural networks (GNNs) have achieved state-of-the-art performance on node classification.
We propose a novel framework, GraphSMOTE, in which an embedding space is constructed to encode the similarity among the nodes.
New samples are synthesized in this space to ensure genuineness.
arXiv Detail & Related papers (2021-03-16T03:23:55Z)
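The synthesis step above can be pictured with a small SMOTE-style sketch; as our own simplification, the edge generator that wires the synthetic node back into the graph is omitted, and all names are illustrative.

```python
import numpy as np

def synthesize_node(emb, labels, minority_class, rng=None):
    """Interpolate between a minority-class node embedding and its nearest
    same-class neighbor, so the synthetic sample stays on the class manifold.
    Assumes the minority class has at least two nodes."""
    rng = rng or np.random.default_rng()
    idx = np.flatnonzero(labels == minority_class)
    i = rng.choice(idx)
    others = idx[idx != i]
    j = others[np.linalg.norm(emb[others] - emb[i], axis=1).argmin()]  # nearest neighbor
    delta = rng.random()                       # interpolation weight in [0, 1)
    return emb[i] + delta * (emb[j] - emb[i])
```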
- The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation [93.17367076148348]
We investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
arXiv Detail & Related papers (2020-07-23T12:49:07Z)
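The bi-level sampling mentioned above can be sketched in a few lines; this is our own reading of the summary, and `images_by_class` is a hypothetical class-to-images index.

```python
import random

def bilevel_sample(images_by_class):
    """First level: draw a class uniformly, so tail classes are visited as
    often as head classes. Second level: draw an image containing that class."""
    cls = random.choice(list(images_by_class))
    return random.choice(images_by_class[cls])
```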
- A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of the landmark general CNN classifiers, which presented top-notch results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether fine-grained classifiers are able to elevate the baseline in their experiments.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
- Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
arXiv Detail & Related papers (2020-03-11T09:14:53Z)
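Schematically, the equalization loss masks the suppressing (negative) gradient for tail classes inside a per-class sigmoid cross-entropy; a hedged sketch, with the frequency threshold `lam` and all tensor shapes as our own assumptions:

```python
import torch
import torch.nn.functional as F

def equalization_loss(logits, targets, class_freq, lam=1e-3, foreground=True):
    """Per-class sigmoid BCE in which negative terms are ignored for classes
    whose training frequency is below `lam`, so rare categories are not
    constantly suppressed by samples of other classes. Shapes: (num_classes,)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    tail = (class_freq < lam).float()           # 1 for rare (tail) classes
    e = 1.0 if foreground else 0.0              # mask applies on foreground proposals
    w = 1.0 - e * tail * (1.0 - targets)        # drop negatives of tail classes
    return (w * bce).sum()
```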
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.