SGNet: A Super-class Guided Network for Image Classification and Object
Detection
- URL: http://arxiv.org/abs/2104.12898v1
- Date: Mon, 26 Apr 2021 22:26:12 GMT
- Title: SGNet: A Super-class Guided Network for Image Classification and Object
Detection
- Authors: Kaidong Li, Nina Y. Wang, Yiju Yang and Guanghui Wang
- Abstract summary: The paper proposes a super-class guided network (SGNet) to integrate the high-level semantic information into the network.
The experimental results validate the proposed approach and demonstrate its superior performance on image classification and object detection.
- Score: 15.853822797338655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most classification models treat different object classes in parallel and the
misclassifications between any two classes are treated equally. In contrast,
human beings can exploit high-level information in making a prediction of an
unknown object. Inspired by this observation, the paper proposes a super-class
guided network (SGNet) to integrate the high-level semantic information into
the network so as to increase its performance in inference. SGNet takes
two-level class annotations that contain both super-class and finer class
labels. The super-classes are higher-level semantic categories that consist of
a certain number of finer classes. A super-class branch (SCB), trained on
super-class labels, is introduced to guide finer class prediction. At the
inference time, we adopt two different strategies: Two-step inference (TSI) and
direct inference (DI). TSI first predicts the super-class and then makes
predictions of the corresponding finer class. On the other hand, DI directly
generates predictions from the finer class branch (FCB). Extensive experiments
have been performed on CIFAR-100 and MS COCO datasets. The experimental results
validate the proposed approach and demonstrate its superior performance on
image classification and object detection.
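The two inference strategies described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the super-class-to-finer-class mapping, the class names, and the logit values are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical mapping from each super-class to its finer classes
# (the grouping is illustrative, not from the paper).
SUPER_TO_FINE = {
    0: [0, 1],  # e.g. "vehicles" -> {car, truck}
    1: [2, 3],  # e.g. "animals"  -> {cat, dog}
}

def two_step_inference(super_logits, fine_logits):
    """TSI: predict the super-class first, then pick the best
    finer class among that super-class's members."""
    s = int(np.argmax(super_logits))
    best = max(SUPER_TO_FINE[s], key=lambda c: fine_logits[c])
    return s, best

def direct_inference(fine_logits):
    """DI: predict directly from the finer-class branch (FCB)."""
    return int(np.argmax(fine_logits))

super_logits = np.array([0.2, 1.5])           # super-class branch (SCB) output
fine_logits = np.array([0.9, 0.1, 2.0, 0.3])  # finer-class branch (FCB) output

print(two_step_inference(super_logits, fine_logits))  # -> (1, 2)
print(direct_inference(fine_logits))                  # -> 2
```

In this toy example the two strategies agree; they can differ when the finer-class branch's top logit falls outside the predicted super-class.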
Related papers
- Instance-level Few-shot Learning with Class Hierarchy Mining [26.273796311012042]
We exploit hierarchical information to leverage discriminative and relevant features of base classes to effectively classify novel objects.
These features are extracted from abundant data of base classes, which could be utilized to reasonably describe classes with scarce data.
In order to effectively train the hierarchy-based-detector in FSIS, we apply the label refinement to further describe the associations between fine-grained classes.
arXiv Detail & Related papers (2023-04-15T02:55:08Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them with new class data, they suffer from catastrophic forgetting: the model cannot clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- GAN for Vision, KG for Relation: a Two-stage Deep Network for Zero-shot Action Recognition [33.23662792742078]
We propose a two-stage deep neural network for zero-shot action recognition.
In the sampling stage, we utilize a generative adversarial network (GAN) trained on action features and word vectors of seen classes.
In the classification stage, we construct a knowledge graph based on the relationship between word vectors of action classes and related objects.
arXiv Detail & Related papers (2021-05-25T09:34:42Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
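The surrogate-task construction can be sketched roughly as follows: each sample is paired with the index of the U-set it came from, turning the $m$ unlabeled sets into an $m$-class surrogate problem. The data and set contents below are invented for illustration.

```python
# Minimal sketch of surrogate set classification (SSC) data construction:
# each sample inherits the index of the unlabeled set (U-set) it came from,
# so the m U-sets define an m-class surrogate classification problem.
u_sets = [
    [(0.1,), (0.2,)],  # U-set 0 (hypothetical 1-D feature vectors)
    [(0.9,), (1.1,)],  # U-set 1
]
surrogate_data = [(x, i) for i, u in enumerate(u_sets) for x in u]
print(surrogate_data)
# -> [((0.1,), 0), ((0.2,), 0), ((0.9,), 1), ((1.1,), 1)]
```

A standard multi-class classifier can then be trained on `surrogate_data`; the paper's contribution concerns how a binary classifier is recovered from this surrogate task.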
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems [20.253644336965042]
In real-world classification tasks, each class often comprises multiple finer-grained "subclasses".
As the subclass labels are frequently unavailable, models trained using only the coarser-grained class labels often exhibit highly variable performance across different subclasses.
We propose GEORGE, a method to both measure and mitigate hidden stratification even when subclass labels are unknown.
arXiv Detail & Related papers (2020-11-25T18:50:32Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Attribute Propagation Network for Graph Zero-shot Learning [57.68486382473194]
We introduce the attribute propagation network (APNet), which is composed of 1) a graph propagation model generating attribute vector for each class and 2) a parameterized nearest neighbor (NN) classifier.
APNet achieves either compelling performance or new state-of-the-art results in experiments with two zero-shot learning settings and five benchmark datasets.
arXiv Detail & Related papers (2020-09-24T16:53:40Z)
- Many-Class Few-Shot Learning on Multi-Granularity Class Hierarchy [57.68486382473194]
We study the many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.
In this paper, we leverage the class hierarchy as a prior knowledge to train a coarse-to-fine classifier.
The model, "memory-augmented hierarchical-classification network (MahiNet)", performs coarse-to-fine classification where each coarse class can cover multiple fine classes.
arXiv Detail & Related papers (2020-06-28T01:11:34Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.