A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers
- URL: http://arxiv.org/abs/2003.11154v3
- Date: Wed, 3 Nov 2021 01:15:41 GMT
- Title: A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers
- Authors: Saeed Anwar, Nick Barnes and Lars Petersson
- Abstract summary: We investigate the performance of landmark general CNN classifiers, which presented top-notch results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We show an extensive evaluation on six datasets to determine whether fine-grained classifiers are able to improve upon the general baselines in their experiments.
- Score: 54.996358399108566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To make the best use of the underlying minute and subtle differences,
fine-grained classifiers collect information about inter-class variations. The
task is very challenging because entities of the same class differ only
slightly in color, viewpoint, and structure. Classification becomes even more
difficult because differences in viewpoint between classes can resemble the
viewpoint variations within a single class. In this work, we investigate the
performance on fine-grained datasets of the landmark general CNN classifiers,
which presented top-notch results on large-scale classification datasets, and
compare them against state-of-the-art fine-grained classifiers. In this paper,
we pose two specific questions: (i) Do the general CNN classifiers achieve
comparable results to fine-grained classifiers? (ii) Do general CNN
classifiers require any specific information to improve upon the fine-grained
ones? Throughout this work, we train the general CNN classifiers without
introducing any aspect that is specific to fine-grained datasets. We present
an extensive evaluation on six datasets to determine whether the fine-grained
classifiers are able to improve upon the general baselines in their experiments.
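The evaluation protocol described above amounts to plain fine-tuning of an ImageNet-pretrained network on a fine-grained dataset, with no task-specific additions. Below is a minimal PyTorch sketch of such a baseline, assuming torchvision and a generic image-folder dataset; the dataset path and hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing; nothing fine-grained-specific.
tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical fine-grained dataset laid out as one folder per class.
train_set = datasets.ImageFolder("data/fine_grained/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# A landmark general classifier, pretrained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```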
Related papers
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
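A minimal sketch of the feature-level augmentation idea in the entry above, using fixed class-conditional Gaussian noise in place of the paper's learnable augmentation; the function names and the strength parameter are illustrative assumptions.

```python
import torch

def augment_features(feats, labels, class_var, strength=0.5):
    """Perturb deep features with class-conditional Gaussian noise.

    feats: (N, D) penultimate-layer features
    class_var: (C, D) per-class feature variances (diagonal covariance)
    A simplified, non-learnable stand-in for semantic feature augmentation.
    """
    std = class_var[labels].sqrt()                   # (N, D) per-sample stds
    return feats + torch.randn_like(feats) * std * strength

# Usage: feed augmented features to the classifier head during training,
# e.g. logits = head(augment_features(feats, labels, class_var)).
```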
- Understanding CNN Fragility When Learning With Imbalanced Data [1.1444576186559485]
Convolutional neural networks (CNNs) have achieved impressive results on imbalanced image data, but they still have difficulty generalizing to minority classes.
We focus on their latent features to demystify CNN decisions on imbalanced data.
We show that important information regarding the ability of a neural network to generalize to minority classes resides in the class top-K CE (class embeddings) and FE (feature embeddings).
arXiv Detail & Related papers (2022-10-17T22:40:06Z)
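One plausible reading of the analysis in the entry above, sketched with mean-feature class embeddings and a top-K nearest-class query; this is an illustration, not the paper's exact CE/FE procedure.

```python
import torch
import torch.nn.functional as F

def class_embeddings(feats, labels, num_classes):
    """Class embedding = mean penultimate-layer feature of each class."""
    return torch.stack([feats[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def top_k_confusable(ce, class_idx, k=3):
    """Classes whose embeddings lie closest to a given (e.g. minority) class."""
    sims = F.cosine_similarity(ce[class_idx].unsqueeze(0), ce, dim=1)
    sims[class_idx] = -1.0  # exclude the class itself
    return sims.topk(k).indices
```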
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method is able to achieve similar performance on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
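The fixed classifier in the entry above is a simplex equiangular tight frame. A minimal sketch of the standard ETF construction with illustrative dimensions:

```python
import torch

def simplex_etf(feat_dim: int, num_classes: int) -> torch.Tensor:
    """Random simplex ETF: num_classes maximally and equally separated vectors
    in feat_dim dimensions (requires feat_dim >= num_classes)."""
    U = torch.linalg.qr(torch.randn(feat_dim, num_classes)).Q  # orthonormal cols
    M = (num_classes / (num_classes - 1)) ** 0.5 * (
        torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes)
    return (U @ M).t()  # (num_classes, feat_dim) classifier weights

classifier = torch.nn.Linear(512, 100, bias=False)
classifier.weight.data = simplex_etf(512, 100)
classifier.weight.requires_grad_(False)  # random ETF, fixed during training
```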
- Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework [21.055845469999532]
We propose a learning-from-disagreement framework to visually compare two classification models.
Specifically, we train a discriminator to learn from the instances on which the two models disagree.
We interpret the trained discriminator with the SHAP values of different meta-features.
arXiv Detail & Related papers (2022-01-19T20:15:35Z)
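A minimal sketch of the framework in the entry above, assuming two sets of model predictions and placeholder meta-features; the random-forest discriminator and generic SHAP tree-explainer call stand in for the paper's visual-analytics pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
meta_X = rng.normal(size=(1000, 5))      # placeholder per-instance meta-features
preds_a = rng.integers(0, 2, 1000)       # placeholder predictions of model A
preds_b = rng.integers(0, 2, 1000)       # placeholder predictions of model B

disagree = (preds_a != preds_b).astype(int)  # learn from disagreed instances

# Discriminator: predict disagreement from meta-features alone.
disc = RandomForestClassifier(n_estimators=100, random_state=0)
disc.fit(meta_X, disagree)

# Interpret the trained discriminator with SHAP values of the meta-features.
shap_values = shap.TreeExplainer(disc).shap_values(meta_X)
```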
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distributions.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
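A common form of normalized classifier, sketched below for the long-tailed setting in the entry above; the paper's exact variant may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    """Cosine classifier: features and class weights are L2-normalized, so
    large head-class weight norms cannot dominate tail classes."""
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats):
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))
```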
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
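A minimal sketch of the two-stage recipe in the entry above, with a random projection standing in for the learned self-supervised encoder and a kernel one-class SVM as the shallow second-stage classifier.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
proj = rng.normal(size=(64, 16))        # placeholder for a trained encoder

def encode(x):
    """Stage 1 (assumed done): map inputs to learned representations."""
    return x @ proj

train_feats = encode(rng.normal(size=(500, 64)))   # one-class training data
test_feats = encode(rng.normal(size=(100, 64)))

# Stage 2: build a one-class classifier on the frozen representations.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(train_feats)
scores = ocsvm.decision_function(test_feats)       # higher = more "normal"
```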
- Exploring the Interchangeability of CNN Embedding Spaces [0.5735035463793008]
We map between 10 image-classification CNNs and between 4 facial-recognition CNNs.
For CNNs trained on the same classes and sharing a common backend-logit architecture, a linear mapping can always be computed directly from the backend layer weights.
The implications are far-reaching, suggesting an underlying commonality between representations learned by networks designed and trained for a common task.
arXiv Detail & Related papers (2020-10-05T20:32:40Z)
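A least-squares reading of the claim in the entry above: if both backend layers map embeddings to logits over the same classes, equating the logits yields a direct linear map between the two embedding spaces. Dimensions are illustrative, and the paper's exact construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
C, dA, dB = 1000, 512, 256               # shared classes, two embedding widths

# Backend (final fully connected) weights of two CNNs on the same classes.
W_A = rng.normal(size=(C, dA))           # logits_A = W_A @ h_A
W_B = rng.normal(size=(C, dB))           # logits_B = W_B @ h_B

# Matching logits, W_B @ h_B ~= W_A @ h_A, gives the linear mapping:
M = np.linalg.pinv(W_B) @ W_A            # (dB, dA): A-embeddings -> B-space

h_A = rng.normal(size=(dA,))
h_B_hat = M @ h_A                        # h_A expressed in B's embedding space
```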
- Multilayer Dense Connections for Hierarchical Concept Classification [3.6093339545734886]
We propose multilayer dense connectivity for concurrent prediction of a category and its conceptual superclasses, in hierarchical order, by the same CNN.
We experimentally demonstrate that our proposed network can simultaneously predict both coarse superclasses and finer categories better than several existing algorithms on multiple datasets.
arXiv Detail & Related papers (2020-03-19T20:56:09Z)
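A minimal sketch of densely connected coarse-to-fine heads on a shared backbone, illustrating the idea in the entry above rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalHeads(nn.Module):
    """Two densely connected heads on one CNN: the fine-category head also
    sees the superclass head's output (coarse-to-fine, same network)."""
    def __init__(self, feat_dim, num_super, num_fine):
        super().__init__()
        self.super_head = nn.Linear(feat_dim, num_super)
        self.fine_head = nn.Linear(feat_dim + num_super, num_fine)

    def forward(self, feats):
        super_logits = self.super_head(feats)
        fine_logits = self.fine_head(torch.cat([feats, super_logits], dim=1))
        return super_logits, fine_logits
```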
- Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)
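A loosely inspired sketch of class-based regularization of layer activations, gating channels by their affinity to learned class vectors; the paper's exact formulation differs, and all module names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassRegularizationBlock(nn.Module):
    """Modulate intermediate activations by their affinity to class vectors,
    nudging features toward class-discriminative directions."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.class_proj = nn.Linear(channels, num_classes, bias=False)
        self.back_proj = nn.Linear(num_classes, channels, bias=False)

    def forward(self, x):                    # x: (N, C, H, W) or (N, C, T, H, W)
        pooled = x.flatten(2).mean(dim=2)    # global average pool -> (N, C)
        affinity = F.softmax(self.class_proj(pooled), dim=1)
        gate = torch.sigmoid(self.back_proj(affinity))   # per-channel scale
        return x * gate.view(*gate.shape, *([1] * (x.dim() - 2)))
```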
This list is automatically generated from the titles and abstracts of the papers on this site.