A Comparison of Deep Learning Classification Methods on Small-scale
Image Data set: from Convolutional Neural Networks to Visual Transformers
- URL: http://arxiv.org/abs/2107.07699v1
- Date: Fri, 16 Jul 2021 04:13:10 GMT
- Title: A Comparison of Deep Learning Classification Methods on Small-scale
Image Data set: from Convolutional Neural Networks to Visual Transformers
- Authors: Peng Zhao, Chen Li, Md Mamunur Rahaman, Hechen Yang, Tao Jiang and
Marcin Grzegorzek
- Abstract summary: This article explains the application and characteristics of convolutional neural networks and visual transformers.
A series of experiments is carried out on small datasets using various models.
Recommended deep learning models are given according to the application environment.
- Score: 18.58928427116305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep learning has made brilliant achievements in
image classification. However, image classification on small datasets has
still not achieved good results. This article first briefly explains the
application and characteristics of convolutional neural networks and visual
transformers. Meanwhile, the influence of small datasets on classification,
and possible solutions, are introduced. Then a series of experiments is
carried out on small datasets using various models, and the problems some
models exhibit in these experiments are discussed. By comparing the
experimental results, recommended deep learning models are given according to
the application environment. Finally, we give directions for future work.
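As a rough sketch of the comparison setup described in the abstract, the snippet below fine-tunes an ImageNet-pretrained CNN and a Visual Transformer on the same small dataset. This is a minimal PyTorch/torchvision sketch, not the authors' training code; the 10-class setting is an assumption.

```python
import torch.nn as nn
from torchvision import models

def build(name: str, num_classes: int) -> nn.Module:
    """Load an ImageNet-pretrained backbone and swap in a fresh head."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "vit_b_16":
        m = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
        m.heads.head = nn.Linear(m.heads.head.in_features, num_classes)
    else:
        raise ValueError(name)
    return m

# Fine-tune both under an identical schedule and compare validation accuracy.
cnn, vit = build("resnet50", 10), build("vit_b_16", 10)
```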
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework that reinforces classification models using counterfactual images generated under language guidance.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19 Image Classification [57.1795052451257]
We study how the performance of GAN-based augmentation depends on dataset size, with a focus on small samples.
We train StyleGAN2-ADA on both datasets and, after validating the quality of the generated images, use the trained GANs as one of the augmentation approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation for medium and large datasets, but underperforms on smaller ones.
arXiv Detail & Related papers (2024-01-26T08:28:13Z)
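A minimal sketch of the GAN-as-augmentation idea above: sample class-conditional images from an already-trained generator and mix them into the real training set. The generator(z, y) call signature and z_dim are assumptions for illustration, not the StyleGAN2-ADA API.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset

@torch.no_grad()
def gan_augment(generator, n_per_class, num_classes, z_dim=512):
    """Sample class-conditional fakes from an already-trained generator."""
    xs, ys = [], []
    for c in range(num_classes):
        z = torch.randn(n_per_class, z_dim)
        y = torch.full((n_per_class,), c, dtype=torch.long)
        xs.append(generator(z, y))  # hypothetical conditional-GAN interface
        ys.append(y)
    return TensorDataset(torch.cat(xs), torch.cat(ys))

# Mix fakes into the real training set before fitting the classifier:
# train_set = ConcatDataset([real_train_set, gan_augment(G, 500, num_classes)])
```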
- An evaluation of pre-trained models for feature extraction in image classification [0.0]
This work aims to compare the performance of different pre-trained neural networks for feature extraction in image classification tasks.
Our results demonstrate that the best general performance across the datasets was achieved by CLIP-ViT-B and ViT-H-14, while the CLIP-ResNet50 model had similar performance with less variability.
arXiv Detail & Related papers (2023-10-03T13:28:14Z)
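The feature-extraction protocol above can be sketched as: freeze a pretrained encoder, drop its classification head, and fit a shallow classifier on the embeddings. A torchvision ViT stands in here for the CLIP variants the paper evaluates; treat it as an assumed setup.

```python
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.DEFAULT
encoder = vit_b_16(weights=weights).eval()
encoder.heads = torch.nn.Identity()  # keep the 768-dim class-token features
preprocess = weights.transforms()

@torch.no_grad()
def embed(batch):  # batch: (B, 3, H, W) float tensor in [0, 1]
    return encoder(preprocess(batch))

# Fit any shallow classifier (e.g. logistic regression) on embed(train_x).
```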
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
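One common way to fold attribution maps into training, sketched below, is to penalize input-gradient attribution that falls outside a supervision mask. This is a generic saliency-regularization sketch, not necessarily the CHALLENGER formulation; mask and lam are assumed inputs.

```python
import torch
import torch.nn.functional as F

def attribution_regularized_loss(model, x, y, mask, lam=0.1):
    """Cross-entropy plus a penalty on attribution outside mask (B, H, W)."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Input-gradient attribution, kept in the graph so the penalty trains too.
    attr = torch.autograd.grad(ce, x, create_graph=True)[0].abs().sum(dim=1)
    return ce + lam * (attr * (1.0 - mask)).mean()
```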
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
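The knowledge-graph-to-synthetic-data step above might look like the following: verbalize sampled triples with templates and use the resulting sentences to adapt a language model. The triples and templates here are invented for illustration.

```python
# Verbalize knowledge-graph triples into synthetic adaptation sentences.
triples = [
    ("ice", "HasProperty", "cold"),    # illustrative triples, not drawn
    ("oven", "UsedFor", "baking"),     # from any specific knowledge graph
]
templates = {
    "HasProperty": "{head} is typically {tail}.",
    "UsedFor": "A {head} is used for {tail}.",
}
synthetic = [
    templates[rel].format(head=h, tail=t)
    for h, rel, t in triples if rel in templates
]
# The sampling strategy and the size of `synthetic` are the knobs studied.
print(synthetic)
```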
- Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation [77.34726150561087]
This article explores various data distillation techniques that can reduce the amount of data required to successfully train deep networks.
Inspired by recent ideas, we suggest new data distillation techniques based on generative teaching networks, gradient matching, and the Implicit Function Theorem.
arXiv Detail & Related papers (2022-03-16T11:45:32Z)
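Gradient matching, one of the distillation techniques named above, optimizes a small synthetic set so that the parameter gradients it induces match those from real data. A minimal sketch of such a loss, assuming a cosine-distance objective (the paper's exact objective may differ):

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, real_x, real_y, syn_x, syn_y):
    """Distance between parameter gradients on real vs. synthetic batches."""
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), model.parameters())
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), model.parameters(),
        create_graph=True)  # keep the graph so syn_x itself can be optimized
    return sum(1.0 - F.cosine_similarity(r.flatten(), s.flatten(), dim=0)
               for r, s in zip(g_real, g_syn))

# Outer loop (sketch): syn_x is a learnable tensor that an optimizer updates
# to minimize this loss against fresh real batches.
```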
- A Comparison for Anti-noise Robustness of Deep Learning Classification Methods on a Tiny Object Image Dataset: from Convolutional Neural Network to Visual Transformer and Performer [27.023667473278266]
We first briefly review the development of Convolutional Neural Networks and Visual Transformers in deep learning.
We then use various models of Convolutional Neural Network and Visual Transformer to conduct a series of experiments on the image dataset of tiny objects.
We discuss the problems in the classification of tiny objects and offer an outlook on future work in this area.
arXiv Detail & Related papers (2021-06-03T15:28:17Z)
- Rethinking Natural Adversarial Examples for Classification Models [43.87819913022369]
ImageNet-A is a famous dataset of natural adversarial examples.
We validated the hypothesis by reducing the background influence in ImageNet-A examples with object detection techniques.
Experiments showed that the object detection models with various classification models as backbones obtained much higher accuracy than their corresponding classification models.
arXiv Detail & Related papers (2021-02-23T14:46:48Z)
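The detect-then-classify setup above can be approximated with off-the-shelf torchvision models: run a detector, crop the highest-scoring box, and classify the crop. A rough sketch under those assumptions; the paper's exact detector/classifier pairing may differ.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

detector = fasterrcnn_resnet50_fpn(
    weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval()
cls_weights = ResNet50_Weights.DEFAULT
classifier = resnet50(weights=cls_weights).eval()
preprocess = cls_weights.transforms()

@torch.no_grad()
def detect_then_classify(image):
    """image: float tensor (3, H, W) in [0, 1]; returns a class index."""
    boxes = detector([image])[0]["boxes"]  # boxes come sorted by score
    if len(boxes) > 0:                     # classify the top detection crop
        x1, y1, x2, y2 = boxes[0].round().int().tolist()
        image = image[:, y1:y2, x1:x2]
    return classifier(preprocess(image).unsqueeze(0)).argmax(dim=1).item()
```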
- PK-GCN: Prior Knowledge Assisted Image Classification using Graph Convolution Networks [3.4129083593356433]
Similarity between classes can influence the performance of classification.
We propose a method that incorporates class-similarity knowledge into convolutional neural network models.
Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
arXiv Detail & Related papers (2020-09-24T18:31:35Z)
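One simple way to inject class-similarity knowledge into training is similarity-weighted label smoothing, sketched below. This is a stand-in illustration, not PK-GCN's graph-convolution formulation; the similarity matrix S is assumed to be given prior knowledge.

```python
import torch
import torch.nn.functional as F

def similarity_soft_targets(y, S, eps=0.1):
    """Smear eps of the label mass toward similar classes.

    y: (B,) integer labels; S: (C, C) row-normalized similarity prior.
    """
    onehot = F.one_hot(y, num_classes=S.size(0)).float()
    return (1.0 - eps) * onehot + eps * (onehot @ S)

def soft_cross_entropy(logits, y, S):
    targets = similarity_soft_targets(y, S)
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```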
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
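Adversarial training here means training on worst-case perturbed inputs. A standard L-infinity PGD recipe is sketched below; the step size, budget, and iteration count are conventional defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Craft L-infinity-bounded adversarial examples with PGD."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()

# Adversarial training step: train on the perturbed batch, e.g.
# loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
```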
- Looking back to lower-level information in few-shot learning [4.873362301533825]
We propose the utilization of lower-level, supporting information, namely the feature embeddings of the hidden neural network layers, to improve classification accuracy.
Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
arXiv Detail & Related papers (2020-05-27T20:32:13Z)
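A sketch of the lower-level-information idea above: pool feature maps from several hidden stages and concatenate them, e.g. to feed a nearest-prototype few-shot classifier. The choice of ResNet-18 stages is an assumption for illustration.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
feats = {}
for name in ("layer2", "layer3", "layer4"):
    getattr(model, name).register_forward_hook(
        lambda module, inputs, output, key=name: feats.__setitem__(key, output))

@torch.no_grad()
def multi_level_embedding(x):
    """Concatenate globally pooled features from several network depths."""
    model(x)
    pooled = [feats[k].mean(dim=(2, 3)) for k in ("layer2", "layer3", "layer4")]
    return torch.cat(pooled, dim=1)  # (B, 128 + 256 + 512)
```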