Deep transfer learning for image classification: a survey
- URL: http://arxiv.org/abs/2205.09904v1
- Date: Fri, 20 May 2022 00:03:39 GMT
- Title: Deep transfer learning for image classification: a survey
- Authors: Jo Plested, Tom Gedeon
- Abstract summary: The best results in image classification are achieved when large deep models can be trained on abundant labelled data.
In scenarios where this is not possible, transfer learning can help improve performance.
We present a new taxonomy of the applications of transfer learning for image classification.
- Score: 4.590533239391236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks such as convolutional neural networks (CNNs) and
transformers have achieved many successes in image classification in recent
years. It has been consistently demonstrated that the best results in image
classification are achieved when large deep models can be trained on abundant
labelled data. However, there are many real-world scenarios where the
requirement for large amounts of training data to get the best performance
cannot be met. In these scenarios, transfer learning can help improve
performance. To date there
have been no surveys that comprehensively review deep transfer learning as it
relates to image classification overall. However, several recent general
surveys of deep transfer learning and ones that relate to particular
specialised target image classification tasks have been published. We believe
it is important for the future progress in the field that all current knowledge
is collated and the overarching patterns analysed and discussed. In this survey
we formally define deep transfer learning and the problem it attempts to solve
in relation to image classification. We survey the current state of the field
and identify where recent progress has been made. We show where the gaps in
current knowledge are and make suggestions for how to progress the field to
fill in these knowledge gaps. We present a new taxonomy of the applications of
transfer learning for image classification. This taxonomy makes it easier to
see overarching patterns of where transfer learning has been effective and
where it has failed to fulfill its potential. This also allows us to suggest
where the problems lie and how it could be used more effectively. We show that
under this new taxonomy, many of the applications where transfer learning has
been shown to be ineffective, or even to hinder performance, are to be expected
once the source and target datasets and the techniques used are taken into account.
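The core recipe the survey analyses, reusing a network pretrained on a large source dataset and training only a new task-specific head on scarce target data, can be sketched in a few lines. This is a minimal illustration, not a method from the paper: a fixed random projection stands in for the pretrained backbone (in practice this would be, e.g., an ImageNet-pretrained CNN with its classifier removed), and the data and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a fixed (frozen) feature extractor.
W_backbone = 0.1 * rng.normal(size=(64, 16))

def features(x):
    """Frozen features: the backbone weights are never updated."""
    return np.tanh(x @ W_backbone)

# Small labelled target dataset -- the data-scarce scenario where
# transfer learning helps most.
X = rng.normal(size=(40, 64))
Phi = features(X)
y = (Phi @ rng.normal(size=16) > 0).astype(float)  # synthetic labels

# Train only a new linear head (logistic regression) on the frozen features.
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))   # sigmoid predictions
    w -= lr * Phi.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean(((Phi @ w + b) > 0).astype(float) == y)
```

Only the 17 head parameters are learned here; freezing the backbone is what makes training on 40 examples feasible.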
Related papers
- Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can similarly provide useful premonitions.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z)
- Genetic Programming-Based Evolutionary Deep Learning for Data-Efficient Image Classification
This paper proposes a new genetic programming-based evolutionary deep learning approach to data-efficient image classification.
The new approach can automatically evolve variable-length models using many important operators from both image and classification domains.
A flexible multi-layer representation enables the new approach to automatically construct shallow or deep models/trees for different tasks.
arXiv Detail & Related papers (2022-09-27T08:10:16Z)
- SATS: Self-Attention Transfer for Continual Semantic Segmentation
Continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning.
This study proposes to transfer a new type of information relevant to knowledge, i.e. the relationships between elements within each image.
The relationship information can be effectively obtained from the self-attention maps in a Transformer-style segmentation model.
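The kind of within-image relationship information this entry describes can be pictured with a small sketch: compute the self-attention maps of an old (teacher) and a new (student) model over an image's tokens, and penalise their disagreement with a distillation loss. A hypothetical minimal illustration with random weights, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_map(x, Wq, Wk):
    """Self-attention weights A = softmax(Q K^T / sqrt(d)) over one image's tokens."""
    q, k = x @ Wq, x @ Wk
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=-1, keepdims=True)

tokens, d = 6, 4
x = rng.normal(size=(tokens, d))               # token embeddings of one image

# "Old" (teacher) and "new" (student) attention parameters.
Wq_t, Wk_t = rng.normal(size=(d, d)), rng.normal(size=(d, d))
Wq_s, Wk_s = rng.normal(size=(d, d)), rng.normal(size=(d, d))

A_teacher = attention_map(x, Wq_t, Wk_t)
A_student = attention_map(x, Wq_s, Wk_s)

# Distillation loss: penalise the student for changing the token-to-token
# relationships captured by the teacher's attention map.
distill_loss = np.mean((A_student - A_teacher) ** 2)
```

Minimising this loss alongside the new task's loss is what discourages forgetting of previously learned relationships.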
arXiv Detail & Related papers (2022-03-15T06:09:28Z)
- Transfer of Pretrained Model Weights Substantially Improves Semi-Supervised Image Classification
Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples.
Deep neural networks tend to overfit when only a small number of labeled examples is used for training.
We show that transfer learning always substantially improves the model's accuracy when few labeled examples are available.
arXiv Detail & Related papers (2021-09-02T08:58:34Z)
- Few-Shot Learning for Image Classification of Common Flora
We showcase results from testing various state-of-the-art transfer learning weights and architectures against similar state-of-the-art meta-learning approaches for image classification using Model-Agnostic Meta-Learning (MAML).
Our results show that both practices perform adequately when the dataset is sufficiently large, but that both struggle to maintain sufficient performance when data sparsity is introduced.
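MAML, the meta-learning baseline this entry compares against, optimises a shared initialisation so that a few gradient steps adapt it to a new task. A first-order toy sketch on synthetic 1-D regression (my own illustration under simplifying assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy few-shot regression task: y = a*x + b with task-specific a, b."""
    a, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-1.0, 1.0, size=5)
    return x, a * x + b

def loss_and_grad(w, b, x, y):
    err = w * x + b - y
    return np.mean(err ** 2), np.mean(2 * err * x), np.mean(2 * err)

inner_lr, outer_lr = 0.1, 0.05
w_meta, b_meta = 0.0, 0.0
for _ in range(1000):
    x, y = sample_task()
    # Inner loop: adapt to the sampled task with one gradient step.
    _, gw, gb = loss_and_grad(w_meta, b_meta, x, y)
    w_task, b_task = w_meta - inner_lr * gw, b_meta - inner_lr * gb
    # Outer loop (first-order MAML): update the shared initialisation
    # using the gradient evaluated at the adapted parameters.
    _, gw2, gb2 = loss_and_grad(w_task, b_task, x, y)
    w_meta -= outer_lr * gw2
    b_meta -= outer_lr * gb2

# Fast adaptation to a new task from the meta-learned initialisation.
x_new, y_new = sample_task()
loss_before, gw, gb = loss_and_grad(w_meta, b_meta, x_new, y_new)
w_a, b_a = w_meta - inner_lr * gw, b_meta - inner_lr * gb
loss_after, _, _ = loss_and_grad(w_a, b_a, x_new, y_new)
```

The full MAML algorithm backpropagates through the inner step (a second-order update); the first-order variant shown here drops that term for simplicity.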
arXiv Detail & Related papers (2021-05-07T03:54:51Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- What is being transferred in transfer learning?
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
- CrossTransformers: spatially-aware few-shot transfer
Given new tasks with very little data, modern vision systems degrade remarkably quickly.
We show how the neural network representations which underpin modern vision systems are subject to supervision collapse.
We propose self-supervised learning to encourage general-purpose features that transfer better.
arXiv Detail & Related papers (2020-07-22T15:37:08Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
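Adversarial training, the ingredient these authors find improves transfer, can be sketched on a toy classifier: perturb each input with the fast gradient sign method (FGSM) in the direction that increases the loss, then train on the perturbed examples. A hedged illustration on synthetic data, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data standing in for images.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(8)
eps, lr = 0.1, 0.5
for _ in range(300):
    # FGSM: move each input a step of size eps in the sign of the
    # input-gradient of the loss (d loss / d x = (p - y) * w).
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Adversarial training: fit the model on the perturbed examples.
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)

clean_accuracy = np.mean(((X @ w) > 0).astype(float) == y)
```

The cited result is that backbones trained this way, despite sometimes lower source accuracy, yield features that transfer better to new target tasks.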
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Evaluating the Progress of Deep Learning for Visual Relational Concepts
We will show that difficult tasks are linked to relational concepts from cognitive psychology.
We will review research that is linked to relational concept learning, even if it was not originally presented from this angle.
We will recommend steps to make future datasets more relevant for testing systems on relational reasoning.
arXiv Detail & Related papers (2020-01-29T14:21:34Z)
- Continual Local Replacement for Few-shot Learning
The goal of few-shot learning is to learn a model that can recognize novel classes from only one or a few training examples.
It is challenging mainly for two reasons: (1) good feature representations of novel classes are lacking; (2) a few labeled examples cannot accurately represent the true data distribution.
A novel continual local replacement strategy is proposed to address the data deficiency problem.
arXiv Detail & Related papers (2020-01-23T04:26:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.