Few-Shot Learning for Image Classification of Common Flora
- URL: http://arxiv.org/abs/2105.03056v1
- Date: Fri, 7 May 2021 03:54:51 GMT
- Title: Few-Shot Learning for Image Classification of Common Flora
- Authors: Joshua Ball
- Abstract summary: We will showcase our results from testing various state-of-the-art transfer learning weights and architectures versus similar state-of-the-art works in the meta-learning field for image classification utilizing Model-Agnostic Meta Learning (MAML).
Our results show that both practices provide adequate performance when the dataset is sufficiently large, but that both struggle to maintain sufficient performance when data sparsity is introduced.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of meta-learning and transfer learning in the task of few-shot image
classification is a well-researched area, with many papers showcasing the
advantages of transfer learning over meta-learning in cases where data is
plentiful and there are no major limitations on computational resources. In this
paper we will showcase our experimental results from testing various
state-of-the-art transfer learning weights and architectures versus similar
state-of-the-art works in the meta-learning field for image classification
utilizing Model-Agnostic Meta Learning (MAML). Our results show that both
practices provide adequate performance when the dataset is sufficiently large,
but that both struggle to maintain sufficient performance when data sparsity
is introduced. This problem is moderately reduced with the use of image
augmentation and the fine-tuning of hyperparameters. In this paper we will:
(1) describe our process of developing a robust multi-class convolutional
neural network (CNN) for the task of few-shot image classification, (2)
demonstrate that transfer learning is the superior method for building an
image classification model when the dataset is large, and (3) show that MAML
outperforms transfer learning in the case where data is very limited. The code
is available here: github.com/JBall1/Few-Shot-Limited-Data
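For readers unfamiliar with MAML, the following is a minimal PyTorch-style sketch of one meta-update of the kind the paper evaluates; the model, task batch, and learning rates are hypothetical placeholders, not the authors' implementation (see the repository above for that).

```python
import torch
import torch.nn.functional as F

def maml_meta_step(model, tasks, meta_opt, inner_lr=0.01):
    """One MAML meta-update over a batch of few-shot tasks.

    Each task is a tuple (support_x, support_y, query_x, query_y).
    Hypothetical sketch: model, optimizer, and rates are placeholders.
    """
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        # Inner loop: adapt a differentiable copy of the weights on the support set.
        fast = {name: p.clone() for name, p in model.named_parameters()}
        inner_loss = F.cross_entropy(
            torch.func.functional_call(model, fast, (support_x,)), support_y)
        grads = torch.autograd.grad(inner_loss, list(fast.values()), create_graph=True)
        fast = {name: p - inner_lr * g for (name, p), g in zip(fast.items(), grads)}
        # Outer loop: evaluate the adapted weights on the query set and
        # backpropagate through the adaptation step itself.
        outer_loss = F.cross_entropy(
            torch.func.functional_call(model, fast, (query_x,)), query_y)
        (outer_loss / len(tasks)).backward()
    meta_opt.step()
```

Because adaptation uses only a handful of support images, a trained meta-learner can pick up a new plant class at test time from very few examples, which is the regime where the paper finds MAML outperforming transfer learning.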
Related papers
- Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation [53.95204595640208]
Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.
Previous approaches have generated synthetic images at high resolutions without leveraging information from real images.
The proposed method, MUSE, generates images at lower resolutions while using Class Activation Maps (CAMs) to ensure that the generated images retain critical, class-specific features.
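MUSE's generation pipeline is not reproduced here, but the CAMs it relies on are a standard construction (Zhou et al., 2016): weight the final convolutional feature maps by the classifier weights of the target class. A minimal sketch, with illustrative shapes and names:

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """Classic CAM: weight last-layer conv feature maps by classifier weights.

    features:  (C, H, W) activations from the final conv layer
    fc_weight: (num_classes, C) weights of the final linear classifier
    Shapes and argument names are illustrative assumptions.
    """
    cam = torch.einsum('c,chw->hw', fc_weight[class_idx], features)
    cam = torch.relu(cam)                                      # keep positive evidence
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```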
arXiv Detail & Related papers (2024-11-26T02:23:31Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy for bolstering image classification performance is to augment the training set with synthetic images generated by text-to-image (T2I) models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
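Diff-Mix's class-to-class translation requires a diffusion model and is not sketched here; classical mixup conveys the underlying inter-class mixing idea in a few lines. This is a simplified stand-in under that caveat, not Diff-Mix itself:

```python
import torch
import torch.nn.functional as F

def mixup(x, y, num_classes, alpha=0.4):
    """Classical inter-class mixup (Zhang et al., 2018): blend image pairs
    and their one-hot labels. A simple stand-in for Diff-Mix's idea of
    mixing information across classes; parameters are illustrative.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))                 # pair each image with another
    x_mix = lam * x + (1 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```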
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of the unbalanced distribution of interactions via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Mixture of Self-Supervised Learning [2.191505742658975]
Self-supervised learning works by training a model on a pretext task before applying it to a specific downstream task.
Previous studies have only used one type of transformation as a pretext task.
This raises the question of how performance is affected when more than one pretext task is used, with a gating network combining all pretext tasks.
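The summary leaves the architecture open; one plausible reading, sketched below under that assumption, is a shared encoder feeding several pretext-task heads whose outputs are blended by a learned softmax gate. All names and dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class GatedPretextMixture(nn.Module):
    """Hypothetical sketch: combine several pretext-task heads via a gate."""

    def __init__(self, encoder, feat_dim, num_tasks):
        super().__init__()
        self.encoder = encoder
        self.heads = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(num_tasks)])
        self.gate = nn.Sequential(nn.Linear(feat_dim, num_tasks), nn.Softmax(dim=-1))

    def forward(self, x):
        z = self.encoder(x)                                           # shared representation
        weights = self.gate(z)                                        # (B, num_tasks)
        outs = torch.stack([head(z) for head in self.heads], dim=1)   # (B, num_tasks, D)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)              # gated combination
```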
arXiv Detail & Related papers (2023-07-27T14:38:32Z)
- Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning in Encrypted Traffic Classification [68.19713459228369]
We compare transfer learning, meta-learning and contrastive learning against reference tree-based Machine Learning (ML) and monolithic Deep Learning (DL) models.
We show that (i) using large datasets we can obtain more general representations and (ii) contrastive learning is the best methodology.
While tree-based ML models cannot handle large tasks but fit small tasks well, DL methods, by reusing learned representations, reach tree-based performance on small tasks too.
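For reference, the contrastive family that comparison favors is typified by the NT-Xent (SimCLR-style) loss over two augmented views; this sketch illustrates the objective, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss for two views z1, z2 of shape (B, D):
    each embedding must identify its augmented counterpart among all others.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2B, D)
    sim = z @ z.t() / temperature                    # pairwise similarities
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device),
                     float('-inf'))                  # exclude self-pairs
    # Row i's positive is its other view: i+n for the first half, i-n for the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```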
arXiv Detail & Related papers (2023-05-21T11:20:49Z)
- Agricultural Plantation Classification using Transfer Learning Approach based on CNN [0.0]
The efficiency of hyper-spectral image recognition has increased significantly with deep learning.
CNNs and Multi-Layer Perceptrons (MLPs) have been demonstrated to be excellent methods of classifying images.
We propose using the method of transfer learning to decrease the training time and reduce the dependence on large labeled datasets.
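This and the terrain-classification paper below share the same recipe; in PyTorch it usually amounts to reusing pretrained weights and retraining only a new classification head, as in this sketch (the ResNet-50 backbone and ImageNet weights are illustrative choices):

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes, freeze_backbone=True):
    """Typical transfer-learning setup: keep pretrained features, retrain
    the head. Backbone and weights are illustrative, not the paper's.
    """
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False                       # reuse features as-is
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model
```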
arXiv Detail & Related papers (2022-06-19T14:43:31Z)
- Terrain Classification using Transfer Learning on Hyperspectral Images: A Comparative study [0.13999481573773068]
Convolutional neural networks (CNNs) and Multi-Layer Perceptrons (MLPs) have been proven to be effective methods of image classification.
However, they suffer from the issues of long training time and the requirement of large amounts of labeled data.
We propose using the method of transfer learning to decrease the training time and reduce the dependence on large labeled datasets.
arXiv Detail & Related papers (2022-06-19T14:36:33Z)
- Deep transfer learning for image classification: a survey [4.590533239391236]
Best practice for image classification is to train large deep models on abundant labelled data.
When such data is not available, transfer learning can help improve performance.
We present a new taxonomy of the applications of transfer learning for image classification.
arXiv Detail & Related papers (2022-05-20T00:03:39Z)
- Memory Efficient Meta-Learning with Large Images [62.70515410249566]
Meta-learning approaches to few-shot classification are computationally efficient at test time, requiring just a few optimization steps or a single forward pass to learn a new task, but they are memory-intensive to train.
This limitation arises because a task's entire support set, which can contain up to 1000 images, must be processed before an optimization step can be taken.
We propose LITE, a general and memory efficient episodic training scheme that enables meta-training on large tasks composed of large images on a single GPU.
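The crux of LITE is that the whole support set is still embedded, but the autograd graph is retained for only a small random subset, which bounds memory per optimization step. A rough sketch of that idea, with a hypothetical encoder and subset size:

```python
import torch

def lite_embed_support(encoder, support_x, support_y, backprop_k=8):
    """Embed a full support set while keeping gradients for only k images.

    Rough sketch of the LITE idea; `encoder` and `backprop_k` are
    hypothetical placeholders, not the paper's API.
    """
    perm = torch.randperm(support_x.size(0))
    live, frozen = perm[:backprop_k], perm[backprop_k:]
    with torch.no_grad():                      # no graph for most of the task
        z_frozen = encoder(support_x[frozen])
    z_live = encoder(support_x[live])          # graph kept for k images only
    z = torch.cat([z_live, z_frozen])          # gradients flow via z_live alone
    y = torch.cat([support_y[live], support_y[frozen]])  # labels in matching order
    return z, y
```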
arXiv Detail & Related papers (2021-07-02T14:37:13Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
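The two-step decoupling itself is easy to picture: learn an embedding with a self-supervised model, then cluster the frozen features. In this sketch KMeans stands in for SCAN's learnable clustering head, so it shows the decoupling rather than SCAN's full method:

```python
import torch
from sklearn.cluster import KMeans

def two_step_clustering(encoder, images, num_clusters):
    """Step 1: embed images with a self-supervised encoder (frozen here).
    Step 2: cluster the embeddings. KMeans is a stand-in for SCAN's
    learnable clustering head; names are illustrative.
    """
    with torch.no_grad():
        feats = encoder(images).cpu().numpy()   # semantically meaningful features
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats)
```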
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
- Multi-task pre-training of deep neural networks for digital pathology [8.74883469030132]
We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images.
We show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance.
arXiv Detail & Related papers (2020-05-05T08:50:17Z)