Fruit Quality and Defect Image Classification with Conditional GAN Data Augmentation
- URL: http://arxiv.org/abs/2104.05647v1
- Date: Mon, 12 Apr 2021 17:13:05 GMT
- Title: Fruit Quality and Defect Image Classification with Conditional GAN Data Augmentation
- Authors: Jordan J. Bird, Chloe M. Barnes, Luis J. Manso, Anikó Ekárt, Diego R. Faria
- Abstract summary: We suggest a machine learning pipeline that combines the ideas of fine-tuning, transfer learning, and generative model-based training data augmentation.
We find that appending a 4096 neuron fully connected layer to the convolutional layers leads to an image classification accuracy of 83.77%.
We then train a Conditional Generative Adversarial Network on the training data for 2000 epochs, and it learns to generate relatively realistic images.
- Score: 2.6424021470496672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contemporary Artificial Intelligence technologies allow for the employment of
Computer Vision to discern good crops from bad, providing a step in the
pipeline of selecting healthy fruit from undesirable fruit, such as those which
are mouldy or gangrenous. State-of-the-art works in the field report high
accuracy results on small datasets (<1000 images), which are not representative
of the population regarding real-world usage. The goals of this study are to
further enable real-world usage by improving generalisation with data
augmentation as well as to reduce overfitting and energy usage through model
pruning. In this work, we suggest a machine learning pipeline that combines the
ideas of fine-tuning, transfer learning, and generative model-based training
data augmentation towards improving fruit quality image classification. A
linear network topology search is performed to tune a VGG16 lemon quality
classification model using a publicly-available dataset of 2690 images. We find
that appending a 4096 neuron fully connected layer to the convolutional layers
leads to an image classification accuracy of 83.77%. We then train a
Conditional Generative Adversarial Network on the training data for 2000
epochs, and it learns to generate relatively realistic images. Grad-CAM
analysis of the model trained on real photographs shows that the synthetic
images can exhibit classifiable characteristics such as shape, mould, and
gangrene. A higher image classification accuracy of 88.75% is then attained by
augmenting the training with synthetic images, arguing that Conditional
Generative Adversarial Networks have the ability to produce new data to
alleviate issues of data scarcity. Finally, model pruning is performed via
polynomial decay, where we find that the Conditional GAN-augmented
classification network can retain 81.16% classification accuracy when
compressed to 50% of its original size.
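To make the pipeline concrete, here is a minimal TensorFlow/Keras sketch of the classification and pruning stages: a frozen ImageNet-pretrained VGG16 convolutional base with the 4096-neuron fully connected layer appended, where that layer is wrapped for magnitude pruning under a polynomial-decay schedule ramping to 50% sparsity. The input resolution, class count, optimiser, pruning step budget, and choice of which layers to prune are assumptions not specified in the abstract, and the Conditional GAN augmentation stage (mixing synthetic images into the training set before fitting) is not shown in this first block.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 2            # assumption: good-quality vs. bad-quality lemons
IMG_SHAPE = (224, 224, 3)  # assumption: standard VGG16 input resolution

# Polynomial-decay sparsity schedule ramping from 0% to 50% sparsity,
# matching the 50% compression target reported in the abstract.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5,
    begin_step=0, end_step=2000)  # step budget is an assumption
prune = tfmot.sparsity.keras.prune_low_magnitude

# Transfer learning: frozen VGG16 convolutional base pre-trained on ImageNet.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SHAPE)
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SHAPE)
x = base(inputs, training=False)
x = layers.Flatten()(x)
# The 4096-neuron fully connected layer found by the topology search,
# wrapped for low-magnitude weight pruning under the schedule above.
x = prune(layers.Dense(4096, activation="relu"), pruning_schedule=schedule)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training must include the pruning callback so the sparsity schedule advances:
# model.fit(train_ds, epochs=..., callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```

For the augmentation stage, a class-conditional generator can be conditioned by embedding the class label and concatenating it with the latent noise vector, as in the DCGAN-style sketch below (continuing the imports above). The latent dimension, layer widths, and 64x64 output resolution are illustrative assumptions, since the abstract does not describe the generator topology.

```python
def build_generator(latent_dim=100, n_classes=NUM_CLASSES):
    noise = tf.keras.Input(shape=(latent_dim,))
    label = tf.keras.Input(shape=(1,), dtype="int32")
    # Condition on the class label: embed it and concatenate with the noise.
    l = layers.Flatten()(layers.Embedding(n_classes, 50)(label))
    x = layers.Concatenate()([noise, l])
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    # Upsample 8x8 -> 16x16 -> 32x32 -> 64x64.
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    img = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return models.Model([noise, label], img)

# After adversarial training, class-labelled synthetic images from this
# generator would be appended to the real training set for the classifier.
```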
Related papers
- Comparative Analysis and Ensemble Enhancement of Leading CNN Architectures for Breast Cancer Classification [0.0]
This study introduces a novel and accurate approach to breast cancer classification using histopathology images.
It systematically compares leading Convolutional Neural Network (CNN) models across varying image datasets.
Our findings establish the settings required to achieve exceptional classification accuracy for standalone CNN models.
arXiv Detail & Related papers (2024-10-04T11:31:43Z)
- Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification [0.12499537119440242]
Histopathology image classification is crucial for the accurate identification and diagnosis of various diseases.
We show that synthetic images can effectively augment existing datasets, ultimately improving the performance of the downstream histopathology image classification task.
arXiv Detail & Related papers (2024-09-24T12:02:55Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA Generative Adversarial Network is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images [7.868449549351487]
This article proposes to enhance our ability to recognise AI-generated images through computer vision.
The two sets of data form a binary classification problem: whether the photograph is real or generated by AI.
This study proposes the use of a Convolutional Neural Network (CNN) to classify the images into two categories: Real or Fake.
arXiv Detail & Related papers (2023-03-24T16:33:06Z)
- Diffusion-based Data Augmentation for Skin Disease Classification: Impact Across Original Medical Datasets to Fully Synthetic Images [2.5075774184834803]
Deep neural networks still rely on large amounts of training data to avoid overfitting.
Labeled training data for real-world applications such as healthcare is limited and difficult to access.
We build upon the emerging success of text-to-image diffusion probabilistic models in augmenting the training samples of our macroscopic skin disease dataset.
arXiv Detail & Related papers (2023-01-12T04:22:23Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Data Augmentation using Feature Generation for Volumetric Medical Images [0.08594140167290097]
Medical image classification is one of the most critical problems in the image recognition area.
One of the major challenges in this field is the scarcity of labelled training data.
Deep Learning models, in particular, show promising results on image segmentation and classification problems.
arXiv Detail & Related papers (2022-09-28T13:46:24Z)
- Facilitated machine learning for image-based fruit quality assessment in developing countries [68.8204255655161]
Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs).
It can be easily implemented with limited resources on a standard device.
arXiv Detail & Related papers (2022-07-10T19:52:20Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)