Facilitated machine learning for image-based fruit quality assessment in
developing countries
- URL: http://arxiv.org/abs/2207.04523v1
- Date: Sun, 10 Jul 2022 19:52:20 GMT
- Title: Facilitated machine learning for image-based fruit quality assessment in
developing countries
- Authors: Manuel Knott, Fernando Perez-Cruz, Thijs Defraeye
- Abstract summary: Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs)
It can be easily implemented with limited resources on a standard device.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated image classification is a common task for supervised machine
learning in food science. An example is the image-based classification of the
fruit's external quality or ripeness. For this purpose, deep convolutional
neural networks (CNNs) are typically used. These models usually require a large
number of labeled training samples and enhanced computational resources. While
commercial fruit sorting lines readily meet these requirements, the use of
machine learning approaches can be hindered by these prerequisites, especially
for smallholder farmers in the developing world. We propose an alternative
method based on pre-trained vision transformers (ViTs) that is particularly
suitable for domains with low availability of data and limited computational
resources. It can be easily implemented with limited resources on a standard
device, which can democratize the use of these models for smartphone-based
image classification in developing countries. We demonstrate the
competitiveness of our method by benchmarking two different classification
tasks on domain data sets of banana and apple fruits with well-established CNN
approaches. Our method achieves a classification accuracy of less than one
percent below the best-performing CNN (0.950 vs. 0.958) on a training data set
of 3745 images. At the same time, our method is superior when only a small
number of labeled training samples is available. It requires three times less
data to achieve a 0.90 accuracy compared to CNNs. In addition, visualizations
of low-dimensional feature embeddings show that the model used in our study
extracts excellent features from unseen data without allocating labels.
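The pipeline the abstract describes (a frozen pre-trained ViT as feature extractor, followed by a lightweight classifier) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it assumes the ViT embeddings have already been computed (e.g., with a pre-trained model from a library such as `timm`), and uses synthetic stand-in embeddings plus a simple nearest-centroid classifier for the cheap downstream step.

```python
import numpy as np

def fit_centroids(embeddings, labels):
    """Compute one mean embedding (centroid) per class."""
    classes = np.unique(labels)
    centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(embeddings, classes, centroids):
    """Assign each embedding to the class of its nearest centroid."""
    # Pairwise Euclidean distances, shape (n_samples, n_classes)
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Synthetic stand-ins for ViT embeddings of "unripe" (0) vs. "ripe" (1) fruit.
rng = np.random.default_rng(0)
train_x = np.concatenate([rng.normal(0.0, 0.1, (20, 8)),
                          rng.normal(1.0, 0.1, (20, 8))])
train_y = np.array([0] * 20 + [1] * 20)

classes, centroids = fit_centroids(train_x, train_y)

test_x = np.concatenate([rng.normal(0.0, 0.1, (5, 8)),
                         rng.normal(1.0, 0.1, (5, 8))])
preds = predict(test_x, classes, centroids)
print(preds.tolist())  # → [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

Because the feature extractor stays frozen, only the centroids (or any other small classifier) are fit on labeled data, which is what keeps the approach viable on a standard device with few labeled samples.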
Related papers
- Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z)
- DINOv2: Learning Robust Visual Features without Supervision [75.42921276202522]
This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources.
Most of the technical contributions aim at accelerating and stabilizing the training at scale.
In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature.
arXiv Detail & Related papers (2023-04-14T15:12:19Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experimental results show the high generalization performance of our method on test data composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach that leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
- KNN-Diffusion: Image Generation via Large-Scale Retrieval [40.6656651653888]
Learning to adapt enables several new capabilities.
Fine-tuning trained models to new samples can be achieved by simply adding them to the table.
Our diffusion-based model trains on images only, by leveraging a joint Text-Image multi-modal metric.
arXiv Detail & Related papers (2022-04-06T14:13:35Z)
- Enlisting 3D Crop Models and GANs for More Data Efficient and Generalizable Fruit Detection [0.0]
We propose a method that generates agricultural images from a synthetic 3D crop model domain into real world crop domains.
The method uses a semantically constrained GAN (generative adversarial network) to preserve the fruit position and geometry.
Incremental training experiments on vineyard grape detection tasks show that the images generated by our method can significantly speed up the domain adaptation process.
arXiv Detail & Related papers (2021-08-30T16:11:59Z)
- Few-Shot Learning for Image Classification of Common Flora [0.0]
We showcase our results from testing various state-of-the-art transfer learning weights and architectures against similar state-of-the-art works in the meta-learning field for image classification using Model-Agnostic Meta-Learning (MAML).
Our results show that both practices provide adequate performance when the dataset is sufficiently large, but that both struggle to maintain sufficient performance when data sparsity is introduced.
arXiv Detail & Related papers (2021-05-07T03:54:51Z)
- Application of Facial Recognition using Convolutional Neural Networks for Entry Access Control [0.0]
The paper focuses on solving the supervised classification problem of taking images of people as input and classifying the person in the image as one of the authors or not.
Two approaches are proposed: (1) building and training a neural network called WoodNet from scratch and (2) leveraging transfer learning by utilizing a network pre-trained on the ImageNet database.
The results are two models classifying the individuals in the dataset with high accuracy, achieving over 99% accuracy on held-out test data.
arXiv Detail & Related papers (2020-11-23T07:55:24Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
- DEAL: Deep Evidential Active Learning for Image Classification [0.0]
Active Learning (AL) is one approach to mitigate the problem of limited labeled data.
Recent AL methods for CNNs propose different solutions for the selection of instances to be labeled.
We propose a novel AL algorithm that efficiently learns from unlabeled data by capturing high prediction uncertainty.
arXiv Detail & Related papers (2020-07-22T11:14:23Z)
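The uncertainty-driven selection that active learning methods such as DEAL build on can be sketched generically. This is an illustrative entropy-based acquisition function, not the paper's evidential formulation, and the softmax probabilities below are synthetic placeholders.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each predictive distribution; higher means more uncertain."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_for_labeling(probs, budget):
    """Return indices of the `budget` most uncertain unlabeled samples."""
    entropy = predictive_entropy(probs)
    return np.argsort(entropy)[::-1][:budget]

# Synthetic softmax outputs for four unlabeled images (each row sums to 1).
probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction
    [0.34, 0.33, 0.33],  # highly uncertain
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
picked = select_for_labeling(probs, budget=2)
print(sorted(picked.tolist()))  # → [1, 3]
```

Labeling effort is then spent only on the samples the model is least sure about, which is how such methods mitigate the limited-labeled-data problem.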
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.