Classification of Seeds using Domain Randomization on Self-Supervised
Learning Frameworks
- URL: http://arxiv.org/abs/2103.15578v1
- Date: Mon, 29 Mar 2021 12:50:06 GMT
- Title: Classification of Seeds using Domain Randomization on Self-Supervised
Learning Frameworks
- Authors: Venkat Margapuri and Mitchell Neilsen
- Abstract summary: A key bottleneck is the need for an extensive amount of labelled data to train convolutional neural networks (CNNs).
The work leverages the concepts of Contrastive Learning and Domain Randomization to address this bottleneck.
The use of synthetic images generated from a representational sample crop of real-world images alleviates the need for a large volume of test subjects.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The first step toward Seed Phenotyping, i.e., the comprehensive assessment of
complex seed traits such as growth, development, tolerance, resistance,
ecology, yield, and the measurement of parameters that form more complex
traits, is the identification of seed type. Generally, a plant researcher
inspects the visual attributes of a seed such as size, shape, area, color and
texture to identify the seed type, a process that is tedious and
labor-intensive. Advances in the areas of computer vision and deep learning
have led to the development of convolutional neural networks (CNN) that aid in
classification using images. While they classify efficiently, a key bottleneck
is the need for an extensive amount of labelled data to train the CNN before it
can be put to the task of classification. The work leverages the concepts of
Contrastive Learning and Domain Randomization to address this bottleneck.
Briefly, domain randomization is the technique of applying models trained on
images containing simulated objects to real-world objects. The use of synthetic
images generated from a representational sample crop of real-world images
alleviates the need for a large volume of test subjects. As part of the work,
synthetic image datasets of five different types of seed images, namely canola,
rough rice, sorghum, soy and wheat, are applied to three different
self-supervised learning frameworks, namely SimCLR, Momentum Contrast (MoCo)
and Bootstrap Your Own Latent (BYOL), where ResNet-50 is used as the backbone in
each of the networks. When the self-supervised models are fine-tuned with only
5% of the labels from the synthetic dataset, results show that MoCo, the model
that yields the best performance of the self-supervised learning frameworks in
question, achieves an accuracy of 77% on the test dataset, which is only about
13 percentage points below the 90% achieved by ResNet-50 trained on 100% of the
labels.
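
To make the domain randomization step concrete, the sketch below composites a
small set of real seed crops onto randomized backgrounds with random rotation,
scale and brightness, producing a large synthetic training set. This is a
minimal illustration of the general technique, not the authors' generation
pipeline; the directory layout, parameter ranges and the make_synthetic_image
helper are hypothetical.

```python
# Hedged sketch: build synthetic seed images by compositing a few real seed
# crops onto random backgrounds (domain randomization). Paths and parameter
# ranges are illustrative, not taken from the paper.
import random
from pathlib import Path
from PIL import Image, ImageEnhance

def make_synthetic_image(seed_crops, backgrounds, size=(224, 224)):
    """Paste a randomly transformed seed crop onto a random background."""
    bg = random.choice(backgrounds).resize(size)
    crop = random.choice(seed_crops).convert("RGBA")

    # Randomize pose and appearance of the seed.
    crop = crop.rotate(random.uniform(0, 360), expand=True)
    scale = random.uniform(0.3, 0.8)
    crop = crop.resize((int(crop.width * scale), int(crop.height * scale)))
    crop = ImageEnhance.Brightness(crop).enhance(random.uniform(0.7, 1.3))

    # Paste at a random location, using the alpha channel as the mask.
    x = random.randint(0, max(0, size[0] - crop.width))
    y = random.randint(0, max(0, size[1] - crop.height))
    bg.paste(crop, (x, y), crop)
    return bg

# Hypothetical directories holding a few real crops and background textures.
seed_crops = [Image.open(p) for p in Path("crops/canola").glob("*.png")]
backgrounds = [Image.open(p) for p in Path("backgrounds").glob("*.jpg")]
Path("synthetic").mkdir(exist_ok=True)
for i in range(1000):  # repeat per seed type to build each synthetic class
    make_synthetic_image(seed_crops, backgrounds).save(f"synthetic/canola_{i}.png")
```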
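The reported numbers (77% accuracy when fine-tuning on 5% of the labels versus
90% for a fully supervised ResNet-50) come from contrastive pretraining of a
ResNet-50 encoder followed by fine-tuning on a small labeled subset. Below is a
minimal SimCLR-style sketch of that two-stage recipe in generic PyTorch; the
projection head sizes, loss implementation, optimizers and step functions are
assumptions, not the paper's code.

```python
# Hedged sketch: SimCLR-style contrastive pretraining of a ResNet-50 backbone,
# then fine-tuning a linear classifier on a small (~5%) labeled subset.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SimCLRModel(nn.Module):
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)  # older torchvision: pretrained=False
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop fc head
        self.projector = nn.Sequential(
            nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, proj_dim))

    def forward(self, x):
        h = self.encoder(x).flatten(1)                  # 2048-d representation
        return F.normalize(self.projector(h), dim=1)    # unit-norm projection

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss: the two augmented views of each image are positives."""
    z = torch.cat([z1, z2], dim=0)                      # (2N, d)
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

model = SimCLRModel()
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def pretrain_step(view1, view2):
    """Stage 1: one contrastive update on a batch of paired augmented views."""
    loss = nt_xent_loss(model(view1), model(view2))
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()
    return loss.item()

# Stage 2: fine-tune with a small labeled subset of the synthetic dataset.
classifier = nn.Linear(2048, 5)  # canola, rough rice, sorghum, soy, wheat
finetune_opt = torch.optim.Adam(
    list(model.encoder.parameters()) + list(classifier.parameters()), lr=1e-4)

def finetune_step(images, labels):
    """One supervised update using only the scarce labeled examples."""
    feats = model.encoder(images).flatten(1)
    loss = F.cross_entropy(classifier(feats), labels)
    finetune_opt.zero_grad()
    loss.backward()
    finetune_opt.step()
    return loss.item()
```

Feeding batches of two augmented views to pretrain_step and small labeled
batches to finetune_step reproduces the overall recipe; the paper additionally
evaluates MoCo and BYOL with the same ResNet-50 backbone.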
Related papers
- Local-to-Global Self-Supervised Representation Learning for Diabetic Retinopathy Grading [0.0]
This research aims to present a novel hybrid learning model using self-supervised learning and knowledge distillation.
In our algorithm, for the first time among all self-supervised learning and knowledge distillation models, the test dataset is 50% larger than the training dataset.
Compared to a similar state-of-the-art model, our results achieved higher accuracy and more effective representation spaces.
arXiv Detail & Related papers (2024-10-01T15:19:16Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10% labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa [1.9089478605920305]
This work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers.
All of the few-shot learning code, data, and results are made publicly available.
arXiv Detail & Related papers (2023-01-24T23:40:01Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Facilitated machine learning for image-based fruit quality assessment in developing countries [68.8204255655161]
Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs).
It can be easily implemented with limited resources on a standard device.
arXiv Detail & Related papers (2022-07-10T19:52:20Z)
- A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes [58.633364000258645]
We call this dataset RIVAL10, consisting of roughly 26k instances over 10 classes.
We evaluate the sensitivity of a broad set of models to noise corruptions in foregrounds, backgrounds and attributes.
In our analysis, we consider diverse state-of-the-art architectures (ResNets, Transformers) and training procedures (CLIP, SimCLR, DeiT, Adversarial Training).
arXiv Detail & Related papers (2022-01-26T06:31:28Z)
- Enlisting 3D Crop Models and GANs for More Data Efficient and Generalizable Fruit Detection [0.0]
We propose a method that generates agricultural images from a synthetic 3D crop model domain into real world crop domains.
The method uses a semantically constrained GAN (generative adversarial network) to preserve the fruit position and geometry.
Incremental training experiments in vineyard grape detection tasks show that the images generated from our method can significantly speed up the domain adaptation process.
arXiv Detail & Related papers (2021-08-30T16:11:59Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Seed Phenotyping on Neural Networks using Domain Randomization and Transfer Learning [0.0]
Seed phenotyping is the idea of analyzing the morphometric characteristics of a seed to predict its behavior in terms of development, tolerance and yield.
The focus of the work is the application and feasibility analysis of the state-of-the-art object detection and localization networks.
arXiv Detail & Related papers (2020-12-24T14:04:28Z)
- Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming [3.4788711710826083]
We propose an alternative to common data augmentation methods, applying it to the problem of crop/weed segmentation in precision farming.
We create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts.
In addition to RGB data, we take into account also near-infrared (NIR) information, generating four channel multi-spectral synthetic images.
arXiv Detail & Related papers (2020-09-12T08:49:36Z)