Inside Out: Transforming Images of Lab-Grown Plants for Machine Learning Applications in Agriculture
- URL: http://arxiv.org/abs/2211.02972v1
- Date: Sat, 5 Nov 2022 20:51:45 GMT
- Title: Inside Out: Transforming Images of Lab-Grown Plants for Machine Learning Applications in Agriculture
- Authors: A. E. Krosney, P. Sotoodeh, C. J. Henry, M. A. Beck, C. P. Bidinosti
- Abstract summary: We employ a contrastive unpaired translation (CUT) generative adversarial network (GAN) to translate indoor plant images to appear as field images.
While we train our network to translate an image containing only a single plant, we show that our method is easily extendable to produce multiple-plant field images.
We also use our synthetic multi-plant images to train several YoloV5 nano object detection models to perform the task of plant detection.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine learning tasks often require a significant amount of training data
for the resultant network to perform suitably for a given problem in any
domain. In agriculture, dataset sizes are further limited by phenotypical
differences between two plants of the same genotype, often as a result of
differing growing conditions. Synthetically-augmented datasets have shown
promise in improving existing models when real data is not available. In this
paper, we employ a contrastive unpaired translation (CUT) generative
adversarial network (GAN) and simple image processing techniques to translate
indoor plant images to appear as field images. While we train our network to
translate an image containing only a single plant, we show that our method is
easily extendable to produce multiple-plant field images. Furthermore, we use
our synthetic multi-plant images to train several YoloV5 nano object detection
models to perform the task of plant detection and measure the accuracy of the
model on real field data images. Including training data generated by the
CUT-GAN leads to better plant detection performance compared to a network
trained solely on real data.
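As a rough illustration of the multi-plant compositing step described in the abstract, the sketch below pastes CUT-translated single-plant crops onto a plain background and writes YOLO-format bounding-box labels for them. This is a minimal sketch under stated assumptions, not the authors' code: the directory layout, canvas size, background colour, and the compose_field helper are hypothetical, and the paper's own image-processing, blending, and labelling steps may differ.

```python
# Minimal sketch (assumed names/paths): compose a synthetic multi-plant
# "field" image from single-plant crops and emit YOLO-format labels.
import random
from pathlib import Path
from PIL import Image

CANVAS_SIZE = (1280, 960)               # assumed output resolution
PLANT_DIR = Path("translated_plants")   # RGBA crops, e.g. output of the CUT-GAN step
OUT_IMG = Path("synthetic_field.jpg")
OUT_LBL = Path("synthetic_field.txt")   # YOLO labels: "class cx cy w h", normalised

def compose_field(n_plants: int = 8, background=(121, 96, 70)) -> None:
    canvas = Image.new("RGB", CANVAS_SIZE, background)
    crops = list(PLANT_DIR.glob("*.png"))
    labels = []
    for _ in range(n_plants):
        plant = Image.open(random.choice(crops)).convert("RGBA")
        # Random placement; assumes each crop fits inside the canvas.
        # A fuller pipeline might also scale/rotate plants and limit overlap.
        x = random.randint(0, CANVAS_SIZE[0] - plant.width)
        y = random.randint(0, CANVAS_SIZE[1] - plant.height)
        canvas.paste(plant, (x, y), mask=plant)  # alpha channel keeps the background visible
        cx = (x + plant.width / 2) / CANVAS_SIZE[0]
        cy = (y + plant.height / 2) / CANVAS_SIZE[1]
        w = plant.width / CANVAS_SIZE[0]
        h = plant.height / CANVAS_SIZE[1]
        labels.append(f"0 {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")  # class 0 = plant
    canvas.save(OUT_IMG)
    OUT_LBL.write_text("\n".join(labels) + "\n")

if __name__ == "__main__":
    compose_field()
```

Image/label pairs in this format can then be fed to a standard YOLOv5 nano training run (e.g. the ultralytics/yolov5 train.py script with yolov5n.pt weights) to reproduce the kind of plant-detection experiment the abstract describes.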
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Cross-domain and Cross-dimension Learning for Image-to-Graph Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we re-visit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- Agricultural Plant Cataloging and Establishment of a Data Framework from UAV-based Crop Images by Computer Vision [4.0382342610484425]
We present a hands-on workflow for the automated temporal and spatial identification and individualization of crop images from UAVs.
The presented approach improves analysis and interpretation of UAV data in agriculture significantly.
arXiv Detail & Related papers (2022-01-08T21:14:07Z)
- A Deep Learning Generative Model Approach for Image Synthesis of Plant Leaves [62.997667081978825]
We generate artificial leaf images in an automated way via advanced Deep Learning (DL) techniques.
We aim to provide a source of training samples for AI applications in modern crop management.
arXiv Detail & Related papers (2021-11-05T10:53:35Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Enlisting 3D Crop Models and GANs for More Data Efficient and Generalizable Fruit Detection [0.0]
We propose a method that generates agricultural images from a synthetic 3D crop model domain into real world crop domains.
The method uses a semantically constrained GAN (generative adversarial network) to preserve the fruit position and geometry.
Incremental training experiments on vineyard grape detection tasks show that the images generated by our method can significantly speed up the domain adaptation process.
arXiv Detail & Related papers (2021-08-30T16:11:59Z)
- MOGAN: Morphologic-structure-aware Generative Learning from a Single Image [59.59698650663925]
Recently proposed generative models can be trained on only a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features including the maintenance of rational structures and variation on appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z)
- Seed Phenotyping on Neural Networks using Domain Randomization and Transfer Learning [0.0]
Seed phenotyping is the idea of analyzing the morphometric characteristics of a seed to predict its behavior in terms of development, tolerance and yield.
The focus of the work is the application and feasibility analysis of state-of-the-art object detection and localization networks.
arXiv Detail & Related papers (2020-12-24T14:04:28Z)
- Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming [3.4788711710826083]
We propose an alternative to common data augmentation methods and apply it to the problem of crop/weed segmentation in precision farming.
We create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts.
In addition to RGB data, we also take into account near-infrared (NIR) information, generating four-channel multi-spectral synthetic images.
arXiv Detail & Related papers (2020-09-12T08:49:36Z)
- An embedded system for the automated generation of labeled plant images to enable machine learning applications in agriculture [1.4598479819593448]
A lack of sufficient training data is often the bottleneck in the development of machine learning (ML) applications.
We have developed an embedded robotic system to automatically generate and label large datasets of plant images.
We generated a dataset of over 34,000 labeled images, with which we trained an ML-model to distinguish grasses from non-grasses.
arXiv Detail & Related papers (2020-06-01T20:01:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.