Seed Phenotyping on Neural Networks using Domain Randomization and
Transfer Learning
- URL: http://arxiv.org/abs/2012.13259v1
- Date: Thu, 24 Dec 2020 14:04:28 GMT
- Title: Seed Phenotyping on Neural Networks using Domain Randomization and
Transfer Learning
- Authors: Venkat Margapuri and Mitchell Neilsen
- Abstract summary: Seed phenotyping is the idea of analyzing the morphometric characteristics of a seed to predict its behavior in terms of development, tolerance and yield.
The focus of the work is the application and feasibility analysis of the state-of-the-art object detection and localization networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Seed phenotyping is the idea of analyzing the morphometric characteristics of
a seed to predict the behavior of the seed in terms of development, tolerance
and yield in various environmental conditions. The focus of the work is the
application and feasibility analysis of the state-of-the-art object detection
and localization neural networks, Mask R-CNN and YOLO (You Only Look Once), for
seed phenotyping using Tensorflow. One of the major bottlenecks of such an
endeavor is the need for large amounts of training data. While the capture of a
multitude of seed images is daunting, the images are also required to be
annotated to indicate the boundaries of the seeds on the image and converted to
data formats that the neural networks are able to consume. Although tools to
manually perform the task of annotation are available for free, the amount of
time required is enormous. To tackle this scenario, the idea of domain
randomization, i.e., the technique of applying models trained on images
containing simulated objects to real-world objects, is considered. In addition,
transfer learning, i.e., the idea of applying the knowledge obtained while
solving one problem to a different problem, is used. The networks are trained on
pre-trained weights from the popular ImageNet and COCO data sets. As part of
the work, experiments with different parameters are conducted on five different
seed types namely, canola, rough rice, sorghum, soy, and wheat.
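To make the domain-randomization idea concrete, the sketch below composes a synthetic training image: a seed-like ellipse (standing in for a rendered seed model) with randomized position, size, and color is pasted onto a random background, and its bounding-box annotation is emitted, as an object detector such as Mask R-CNN or YOLO would consume. This is a minimal illustrative sketch, not the authors' pipeline; the function name and parameter ranges are assumptions.

```python
import numpy as np

def synthetic_seed_sample(size=128, rng=None):
    """Compose one domain-randomized training image: a seed-like
    ellipse with random position, size, and color pasted onto a
    random-noise background, plus its bounding-box annotation."""
    rng = rng or np.random.default_rng()
    # Randomized background: domain randomization varies it freely so
    # the detector learns to ignore it.
    image = rng.integers(0, 256, size=(size, size, 3), dtype=np.uint8)
    # Random ellipse geometry standing in for a simulated seed; the
    # sampling ranges keep the ellipse fully inside the image.
    cy, cx = rng.integers(20, size - 20, size=2)
    ry, rx = rng.integers(5, 16, size=2)
    yy, xx = np.mgrid[:size, :size]
    mask = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
    image[mask] = rng.integers(0, 256, size=3)  # random seed color
    # Axis-aligned box in (x_min, y_min, x_max, y_max) form.
    box = (int(cx - rx), int(cy - ry), int(cx + rx), int(cy + ry))
    return image, mask, box
```

Generating thousands of such images (or compositing real seed crops instead of ellipses) sidesteps manual annotation entirely, since the mask and box are known by construction.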
Related papers
- Generator Born from Classifier [66.56001246096002]
We aim to reconstruct an image generator, without relying on any data samples.
We propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied.
arXiv Detail & Related papers (2023-12-05T03:41:17Z)
- Diffused Redundancy in Pre-trained Representations [98.55546694886819]
We take a closer look at how features are encoded in pre-trained representations.
We find that learned representations in a given layer exhibit a degree of diffuse redundancy.
Our findings shed light on the nature of representations learned by pre-trained deep neural networks.
arXiv Detail & Related papers (2023-05-31T21:00:50Z)
- Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa [1.9089478605920305]
This work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers.
All of the few-shot learning code, data, and results are made publicly available.
arXiv Detail & Related papers (2023-01-24T23:40:01Z)
- Inside Out: Transforming Images of Lab-Grown Plants for Machine Learning Applications in Agriculture [0.0]
We employ a contrastive unpaired translation (CUT) generative adversarial network (GAN) to translate indoor plant images to appear as field images.
While we train our network to translate an image containing only a single plant, we show that our method is easily extendable to produce multiple-plant field images.
We also use our synthetic multi-plant images to train several YoloV5 nano object detection models to perform the task of plant detection.
arXiv Detail & Related papers (2022-11-05T20:51:45Z)
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible application networks.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
- TransformNet: Self-supervised representation learning through predicting geometric transformations [0.8098097078441623]
We describe the unsupervised semantic feature learning approach for recognition of the geometric transformation applied to the input data.
The basic concept of our approach is that if someone is unaware of the objects in the images, he/she would not be able to quantitatively predict the geometric transformation that was applied to them.
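The pretext task described here can be sketched as a minimal rotation-prediction setup (a RotNet-style variant; the exact set of geometric transformations TransformNet uses may differ, and the function name is illustrative):

```python
import numpy as np

def rotation_pretext_sample(image, rng=None):
    """Turn one unlabeled image into a self-supervised training pair:
    the image rotated by k * 90 degrees, and the rotation index k
    (0-3) that the network is trained to predict."""
    rng = rng or np.random.default_rng()
    k = int(rng.integers(0, 4))  # label in {0, 1, 2, 3}
    return np.rot90(image, k), k
```

Predicting k forces the network to recognize object orientation, which yields semantic features without any manual labels.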
arXiv Detail & Related papers (2022-02-08T22:41:01Z)
- A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes [58.633364000258645]
We call this dataset RIVAL10 consisting of roughly $26k$ instances over $10$ classes.
We evaluate the sensitivity of a broad set of models to noise corruptions in foregrounds, backgrounds and attributes.
In our analysis, we consider diverse state-of-the-art architectures (ResNets, Transformers) and training procedures (CLIP, SimCLR, DeiT, Adversarial Training).
arXiv Detail & Related papers (2022-01-26T06:31:28Z)
- Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face Learning [54.13876727413492]
In many real-world scenarios of face recognition, the depth of training dataset is shallow, which means only two face images are available for each ID.
With the non-uniform increase of samples, such issue is converted to a more general case, a.k.a a long-tail face learning.
Based on Semi-Siamese Training (SST), we introduce an advanced solution, named Multi-Agent Semi-Siamese Training (MASST).
MASST includes a probe network and multiple gallery agents, the former aims to encode the probe features, and the latter constitutes a stack of
arXiv Detail & Related papers (2021-05-10T04:57:32Z)
- Classification of Seeds using Domain Randomization on Self-Supervised Learning Frameworks [0.0]
The key bottleneck is the need for an extensive amount of labelled data to train convolutional neural networks (CNNs).
The work leverages the concepts of Contrastive Learning and Domain Randomization in order to achieve the same.
The use of synthetic images generated from a representational sample crop of real-world images alleviates the need for a large volume of test subjects.
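As an illustration of the contrastive-learning component, here is a minimal NumPy sketch of the SimCLR-style NT-Xent loss commonly used in such frameworks. The paper's exact loss formulation and hyperparameters are not given here; `tau` is an assumed temperature and the function name is illustrative.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent contrastive loss.
    z1, z2: (N, d) embeddings of the same N images under two
    different augmentations (e.g. two domain-randomized renders)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    n = z1.shape[0]
    sim = z @ z.T / tau                                # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive for sample i is its other augmented view: (i + n) mod 2n.
    pos = np.roll(np.arange(2 * n), n)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each image together in embedding space while pushing apart views of different images, which is how the framework learns features without labels.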
arXiv Detail & Related papers (2021-03-29T12:50:06Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation for histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.