Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in
Populus trichocarpa
- URL: http://arxiv.org/abs/2301.10351v3
- Date: Thu, 18 May 2023 12:10:54 GMT
- Authors: John Lagergren, Mirko Pavicic, Hari B. Chhetri, Larry M. York, P. Doug
Hyatt, David Kainer, Erica M. Rutter, Kevin Flores, Jack Bailey-Bale, Marie
Klein, Gail Taylor, Daniel Jacobson, Jared Streich
- Abstract summary: This work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers.
All of the few-shot learning code, data, and results are made publicly available.
- Score: 1.9089478605920305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plant phenotyping is typically a time-consuming and expensive endeavor,
requiring large groups of researchers to meticulously measure biologically
relevant plant traits, and is the main bottleneck in understanding plant
adaptation and the genetic architecture underlying complex traits at population
scale. In this work, we address these challenges by leveraging few-shot
learning with convolutional neural networks (CNNs) to segment the leaf body and
visible venation of 2,906 P. trichocarpa leaf images obtained in the field. In
contrast to previous methods, our approach (i) does not require experimental or
image pre-processing, (ii) uses the raw RGB images at full resolution, and
(iii) requires very few samples for training (e.g., just eight images for vein
segmentation). Traits relating to leaf morphology and vein topology are
extracted from the resulting segmentations using traditional open-source
image-processing tools, validated using real-world physical measurements, and
used to conduct a genome-wide association study to identify genes controlling
the traits. In this way, the current work is designed to provide the plant
phenotyping community with (i) methods for fast and accurate image-based
feature extraction that require minimal training data, and (ii) a new
population-scale data set, including 68 different leaf phenotypes, for domain
scientists and machine learning researchers. All of the few-shot learning code,
data, and results are made publicly available.
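The trait-extraction step described above (morphology measured from segmentation masks with traditional open-source image-processing tools) can be illustrated with a minimal sketch. This is not the authors' released code; it assumes scikit-image and uses a synthetic elliptical mask as a stand-in for the CNN's leaf-body segmentation output:

```python
import numpy as np
from skimage import measure

# Synthetic binary leaf-body mask (1 = leaf pixel), standing in for
# the few-shot CNN's segmentation output; here an ellipse with
# semi-axes of 60 and 100 pixels.
mask = np.zeros((200, 300), dtype=np.uint8)
rr, cc = np.ogrid[:200, :300]
mask[((rr - 100) / 60) ** 2 + ((cc - 150) / 100) ** 2 <= 1] = 1

# Extract simple morphology traits with standard image-processing tools.
props = measure.regionprops(measure.label(mask))[0]
traits = {
    "area_px": props.area,
    "perimeter_px": props.perimeter,
    "aspect_ratio": props.major_axis_length / props.minor_axis_length,
}
print(traits)
```

In a real pipeline, pixel-level traits like these would be converted to physical units via the image's spatial calibration and then validated against manual measurements, as the abstract describes.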
Related papers
- High-Throughput Phenotyping using Computer Vision and Machine Learning [0.0]
We used a dataset provided by Oak Ridge National Laboratory with 1,672 images of Populus trichocarpa with white labels displaying treatment.
Optical character recognition (OCR) was used to read these labels on the plants.
Machine learning models were then used to predict treatment from these classifications, and encoded EXIF tags were analyzed to determine leaf size and correlations between phenotypes.
arXiv Detail & Related papers (2024-07-08T19:46:31Z) - Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle
Phenotypes [0.5076419064097732]
We present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images.
We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations.
arXiv Detail & Related papers (2023-01-21T16:25:04Z) - Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z) - Classification of Seeds using Domain Randomization on Self-Supervised
Learning Frameworks [0.0]
A key bottleneck is the need for an extensive amount of labelled data to train convolutional neural networks (CNNs).
The work leverages the concepts of Contrastive Learning and Domain Randomization to address this.
The use of synthetic images generated from a representational sample crop of real-world images alleviates the need for a large volume of test subjects.
arXiv Detail & Related papers (2021-03-29T12:50:06Z) - Seed Phenotyping on Neural Networks using Domain Randomization and
Transfer Learning [0.0]
Seed phenotyping is the idea of analyzing the morphometric characteristics of a seed to predict its behavior in terms of development, tolerance and yield.
The focus of the work is the application and feasibility analysis of the state-of-the-art object detection and localization networks.
arXiv Detail & Related papers (2020-12-24T14:04:28Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Deep Low-Shot Learning for Biological Image Classification and
Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem: identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
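The deep-metric idea behind the Siamese approach can be sketched with a toy shared encoder (a hypothetical, numpy-only stand-in for the paper's convolutional networks; the projection `W` and `embed` function are illustrative assumptions, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared encoder: a single linear projection applied to both
# leaf views, standing in for the shared CNN weights of a Siamese network.
W = rng.normal(size=(64, 32 * 32 * 3)) / np.sqrt(32 * 32 * 3)

def embed(image):
    """Map an image to a 64-dimensional embedding via the shared weights."""
    return W @ image.ravel()

view_a = rng.random((32, 32, 3))  # e.g., one view of a leaf
view_b = rng.random((32, 32, 3))  # e.g., the second view

# Deep-metric comparison: a small Euclidean distance between embeddings
# indicates the two views likely belong to the same species.
dist = np.linalg.norm(embed(view_a) - embed(view_b))
print(float(dist))
```

Because both views share one set of encoder weights, recognizing a new species only requires reference embeddings for it, which is what makes the approach scalable without large per-species training sets.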
arXiv Detail & Related papers (2020-05-18T21:57:47Z) - Scalable learning for bridging the species gap in image-based plant
phenotyping [2.208242292882514]
The traditional paradigm of applying deep learning -- collect, annotate and train on data -- is not applicable to image-based plant phenotyping.
Data costs include growing physical samples, imaging and labelling them.
Model performance is impacted by the species gap, i.e., the domain shift between plant species.
arXiv Detail & Related papers (2020-03-24T10:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.