High-Throughput Phenotyping using Computer Vision and Machine Learning
- URL: http://arxiv.org/abs/2407.06354v2
- Date: Wed, 10 Jul 2024 02:28:14 GMT
- Title: High-Throughput Phenotyping using Computer Vision and Machine Learning
- Authors: Vivaan Singhvi, Langalibalele Lunga, Pragya Nidhi, Chris Keum, Varrun Prakash,
- Abstract summary: We used a dataset provided by Oak Ridge National Laboratory with 1,672 images of Populus trichocarpa with white labels displaying treatment.
Optical character recognition (OCR) was used to read these labels on the plants.
Machine learning models were used to predict treatment from those classifications, and encoded EXIF tags were analyzed to find leaf size and correlations between phenotypes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-throughput phenotyping refers to the non-destructive and efficient evaluation of plant phenotypes. In recent years, it has been coupled with machine learning in order to improve the process of phenotyping plants by increasing efficiency in handling large datasets and developing methods for the extraction of specific traits. Previous studies have developed methods to address these challenges through the application of deep neural networks in tandem with automated cameras; however, the datasets being studied often excluded physical labels. In this study, we used a dataset provided by Oak Ridge National Laboratory with 1,672 images of Populus trichocarpa with white labels displaying treatment (control or drought), block, row, position, and genotype. Optical character recognition (OCR) was used to read these labels on the plants, image segmentation techniques in conjunction with machine learning algorithms were used for morphological classifications, machine learning models were used to predict treatment from those classifications, and encoded EXIF tags were analyzed to determine leaf size and correlations between phenotypes. We found that our OCR model had an accuracy of 94.31% for non-null text extractions, allowing for the information to be accurately placed in a spreadsheet. Our classification models identified leaf shape, color, and level of brown splotches with an average accuracy of 62.82%, and plant treatment with an accuracy of 60.08%. Finally, we identified a few crucial pieces of information absent from the EXIF tags that prevented the assessment of the leaf size. There was also missing information that prevented the assessment of correlations between phenotypes and conditions. However, future studies could improve upon this to allow for the assessment of these features.
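The OCR stage above feeds a spreadsheet, which implies a parsing step from raw label text into structured fields. A minimal sketch of that step, assuming a hypothetical label layout (treatment word, then block/row/position codes, then genotype) that the paper does not actually specify:

```python
import re

# Hypothetical label layout -- the real ORNL label format is not given
# here, so this pattern is an illustrative assumption.
LABEL_RE = re.compile(
    r"(?P<treatment>control|drought)\s+"
    r"B(?P<block>\d+)\s+R(?P<row>\d+)\s+P(?P<position>\d+)\s+"
    r"(?P<genotype>[A-Z0-9-]+)",
    re.IGNORECASE,
)

def parse_label(ocr_text):
    """Turn one OCR'd label string into spreadsheet-ready fields,
    or None when the text does not match (a null extraction)."""
    m = LABEL_RE.search(ocr_text.strip())
    return m.groupdict() if m else None
```

Each non-None result becomes one spreadsheet row; the 94.31% accuracy figure above is reported over the non-null extractions.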
Related papers
- Small data deep learning methodology for in-field disease detection [6.2747249113031325]
We present the first machine learning model capable of detecting mild symptoms of late blight in potato crops.
Our proposal exploits the availability of high-resolution images via the concept of patching, and is based on deep convolutional neural networks with a focal loss function.
Our model correctly detects all cases of late blight in the test dataset, demonstrating a high level of accuracy and effectiveness in identifying early symptoms.
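The model above pairs patched high-resolution inputs with a focal loss, which down-weights easy examples so training concentrates on hard, mild-symptom patches. A minimal NumPy sketch of the binary focal loss; the alpha/gamma defaults are illustrative, not the paper's settings:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss, averaged over samples.

    p: predicted probability of the positive class; y: 0/1 labels.
    With gamma=0 this reduces to alpha-weighted cross-entropy;
    larger gamma down-weights already well-classified examples.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)              # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class weighting
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```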
arXiv Detail & Related papers (2024-09-25T17:31:17Z) - Evaluating Data Augmentation Techniques for Coffee Leaf Disease Classification [2.0892083471064407]
This paper uses the RoCoLe dataset and approaches based on deep learning for classifying coffee leaf diseases from images.
Our study demonstrates the effectiveness of Transformer-based models, online augmentations, and CycleGAN augmentation in improving leaf disease classification.
arXiv Detail & Related papers (2024-01-11T09:22:36Z) - Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa [1.9089478605920305]
This work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers.
All of the few-shot learning code, data, and results are made publicly available.
arXiv Detail & Related papers (2023-01-24T23:40:01Z) - Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - Data-Driven Deep Supervision for Skin Lesion Classification [36.24996525103533]
We propose a new deep neural network that exploits input data for robust feature extraction.
Specifically, we analyze the convolutional network's behavior (field-of-view) to find the location of deep supervision.
arXiv Detail & Related papers (2022-09-04T03:57:08Z) - DeepTechnome: Mitigating Unknown Bias in Deep Learning Based Assessment of CT Images [44.62475518267084]
We debias deep learning models during training against unknown bias.
We use control regions as surrogates that carry information regarding the bias.
Applying the proposed method to learn from data exhibiting a strong bias, it near-perfectly recovers the classification performance observed when training with corresponding unbiased data.
arXiv Detail & Related papers (2022-05-26T12:18:48Z) - Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
arXiv Detail & Related papers (2022-01-23T21:18:17Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset ever collected for this task.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - An Effective Leaf Recognition Using Convolutional Neural Networks Based Features [1.137457877869062]
In this paper, we propose an effective method for the leaf recognition problem.
A leaf goes through some pre-processing to extract its refined color image, vein image, xy-projection histogram, handcrafted shape, texture features, and Fourier descriptors.
These attributes are then transformed into a better representation by neural network-based encoders before a support vector machine (SVM) model is utilized to classify different leaves.
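The encoder-then-SVM pipeline described above can be sketched end to end. The random 16-dimensional vectors below stand in for the neural-network-encoded colour, vein, shape, and texture features; the paper's actual feature dimensionality and SVM kernel are not specified here:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins for encoder outputs of two leaf classes.
X = np.vstack([rng.normal(0.0, 1.0, (50, 16)),
               rng.normal(3.0, 1.0, (50, 16))])
y = np.array([0] * 50 + [1] * 50)

# Scale the encoded features, then fit the SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```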
arXiv Detail & Related papers (2021-08-04T02:02:22Z) - Leaf Image-based Plant Disease Identification using Color and Texture Features [0.1657441317977376]
The accuracy on a self-collected dataset is 82.47% for disease identification and 91.40% for healthy and diseased classification.
This prototype system can be extended by adding more disease categories or targeting specific crop or disease categories.
arXiv Detail & Related papers (2021-02-08T20:32:56Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
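A Siamese deep metric of the kind mentioned above is typically trained with a pairwise objective over embedding pairs. A minimal contrastive-loss sketch; the paper's exact loss and margin are not given here, so this is illustrative only:

```python
import numpy as np

def contrastive_loss(e1, e2, same_species, margin=1.0):
    """Pairwise metric-learning objective: pull embeddings of the same
    species together, push different species at least `margin` apart."""
    d = float(np.linalg.norm(e1 - e2))
    if same_species:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

Because new species only require new embedding pairs rather than retraining a fixed-output classifier, this kind of metric scales to unseen classes with few samples, which is the scalability property the summary points to.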
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.