Identifying Water Stress in Chickpea Plant by Analyzing Progressive
Changes in Shoot Images using Deep Learning
- URL: http://arxiv.org/abs/2104.07911v1
- Date: Fri, 16 Apr 2021 06:23:19 GMT
- Title: Identifying Water Stress in Chickpea Plant by Analyzing Progressive
Changes in Shoot Images using Deep Learning
- Authors: Shiva Azimi, Rohan Wadhawan, and Tapan K. Gandhi
- Abstract summary: We develop an LSTM-CNN architecture to learn visual-temporal patterns and predict the water stress category with high confidence.
Our proposed model achieves ceiling-level classification performance of 98.52% on JG-62 and 97.78% on Pusa-372 chickpea plant data.
- Score: 0.41998444721319217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To meet the needs of a growing world population, we need to increase the
global agricultural yields by employing modern, precision, and automated
farming methods. In the recent decade, high-throughput plant phenotyping
techniques, which combine non-invasive image analysis and machine learning,
have been successfully applied to identify and quantify plant health and
diseases. However, these image-based machine learning approaches usually do not
consider the progressive or temporal nature of plant stress. This time-invariant approach
also requires images showing severe signs of stress to ensure high confidence
detections, thereby reducing this approach's feasibility for early detection
and recovery of plants under stress. In order to overcome the problem mentioned
above, we propose a temporal analysis of the visual changes induced in the
plant due to stress and apply it for the specific case of water stress
identification in Chickpea plant shoot images. For this, we have considered an
image dataset of two chickpea varieties, JG-62 and Pusa-372, under three water
stress conditions: control, young seedling, and before flowering, captured over
five months. We then develop an LSTM-CNN architecture to learn visual-temporal
patterns from this dataset and predict the water stress category with high
confidence. To establish a baseline context, we also conduct a comparative
analysis of the CNN architecture used in the proposed model with the other CNN
techniques used for the time-invariant classification of water stress. The
results reveal that our proposed LSTM-CNN model achieves ceiling-level
classification performance of 98.52% on JG-62 and 97.78% on Pusa-372
chickpea plant data. Lastly, we perform an
ablation study to determine the LSTM-CNN model's performance on decreasing the
amount of temporal session data used for training.
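The abstract's core idea, per-image CNN features fed to an LSTM that tracks how stress symptoms progress across imaging sessions, can be sketched as below. This is a minimal illustration assuming PyTorch; the layer sizes, feature dimensions, and toy convolutional backbone are illustrative and not the authors' actual architecture (only the three-way stress-category output follows the abstract).

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Per-frame CNN features -> LSTM over sessions -> stress class."""
    def __init__(self, feat_dim=64, hidden_dim=128, num_classes=3):
        super().__init__()
        # Small per-image feature extractor (illustrative stand-in for the paper's CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):  # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-session features
        _, (h, _) = self.lstm(feats)                      # final hidden state
        return self.head(h[-1])                           # (batch, num_classes)

model = CNNLSTMClassifier()
# Two plants, five imaging sessions each, 64x64 RGB shoot images.
logits = model(torch.randn(2, 5, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 3])
```

Because the LSTM consumes one feature vector per session, the classifier can weigh gradual visual change rather than requiring severe symptoms in any single image, which is the motivation stated in the abstract.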
Related papers
- An Explainable Vision Transformer with Transfer Learning Combined with Support Vector Machine Based Efficient Drought Stress Identification [0.0]
Vision transformers (ViTs) present a promising alternative in capturing long-range dependencies and intricate spatial relationships.
We propose an explainable deep learning pipeline that leverages the power of ViTs for drought stress detection in potato crops using aerial imagery.
Our findings demonstrate that the proposed methods not only achieve high accuracy in drought stress identification but also shed light on the diverse subtle plant features associated with drought stress.
arXiv Detail & Related papers (2024-07-31T15:08:26Z) - Explainable Light-Weight Deep Learning Pipeline for Improved Drought Stress Identification [0.0]
Early identification of drought stress in crops is vital for implementing effective mitigation measures and reducing yield loss.
Our work proposes a novel deep learning framework for classifying drought stress in potato crops captured by UAVs in natural settings.
A key innovation of our work involves the integration of Gradient-Class Activation Mapping (Grad-CAM), an explainability technique.
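Grad-CAM, the explainability technique this entry highlights, weights a convolutional layer's activation maps by the spatial mean of the target-class gradient and applies a ReLU to obtain a class-relevance heatmap. A minimal sketch, assuming PyTorch and a toy model (not the paper's pipeline):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, target_class):
    """Minimal Grad-CAM: channel weights = spatial mean of the class
    gradient at conv_layer; heatmap = ReLU of the weighted activations."""
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    logits = model(image)
    logits[0, target_class].backward()   # gradient of the chosen class score
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # per-channel weights
    cam = F.relu((weights * acts["a"]).sum(dim=1))        # (1, h, w) heatmap
    return cam / (cam.max() + 1e-8)                       # normalize to [0, 1]

# Toy two-class model; in practice this would be the trained stress classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
cam = grad_cam(model, model[0], torch.randn(1, 3, 32, 32), target_class=0)
print(cam.shape)  # torch.Size([1, 32, 32])
```

Overlaying the upsampled heatmap on the input image shows which plant regions drove the stress prediction.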
arXiv Detail & Related papers (2024-04-15T18:26:03Z) - PlantPlotGAN: A Physics-Informed Generative Adversarial Network for
Plant Disease Prediction [2.7409168462107347]
We propose PlantPlotGAN, a physics-informed generative model capable of creating synthetic multispectral plot images with realistic vegetation indices.
The results demonstrate that the synthetic imagery generated from PlantPlotGAN outperforms state-of-the-art methods regarding the Fréchet inception distance.
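The Fréchet inception distance (FID) used here compares Gaussians fitted to Inception features of real and generated images: FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^{1/2}). A small sketch of the formula itself, assuming NumPy/SciPy and precomputed feature statistics:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*(S1 S2)^{1/2})."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # discard numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2 * covmean))

# Identical feature distributions give distance zero.
mu, sigma = np.zeros(4), np.eye(4)
print(fid(mu, sigma, mu, sigma))  # 0.0
```

Lower FID means the generated images' feature statistics sit closer to the real data's, which is the sense in which PlantPlotGAN's synthetic plots "outperform" baselines.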
arXiv Detail & Related papers (2023-10-27T16:56:28Z) - Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - Towards Generating Large Synthetic Phytoplankton Datasets for Efficient
Monitoring of Harmful Algal Blooms [77.25251419910205]
Harmful algal blooms (HABs) cause significant fish deaths in aquaculture farms.
Currently, the standard method to enumerate harmful algae and other phytoplankton is to manually observe and count them under a microscope.
We employ Generative Adversarial Networks (GANs) to generate synthetic images.
arXiv Detail & Related papers (2022-08-03T20:15:55Z) - Automatic Plant Cover
Estimation with Convolutional Neural Networks [8.361945776819528]
We investigate approaches using convolutional neural networks (CNNs) to automatically extract the relevant data from images.
We find that we outperform our previous approach at higher image resolutions using a custom CNN with a mean absolute error of 5.16%.
In addition to these investigations, we also conduct an error analysis based on the temporal aspect of the plant cover images.
arXiv Detail & Related papers (2021-06-21T14:52:01Z) - Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - Data-driven generation of plausible tissue geometries for realistic
photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z) - Deep Low-Shot Learning for Biological Image Classification and
Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
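The Siamese deep metric mentioned in the last entry, two weight-sharing CNN branches whose embedding distance measures leaf similarity, can be sketched as follows. This assumes PyTorch; the backbone and embedding size are illustrative, not the paper's network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedder(nn.Module):
    """One shared CNN embeds both leaf views; similarity is the
    Euclidean distance between the two embeddings."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )

    def forward(self, a, b):
        za, zb = self.net(a), self.net(b)  # identical weights for both branches
        return F.pairwise_distance(za, zb)

model = SiameseEmbedder()
d = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(d.shape)  # torch.Size([4])
```

Because classification reduces to comparing embeddings against reference leaves, new species can be added without retraining the whole network, which is the scalability benefit the entry describes.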
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.