Look how they have grown: Non-destructive Leaf Detection and Size
Estimation of Tomato Plants for 3D Growth Monitoring
- URL: http://arxiv.org/abs/2304.03610v1
- Date: Fri, 7 Apr 2023 12:16:10 GMT
- Title: Look how they have grown: Non-destructive Leaf Detection and Size
Estimation of Tomato Plants for 3D Growth Monitoring
- Authors: Yuning Xing, Dexter Pham, Henry Williams, David Smith, Ho Seok Ahn,
JongYoon Lim, Bruce A. MacDonald, Mahla Nejati
- Abstract summary: In this paper, an automated non-destructive image-based measuring system is presented.
It uses 2D and 3D data obtained using a Zivid 3D camera, creating 3D virtual representations (digital twins) of the tomato plants.
The performance of the platform was evaluated through a comprehensive trial on real-world tomato plants.
- Score: 4.303287713669109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart farming is a growing field as technology advances. Plant
characteristics are crucial indicators for monitoring plant growth. Research
has been done to estimate characteristics like leaf area index, leaf disease,
and plant height. However, few methods have been applied to non-destructive
measurements of leaf size. In this paper, an automated non-destructive
image-based measuring system is presented, which uses 2D and 3D data obtained
using a Zivid 3D camera, creating 3D virtual representations (digital twins) of
the tomato plants. Leaves are detected in the corresponding 2D RGB images and
mapped to their 3D point clouds using the detected leaf masks; each leaf's
point cloud is then passed to a plane-fitting algorithm that extracts the leaf
size, providing data for growth monitoring. The performance of the measurement
platform was evaluated through a comprehensive trial on real-world tomato
plants, with quantified performance metrics compared against ground-truth
measurements.
Three tomato leaf and height datasets (including 50+ 3D point cloud files of
tomato plants) were collected and open-sourced in this project. The proposed
leaf size estimation method demonstrates an RMSE value of 4.47 mm and an R^2
value of 0.87. The overall measurement system (leaf detection and size
estimation algorithms combined) delivers an RMSE value of 8.13 mm and an R^2
value of 0.899.
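The pipeline described in the abstract (2D leaf mask, mask-to-point-cloud mapping, plane fitting, size extraction) and the reported metrics can be illustrated with a minimal sketch. This is not the authors' implementation: the organized-cloud layout, the SVD plane fit, the extent-based size proxy, and the function names (fit_plane_svd, leaf_size_from_mask, rmse_and_r2) are all assumptions for illustration only.

```python
# Hypothetical sketch of the described pipeline; assumes the 3D camera
# returns an organized (H, W, 3) point cloud pixel-aligned with the RGB image.
import numpy as np

def fit_plane_svd(points: np.ndarray):
    """Least-squares plane fit; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value spans the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def leaf_size_from_mask(cloud: np.ndarray, mask: np.ndarray) -> float:
    """cloud: (H, W, 3) organized point cloud in mm; mask: (H, W) boolean
    leaf mask from the 2D detector. Returns the longest in-plane extent of
    the leaf in mm (a simple size proxy, not the paper's exact measure)."""
    pts = cloud[mask]                          # 2D mask -> 3D leaf points
    pts = pts[np.isfinite(pts).all(axis=1)]    # drop invalid depth returns
    centroid, normal = fit_plane_svd(pts)
    # Project the points onto the fitted plane to suppress depth noise.
    offsets = (pts - centroid) @ normal
    projected = pts - np.outer(offsets, normal)
    # Express the projected points in the plane's two principal axes and
    # take the largest extent as the leaf-size measurement.
    _, _, vt = np.linalg.svd(projected - centroid)
    in_plane = (projected - centroid) @ vt[:2].T
    return float((in_plane.max(axis=0) - in_plane.min(axis=0)).max())

def rmse_and_r2(pred: np.ndarray, truth: np.ndarray):
    """The two evaluation metrics reported above (standard definitions)."""
    residuals = pred - truth
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((truth - truth.mean()) ** 2)
    return rmse, float(r2)

# Toy check: a synthetic tilted planar "leaf" with mild depth noise.
rng = np.random.default_rng(0)
h, w = 64, 64
u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
z = 0.3 * u + 0.1 * v + rng.normal(0.0, 0.5, (h, w))
cloud = np.stack([u, v, z], axis=-1)
mask = (u - 32) ** 2 + (v - 32) ** 2 < 20 ** 2
print(f"estimated leaf extent: {leaf_size_from_mask(cloud, mask):.1f} mm")
```

Feeding per-leaf estimates and ground-truth measurements into rmse_and_r2 yields figures in the same style as those quoted above (RMSE in mm and R^2).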
Related papers
- A Smartphone-Based Method for Assessing Tomato Nutrient Status through Trichome Density Measurement [0.0]
Early detection of fertilizer-induced stress in tomato plants is crucial for timely crop management interventions and yield optimization.
This study proposes a novel, noninvasive technique for quantifying the density of trichomes (elongated hair-like structures found on plant surfaces) on young leaves using a smartphone.
arXiv Detail & Related papers (2024-04-30T12:45:41Z)
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- Early and Accurate Detection of Tomato Leaf Diseases Using TomFormer [0.3169023552218211]
This paper introduces a transformer-based model called TomFormer for the purpose of tomato leaf disease detection.
We present a novel approach for detecting tomato leaf diseases by employing a fusion model that combines a visual transformer and a convolutional neural network.
arXiv Detail & Related papers (2023-12-26T20:47:23Z)
- PlantPlotGAN: A Physics-Informed Generative Adversarial Network for Plant Disease Prediction [2.7409168462107347]
We propose PlantPlotGAN, a physics-informed generative model capable of creating synthetic multispectral plot images with realistic vegetation indices.
The results demonstrate that the synthetic imagery generated from PlantPlotGAN outperforms state-of-the-art methods regarding the Fréchet inception distance.
arXiv Detail & Related papers (2023-10-27T16:56:28Z)
- Vision Transformers, a new approach for high-resolution and large-scale mapping of canopy heights [50.52704854147297]
We present a new vision transformer (ViT) model optimized with a classification (discrete) and a continuous loss function.
This model achieves better accuracy than previously used convolution-based approaches (ConvNets) optimized with only a continuous loss function.
arXiv Detail & Related papers (2023-04-22T22:39:03Z)
- Visual based Tomato Size Measurement System for an Indoor Farming Environment [3.176607626141415]
This paper presents a size measurement method combining a machine learning model and depth images captured from three low-cost RGBD cameras.
The performance of the presented system is evaluated in a lab environment with real tomato fruits and fake leaves.
Our three-camera system was able to achieve a height measurement accuracy of 0.9114 and a width accuracy of 0.9443.
arXiv Detail & Related papers (2023-04-12T22:27:05Z)
- 3D Reconstruction-Based Seed Counting of Sorghum Panicles for Agricultural Inspection [4.328589704462156]
We present a method for creating high-quality 3D models of sorghum panicles for phenotyping in breeding experiments.
This is achieved with a novel reconstruction approach that uses seeds as semantic landmarks in both 2D and 3D.
We demonstrate that using this method to estimate seed count and weight for sorghum outperforms count extrapolation from 2D images.
arXiv Detail & Related papers (2022-11-14T20:51:09Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and 3D convolution networks.
We propose a new framework for outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation-rows while counting their plants, even in highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z)
- Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.