Visual based Tomato Size Measurement System for an Indoor Farming
Environment
- URL: http://arxiv.org/abs/2304.06177v1
- Date: Wed, 12 Apr 2023 22:27:05 GMT
- Title: Visual based Tomato Size Measurement System for an Indoor Farming
Environment
- Authors: Andy Kweon, Vishnu Hu, Jong Yoon Lim, Trevor Gee, Edmond Liu, Henry
Williams, Bruce A. MacDonald, Mahla Nejati, Inkyu Sa, and Ho Seok Ahn
- Abstract summary: This paper presents a size measurement method combining a machine learning model and depth images captured from three low-cost RGBD cameras.
The performance of the presented system is evaluated in a lab environment with real tomato fruits and fake leaves.
Our three-camera system achieved a height measurement accuracy of 0.9114 and a width accuracy of 0.9443.
- Score: 3.176607626141415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As technology progresses, smart automated systems will serve an increasingly
important role in the agricultural industry. Existing vision systems for yield
estimation face difficulties with occlusion and scalability because they rely on
camera systems that are large and expensive, making them unsuitable for orchard
environments. To overcome these problems, this paper presents a size measurement
method that combines a machine learning model with depth images captured from
three low-cost RGBD cameras to detect and measure the height and width of
tomatoes. The performance of the presented system is evaluated in a lab
environment with real tomato fruits and fake leaves that simulate the occlusion
found in a real farm environment. By addressing fruit occlusion, our
three-camera system achieved a height measurement accuracy of 0.9114 and a
width accuracy of 0.9443.
Related papers
- A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics [30.46518628656399]
We propose the first publicly available 3D shape completion dataset for agricultural vision systems.
We provide an RGB-D dataset for estimating the 3D shape of fruits.
arXiv Detail & Related papers (2024-07-18T09:07:23Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- TomatoDIFF: On-plant Tomato Segmentation with Denoising Diffusion Models [3.597418929000278]
TomatoDIFF is a novel diffusion-based model for semantic segmentation of on-plant tomatoes.
Tomatopia is a new, large and challenging dataset of greenhouse tomatoes.
arXiv Detail & Related papers (2023-07-03T14:43:40Z)
- Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments [66.74633676595889]
We present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
arXiv Detail & Related papers (2023-05-05T13:42:19Z)
- Look how they have grown: Non-destructive Leaf Detection and Size Estimation of Tomato Plants for 3D Growth Monitoring [4.303287713669109]
In this paper, an automated non-destructive image-based measuring system is presented.
It uses 2D and 3D data obtained using a Zivid 3D camera, creating 3D virtual representations (digital twins) of the tomato plants.
The performance of the platform has been measured through a comprehensive trial on real-world tomato plants.
arXiv Detail & Related papers (2023-04-07T12:16:10Z)
- 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z)
- Optical flow-based branch segmentation for complex orchard environments [73.11023209243326]
We train a neural network system in simulation only using simulated RGB data and optical flow.
This resulting neural network is able to perform foreground segmentation of branches in a busy orchard environment without additional real-world training or using any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
arXiv Detail & Related papers (2022-02-26T03:38:20Z)
- Geometry-Aware Fruit Grasping Estimation for Robotic Harvesting in Orchards [6.963582954232132]
A geometry-aware network, A3N, is proposed to perform end-to-end instance segmentation and grasping estimation.
We implement a global-to-local scanning strategy, which enables robots to accurately recognise and retrieve fruits in field environments.
Overall, the robotic system achieves a harvesting success rate of 70%-85% in field experiments.
arXiv Detail & Related papers (2021-12-08T16:17:26Z)
- 3D shape sensing and deep learning-based segmentation of strawberries [5.634825161148484]
We evaluate modern sensing technologies including stereo and time-of-flight cameras for 3D perception of shape in agriculture.
We propose a novel 3D deep neural network which exploits the organised nature of information originating from the camera-based 3D sensors.
arXiv Detail & Related papers (2021-11-26T18:43:10Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy from stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.