T-REX: Vision-Based System for Autonomous Leaf Detection and Grasp Estimation
- URL: http://arxiv.org/abs/2505.01654v1
- Date: Sat, 03 May 2025 02:17:45 GMT
- Title: T-REX: Vision-Based System for Autonomous Leaf Detection and Grasp Estimation
- Authors: Srecharan Selvam, Abhisesh Silwal, George Kantor
- Abstract summary: T-Rex is a gantry-based robotic system developed for autonomous leaf localization, selection, and grasping in greenhouse environments. The system integrates a 6-degree-of-freedom manipulator with a stereo vision pipeline to identify and interact with target leaves.
- Score: 5.059120569845977
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: T-Rex (The Robot for Extracting Leaf Samples) is a gantry-based robotic system developed for autonomous leaf localization, selection, and grasping in greenhouse environments. The system integrates a 6-degree-of-freedom manipulator with a stereo vision pipeline to identify and interact with target leaves. YOLOv8 is used for real-time leaf segmentation, and RAFT-Stereo provides dense depth maps, allowing the reconstruction of 3D leaf masks. These observations are processed through a leaf grasping algorithm that selects the optimal leaf based on clutter, visibility, and distance, and determines a grasp point by analyzing local surface flatness, top-down approachability, and margin from edges. The selected grasp point guides a trajectory executed by ROS-based motion controllers, driving a custom microneedle-equipped end-effector to clamp the leaf and simulate tissue sampling. Experiments conducted with artificial plants under varied poses demonstrate that the T-Rex system can consistently detect, plan, and perform physical interactions with plant-like targets, achieving a grasp success rate of 66.6%. This paper presents the system architecture, implementation, and testing of T-Rex as a step toward plant sampling automation in Controlled Environment Agriculture (CEA).
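The paper does not publish its selection code; as a rough illustration of the leaf-selection step described in the abstract, a weighted scoring over clutter, visibility, and distance might look like the sketch below. The feature names, normalization, and weights are hypothetical assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class LeafCandidate:
    # Normalized features in [0, 1]; names are illustrative, not from the paper.
    clutter: float      # fraction of the leaf occluded by neighbours (lower is better)
    visibility: float   # fraction of the leaf mask visible to the stereo pair
    distance: float     # normalized distance from the end-effector (lower is better)

def leaf_score(leaf: LeafCandidate,
               w_clutter: float = 0.4,
               w_visibility: float = 0.35,
               w_distance: float = 0.25) -> float:
    """Weighted sum favouring visible, uncluttered, nearby leaves.

    The weights here are assumptions; the paper only states that clutter,
    visibility, and distance are combined to rank leaves.
    """
    return (w_clutter * (1.0 - leaf.clutter)
            + w_visibility * leaf.visibility
            + w_distance * (1.0 - leaf.distance))

def select_leaf(leaves: list[LeafCandidate]) -> int:
    """Return the index of the best-scoring leaf candidate."""
    return max(range(len(leaves)), key=lambda i: leaf_score(leaves[i]))
```

The same pattern would extend to the grasp-point stage, scoring candidate points on a chosen leaf by local surface flatness, top-down approachability, and margin from the leaf edge.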
Related papers
- MatchPlant: An Open-Source Pipeline for UAV-Based Single-Plant Detection and Data Extraction [0.0]
This study presents MatchPlant, a modular, open-source Python pipeline with a graphical user interface for UAV-based single-plant detection and geospatial trait extraction. MatchPlant enables an end-to-end workflow by integrating UAV image processing, user-guided annotation, Convolutional Neural Network model training for object detection, forward projection of bounding boxes onto an orthomosaic, and shapefile generation for phenotypic analysis.
arXiv Detail & Related papers (2025-06-14T01:09:45Z)
- Self-Supervised Learning for Robotic Leaf Manipulation: A Hybrid Geometric-Neural Approach [0.0]
We propose a novel hybrid geometric-neural approach for autonomous leaf grasping. Our method integrates traditional computer vision with neural networks through self-supervised learning. Our approach achieves an 88.0% success rate in controlled environments and 84.7% in real greenhouse conditions.
arXiv Detail & Related papers (2025-05-06T17:22:21Z)
- RoMu4o: A Robotic Manipulation Unit For Orchard Operations Automating Proximal Hyperspectral Leaf Sensing [2.1038216828914145]
Leaf-level hyperspectral spectroscopy is shown to be a powerful tool for phenotyping, monitoring crop health, and identifying essential nutrients within plants, as well as detecting diseases and water stress. This work introduces RoMu4o, a robotic manipulation unit for orchard operations offering an automated solution for proximal hyperspectral leaf sensing.
arXiv Detail & Related papers (2025-01-18T01:04:02Z)
- Look how they have grown: Non-destructive Leaf Detection and Size Estimation of Tomato Plants for 3D Growth Monitoring [4.303287713669109]
In this paper, an automated non-destructive image-based measuring system is presented. It uses 2D and 3D data obtained with a Zivid 3D camera to create 3D virtual representations (digital twins) of the tomato plants. The performance of the platform has been measured through a comprehensive trial on real-world tomato plants.
arXiv Detail & Related papers (2023-04-07T12:16:10Z)
- MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based Self-Supervised Pre-Training [58.07391711548269]
Masked Voxel Jigsaw and Reconstruction (MV-JAR) is a method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- Statistical shape representations for temporal registration of plant components in 3D [5.349852254138086]
We demonstrate how using shape features improves temporal organ matching. This is essential for robotic crop monitoring, which enables whole-of-lifecycle phenotyping.
arXiv Detail & Related papers (2022-09-23T11:11:10Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- An Effective Leaf Recognition Using Convolutional Neural Networks Based Features [1.137457877869062]
In this paper, we propose an effective method for the leaf recognition problem. A leaf goes through some pre-processing to extract its refined color image, vein image, xy-projection histogram, handcrafted shape, texture features, and Fourier descriptors. These attributes are then transformed into a better representation by neural network-based encoders before a support vector machine (SVM) model is utilized to classify different leaves.
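As a minimal sketch of the encode-then-classify pipeline summarized above: handcrafted descriptors are passed through an encoder and the result is fed to an SVM. The toy features, the fixed random projection standing in for the learned encoder, and the two-class labels are all illustrative assumptions, not data or architecture from the paper.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for concatenated handcrafted descriptors (shape, texture,
# xy-projection histogram, Fourier descriptors) for two leaf species.
species_a = rng.normal(loc=0.0, scale=1.0, size=(30, 16))
species_b = rng.normal(loc=3.0, scale=1.0, size=(30, 16))
X = np.vstack([species_a, species_b])
y = np.array([0] * 30 + [1] * 30)

# A fixed random projection with a tanh nonlinearity stands in for the
# neural-network encoder; the real system trains this stage.
W = rng.normal(size=(16, 8))
X_encoded = np.tanh(X @ W)

# Scale the encoded features, then fit and evaluate the SVM classifier.
X_scaled = StandardScaler().fit_transform(X_encoded)
clf = SVC(kernel="rbf").fit(X_scaled, y)
accuracy = clf.score(X_scaled, y)
```

On these well-separated toy clusters the SVM fits the training set essentially perfectly; the point is only the shape of the pipeline, not the numbers.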
arXiv Detail & Related papers (2021-08-04T02:02:22Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks. The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at the plant level. Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity. Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions, even though it does not use any local information from the corresponding sites. This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties. Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) that infers the covariance of state estimates. The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
- From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized 3D Point Clouds [59.98665358527686]
We propose a new method for segmentation-free joint estimation of orthogonal planes. Such unified scene exploration allows for a multitude of applications, such as semantic plane detection or local and global scan alignment. Our experiments demonstrate the validity of our approach in numerous scenarios, from wall detection to 6D tracking.
arXiv Detail & Related papers (2020-01-21T06:51:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.