High precision control and deep learning-based corn stand counting
algorithms for agricultural robot
- URL: http://arxiv.org/abs/2103.11276v1
- Date: Sun, 21 Mar 2021 01:13:38 GMT
- Title: High precision control and deep learning-based corn stand counting
algorithms for agricultural robot
- Authors: Zhongzhong Zhang, Erkan Kayacan, Benjamin Thompson and Girish
Chowdhary
- Abstract summary: This paper presents high precision control and deep learning-based corn stand counting algorithms for a low-cost, ultra-compact 3D printed and autonomous field robot.
The robot, termed TerraSentia, is designed to automate the measurement of plant traits for efficient phenotyping.
- Score: 8.16286714346538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents high precision control and deep learning-based corn stand
counting algorithms for a low-cost, ultra-compact 3D printed and autonomous
field robot for agricultural operations. Currently, plant traits, such as
emergence rate, biomass, vigor, and stand counting, are measured manually. This
is highly labor-intensive and prone to errors. The robot, termed TerraSentia,
is designed to automate the measurement of plant traits for efficient
phenotyping as an alternative to manual measurements. In this paper, we
formulate a Nonlinear Moving Horizon Estimator (NMHE) that identifies key
terrain parameters using onboard robot sensors and a learning-based Nonlinear
Model Predictive Control (NMPC) that ensures high precision path tracking in
the presence of unknown wheel-terrain interaction. Moreover, we develop a
machine vision algorithm designed to enable an ultra-compact ground robot to
count corn stands by driving through the fields autonomously. The algorithm
leverages a deep network to detect corn plants in images, and a visual tracking
model to re-identify detected objects at different time steps. We collected
data from 53 corn plots in various fields for corn plants around 14 days after
emergence (stage V3 - V4). The robot predictions have agreed well with the
ground truth with $C_{robot}=1.02 \times C_{human}-0.86$ and a correlation
coefficient $R=0.96$. The mean relative error given by the algorithm is
$-3.78\%$, and the standard deviation is $6.76\%$. These results indicate a
first and significant step towards autonomous robot-based real-time phenotyping
using low-cost, ultra-compact ground robots for corn and potentially other
crops.
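The counting pipeline described in the abstract — detect plants in each frame, then re-identify them across time steps so each plant contributes to the count exactly once — can be illustrated with a minimal sketch. The IoU-based matcher below is a stand-in invented for illustration, not the paper's actual deep detector or visual tracking model:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_stands(frames, iou_thresh=0.3):
    """Count unique plants across frames: match each detection to an
    existing track by IoU; unmatched detections start new tracks, and
    each new track increments the count (tracks never expire here,
    which is a simplification)."""
    tracks = []   # last-seen box per track
    total = 0
    for detections in frames:   # one list of boxes per frame
        matched = set()
        for det in detections:
            best, best_iou = None, iou_thresh
            for i, tr in enumerate(tracks):
                if i in matched:
                    continue
                overlap = iou(det, tr)
                if overlap > best_iou:
                    best, best_iou = i, overlap
            if best is None:
                tracks.append(det)   # unseen plant: new track
                total += 1
            else:
                tracks[best] = det   # re-identified: update, don't recount
                matched.add(best)
    return total
```

A plant that drifts slightly between frames is matched to its existing track and counted once, while a detection with no overlapping track is treated as a new plant.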
Related papers
- RoMu4o: A Robotic Manipulation Unit For Orchard Operations Automating Proximal Hyperspectral Leaf Sensing [2.1038216828914145]
Leaf-level hyperspectral spectroscopy is shown to be a powerful tool for phenotyping, monitoring crop health, identifying essential nutrients within plants as well as detecting diseases and water stress.
This work introduces RoMu4o, a robotic manipulation unit for orchard operations offering an automated solution for proximal hyperspectral leaf sensing.
arXiv Detail & Related papers (2025-01-18T01:04:02Z)
- FAST: Efficient Action Tokenization for Vision-Language-Action Models [98.15494168962563]
We propose a new compression-based tokenization scheme for robot actions, based on the discrete cosine transform.
Based on FAST, we release FAST+, a universal robot action tokenizer, trained on 1M real robot action trajectories.
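The compression-based tokenization idea summarized above can be sketched in miniature: transform an action trajectory with a discrete cosine transform, keep the low-frequency coefficients, and quantize them to integer tokens. The coefficient count and quantization step below are arbitrary placeholders, not FAST's actual settings:

```python
import math

def dct_ortho(x):
    """Orthonormal DCT-II of a 1-D sequence (same convention as
    scipy.fft.dct(x, norm='ortho'), written out with the stdlib)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(xj * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                for j, xj in enumerate(x))
        s *= math.sqrt(2.0 / n)
        if k == 0:
            s /= math.sqrt(2.0)   # orthonormal scaling of the DC term
        out.append(s)
    return out

def tokenize(trajectory, n_coeffs=4, step=0.5):
    """Keep the n_coeffs lowest-frequency DCT coefficients and quantize
    them to integer tokens with the given step size."""
    coeffs = dct_ortho(trajectory)[:n_coeffs]
    return [round(c / step) for c in coeffs]
```

A smooth trajectory concentrates its energy in the first few coefficients, so most high-frequency tokens are zero — which is what makes the representation compressible.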
arXiv Detail & Related papers (2025-01-16T18:57:04Z)
- Automatic Detection, Positioning and Counting of Grape Bunches Using Robots [0.0]
The YOLOv3 detection network is used to accurately detect grape bunches.
A local tracking algorithm is added to eliminate duplicate detections caused by re-localization, completing the counting of grape bunches.
arXiv Detail & Related papers (2024-12-12T15:52:40Z) - Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
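As a toy version of identifying an object property from robot-side measurements through a differentiable model: the sketch below fits a friction coefficient by gradient descent through a hand-written dynamics model. The dynamics, analytic gradient, and constants are simplifications invented for illustration, not the paper's method:

```python
G, DT = 9.81, 0.01   # gravity (m/s^2) and timestep (s), chosen arbitrarily

def rollout(mu, v0, steps):
    """Simulate a block sliding with Coulomb friction deceleration mu*g."""
    vs = [v0]
    for _ in range(steps):
        vs.append(max(vs[-1] - mu * G * DT, 0.0))
    return vs

def calibrate(observed, v0, steps, lr=1e-3, iters=2000):
    """Fit mu by gradient descent on squared trajectory error.
    While the block is moving, v_t = v0 - mu*G*DT*t, so the analytic
    derivative is d v_t / d mu = -t * G * DT."""
    mu = 0.1   # deliberately wrong initial guess
    for _ in range(iters):
        pred = rollout(mu, v0, steps)
        grad = sum(2 * (p - o) * (-t * G * DT)
                   for t, (p, o) in enumerate(zip(pred, observed)))
        mu -= lr * grad
    return mu
```

Because the loss is quadratic in mu, the descent converges to the true coefficient from velocity measurements alone — no sensing on the object itself.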
arXiv Detail & Related papers (2024-10-04T20:48:38Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Robot Self-Calibration Using Actuated 3D Sensors [0.0]
This paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space by a moving kinematic chain.
As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor.
A detailed evaluation of the system is shown on a real robot with various attached 3D sensors.
arXiv Detail & Related papers (2022-06-07T16:35:08Z) - Feature Disentanglement of Robot Trajectories [0.0]
Disentangled representation learning promises advances in unsupervised learning, but such methods have not been evaluated on robot-generated trajectories.
We evaluate three disentangling VAEs on a dataset of 1M robot trajectories generated from a 3 DoF robot arm.
We find that the decorrelation-based formulations perform the best in terms of disentangling metrics, trajectory quality, and correlation with ground truth latent features.
arXiv Detail & Related papers (2021-12-06T16:52:55Z) - Navigational Path-Planning For All-Terrain Autonomous Agricultural Robot [0.0]
This report compares novel algorithms for autonomous navigation of farmlands.
A high-resolution grid map representation, specific to Indian environments, is taken into consideration.
Results demonstrate the applicability of the algorithms to autonomous field navigation and their feasibility for robotic path planning.
arXiv Detail & Related papers (2021-09-05T07:29:13Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
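One ingredient of the summary above — handling obstacle-avoidance uncertainty with chance constraints — is commonly implemented by inflating the required clearance by a quantile of the predicted position uncertainty. The Gaussian error model below is a generic illustration of that idea, not SABER's exact formulation:

```python
from statistics import NormalDist

def chance_margin(radius, sigma, delta=0.05):
    """Minimum planned clearance from an obstacle of the given radius so
    that, under a 1-D Gaussian position-error model with std sigma, the
    probability of violating the true radius is at most delta."""
    z = NormalDist().inv_cdf(1.0 - delta)   # (1 - delta) quantile
    return radius + z * sigma

def is_safe(distance, radius, sigma, delta=0.05):
    """Chance-constrained obstacle check, as it might appear inside an
    MPC step: nominal distance must exceed the inflated margin."""
    return distance >= chance_margin(radius, sigma, delta)
```

Larger predicted uncertainty (sigma) or a stricter collision probability (smaller delta) both widen the margin the planner must keep.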
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Where is my hand? Deep hand segmentation for visual self-recognition in
humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z) - Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.