Geometry-Aware Fruit Grasping Estimation for Robotic Harvesting in
Orchards
- URL: http://arxiv.org/abs/2112.04363v1
- Date: Wed, 8 Dec 2021 16:17:26 GMT
- Title: Geometry-Aware Fruit Grasping Estimation for Robotic Harvesting in
Orchards
- Authors: Hanwen Kang, Xing Wang, and Chao Chen
- Abstract summary: A geometry-aware network, A3N, is proposed to perform end-to-end instance segmentation and grasping estimation.
We implement a global-to-local scanning strategy, which enables robots to accurately recognise and retrieve fruits in field environments.
Overall, the robotic system achieves a harvesting success rate of 70% to 85% in field harvesting experiments.
- Score: 6.963582954232132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Field robotic harvesting is a promising technique in the recent
development of the agricultural industry. It is vital for robots to recognise
and localise fruits before harvesting in natural orchards. However, the
workspace of harvesting robots in orchards is complex: many fruits are
occluded by branches and leaves. It is important to estimate a proper
grasping pose for each fruit before performing the manipulation. In this
study, a geometry-aware network, A3N, is proposed to perform end-to-end
instance segmentation and grasping estimation using both colour and geometry
sensory data from an RGB-D camera. In addition, workspace geometry modelling
is applied to assist the robotic manipulation. Moreover, we implement a
global-to-local scanning strategy, which enables robots to accurately
recognise and retrieve fruits in field environments with two consumer-level
RGB-D cameras. We also comprehensively evaluate the accuracy and robustness
of the proposed network in experiments. The experimental results show that
A3N achieves an instance segmentation accuracy of 0.873, with an average
computation time of 35 ms. The average accuracy of grasping estimation is
0.61 cm in centre and 4.8$^{\circ}$ in orientation. Overall, the robotic
system, which utilises global-to-local scanning and A3N, achieves a
harvesting success rate ranging from 70\% to 85\% in field harvesting
experiments.
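The abstract does not spell out A3N's internals, so purely as a hedged
illustration of the task's inputs and outputs, the sketch below uses a naive
geometric baseline: the grasp centre is taken as the centroid of one fruit's
segmented RGB-D points, and the approach orientation as the camera-to-fruit
ray. All names here are illustrative assumptions, not the paper's method.

    import numpy as np

    def estimate_grasp(fruit_points: np.ndarray):
        """fruit_points: (N, 3) points of one segmented fruit, camera frame, metres."""
        centre = fruit_points.mean(axis=0)            # grasp centre ~ fruit centroid
        approach = centre / np.linalg.norm(centre)    # approach along camera-to-fruit ray
        return centre, approach

    # The paper's 0.61 cm centre metric is a Euclidean distance; in this sketch:
    # err_cm = 100.0 * np.linalg.norm(pred_centre - gt_centre)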
Related papers
- A Dataset and Benchmark for Shape Completion of Fruits for Agricultural Robotics [30.46518628656399]
We propose the first publicly available 3D shape completion dataset for agricultural vision systems.
We provide an RGB-D dataset for estimating the 3D shape of fruits.
arXiv Detail & Related papers (2024-07-18T09:07:23Z)
- Key Point-based Orientation Estimation of Strawberries for Robotic Fruit Picking [8.657107511095242]
We introduce a novel key-point-based fruit orientation estimation method allowing for the prediction of 3D orientation from 2D images directly.
Our proposed method achieves state-of-the-art performance with an average error as low as $8^{\circ}$, improving predictions by $\sim 30\%$ compared to previous work by Wagner et al.
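The paper predicts orientation from 2D images directly; purely as a hedged
geometric illustration, the sketch below recovers a fruit axis from two
predicted keypoints (for example a strawberry's calyx and tip), assuming
per-keypoint depth is available, and computes the angular-error metric behind
figures like the $8^{\circ}$ average.

    import numpy as np

    def backproject(u, v, z, fx, fy, cx, cy):
        """Pinhole back-projection of pixel (u, v) at depth z into the camera frame."""
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    def axis_from_keypoints(kp_calyx, kp_tip, z0, z1, intrinsics):
        # Illustrative helper, not the paper's network head.
        fx, fy, cx, cy = intrinsics
        p0 = backproject(kp_calyx[0], kp_calyx[1], z0, fx, fy, cx, cy)
        p1 = backproject(kp_tip[0], kp_tip[1], z1, fx, fy, cx, cy)
        axis = p1 - p0
        return axis / np.linalg.norm(axis)            # unit fruit axis

    def angular_error_deg(pred, gt):
        cos = np.clip(np.dot(pred, gt), -1.0, 1.0)    # pred, gt: unit vectors
        return float(np.degrees(np.arccos(cos)))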
arXiv Detail & Related papers (2023-10-17T15:12:11Z)
- Care3D: An Active 3D Object Detection Dataset of Real Robotic-Care Environments [52.425280825457385]
This paper introduces an annotated dataset of real environments.
The captured environments represent areas already in use in robotic health care research.
We also provide ground truth data within one room, for assessing SLAM algorithms running directly on a health care robot.
arXiv Detail & Related papers (2023-10-09T10:35:37Z)
- Panoptic Mapping with Fruit Completion and Pose Estimation for Horticultural Robots [33.21287030243106]
Monitoring plants and fruits at high resolution plays a key role in the future of agriculture.
Accurate 3D information can pave the way to many robotic applications in agriculture, ranging from autonomous harvesting to precise yield estimation.
We address the problem of jointly estimating complete 3D shapes of fruit and their pose in a 3D multi-resolution map built by a mobile robot.
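Purely as a hedged sketch of the bookkeeping such a system needs (the shape
completion and pose estimation themselves are not shown): once a fruit's
completed shape and its pose in the map are available, the shape is placed
into the map frame by a rigid transform.

    import numpy as np

    def place_fruit_in_map(canonical_points, R, t):
        """canonical_points: (N, 3) completed fruit shape in its own frame;
        R: (3, 3) rotation and t: (3,) translation of the estimated map pose."""
        return canonical_points @ R.T + t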
arXiv Detail & Related papers (2023-03-15T20:41:24Z)
- Semantic Segmentation of Fruits on Multi-sensor Fused Data in Natural Orchards [5.733573598657243]
We propose a deep-learning-based segmentation method to perform accurate semantic segmentation on fused data from a LiDAR-Camera visual sensor.
In the experiment, we comprehensively analyze the network setup when dealing with highly unstructured and noisy point clouds acquired from an apple orchard.
The experimental results show that the proposed method can perform accurate segmentation in real orchard environments.
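As a hedged sketch of the fusion step such inputs require (calibration and all
names are assumptions, not the paper's exact pipeline): each LiDAR point is
projected into the image with extrinsics (R, t) and intrinsics K, and the RGB
value at that pixel is attached to it, yielding coloured points for a
segmentation network.

    import numpy as np

    def fuse_lidar_rgb(points, image, R, t, K):
        """points: (N, 3) in the LiDAR frame; image: (H, W, 3); returns (M, 6) xyzrgb."""
        cam = points @ R.T + t                        # LiDAR -> camera frame
        front = cam[:, 2] > 0                         # keep points in front of the camera
        cam = cam[front]
        uv = cam @ K.T                                # pinhole projection
        uv = uv[:, :2] / uv[:, 2:3]
        u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
        h, w = image.shape[:2]
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # keep points inside the image
        rgb = image[v[ok], u[ok]]
        return np.hstack([points[front][ok], rgb])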
arXiv Detail & Related papers (2022-08-04T06:17:07Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
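Of the three architectures, the CNN-regression idea is the most compact to
sketch; the layers below are illustrative assumptions, not the study's exact
model: a small backbone is pooled and mapped to one scalar yield per image.

    import torch
    import torch.nn as nn

    class YieldRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)              # scalar yield per image

        def forward(self, x):                         # x: (B, 3, H, W)
            return self.head(self.features(x).flatten(1))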
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
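For reference, the Dice coefficient cited above is 2|A intersect B| / (|A| + |B|)
between predicted and ground-truth masks; a minimal NumPy version for binary
masks:

    import numpy as np

    def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
        """pred, gt: boolean masks of equal shape."""
        inter = np.logical_and(pred, gt).sum()
        return float(2.0 * inter / (pred.sum() + gt.sum() + eps))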
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
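The fine-tuning step can be sketched with the standard torchvision recipe
below (the paper's exact setup may differ): load a pretrained Mask R-CNN and
swap its box and mask heads for two classes, background and hand.

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    num_classes = 2                                   # background + robot hand
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    # Training loop on the Vizzy hand dataset omitted.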
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
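One common way such joint detection-and-counting is realised, shown only as a
hedged sketch (the paper's prediction head may differ): the network outputs a
per-pixel confidence map, local maxima above a threshold become plant
positions, and the count is their number.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def count_plants(conf: np.ndarray, thresh: float = 0.5):
        """conf: (H, W) network confidence map; returns (count, (rows, cols))."""
        peaks = (conf == maximum_filter(conf, size=5)) & (conf > thresh)
        ys, xs = np.nonzero(peaks)
        return len(xs), (ys, xs)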
arXiv Detail & Related papers (2020-12-31T18:51:17Z)
- Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of all parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
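A hedged sketch of the train-on-simulations idea with stand-in data (the
variable names and model choice are assumptions, not the paper's setup): fit a
regressor on simulated reflectance-to-GPP pairs from a radiative transfer
model, then apply it to real satellite spectra.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X_sim = rng.random((5000, 10))                    # simulated band reflectances
    y_sim = X_sim @ rng.random(10)                    # stand-in for simulated GPP
    model = GradientBoostingRegressor().fit(X_sim, y_sim)
    # gpp = model.predict(real_spectra)               # apply to Sentinel-2/Landsat-8 spectra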
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
- Strawberry Detection Using a Heterogeneous Multi-Processor Platform [1.5171938155576565]
This paper proposes using the You Only Look Once version 3 (YOLOv3) Convolutional Neural Network (CNN) in combination with image processing techniques for precision farming robots.
The results show a fivefold performance acceleration when the algorithm is implemented on a Field-Programmable Gate Array (FPGA), compared with the same algorithm running on the processor side.
arXiv Detail & Related papers (2020-11-07T01:08:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.