Robotic Grasping of Harvested Tomato Trusses Using Vision and Online Learning
- URL: http://arxiv.org/abs/2309.17170v2
- Date: Wed, 12 Feb 2025 10:09:27 GMT
- Title: Robotic Grasping of Harvested Tomato Trusses Using Vision and Online Learning
- Authors: Luuk van den Bent, Tomás Coleman, Robert Babuška
- Abstract summary: We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest.
The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem.
Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile.
- Abstract: Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.
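To make the grasp pose ranking with online learning concrete, below is a minimal sketch of how such a ranker could be structured: a linear success predictor scored over candidate grasp poses and updated after every attempt. The feature set and the logistic SGD update are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of an online-learning grasp pose ranker (illustrative only).
# The features and the logistic update rule are assumptions, not the paper's design.
import numpy as np

class OnlineGraspRanker:
    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)  # linear weights, updated after every grasp attempt
        self.lr = lr

    def score(self, features: np.ndarray) -> float:
        # Predicted probability of grasp success for one candidate pose.
        return 1.0 / (1.0 + np.exp(-features @ self.w))

    def rank(self, candidates: np.ndarray) -> np.ndarray:
        # Return candidate indices sorted from most to least promising.
        scores = np.array([self.score(f) for f in candidates])
        return np.argsort(-scores)

    def update(self, features: np.ndarray, success: bool):
        # One step of logistic-regression SGD on the observed grasp outcome.
        error = float(success) - self.score(features)
        self.w += self.lr * error * features

# Hypothetical usage: features per candidate pose could encode stem clearance,
# distance to the crate wall, local clutter, and so on.
ranker = OnlineGraspRanker(n_features=4)
candidates = np.random.rand(10, 4)             # 10 candidate grasp poses
best = ranker.rank(candidates)[0]              # pick the most promising pose
ranker.update(candidates[best], success=True)  # learn from the attempt's outcome
```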
Related papers
- RoMu4o: A Robotic Manipulation Unit For Orchard Operations Automating Proximal Hyperspectral Leaf Sensing [2.1038216828914145]
Leaf-level hyperspectral spectroscopy is shown to be a powerful tool for phenotyping, monitoring crop health, identifying essential nutrients within plants, and detecting diseases and water stress.
This work introduces RoMu4o, a robotic manipulation unit for orchard operations offering an automated solution for proximal hyperspectral leaf sensing.
arXiv Detail & Related papers (2025-01-18T01:04:02Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach through which a robot can acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
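As a rough illustration of the graph-search-and-retrieval idea, the sketch below builds a graph over demonstration states, stitches demonstrations together where states are close, and retrieves the demonstrated action along the shortest path toward a goal. The distance threshold and graph construction are assumptions, not GSR's exact procedure.
```python
# Illustrative sketch of learning from suboptimal demonstrations via graph
# search and retrieval; the stitching threshold eps is an assumption.
import numpy as np
import networkx as nx

def build_demo_graph(demos, eps=0.5):
    """Connect temporally adjacent states, then stitch demos where states are close."""
    g = nx.DiGraph()
    states = []
    for demo in demos:                       # each demo: list of (state, action) pairs
        for t, (s, a) in enumerate(demo):
            idx = len(states)
            states.append((s, a))
            g.add_node(idx)
            if t > 0:
                g.add_edge(idx - 1, idx, weight=1.0)
    # Cross-demo edges let graph search find paths better than any single demo.
    for i, (si, _) in enumerate(states):
        for j, (sj, _) in enumerate(states):
            if i != j and np.linalg.norm(si - sj) < eps:
                g.add_edge(i, j, weight=float(np.linalg.norm(si - sj)))
    return g, states

def retrieve_action(g, states, current_state, goal_idx):
    """Snap to the nearest demo state, then follow the shortest path to the goal."""
    start = min(range(len(states)),
                key=lambda i: np.linalg.norm(states[i][0] - current_state))
    path = nx.shortest_path(g, start, goal_idx, weight="weight")
    next_idx = path[1] if len(path) > 1 else path[0]
    return states[next_idx][1]               # reuse the demonstrated action
```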
- Contact Energy Based Hindsight Experience Prioritization [19.42106651692228]
Multi-goal robot manipulation tasks with sparse rewards are difficult for reinforcement learning (RL) algorithms.
Recent algorithms such as Hindsight Experience Replay (HER) expedite learning by taking advantage of failed trajectories.
We propose a novel approach, Contact Energy Based Prioritization (CEBP), to select samples from the replay buffer based on the rich information provided by contact.
arXiv Detail & Related papers (2023-12-05T11:32:25Z)
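A minimal sketch of what contact-based replay prioritization could look like: trajectories are scored by a simple "contact energy" (here, summed contact-force magnitudes, which is an assumption) and sampled with probability increasing in that score.
```python
# Sketch of contact-energy-based replay prioritization; the energy definition
# (summed force magnitudes) and softmax sampling are illustrative assumptions.
import numpy as np

class ContactPrioritizedBuffer:
    def __init__(self, capacity=10000):
        self.trajectories, self.energies = [], []
        self.capacity = capacity

    def add(self, trajectory, contact_forces):
        # Score the trajectory by its total contact-force magnitude.
        energy = float(np.sum(np.linalg.norm(contact_forces, axis=-1)))
        if len(self.trajectories) >= self.capacity:
            self.trajectories.pop(0)
            self.energies.pop(0)
        self.trajectories.append(trajectory)
        self.energies.append(energy)

    def sample(self, batch_size, temperature=1.0):
        # Sample with probability proportional to a softmax over contact energy,
        # so contact-rich experience is replayed more often.
        e = np.array(self.energies) / temperature
        p = np.exp(e - e.max())
        p /= p.sum()
        idx = np.random.choice(len(self.trajectories), size=batch_size, p=p)
        return [self.trajectories[i] for i in idx]
```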
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
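The summary suggests a masked-prediction setup over sensorimotor tokens. Below is an illustrative PyTorch sketch of that pre-training pattern; the token dimensions, masking ratio, and reconstruction loss are assumptions rather than RPT's reported configuration.
```python
# Sketch of masked sensorimotor pre-training in the spirit of RPT; token
# dimensions and the masking scheme are illustrative assumptions.
import torch
import torch.nn as nn

class SensorimotorTransformer(nn.Module):
    def __init__(self, token_dim=64, n_layers=4, n_heads=4, seq_len=32):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mask_token = nn.Parameter(torch.zeros(token_dim))
        self.pos = nn.Parameter(torch.zeros(seq_len, token_dim))
        self.head = nn.Linear(token_dim, token_dim)  # reconstructs masked tokens

    def forward(self, tokens, mask):
        # tokens: (batch, seq, dim) sensorimotor tokens; mask: (batch, seq) bool.
        x = torch.where(mask.unsqueeze(-1), self.mask_token, tokens) + self.pos
        return self.head(self.encoder(x))

model = SensorimotorTransformer()
tokens = torch.randn(8, 32, 64)            # e.g. camera + proprioception + actions
mask = torch.rand(8, 32) < 0.5             # mask half of the tokens
pred = model(tokens, mask)
loss = ((pred - tokens) ** 2)[mask].mean() # reconstruct only the masked positions
loss.backward()
```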
- Grasping Student: semi-supervised learning for robotic manipulation [0.7282230325785884]
We design a semi-supervised grasping system that takes advantage of images of products to be picked, which are collected without any interactions with the robot.
In the regime of a small number of robot training samples, taking advantage of the unlabeled data allows us to match the performance of a dataset ten times larger.
arXiv Detail & Related papers (2023-03-08T09:03:11Z)
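One common way to exploit such unlabeled product images is pseudo-labeling, sketched below. This is a generic semi-supervised recipe, not necessarily the paper's exact method, and the confidence threshold is an assumption.
```python
# Sketch of a semi-supervised training step using pseudo-labels for unlabeled
# product images; the 0.9 confidence threshold is an illustrative assumption.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, opt, labeled, unlabeled, threshold=0.9):
    x, y = labeled                            # scarce robot-collected grasp labels
    loss = F.cross_entropy(model(x), y)

    with torch.no_grad():                     # pseudo-label the unlabeled images
        probs = torch.softmax(model(unlabeled), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf > threshold               # trust only confident predictions

    if keep.any():
        loss = loss + F.cross_entropy(model(unlabeled[keep]), pseudo[keep])

    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```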
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
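The reward described above reduces to a distance in a learned embedding space. Here is a sketch of that structure, with an illustrative encoder and a triplet-style time-contrastive loss; both the architecture and the loss details are assumptions.
```python
# Sketch of an embedding-distance reward; encoder architecture and the
# triplet margin are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                 # maps a 64x64 RGB observation to an embedding
    nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 32))

def reward(obs, goal):
    # Reward is the negative distance to the goal in embedding space.
    with torch.no_grad():
        return -torch.norm(encoder(obs) - encoder(goal), dim=-1)

def time_contrastive_loss(anchor, positive, negative, margin=1.0):
    # Frames close in time (anchor/positive) should embed near each other;
    # temporally distant frames (negative) should embed far away.
    za, zp, zn = encoder(anchor), encoder(positive), encoder(negative)
    return F.relu(torch.norm(za - zp, dim=-1)
                  - torch.norm(za - zn, dim=-1) + margin).mean()
```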
- Geometry-Aware Fruit Grasping Estimation for Robotic Harvesting in Orchards [6.963582954232132]
A geometry-aware network, A3N, is proposed to perform end-to-end instance segmentation and grasping estimation.
We implement a global-to-local scanning strategy, which enables robots to accurately recognise and retrieve fruits in field environments.
Overall, the robotic system achieves harvesting success rates ranging from 70% to 85% in field experiments.
arXiv Detail & Related papers (2021-12-08T16:17:26Z)
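The global-to-local scanning strategy can be pictured as a two-stage loop: detect fruits from a wide view, then approach each one for a close-up refinement of the grasp pose. The sketch below stubs out every perception step; all helper bodies are placeholder assumptions.
```python
# Control-flow sketch of a global-to-local scanning strategy; the detection
# and refinement functions are stand-ins, not A3N's actual components.
from dataclasses import dataclass
import numpy as np

@dataclass
class Fruit:
    approx_position: np.ndarray      # rough 3D position from the global view

def detect_fruits(image):
    # Stand-in for instance segmentation over the wide "global" view.
    return [Fruit(np.array([0.4, 0.1, 0.3]))]

def refine_grasp(local_image, fruit):
    # Stand-in for local grasp-pose estimation from a close-up view.
    return np.concatenate([fruit.approx_position, np.zeros(3)])  # 6-DoF pose

def harvest(camera_capture, move_to, execute_grasp):
    global_view = camera_capture()                   # global scan of the scene
    for fruit in detect_fruits(global_view):
        move_to(fruit.approx_position)               # approach for a local scan
        grasp_pose = refine_grasp(camera_capture(), fruit)
        execute_grasp(grasp_pose)                    # pick the fruit
```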
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Geometry-Based Grasping of Vine Tomatoes [6.547498821163685]
We propose a geometry-based grasping method for vine tomatoes.
It relies on a computer-vision pipeline to identify the required geometric features of the tomatoes and of the truss stem.
The grasping method then uses a geometric model of the robotic hand and the truss to determine a suitable grasping location on the stem.
arXiv Detail & Related papers (2021-03-01T19:33:51Z)
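As an illustration of choosing a grasp location from geometric features, the sketch below picks the stem point with the most clearance from the detected tomatoes; this clearance heuristic is an assumption, not the paper's geometric model of the hand and truss.
```python
# Sketch of selecting a grasp point on a truss stem from geometric features;
# the maximum-clearance heuristic is an illustrative assumption.
import numpy as np

def grasp_point_on_stem(stem_points, tomato_centers, gripper_width=0.02):
    """stem_points: (N, 3) 3D samples along the stem;
    tomato_centers: (M, 3) detected tomato positions."""
    # For every stem sample, compute the distance to the nearest tomato.
    d = np.linalg.norm(stem_points[:, None, :] - tomato_centers[None, :, :],
                       axis=-1).min(axis=1)
    # Keep only stem samples the gripper can reach without touching a tomato.
    feasible = d > gripper_width
    if not feasible.any():
        return None                    # no collision-free pinch grasp exists
    # Grasp where the clearance from the tomatoes is largest.
    return stem_points[np.argmax(np.where(feasible, d, -np.inf))]

stem = np.stack([np.linspace(0, 0.2, 50), np.zeros(50), np.zeros(50)], axis=1)
tomatoes = np.array([[0.05, 0.02, 0.0], [0.15, -0.02, 0.0]])
print(grasp_point_on_stem(stem, tomatoes))
```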
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
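Fine-tuning Mask R-CNN for a single new class is a standard torchvision recipe, sketched below for a hand-segmentation setting like Vizzy's; the dataset and training loop are omitted.
```python
# Minimal sketch of fine-tuning Mask R-CNN for one new segmentation class
# (a robot hand) using the standard torchvision recipe.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + hand
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box head so it predicts the new set of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask head likewise.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# Fine-tune as usual: in training mode, model(images, targets) returns a dict
# of losses; sum them and backpropagate.
```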
- Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds [62.013872787987054]
We propose a new method for learning closed-loop control policies for 6D grasping.
Our policy takes a segmented point cloud of an object from an egocentric camera as input, and outputs continuous 6D control actions of the robot gripper for grasping the object.
arXiv Detail & Related papers (2020-10-02T07:42:00Z)
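A sketch of the input-output structure such a policy could have: a PointNet-style encoder over the segmented point cloud followed by a head emitting a continuous 6D action. The architecture is an illustrative stand-in, not the paper's network.
```python
# Sketch of a closed-loop 6D grasping policy over segmented point clouds;
# the PointNet-style encoder below is an illustrative stand-in.
import torch
import torch.nn as nn

class PointCloudPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-point features followed by max-pooling (PointNet-style).
        self.point_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                  nn.Linear(64, 6))  # 6D action: translation + rotation

    def forward(self, points):
        # points: (batch, n_points, 3) segmented object point cloud.
        feat = self.point_net(points).max(dim=1).values  # order-invariant pooling
        return self.head(feat)                           # continuous 6D control action

policy = PointCloudPolicy()
cloud = torch.randn(1, 1024, 3)   # one segmented object from the egocentric camera
action = policy(cloud)            # gripper command, re-queried at every step
```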
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.