A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in
Cluttered Environments
- URL: http://arxiv.org/abs/2309.17170v1
- Date: Fri, 29 Sep 2023 12:07:08 GMT
- Title: A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in
Cluttered Environments
- Authors: Luuk van den Bent, Tomás Coleman, Robert Babuska
- Abstract summary: We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest.
The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem.
Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile.
- Score: 4.5195969272623815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, truss tomato weighing and packaging require significant manual
work. The main obstacle to automation lies in the difficulty of developing a
reliable robotic grasping system for already harvested trusses. We propose a
method to grasp trusses that are stacked in a crate with considerable clutter,
which is how they are commonly stored and transported after harvest. The method
consists of a deep learning-based vision system to first identify the
individual trusses in the crate and then determine a suitable grasping location
on the stem. To this end, we have introduced a grasp pose ranking algorithm
with online learning capabilities. After selecting the most promising grasp
pose, the robot executes a pinch grasp without needing touch sensors or
geometric models. Lab experiments with a robotic manipulator equipped with an
eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all
trusses from a pile. 93% of the trusses were successfully grasped on the first
try, while the remaining 7% required more attempts.
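The abstract does not spell out which features the grasp pose ranker uses or how its online learning works, so the following is only a minimal sketch of the idea: candidate stem grasps are scored by a linear model over hand-crafted features, and the weights are nudged after every attempt depending on whether the grasp succeeded. The feature names and the logistic update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class OnlineGraspRanker:
    """Minimal sketch: linear scorer over grasp-candidate features, updated
    online from grasp success/failure. The features are illustrative only."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def score(self, features: np.ndarray) -> np.ndarray:
        # features: (n_candidates, n_features) -> higher score = more promising grasp
        return features @ self.w

    def select(self, features: np.ndarray) -> int:
        return int(np.argmax(self.score(features)))

    def update(self, feat: np.ndarray, success: bool) -> None:
        # One SGD step on a logistic loss, using the executed grasp's outcome.
        p = 1.0 / (1.0 + np.exp(-feat @ self.w))
        self.w += self.lr * (float(success) - p) * feat


# Hypothetical per-candidate features: [stem clearance, distance to crate wall,
# stem straightness, height in the pile].
candidates = np.array([[0.8, 0.6, 0.9, 0.7],
                       [0.3, 0.9, 0.5, 0.2]])
ranker = OnlineGraspRanker(n_features=4)
best = ranker.select(candidates)
ranker.update(candidates[best], success=True)  # feedback after executing the grasp
```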
Related papers
- DITTO: Demonstration Imitation by Trajectory Transformation [31.930923345163087]
In this work, we address the problem of one-shot imitation from a single human demonstration, given by an RGB-D video recording.
We propose a two-stage process. In the first stage we extract the demonstration trajectory offline. This entails segmenting manipulated objects and determining their motion relative to secondary objects such as containers.
In the online trajectory generation stage, we first re-detect all objects, then warp the demonstration trajectory to the current scene and execute it on the robot.
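A sketch of the trajectory-warping step (object re-detection and pose estimation are assumed to happen elsewhere): if the manipulated object was at pose T_obj_demo during the demonstration and is re-detected at pose T_obj_cur in the new scene, each demonstrated gripper waypoint can be kept fixed relative to the object and mapped into the current scene. The 4x4 poses below are placeholders, not data from the paper.

```python
import numpy as np

def warp_trajectory(waypoints_demo, T_obj_demo, T_obj_cur):
    """Map demonstrated gripper poses (4x4 world-frame transforms) into the
    current scene by preserving their pose relative to the manipulated object:
    T_new = T_obj_cur @ inv(T_obj_demo) @ T_waypoint_demo."""
    T_rel = T_obj_cur @ np.linalg.inv(T_obj_demo)
    return [T_rel @ T for T in waypoints_demo]

# Placeholder poses: the object has shifted 0.2 m along x since the demonstration.
T_obj_demo = np.eye(4)
T_obj_cur = np.eye(4)
T_obj_cur[0, 3] = 0.2
waypoints = [np.eye(4)]
warped = warp_trajectory(waypoints, T_obj_demo, T_obj_cur)
```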
arXiv Detail & Related papers (2024-03-22T13:46:51Z)
- simPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects [16.178331266949293]
This paper explores solutions for precise and general pick-and-place.
We propose simPLE as a solution to precise pick-and-place.
SimPLE learns to pick, regrasp and place objects precisely, given only the object CAD model and no prior experience.
arXiv Detail & Related papers (2023-07-24T21:22:58Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
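As a rough illustration of the masked-prediction idea behind sensorimotor pre-training (token dimensions, masking ratio, and network sizes here are arbitrary assumptions, not RPT's configuration): sequences of camera, proprioception, and action tokens are embedded, a random subset is hidden, and a Transformer encoder is trained to reconstruct the hidden tokens.

```python
import torch
import torch.nn as nn

class TinySensorimotorMAE(nn.Module):
    """Sketch of masked sensorimotor-token prediction; dimensions are arbitrary."""

    def __init__(self, token_dim=32, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, token_dim)

    def forward(self, tokens, mask):
        # tokens: (B, T, token_dim); mask: (B, T) boolean, True = token is hidden
        x = self.embed(tokens)
        x = torch.where(mask.unsqueeze(-1), self.mask_token, x)
        return self.head(self.encoder(x))

# One pre-training step on a random sensorimotor sequence.
model = TinySensorimotorMAE()
tokens = torch.randn(8, 16, 32)       # batch of 16-step token sequences
mask = torch.rand(8, 16) < 0.5        # hide roughly half of the tokens
pred = model(tokens, mask)
loss = ((pred[mask] - tokens[mask]) ** 2).mean()
loss.backward()
```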
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Grasping Student: semi-supervised learning for robotic manipulation [0.7282230325785884]
We design a semi-supervised grasping system that takes advantage of images of products to be picked, which are collected without any interactions with the robot.
In the regime of a small number of robot training samples, exploiting the unlabeled data allows us to reach the performance of a dataset ten times larger.
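The summary does not say which semi-supervised scheme is used; one common recipe that fits the description is pseudo-labeling, sketched below on synthetic features as an illustration rather than the paper's method: a model trained on a handful of robot-labeled grasps assigns labels to the unlabeled product images, and confident pseudo-labels are folded back into training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for image features: a few robot-labeled grasps, many unlabeled product images.
X_labeled = rng.normal(size=(20, 8))
y_labeled = (X_labeled[:, 0] > 0).astype(int)   # synthetic "grasp success" labels
X_unlabeled = rng.normal(size=(500, 8))

# 1) Train on the small labeled set.
clf = LogisticRegression().fit(X_labeled, y_labeled)

# 2) Pseudo-label confident unlabeled samples and retrain on the union.
proba = clf.predict_proba(X_unlabeled).max(axis=1)
confident = proba > 0.9
X_aug = np.vstack([X_labeled, X_unlabeled[confident]])
y_aug = np.concatenate([y_labeled, clf.predict(X_unlabeled[confident])])
clf = LogisticRegression().fit(X_aug, y_aug)
```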
arXiv Detail & Related papers (2023-03-08T09:03:11Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
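A minimal sketch of the reward described here, assuming an embedding network has already been trained with a time-contrastive objective (the small MLP below is only a stand-in for it): the reward for an observation is the negative distance to the goal image in the learned embedding space.

```python
import torch
import torch.nn as nn

# Stand-in for an embedding network trained with a time-contrastive objective.
embed = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 32),
)

def embedding_reward(obs_img: torch.Tensor, goal_img: torch.Tensor) -> torch.Tensor:
    """Reward = negative L2 distance to the goal in embedding space."""
    with torch.no_grad():
        z_obs, z_goal = embed(obs_img), embed(goal_img)
    return -torch.linalg.norm(z_obs - z_goal, dim=-1)

obs = torch.rand(1, 3, 64, 64)    # current camera frame (placeholder)
goal = torch.rand(1, 3, 64, 64)   # goal frame, e.g. from a human video (placeholder)
reward = embedding_reward(obs, goal)
```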
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Geometry-Aware Fruit Grasping Estimation for Robotic Harvesting in Orchards [6.963582954232132]
A geometry-aware network, A3N, is proposed to perform end-to-end instance segmentation and grasping estimation.
We implement a global-to-local scanning strategy, which enables robots to accurately recognise and retrieve fruits in field environments.
Overall, the robotic system achieves a harvesting success rate of 70% to 85% in field experiments.
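A3N itself is a learned network, but the geometric back end of grasping estimation can be illustrated with a standard depth-camera deprojection: a detected fruit pixel plus its depth is lifted to a 3D point with the pinhole intrinsics, giving a grasp target and an approach direction along the camera ray. The intrinsics and pixel below are placeholders, not values from the paper.

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole deprojection: pixel (u, v) at 'depth' metres -> 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Placeholder intrinsics and a detected fruit centre at pixel (350, 240), 0.6 m away.
fx = fy = 600.0
cx, cy = 320.0, 240.0
grasp_target = deproject(350, 240, 0.6, fx, fy, cx, cy)
approach_dir = grasp_target / np.linalg.norm(grasp_target)  # approach along the camera ray
```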
arXiv Detail & Related papers (2021-12-08T16:17:26Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
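The mechanism suggested by the summary, retaining and reusing data from earlier tasks, can be sketched as per-task replay buffers whose samples are mixed into the batches for the current task. The buffer layout and mixing ratio below are arbitrary assumptions, not the paper's algorithm.

```python
import random
from collections import defaultdict

class TaskReplay:
    """Sketch: keep one replay buffer per task and mix retained experience
    from earlier tasks into every training batch for the current task."""

    def __init__(self, old_fraction=0.5):
        self.buffers = defaultdict(list)   # task_id -> list of transitions
        self.old_fraction = old_fraction

    def add(self, task_id, transition):
        self.buffers[task_id].append(transition)

    def sample(self, current_task, batch_size):
        old = [t for tid, buf in self.buffers.items() if tid != current_task for t in buf]
        cur = self.buffers[current_task]
        n_old = min(len(old), int(batch_size * self.old_fraction))
        batch = random.sample(old, n_old) if old else []
        batch += random.choices(cur, k=batch_size - n_old) if cur else []
        return batch

replay = TaskReplay()
replay.add("open_drawer", {"obs": 0, "act": 1, "rew": 0.0})
replay.add("pick_cup", {"obs": 2, "act": 3, "rew": 1.0})
batch = replay.sample(current_task="pick_cup", batch_size=4)
```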
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Geometry-Based Grasping of Vine Tomatoes [6.547498821163685]
We propose a geometry-based grasping method for vine tomatoes.
It relies on a computer-vision pipeline to identify the required geometric features of the tomatoes and of the truss stem.
The grasping method then uses a geometric model of the robotic hand and the truss to determine a suitable grasping location on the stem.
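A toy version of the geometric selection step (the feature extraction and hand model in the paper are more involved): given stem points and tomato centroids from the vision pipeline, pick the stem point that keeps the most clearance from the tomatoes, so the gripper can pinch the stem without collision. All coordinates below are made up for illustration.

```python
import numpy as np

def pick_stem_grasp_point(stem_points, tomato_centers, min_clearance=0.02):
    """Choose the stem point with the largest distance to its nearest tomato;
    reject it if even that clearance is below the gripper's required minimum."""
    d = np.linalg.norm(stem_points[:, None, :] - tomato_centers[None, :, :], axis=-1)
    nearest = d.min(axis=1)            # distance from each stem point to its closest tomato
    best = int(nearest.argmax())
    if nearest[best] < min_clearance:
        return None, None
    return stem_points[best], nearest[best]

# Made-up geometry (metres): a roughly straight stem with two tomatoes hanging off it.
stem = np.array([[0.00, 0.00], [0.02, 0.01], [0.04, 0.02], [0.06, 0.03]])
tomatoes = np.array([[0.00, -0.03], [0.06, 0.00]])
grasp_xy, clearance = pick_stem_grasp_point(stem, tomatoes)
```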
arXiv Detail & Related papers (2021-03-01T19:33:51Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
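Fine-tuning torchvision's Mask R-CNN for a single foreground class (the robot hand) follows the standard recipe of swapping the box and mask prediction heads; the snippet below shows that setup as an illustration, not the authors' exact training code.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + robot hand

# Start from a COCO-pretrained Mask R-CNN and replace the prediction heads.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

# The model can now be fine-tuned on (image, target) pairs with hand masks
# using the usual torchvision detection training loop.
```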
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds [62.013872787987054]
We propose a new method for learning closed-loop control policies for 6D grasping.
Our policy takes a segmented point cloud of an object from an egocentric camera as input, and outputs continuous 6D control actions of the robot gripper for grasping the object.
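A rough skeleton of the policy interface described here (network sizes and the action parameterization are assumptions, not the paper's architecture): a PointNet-style encoder pools per-point features from the segmented object cloud, and an actor head outputs a continuous 6D gripper action, three translation and three rotation components.

```python
import torch
import torch.nn as nn

class PointCloudGraspPolicy(nn.Module):
    """Sketch: PointNet-style max-pooled encoder plus an actor head emitting a
    6D action (dx, dy, dz, droll, dpitch, dyaw) for closed-loop grasping."""

    def __init__(self, point_dim=3, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(point_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.actor = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 6), nn.Tanh(),   # bounded continuous 6D gripper action
        )

    def forward(self, points):
        # points: (B, N, 3) segmented object point cloud in the camera frame
        feats = self.point_mlp(points)          # (B, N, feat_dim) per-point features
        global_feat = feats.max(dim=1).values   # permutation-invariant pooling
        return self.actor(global_feat)          # (B, 6) control action

policy = PointCloudGraspPolicy()
action = policy(torch.rand(1, 1024, 3))   # one 1024-point cloud -> one 6D action
```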
arXiv Detail & Related papers (2020-10-02T07:42:00Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)