Robotic Handling of Compliant Food Objects by Robust Learning from
Demonstration
- URL: http://arxiv.org/abs/2309.12856v1
- Date: Fri, 22 Sep 2023 13:30:26 GMT
- Title: Robotic Handling of Compliant Food Objects by Robust Learning from
Demonstration
- Authors: Ekrem Misimi, Alexander Olofsson, Aleksander Eilertsen, Elling Ruud
Øye, John Reidar Mathiassen
- Abstract summary: We propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of compliant food objects.
We present an LfD learning policy that automatically removes inconsistent demonstrations, and estimates the teacher's intended policy.
The proposed approach has a wide range of potential applications in the aforementioned industry sectors.
- Score: 79.76009817889397
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The robotic handling of compliant and deformable food raw
materials, characterized by high biological variation, complex 3D geometries,
and varied mechanical structures and textures, is in high demand in the
ocean-space, agricultural, and food industries. Many tasks in these industries
are performed manually by human operators who, owing to the laborious and
tedious nature of the work, execute them with high variability and
correspondingly variable outcomes. Introducing robotic automation for the most
complex of these processing tasks has been challenging given the limitations
of current robot learning policies; a more consistent learning policy that
draws on the skill of experienced operators is needed. In this paper, we
address the problem of robot learning when presented with inconsistent
demonstrations. To this end, we propose a robust learning policy based on
Learning from Demonstration (LfD) for robotic grasping of compliant food
objects. The approach fuses RGB-D images and tactile data to estimate the
gripper pose, finger configuration, and forces to exert on the object for
effective handling. During LfD training, the gripper pose, finger
configurations, per-finger tactile values, and RGB-D images are recorded. We
present an LfD learning policy that automatically removes inconsistent
demonstrations and estimates the teacher's intended policy. The performance of
our approach is validated and demonstrated for fragile and compliant food
objects with complex 3D shapes. The proposed approach has a wide range of
potential applications in the aforementioned industry sectors.
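The abstract does not specify the filtering mechanism, and no code accompanies
this summary. The sketch below is one plausible reading, assuming each
demonstration is summarized as a fixed-length feature vector (gripper pose,
finger configuration, tactile values) and that inconsistent demonstrations are
rejected with a robust outlier test before averaging the remainder into the
teacher's intended policy; the robust z-score over distances to the median
demonstration is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def filter_and_estimate(demos: np.ndarray, z_thresh: float = 2.5):
    """Drop inconsistent demonstrations and estimate the intended policy.

    demos: (N, D) array; each row summarizes one demonstration
           (e.g. gripper pose, finger configuration, tactile readings).
    Returns (kept_demos, estimate).
    """
    center = np.median(demos, axis=0)               # robust central demo
    dists = np.linalg.norm(demos - center, axis=1)  # deviation per demo
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-9
    z = 0.6745 * (dists - np.median(dists)) / mad   # robust (modified) z-score
    keep = z < z_thresh                             # discard outlier demos
    estimate = demos[keep].mean(axis=0)             # teacher's intended policy
    return demos[keep], estimate

# Toy usage: 9 consistent demonstrations plus one inconsistent outlier.
rng = np.random.default_rng(0)
demos = rng.normal(0.0, 0.05, size=(10, 6))
demos[-1] += 1.0                                    # inconsistent demonstration
kept, policy = filter_and_estimate(demos)
print(kept.shape[0], "demonstrations kept; estimate:", np.round(policy, 3))
```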
Related papers
- Grounding Robot Policies with Visuomotor Language Guidance [15.774237279917594]
We propose an agent-based framework for grounding robot policies to the current context.
The proposed framework is composed of a set of conversational agents designed for specific roles.
We demonstrate that our approach can effectively guide manipulation policies to achieve significantly higher success rates.
arXiv Detail & Related papers (2024-10-09T02:00:37Z)
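The summary above names role-specific conversational agents without detailing
their interfaces. A minimal, purely illustrative sketch follows, in which the
agent roles (`scene_describer`, `task_planner`) and the sequential hand-off are
hypothetical stand-ins for LLM calls:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical roles; the paper's actual agent set and prompts are not
# specified in the summary above.
@dataclass
class Agent:
    role: str
    respond: Callable[[str], str]  # stand-in for a language-model call

def ground_instruction(instruction: str, agents: list[Agent]) -> str:
    """Pass a task description through role-specific agents in sequence,
    each refining the context handed to the manipulation policy."""
    context = instruction
    for agent in agents:
        context = agent.respond(context)
    return context  # grounded context for the downstream policy

agents = [
    Agent("scene_describer", lambda c: c + " | scene: mug on table"),
    Agent("task_planner",    lambda c: c + " | plan: approach, grasp, lift"),
]
print(ground_instruction("pick up the mug", agents))
```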
- ManiFoundation Model for General-Purpose Robotic Manipulation of Contact Synthesis with Arbitrary Objects and Robots [24.035706461949715]
There is a pressing need to develop a model that enables general-purpose robots to undertake a broad spectrum of manipulation tasks.
Our work introduces a comprehensive framework to develop a foundation model for general robotic manipulation.
Our model achieves average success rates of around 90%.
arXiv Detail & Related papers (2024-05-11T09:18:37Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
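The RoboGen summary describes a propose-simulate-learn cycle. A toy sketch of
that loop follows, with the generative models and the skill learner stubbed
out (the task list and `learn_skill` are illustrative placeholders, not the
paper's components):

```python
import random

TASKS = ["open drawer", "fold towel", "push button"]

def propose_task() -> str:
    return random.choice(TASKS)            # stands in for a generative proposal

def generate_scene(task: str) -> dict:
    return {"task": task, "objects": ["table", task.split()[-1]]}

def learn_skill(scene: dict) -> float:
    return random.random()                 # stands in for RL/planning in sim

for _ in range(3):                         # propose -> simulate -> learn loop
    task = propose_task()
    scene = generate_scene(task)
    success = learn_skill(scene)
    print(f"{task}: simulated success {success:.2f}")
```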
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environment in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer [34.86946655775187]
Soft object manipulation tasks in domestic scenes pose a significant challenge for existing robotic skill learning techniques.
We propose a pre-trained soft object manipulation skill learning model, namely SoftGPT, that is trained using large amounts of exploration data.
For each downstream task, a goal-oriented policy agent is trained to predict the subsequent actions, and SoftGPT generates the consequences.
arXiv Detail & Related papers (2023-06-22T05:48:22Z)
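The SoftGPT entry describes a division of labor in which a goal-oriented policy
proposes actions and the pre-trained model predicts their consequences. A toy
rollout illustrating that pattern, with both components reduced to simple
numeric stand-ins (neither function is the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def policy(state: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Goal-oriented agent: propose an action moving the state toward the goal."""
    return 0.5 * (goal - state)

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in for the pre-trained dynamics model: predicts the consequence
    of an action on the (soft) object state, here with small model noise."""
    return state + action + rng.normal(0.0, 0.01, size=state.shape)

state = np.zeros(3)                        # e.g. a coarse soft-object descriptor
goal = np.array([1.0, -0.5, 0.2])
for step in range(10):                     # policy acts, model predicts
    action = policy(state, goal)
    state = world_model(state, action)
print("final state:", np.round(state, 3), "goal:", goal)
```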
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages the language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
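The entry above says segmentation masks from foundation models are used to
perceive object pose. One common way to realize that (an assumption here, not
necessarily the paper's method) is to select the masked points from the
depth-derived cloud and take their centroid plus principal axes as a coarse
pose estimate:

```python
import numpy as np

def pose_from_mask(points: np.ndarray, mask: np.ndarray):
    """Estimate a coarse object pose from points selected by a segmentation
    mask: centroid as position, principal axes (PCA) as orientation."""
    obj = points[mask]                       # keep only masked object points
    centroid = obj.mean(axis=0)
    cov = np.cov((obj - centroid).T)
    _, axes = np.linalg.eigh(cov)            # columns: principal directions
    return centroid, axes

rng = np.random.default_rng(2)
cloud = rng.normal(0.0, 0.3, size=(500, 3))          # scene points
cloud[:100] = cloud[:100] * [0.2, 0.05, 0.02] + 1.0  # elongated object
mask = np.zeros(500, dtype=bool)
mask[:100] = True                                    # mask from a segmenter
pos, rot = pose_from_mask(cloud, mask)
print("position:", np.round(pos, 2))
```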
- Learning to Regrasp by Learning to Place [19.13976401970985]
Regrasping is needed when a robot's current grasp pose fails to support the desired manipulation task.
We propose a system for robots to take partial point clouds of an object and the supporting environment as inputs and output a sequence of pick-and-place operations.
We show that our system achieves a 73.3% success rate when regrasping diverse objects.
arXiv Detail & Related papers (2021-09-18T03:07:06Z)
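The regrasp-by-placing idea can be made concrete in one dimension: if the
desired grasp orientation lies outside the gripper's reachable range, insert
intermediate placements that re-orient the object. The paper operates on
partial point clouds with learned placement stability; the toy below only
illustrates the sequencing logic:

```python
def plan_regrasp(current: float, desired: float, reach: float = 90.0) -> list:
    """Toy 1-D illustration of regrasping by placing: grasp angles are only
    reachable within +/- `reach` degrees of the object's resting orientation.
    If the desired grasp is out of reach, place the object at intermediate
    orientations first (one pick-and-place per re-orientation)."""
    ops, angle = [], current
    while abs(desired - angle) > reach:
        step = reach if desired > angle else -reach
        ops.append(("pick_at", angle, "place_at", angle + step))
        angle += step
    ops.append(("pick_at", angle, "grasp", desired))
    return ops

for op in plan_regrasp(current=0.0, desired=250.0):
    print(op)
```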
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate policy adaptation as a few-shot meta-learning problem whose goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
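The entry describes finding shared structure across platforms so a policy
adapts from a few shots. A simplified, non-Bayesian Reptile-style analogue on
a family of sine tasks (standing in for platforms) illustrates the
inner-adaptation and outer-initialization split; the paper's actual Bayesian
treatment is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each "platform" shares structure (a sine task) but differs in a latent
# phase; meta-training finds an initialization that adapts in few shots.
def sample_platform():
    phase = rng.uniform(0, np.pi)
    return lambda x: np.sin(x + phase)

def adapt(w, platform, shots=5, lr=0.1, steps=10):
    """Inner loop: few-shot gradient descent on model w0*sin(x + w1)."""
    x = rng.uniform(-3, 3, shots)
    y = platform(x)
    for _ in range(steps):
        pred = w[0] * np.sin(x + w[1])
        grad0 = np.mean(2 * (pred - y) * np.sin(x + w[1]))
        grad1 = np.mean(2 * (pred - y) * w[0] * np.cos(x + w[1]))
        w = np.array([w[0] - lr * grad0, w[1] - lr * grad1])
    return w

w_meta = np.array([1.0, 0.0])
for _ in range(100):                         # Reptile-style outer loop
    w_task = adapt(w_meta, sample_platform())
    w_meta += 0.1 * (w_task - w_meta)        # move init toward adapted weights
print("meta-learned init:", np.round(w_meta, 2))
```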
- Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera [58.720142291102135]
We use a simulator to learn the peg-hole insertion problem and then transfer the learned model to the real robot.
We show that the transferred policy, which only takes RGB-D and joint information (proprioception) can perform well on the real robot.
arXiv Detail & Related papers (2020-05-29T05:58:54Z)
- Towards Intelligent Pick and Place Assembly of Individualized Products Using Reinforcement Learning [0.0]
We aim to teach a collaborative robot to successfully perform pick and place tasks by implementing reinforcement learning.
For the assembly of an individualized product in a constantly changing manufacturing environment, the simulated geometric and dynamic parameters will be varied.
The robot will gain its input data from sensors, area scan cameras, and 3D cameras used to generate height maps of the environment and the objects.
arXiv Detail & Related papers (2020-02-11T15:32:28Z)
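The last entry's plan to vary simulated geometric and dynamic parameters is
standard domain randomization. A small sketch of per-episode parameter
sampling follows; the parameter names and ranges are illustrative, not taken
from the paper:

```python
import random

def randomized_episode_params() -> dict:
    """Sample the varied simulation parameters for one training episode."""
    return {
        "object_size_m":   random.uniform(0.02, 0.08),  # geometric variation
        "object_mass_kg":  random.uniform(0.05, 0.50),  # dynamic variation
        "friction":        random.uniform(0.3, 1.2),
        "camera_height_m": random.uniform(0.9, 1.1),    # sensor placement
    }

for episode in range(3):
    params = randomized_episode_params()
    print(f"episode {episode}:", params)
    # train_step(env.reset(**params))  # hypothetical RL update per episode
```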