Robotic Learning the Sequence of Packing Irregular Objects from Human
Demonstrations
- URL: http://arxiv.org/abs/2210.01645v2
- Date: Wed, 8 Nov 2023 18:11:05 GMT
- Title: Robotic Learning the Sequence of Packing Irregular Objects from Human
Demonstrations
- Authors: André Santos, Nuno Ferreira Duarte, Atabak Dehban, José Santos-Victor
- Abstract summary: We tackle the challenge of robotic bin packing with irregular objects, such as groceries.
Our approach is to learn directly from expert demonstrations in order to extract implicit task knowledge.
We rely on human demonstrations to learn a Markov chain for predicting the object packing sequence.
- Score: 3.58439716487063
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We tackle the challenge of robotic bin packing with irregular objects, such
as groceries. Given the diverse physical attributes of these objects and the
complex constraints governing their placement and manipulation, employing
preprogrammed strategies becomes infeasible. Our approach is to learn directly
from expert demonstrations in order to extract implicit task knowledge and
strategies to ensure safe object positioning, efficient use of space, and the
generation of human-like behaviors that enhance human-robot trust.
We rely on human demonstrations to learn a Markov chain for predicting the
object packing sequence for a given set of items and then compare it with human
performance. Our experimental results show that the model outperforms human
performance by generating sequence predictions that humans classify as
human-like more frequently than human-generated sequences.
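The core predictor described in the abstract is a Markov chain over the packing order, learned from human demonstrations. Below is a minimal sketch of that idea, assuming a first-order chain over object labels and a greedy decoding step; the object names, function names, and decoding strategy are illustrative assumptions, not the authors' implementation (see the BoxED repository for their code).

```python
from collections import defaultdict

def fit_markov_chain(demo_sequences):
    """Estimate first-order transition probabilities P(next | current)
    from demonstrated packing sequences (lists of object labels).
    A special "START" token models which object is packed first."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in demo_sequences:
        prev = "START"
        for obj in seq:
            counts[prev][obj] += 1
            prev = obj
    return {
        prev: {obj: c / sum(nxt.values()) for obj, c in nxt.items()}
        for prev, nxt in counts.items()
    }

def predict_sequence(chain, items):
    """Greedily order `items`, always picking the remaining item with
    the highest transition probability from the last placed one."""
    remaining, order, prev = set(items), [], "START"
    while remaining:
        probs = chain.get(prev, {})
        nxt = max(remaining, key=lambda o: probs.get(o, 0.0))
        order.append(nxt)
        remaining.remove(nxt)
        prev = nxt
    return order

# Hypothetical demonstrations: each list is one packed box.
demos = [["detergent", "cereal", "bread", "eggs"],
         ["detergent", "bread", "cereal", "eggs"]]
chain = fit_markov_chain(demos)
print(predict_sequence(chain, ["eggs", "cereal", "detergent", "bread"]))
```

In this toy example the chain learns that heavy, rigid items (detergent) are placed before fragile ones (eggs), which is the kind of implicit ordering knowledge the paper extracts from demonstrations.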
The human demonstrations were collected using our proposed VR platform,
BoxED, which is a box packaging environment for simulating real-world objects
and scenarios for fast and streamlined data collection with the purpose of
teaching robots. We collected data from 43 participants packing a total of 263
boxes with supermarket-like objects, yielding 4644 object manipulations. Our VR
platform can be easily adapted to new scenarios and objects, and is publicly
available, alongside our dataset, at https://github.com/andrejfsantos4/BoxED.
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations [55.549956643032836]
MimicGen is a system for automatically synthesizing large-scale, rich datasets from only a small number of human demonstrations.
We show that robot agents can be effectively trained on this generated dataset by imitation learning to achieve strong performance in long-horizon and high-precision tasks.
arXiv Detail & Related papers (2023-10-26T17:17:31Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We learn our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a minimal sketch of this reward appears after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations [51.87067543670535]
We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses.
We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states.
The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world.
arXiv Detail & Related papers (2022-09-28T17:51:49Z)
- Human-like Planning for Reaching in Cluttered Environments [11.55532557594561]
Humans are remarkably adept at reaching for objects in cluttered environments.
We identify high-level manipulation plans in humans, and transfer these skills to robot planners.
We found that the human-like planner outperformed a state-of-the-art standard trajectory optimisation algorithm.
arXiv Detail & Related papers (2020-02-28T14:28:50Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
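As promised in the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, here is a minimal sketch of a goal-distance reward in an embedding space. The encoder below is a stub; the referenced paper trains it on unlabeled human videos with a time-contrastive objective, and all names here are illustrative assumptions.

```python
import numpy as np

def embedding(obs):
    """Placeholder for a learned encoder phi(obs). In the referenced
    paper this is trained with a time-contrastive objective on human
    videos; here it is an identity stub for illustration."""
    return np.asarray(obs, dtype=float)

def goal_distance_reward(obs, goal_obs):
    """Reward is the negative distance to the goal in embedding space:
    r(o) = -||phi(o) - phi(g)||."""
    return -np.linalg.norm(embedding(obs) - embedding(goal_obs))

# Toy usage: reward increases as the observation approaches the goal.
print(goal_distance_reward([0.0, 0.0], [1.0, 1.0]))  # ~ -1.41
print(goal_distance_reward([0.9, 0.9], [1.0, 1.0]))  # ~ -0.14
```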