Benchmarking Robot Manipulation with the Rubik's Cube
- URL: http://arxiv.org/abs/2202.07074v1
- Date: Mon, 14 Feb 2022 22:34:18 GMT
- Title: Benchmarking Robot Manipulation with the Rubik's Cube
- Authors: Boling Yang, Patrick E. Lancaster, Siddhartha S. Srinivasa, Joshua R.
Smith
- Abstract summary: We propose Rubik's cube manipulation as a benchmark to measure simultaneous performance of precise manipulation and sequential manipulation.
We present a protocol for quantitatively measuring both the accuracy and speed of Rubik's cube manipulation.
We demonstrate this protocol for two distinct baseline approaches on a PR2 robot.
- Score: 15.922643222904172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benchmarks for robot manipulation are crucial to measuring progress in the
field, yet there are few benchmarks that demonstrate critical manipulation
skills, possess standardized metrics, and can be attempted by a wide array of
robot platforms. To address a lack of such benchmarks, we propose Rubik's cube
manipulation as a benchmark to measure simultaneous performance of precise
manipulation and sequential manipulation. The sub-structure of the Rubik's cube
demands precise positioning of the robot's end effectors, while its highly
reconfigurable nature enables tasks that require the robot to manage pose
uncertainty throughout long sequences of actions. We present a protocol for
quantitatively measuring both the accuracy and speed of Rubik's cube
manipulation. This protocol can be attempted by any general-purpose
manipulator, and only requires a standard 3x3 Rubik's cube and a flat surface
upon which the Rubik's cube initially rests (e.g. a table). We demonstrate this
protocol for two distinct baseline approaches on a PR2 robot. The first
baseline provides a fundamental approach for pose-based Rubik's cube
manipulation. The second baseline demonstrates the benchmark's ability to
quantify improved performance by the system, particularly that resulting from
the integration of pre-touch sensing. To demonstrate the benchmark's
applicability to other robot platforms and algorithmic approaches, we present
the functional blocks required to enable the HERB robot to manipulate the
Rubik's cube via push-grasping.
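The protocol scores both accuracy and speed of cube manipulation. As an illustration only (the names `TrialResult` and `score_trial` are hypothetical, not the paper's official scoring code), a minimal sketch of such a metric might compare the requested move sequence against what the robot actually executed:

```python
from dataclasses import dataclass

@dataclass
class TrialResult:
    requested_moves: list[str]   # e.g. ["R", "U'", "F2"] in standard cube notation
    executed_moves: list[str]    # moves the robot actually performed, in order
    elapsed_s: float             # wall-clock time for the whole sequence

def score_trial(t: TrialResult) -> dict:
    """Accuracy = fraction of requested moves executed correctly in order;
    speed = correctly executed moves per minute."""
    correct = sum(1 for a, b in zip(t.requested_moves, t.executed_moves) if a == b)
    accuracy = correct / len(t.requested_moves) if t.requested_moves else 0.0
    speed = 60.0 * correct / t.elapsed_s if t.elapsed_s > 0 else 0.0
    return {"accuracy": accuracy, "moves_per_min": speed}

print(score_trial(TrialResult(["R", "U'", "F2", "L"], ["R", "U'", "F", "L"], 120.0)))
# → {'accuracy': 0.75, 'moves_per_min': 1.5}
```

A real implementation would also need to verify the resulting cube state (e.g. with a vision system), since a mis-executed move can leave the cube in an unintended configuration.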
Related papers
- Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction [51.49400490437258]
This work develops a method for imitating articulated object manipulation from a single monocular RGB human demonstration.
We first propose 4D Differentiable Part Models (4D-DPM), a method for recovering 3D part motion from a monocular video.
Given this 4D reconstruction, the robot replicates object trajectories by planning bimanual arm motions that induce the demonstrated object part motion.
We evaluate 4D-DPM's 3D tracking accuracy on ground truth annotated 3D part trajectories and RSRD's physical execution performance on 9 objects across 10 trials each on a bimanual YuMi robot.
arXiv Detail & Related papers (2024-09-26T17:57:16Z) - Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by lifting both the human hand and the manipulated object into a shared 3D space and retargeting human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
arXiv Detail & Related papers (2024-09-12T17:59:07Z) - Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful approach for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z) - Silver-Bullet-3D at ManiSkill 2021: Learning-from-Demonstrations and
Heuristic Rule-based Methods for Object Manipulation [118.27432851053335]
This paper presents an overview and comparative analysis of our systems designed for two tracks of the SAPIEN ManiSkill Challenge 2021, including the No Interaction track.
The No Interaction track targets learning policies from pre-collected demonstration trajectories.
In this track, we design a Heuristic Rule-based Method (HRM) to trigger high-quality object manipulation by decomposing the task into a series of sub-tasks.
For each sub-task, simple rule-based control strategies are adopted to predict actions that can be applied to the robotic arms.
arXiv Detail & Related papers (2022-06-13T16:20:42Z) - Memory-based gaze prediction in deep imitation learning for robot
manipulation [2.857551605623957]
The proposed algorithm uses a Transformer-based self-attention architecture for the gaze estimation based on sequential data to implement memory.
The proposed method was evaluated with a real robot multi-object manipulation task that requires memory of the previous states.
arXiv Detail & Related papers (2022-02-10T07:30:08Z) - CubeTR: Learning to Solve The Rubiks Cube Using Transformers [0.0]
The Rubik's cube has a single solved state among quintillions of possible configurations, which leads to extremely sparse rewards.
The proposed model CubeTR attends to longer sequences of actions and addresses the problem of sparse rewards.
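The "quintillions of configurations" figure can be checked directly: the standard count of reachable 3x3 cube states is 8!·3⁷·12!·2¹⁰ ≈ 4.3×10¹⁹, and a solved-state-only reward is nonzero in exactly one of them. A toy sketch (not code from the paper) makes the sparsity concrete:

```python
from math import factorial

# Reachable configurations of a standard 3x3 Rubik's cube:
# 8 corners (positions x orientations) and 12 edges (positions x orientations),
# with parity constraints fixing the last corner twist, edge flip, and permutation parity.
N_CONFIGS = factorial(8) * 3**7 * factorial(12) * 2**10

def sparse_reward(state: str, solved_state: str) -> float:
    """Reward is 1 only in the single solved configuration, 0 everywhere else."""
    return 1.0 if state == solved_state else 0.0

print(N_CONFIGS)  # 43252003274489856000 (~43 quintillion)
```

With a reward this sparse, random exploration essentially never sees a positive signal, which is why methods like CubeTR need mechanisms beyond plain reward maximization.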
arXiv Detail & Related papers (2021-11-11T03:17:28Z) - Real Robot Challenge using Deep Reinforcement Learning [6.332038240397164]
This paper details our winning submission to Phase 1 of the 2021 Real Robot Challenge.
In this challenge, a three-fingered robot must carry a cube along specified goal trajectories.
We use a pure reinforcement learning approach which requires minimal expert knowledge of the robotic system.
arXiv Detail & Related papers (2021-09-30T16:12:17Z) - Learning Language-Conditioned Robot Behavior from Offline Data and
Crowd-Sourced Annotation [80.29069988090912]
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction.
We propose to leverage offline robot datasets with crowd-sourced natural language labels.
We find that our approach outperforms both goal-image specifications and language conditioned imitation techniques by more than 25%.
arXiv Detail & Related papers (2021-09-02T17:42:13Z) - In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning [8.365690203298966]
We report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning.
A manual description of appropriate robot motions corresponding to all object states is difficult to prepare in advance.
Instead, we constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained on data gathered from its sensorimotor experience.
arXiv Detail & Related papers (2021-03-17T02:11:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.