Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds
- URL: http://arxiv.org/abs/2010.00824v4
- Date: Thu, 1 Jul 2021 00:59:01 GMT
- Title: Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds
- Authors: Lirui Wang, Yu Xiang, Wei Yang, Arsalan Mousavian, Dieter Fox
- Abstract summary: We propose a new method for learning closed-loop control policies for 6D grasping.
Our policy takes a segmented point cloud of an object from an egocentric camera as input, and outputs continuous 6D control actions of the robot gripper for grasping the object.
- Score: 62.013872787987054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 6D robotic grasping beyond top-down bin-picking scenarios is a challenging
task. Previous solutions based on 6D grasp synthesis with robot motion planning
usually operate in an open-loop setting and are therefore sensitive to grasp synthesis
errors. In this work, we propose a new method for learning closed-loop control
policies for 6D grasping. Our policy takes a segmented point cloud of an object
from an egocentric camera as input, and outputs continuous 6D control actions
of the robot gripper for grasping the object. We combine imitation learning and
reinforcement learning and introduce a goal-auxiliary actor-critic algorithm
for policy learning. We demonstrate that our learned policy can be integrated
into a tabletop 6D grasping system and a human-robot handover system to improve
the grasping performance of unseen objects. Our videos and code can be found at
https://sites.google.com/view/gaddpg .
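The abstract names the ingredients but not the wiring. Below is a minimal PyTorch sketch of one plausible reading of the goal-auxiliary actor-critic: a shared point-cloud encoder feeds an actor that outputs a continuous 6D gripper action, a critic, a behavior-cloning term against expert grasps, and an auxiliary head that regresses the grasp goal. All layer sizes, loss weights, and names here are illustrative assumptions, not the released GA-DDPG code.
```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by a max-pool."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):                      # points: (B, N, 3)
        return self.mlp(points).max(dim=1).values   # global feature: (B, feat_dim)

class GoalAuxActorCritic(nn.Module):
    def __init__(self, feat_dim=256, action_dim=6):
        super().__init__()
        self.encoder = PointEncoder(feat_dim)
        self.actor = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                   nn.Linear(256, action_dim), nn.Tanh())
        self.critic = nn.Sequential(nn.Linear(feat_dim + action_dim, 256),
                                    nn.ReLU(), nn.Linear(256, 1))
        # Auxiliary head: regress the grasp goal (a 6D pose here, an assumption).
        self.goal_head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                       nn.Linear(256, 6))

    def forward(self, points):
        feat = self.encoder(points)
        return self.actor(feat), self.goal_head(feat), feat

def critic_loss(model, points, buffer_action, q_target):
    """TD regression on replayed actions (q_target from a target network)."""
    feat = model.encoder(points)
    q = model.critic(torch.cat([feat, buffer_action], dim=-1))
    return (q - q_target).pow(2).mean()

def actor_loss(model, points, expert_action, goal_pose, bc_w=1.0, aux_w=0.1):
    """DDPG-style policy gradient + behavior cloning + goal-auxiliary loss."""
    action, goal_pred, feat = model(points)
    # In practice the critic weights are frozen during this update.
    pg = -model.critic(torch.cat([feat, action], dim=-1)).mean()
    bc = (action - expert_action).pow(2).mean()     # imitation term
    aux = (goal_pred - goal_pose).pow(2).mean()     # goal-prediction auxiliary
    return pg + bc_w * bc + aux_w * aux

# Usage with dummy tensors:
# m = GoalAuxActorCritic(); pts = torch.randn(4, 1024, 3)
# l = actor_loss(m, pts, torch.zeros(4, 6), torch.zeros(4, 6))
```
The auxiliary goal loss shapes the shared features toward grasp-relevant geometry, which is the role the title's "goal-auxiliary" term suggests.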
Related papers
- Hand-Object Interaction Pretraining from Videos [77.92637809322231]
We learn general robot manipulation priors from 3D hand-object interaction trajectories.
We do so by mapping both the human hand and the manipulated object into a shared 3D space and retargeting human motions to robot actions.
We empirically demonstrate that finetuning this policy, with both reinforcement learning (RL) and behavior cloning (BC), enables sample-efficient adaptation to downstream tasks and simultaneously improves robustness and generalizability compared to prior approaches.
arXiv Detail & Related papers (2024-09-12T17:59:07Z)
- Vision-based Manipulation from Single Human Video with Open-World Object Graphs [58.23098483464538]
We present an object-centric approach to empower robots to learn vision-based manipulation skills from human videos.
We introduce ORION, an algorithm that tackles the problem by extracting an object-centric manipulation plan from a single RGB-D video.
arXiv Detail & Related papers (2024-05-30T17:56:54Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
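The Track2Act entry above reduces control to two pieces: predicted point tracks and a residual correction. A hedged sketch of the geometric half, assuming 3D tracks (the paper predicts image-space tracks, so depth and camera intrinsics would be needed in practice); rigid_from_tracks, base_action, and residual_policy are illustrative names, not the paper's API:
```python
import numpy as np

def rigid_from_tracks(p_now, p_next):
    """Least-squares (Kabsch) fit: R @ p + t maps p_now to p_next; (N, 3) arrays."""
    c0, c1 = p_now.mean(axis=0), p_next.mean(axis=0)
    H = (p_now - c0).T @ (p_next - c1)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c1 - R @ c0

def base_action(p_now, p_next):
    """6D action from the fitted transform: translation + axis-angle rotation."""
    R, t = rigid_from_tracks(p_now, p_next)
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis /= (np.linalg.norm(axis) + 1e-8)           # degenerate near angle = 0
    return np.concatenate([t, angle * axis])        # shape (6,)

# The executed command adds a learned correction on top of the base motion:
#   action = base_action(p_now, p_next) + residual_policy(obs)   # hypothetical
```
The residual term lets a small learned policy absorb errors that the purely geometric base action cannot, which is what makes the combination generalizable.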
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion process to combine generative pre-training on human videos with policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- HACMan: Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation [29.01984677695523]
We introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects.
We evaluate HACMan on a 6D object pose alignment task in both simulation and in the real world.
Compared to alternative action representations, HACMan achieves a success rate more than three times higher than the best baseline.
arXiv Detail & Related papers (2023-05-06T05:55:27Z)
- Decoupling Skill Learning from Robotic Control for Generalizable Object Manipulation [35.34044822433743]
Recent works in robotic manipulation have shown potential for tackling a range of tasks, but these techniques generalize poorly to unseen objects.
We conjecture that this is due to the high-dimensional action space for joint control.
In this paper, we take an alternative approach and separate the task of learning 'what to do' from 'how to do it'.
The whole-body robotic kinematic control is optimized to execute the high-dimensional joint motion required to reach the goals in the workspace (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-07T16:31:13Z)
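The decoupling entry above splits "what to do" (a learned skill proposing workspace goals) from "how to do it" (generic kinematic control). A minimal sketch of the control half, assuming a damped-least-squares differential-IK step; the Jacobian shape, gain, and helper names are stand-in assumptions, not the paper's whole-body controller:
```python
import numpy as np

def dls_ik_step(J, twist, damping=1e-2):
    """Damped least-squares differential IK: dq = J^T (J J^T + d^2 I)^-1 twist."""
    JJt = J @ J.T + (damping ** 2) * np.eye(6)
    return J.T @ np.linalg.solve(JJt, twist)

# High level picks the goal; low level tracks it, e.g. per control tick:
#   goal = skill_policy(observation)            # hypothetical learned skill
#   twist = pose_error(ee_pose, goal)           # 6D error: translation + rotation
#   dq = dls_ik_step(robot_jacobian(), twist)   # joint-velocity command

# Standalone check with a random Jacobian standing in for a 7-DoF arm:
J = np.random.randn(6, 7)
dq = dls_ik_step(J, np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.05]))
print(dq.shape)                                  # (7,)
```
Because the learned skill only proposes low-dimensional workspace goals, it can transfer across objects while the controller handles the high-dimensional joint motion.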
- Review on 6D Object Pose Estimation with the focus on Indoor Scene Understanding [0.0]
The 6D object pose estimation problem has been extensively studied in the fields of Computer Vision and Robotics.
As part of our discussion, we will focus on how 6D object pose estimation can be used for understanding 3D scenes.
arXiv Detail & Related papers (2022-12-04T20:45:46Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that the resulting method, DCIL-II, can solve challenging simulated tasks such as humanoid locomotion and stand-up with unprecedented sample efficiency.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Learning to Fold Real Garments with One Arm: A Case Study in Cloud-Based Robotics Research [21.200764836237497]
We present the first systematic benchmarking of fabric manipulation algorithms on physical hardware.
We develop 4 novel learning-based algorithms that model expert actions, keypoints, reward functions, and dynamic motions.
The entire lifecycle of data collection, model training, and policy evaluation is performed remotely without physical access to the robot workcell.
arXiv Detail & Related papers (2022-04-21T17:31:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.