COCOI: Contact-aware Online Context Inference for Generalizable
Non-planar Pushing
- URL: http://arxiv.org/abs/2011.11270v1
- Date: Mon, 23 Nov 2020 08:20:21 GMT
- Title: COCOI: Contact-aware Online Context Inference for Generalizable
Non-planar Pushing
- Authors: Zhuo Xu, Wenhao Yu, Alexander Herzog, Wenlong Lu, Chuyuan Fu,
Masayoshi Tomizuka, Yunfei Bai, C. Karen Liu, Daniel Ho
- Abstract summary: General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
- Score: 87.7257446869134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: General contact-rich manipulation problems are long-standing challenges in
robotics due to the difficulty of understanding complicated contact physics.
Deep reinforcement learning (RL) has shown great potential in solving robot
manipulation tasks. However, existing RL policies have limited adaptability to
environments with diverse dynamics properties, which is pivotal in solving many
contact-rich manipulation tasks. In this work, we propose Contact-aware Online
COntext Inference (COCOI), a deep RL method that encodes a context embedding of
dynamics properties online using contact-rich interactions. We study this
method based on a novel and challenging non-planar pushing task, where the
robot uses a monocular camera image and wrist force-torque sensor readings to
push an object to a goal location while keeping it upright. We run extensive
experiments to demonstrate the capability of COCOI in a wide range of settings
and dynamics properties in simulation, and also in a sim-to-real transfer
scenario on a real robot (Video: https://youtu.be/nrmJYksh1Kc).
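The core mechanism, inferring a dynamics context embedding online from contact-rich interactions and conditioning the pushing policy on it, can be illustrated with a short sketch. This is a minimal illustration under assumed interfaces (PyTorch-style modules, a six-axis wrist force-torque history, precomputed image features); the class names, dimensions, and window-averaging choice are illustrative assumptions, not details from the paper.

    # Minimal sketch of COCOI-style online context inference (illustrative only).
    import torch
    import torch.nn as nn

    class ContextEncoder(nn.Module):
        """Encodes a short history of contact interactions (wrist force-torque
        readings paired with past actions) into a dynamics context vector."""
        def __init__(self, ft_dim=6, act_dim=2, ctx_dim=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(ft_dim + act_dim, 64), nn.ReLU(),
                nn.Linear(64, ctx_dim),
            )

        def forward(self, ft_history, act_history):
            # ft_history: (batch, T, 6); act_history: (batch, T, act_dim)
            x = torch.cat([ft_history, act_history], dim=-1)
            # Average over the interaction window: one simple way to make the
            # embedding insensitive to history length.
            return self.mlp(x).mean(dim=1)

    class PushingPolicy(nn.Module):
        """Conditions the action on image features plus the inferred context."""
        def __init__(self, img_feat_dim=128, ctx_dim=16, act_dim=2):
            super().__init__()
            self.encoder = ContextEncoder(act_dim=act_dim, ctx_dim=ctx_dim)
            self.head = nn.Sequential(
                nn.Linear(img_feat_dim + ctx_dim, 64), nn.ReLU(),
                nn.Linear(64, act_dim), nn.Tanh(),
            )

        def forward(self, img_feat, ft_history, act_history):
            ctx = self.encoder(ft_history, act_history)
            return self.head(torch.cat([img_feat, ctx], dim=-1))

The key property is that the context vector is recomputed online as new force-torque readings arrive, so the policy can adapt to objects with unseen dynamics properties without retraining.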
Related papers
- Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System [5.497832119577795]
Performing dexterous, contact-rich manipulation tasks with rigid robots is a significant challenge in robotics.
Compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors.
Learning from Demonstrations offers an intuitive alternative, allowing robots to learn manipulations through observed actions.
arXiv Detail & Related papers (2024-06-21T09:03:37Z)
- Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such nonprehensile manipulation controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies (a sketch of such a categorical action head appears after this list).
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
- Zero-Shot Transfer of Haptics-Based Object Insertion Policies [11.711534127073492]
Humans naturally exploit haptic feedback during contact-rich tasks like loading a dishwasher or stocking a bookshelf.
Current robotic systems focus on avoiding unexpected contact, often relying on strategically placed environment sensors.
We train a contact-exploiting manipulation policy in simulation for the contact-rich household task of loading plates into a slotted holder.
arXiv Detail & Related papers (2023-01-29T23:57:43Z)
- Accelerating Interactive Human-like Manipulation Learning with GPU-based Simulation and High-quality Demonstrations [25.393382192511716]
We present an immersive virtual reality teleoperation interface designed for interactive human-like manipulation on contact-rich tasks.
We demonstrate the complementary strengths of massively parallel RL and imitation learning, yielding robust and natural behaviors.
arXiv Detail & Related papers (2022-12-05T09:37:27Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- A Contact-Safe Reinforcement Learning Framework for Contact-Rich Robot Manipulation [5.0768619194124005]
We propose a contact-safe reinforcement learning framework for contact-rich robot manipulation.
When the RL policy causes an unexpected collision between the robot arm and the environment, our framework immediately detects the collision and keeps the contact force small.
Our method keeps the contact force small in both task space and joint space, even when the policy encounters unseen scenarios with unexpected collisions (a minimal force-monitoring sketch appears after this list).
arXiv Detail & Related papers (2022-07-27T10:35:44Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors (a basic OSC control-law sketch appears after this list).
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- Sim2Real for Peg-Hole Insertion with Eye-in-Hand Camera [58.720142291102135]
We use a simulator to learn the peg-hole insertion problem and then transfer the learned model to the real robot.
We show that the transferred policy, which only takes RGB-D and joint information (proprioception), can perform well on the real robot.
arXiv Detail & Related papers (2020-05-29T05:58:54Z)
- Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks [70.56451186797436]
We study how to use meta-reinforcement learning to solve the bulk of the problem in simulation.
We demonstrate our approach by training an agent to successfully perform challenging real-world insertion tasks.
arXiv Detail & Related papers (2020-04-29T18:00:22Z)
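For the multimodal categorical exploration paper above, here is a minimal sketch of a categorical action head for planar pushing. Only the general technique follows the abstract: each continuous action dimension is discretized into bins and sampled from a per-dimension categorical distribution, which, unlike a unimodal Gaussian, can represent multimodal action choices. The class name, bin count, and dimensions are invented for illustration.

    # Hypothetical categorical action head; names and sizes are assumptions.
    import torch
    import torch.nn as nn

    class CategoricalHead(nn.Module):
        def __init__(self, feat_dim=64, act_dim=2, n_bins=11, low=-1.0, high=1.0):
            super().__init__()
            self.logits = nn.Linear(feat_dim, act_dim * n_bins)
            self.act_dim, self.n_bins = act_dim, n_bins
            self.register_buffer("bins", torch.linspace(low, high, n_bins))

        def forward(self, feat):
            # One categorical distribution per action dimension.
            logits = self.logits(feat).view(-1, self.act_dim, self.n_bins)
            dist = torch.distributions.Categorical(logits=logits)
            idx = dist.sample()      # (batch, act_dim) bin indices
            action = self.bins[idx]  # map indices back to continuous values
            return action, dist.log_prob(idx).sum(-1)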
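For the contact-safe reinforcement learning framework above, a minimal sketch of the monitoring step its summary describes: compare the measured external wrench against the expected one, flag an unexpected collision, and attenuate the command so the contact force stays small. The threshold, function name, and scaling rule are illustrative assumptions, not the paper's method.

    # Hypothetical contact-force monitor; threshold and names are assumptions.
    import numpy as np

    FORCE_LIMIT_N = 15.0  # assumed contact-force threshold in newtons

    def safe_command(policy_action, measured_wrench, expected_wrench):
        # The first three wrench components are forces; a large residual between
        # measured and expected force indicates an unexpected collision.
        residual = np.linalg.norm(measured_wrench[:3] - expected_wrench[:3])
        if residual > FORCE_LIMIT_N:
            scale = FORCE_LIMIT_N / residual  # shrink the command to stay compliant
            return scale * np.asarray(policy_action), True
        return np.asarray(policy_action), False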
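For OSCAR, the classical operational space control law it builds on can be sketched as below; OSCAR's actual contribution, a learned compensation for modeling errors in the dynamics, is not shown, and the gains and argument shapes are assumptions.

    # Classical OSC: task-space PD force mapped to joint torques.
    # tau = J^T Lambda (kp * x_err - kd * xdot), Lambda = (J M^-1 J^T)^-1.
    # Gravity and Coriolis compensation are omitted for brevity.
    import numpy as np

    def osc_torques(x_err, xdot, J, M, kp=100.0, kd=20.0):
        M_inv = np.linalg.inv(M)              # inverse joint-space mass matrix
        lam = np.linalg.inv(J @ M_inv @ J.T)  # task-space inertia
        f = lam @ (kp * x_err - kd * xdot)    # desired task-space force
        return J.T @ f                        # joint torques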
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.