Learning Force Control for Contact-rich Manipulation Tasks with Rigid Position-controlled Robots
- URL: http://arxiv.org/abs/2003.00628v3
- Date: Mon, 20 Jul 2020 02:39:28 GMT
- Title: Learning Force Control for Contact-rich Manipulation Tasks with Rigid Position-controlled Robots
- Authors: Cristian Camilo Beltran-Hernandez, Damien Petit, Ixchel G.
Ramirez-Alpizar, Takayuki Nishi, Shinichi Kikuchi, Takamitsu Matsubara,
Kensuke Harada
- Abstract summary: We propose a learning-based force control framework combining RL techniques with traditional force control.
Within said control scheme, we implemented two different conventional approaches to achieve force control with position-controlled robots.
Finally, we developed a fail-safe mechanism for safely training an RL agent on manipulation tasks using a real rigid robot manipulator.
- Score: 9.815369993136512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement Learning (RL) methods have been proven successful in solving
manipulation tasks autonomously. However, RL is still not widely adopted on
real robotic systems because working with real hardware entails additional
challenges, especially when using rigid position-controlled manipulators. These
challenges include the need for a robust controller that avoids undesired
behaviors which risk damaging the robot and its environment, and the need for
constant supervision by a human operator. The main contributions of this work
are as follows. First, we propose a learning-based force control framework
combining RL techniques with traditional force control. Within this control
scheme, we implement two conventional approaches to achieving force control
with position-controlled robots: one is a modified parallel position/force
control, and the other is admittance control. Second, we empirically study both
control schemes when used as the action space of the RL agent. Third, we
develop a fail-safe mechanism for safely training an RL agent on manipulation
tasks using a real rigid robot manipulator. The proposed methods are validated
in simulation and on a real robot, a UR3 e-series robotic arm.
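For intuition about the two action-space schemes, below is a minimal sketch of the admittance-control variant: a discrete-time admittance law maps wrist force/torque readings into position offsets for a position-controlled arm, wrapped in a simple force-limit fail-safe in the spirit of the paper's safety mechanism. The class names, gains, and the 30 N threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class AdmittanceController:
    """Discrete-time admittance control for a position-controlled robot.

    Integrates M*x_ddot + D*x_dot + K*x = f_err per Cartesian axis,
    where f_err is the gap between measured and desired contact force,
    and returns a position offset added to the nominal commanded pose.
    """

    def __init__(self, m=1.0, d=40.0, k=100.0, dt=0.002, dof=3):
        self.m, self.d, self.k, self.dt = m, d, k, dt
        self.x = np.zeros(dof)      # compliant position offset
        self.x_dot = np.zeros(dof)  # offset velocity

    def step(self, f_measured, f_desired):
        f_err = f_measured - f_desired
        x_ddot = (f_err - self.d * self.x_dot - self.k * self.x) / self.m
        self.x_dot += x_ddot * self.dt
        self.x += self.x_dot * self.dt
        return self.x               # add to the nominal position command

def safe_command(nominal_pos, f_measured, controller,
                 f_desired=np.zeros(3), force_limit=30.0):
    """Fail-safe wrapper: abort the motion if contact forces spike,
    emulating the kind of guard needed to train RL on a rigid arm."""
    if np.linalg.norm(f_measured) > force_limit:
        raise RuntimeError("Force limit exceeded: stopping robot")
    return nominal_pos + controller.step(f_measured, f_desired)
```

An RL agent acting through such a scheme would pick the force setpoint (and possibly the gains) as its action, while the inner loop keeps the rigid arm compliant.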
Related papers
- Online Behavior Modification for Expressive User Control of RL-Trained Robots [1.6078134198754157]
Online behavior modification is a paradigm in which users have control over behavior features of a robot in real time as it autonomously completes a task using an RL-trained policy.
We present a behavior-diversity-based algorithm, Adjustable Control Of RL Dynamics (ACORD), and demonstrate its applicability to online behavior modification in simulation and in a user study.
arXiv Detail & Related papers (2024-08-15T12:28:08Z)
- Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System [5.497832119577795]
Performing dexterous, contact-rich manipulation tasks with rigid robots is a significant challenge in robotics.
Compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors.
Learning from Demonstrations offers an intuitive alternative, allowing robots to learn manipulations through observed actions.
arXiv Detail & Related papers (2024-06-21T09:03:37Z)
- Learning Force Control for Legged Manipulation [18.894304288225385]
We propose a method for training RL policies for direct force control without requiring access to force sensing.
We showcase our method on a whole-body control platform of a quadruped robot with an arm.
We provide the first deployment of learned whole-body force control in legged manipulators, paving the way for more versatile and adaptable legged robots.
arXiv Detail & Related papers (2024-05-02T15:53:43Z)
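As a loose illustration of realizing end-effector forces without a force sensor, the classical Jacobian-transpose map from a desired wrench to joint torques is sketched below on an assumed planar two-link arm; this is background intuition, not the paper's learned controller.

```python
import numpy as np

def jacobian_2link(q, l1=0.3, l2=0.25):
    """Geometric Jacobian of a planar 2-link arm (illustrative model)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def torques_for_wrench(q, f_desired):
    """Joint torques realizing a desired end-effector force: tau = J^T f.

    No force sensor is required; the contact force is imposed open-loop
    through the kinematics (gravity and friction are ignored here).
    """
    return jacobian_2link(q).T @ f_desired

# Example: push with 5 N along x at a given joint configuration.
tau = torques_for_wrench(np.array([0.4, 0.8]), np.array([5.0, 0.0]))
```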
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning [66.10854214036605]
A central question in robotics is how to design a control system for an agile mobile robot.
We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting.
Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour.
arXiv Detail & Related papers (2023-10-17T02:40:27Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
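To make the contrastive-RL idea concrete, the sketch below scores (state, action) embeddings against goal embeddings with an InfoNCE-style objective, the core ingredient of contrastive goal-reaching critics. The tiny linear encoders and batch sizes are stand-in assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Tiny linear encoder standing in for a neural network."""
    return x @ W

def infonce_loss(sa, goals, W_sa, W_g):
    """InfoNCE over a batch: the i-th (state, action) pair should match
    the i-th goal; the other goals in the batch act as negatives."""
    z_sa = encode(sa, W_sa)                       # (B, d) embeddings
    z_g = encode(goals, W_g)                      # (B, d) embeddings
    logits = z_sa @ z_g.T                         # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # pull matched pairs together

# Batch of 8 transitions: 6-dim state+action vectors, 4-dim goals.
W_sa, W_g = rng.normal(size=(6, 16)), rng.normal(size=(4, 16))
loss = infonce_loss(rng.normal(size=(8, 6)), rng.normal(size=(8, 4)), W_sa, W_g)
```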
- Active Predictive Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
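The backprop-free spirit of NGC can be conveyed with a generic predictive-coding update: latent activities settle by reducing a layer-local prediction error, and weights adapt with a Hebbian-like rule driven only by that local error. This is a textbook predictive-coding sketch under assumed dimensions, not the authors' ActPC circuits.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(8, 16))  # generative weights: latent -> input

def predictive_coding_step(x, z, W, lr_z=0.1, lr_w=0.01, settle_steps=20):
    """One backprop-free update using only local quantities.

    e = x - W.T @ z is the layer-local prediction error; both the latent
    settling and the weight update need only e and z, never a global gradient.
    """
    for _ in range(settle_steps):
        e = x - W.T @ z                        # local prediction error
        z = z + lr_z * (W @ e)                 # latents descend the error energy
    W = W + lr_w * np.outer(z, x - W.T @ z)    # Hebbian-like weight update
    return z, W

x = rng.normal(size=16)   # observation (e.g., a robot's sensor vector)
z = np.zeros(8)           # latent state
z, W = predictive_coding_step(x, z, W)
```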
- Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning [23.164743388342803]
We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
In this work, we design a Connect Task, where the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
We also discuss modifications to our simulated environment which lead to effective training of RL policies.
arXiv Detail & Related papers (2022-03-15T21:49:20Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
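Domain randomization as described above can be as simple as resampling simulator dynamics at every episode reset so the policy cannot overfit a single simulator instance. The parameter names, ranges, and the `sim.set_parameters` hook below are illustrative assumptions; substitute the equivalent calls of your physics engine.

```python
import random

# Illustrative randomization ranges; real ranges are tuned per robot.
RANDOMIZATION_RANGES = {
    "ground_friction": (0.4, 1.2),   # friction coefficient
    "link_mass_scale": (0.8, 1.2),   # multiplier on nominal link masses
    "motor_strength":  (0.9, 1.1),   # multiplier on torque limits
    "control_latency": (0.0, 0.02),  # seconds of actuation delay
}

def sample_dynamics():
    """Draw one simulator configuration; call at each episode reset."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def reset_episode(sim):
    """Apply freshly sampled dynamics before rolling out the policy."""
    sim.set_parameters(sample_dynamics())  # hypothetical simulator hook
    return sim.reset()
```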
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment, while a low-level controller uses an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
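The two-level structure above, a learned chooser on top of a conventional executor, can be sketched as follows; the primitive set, the linear scorer, and the stubbed low-level law are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

PRIMITIVES = ["trot", "walk", "recover", "stand"]  # assumed primitive set

def high_level_policy(observation, W):
    """Learned chooser: scores each primitive and picks the highest.

    A linear scorer stands in for the trained RL policy."""
    return PRIMITIVES[int(np.argmax(W @ observation))]

def low_level_controller(primitive, state):
    """Conventional controller (e.g., model-based leg control) that turns
    the chosen primitive into commands; a PD-like placeholder here."""
    gains = {"trot": 1.0, "walk": 0.6, "recover": 1.5, "stand": 0.2}
    return -gains[primitive] * state

obs = rng.normal(size=12)
W = rng.normal(size=(len(PRIMITIVES), 12))
command = low_level_controller(high_level_policy(obs, W), obs)
```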
- Deep Adversarial Reinforcement Learning for Object Disentangling [36.66974848126079]
We present a novel adversarial reinforcement learning (ARL) framework for disentangling waste objects.
The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states.
We show that our method can generalize from training to test scenarios by training an end-to-end system for robot control to solve a challenging object disentangling task.
arXiv Detail & Related papers (2020-03-08T13:20:39Z)
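The adversary/protagonist split above fits in a short rollout scheme: the adversary drives the system for the first steps to reach challenging states, then the protagonist takes over and is trained from there. The environment and policy interfaces below are assumed placeholders, not the paper's code.

```python
def arl_episode(env, adversary, protagonist, takeover_step=10, horizon=100):
    """One adversarial-RL rollout: adversary acts first, protagonist after.

    Zero-sum shaping rewards the adversary for whatever makes the
    protagonist struggle, steering rollouts toward challenging states.
    """
    obs = env.reset()
    protagonist_return = 0.0
    for t in range(horizon):
        actor = adversary if t < takeover_step else protagonist
        obs, reward, done = env.step(actor.act(obs))  # assumed interface
        if t >= takeover_step:
            protagonist_return += reward
        if done:
            break
    adversary_return = -protagonist_return
    return protagonist_return, adversary_return
```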