Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance
Action Space
- URL: http://arxiv.org/abs/2110.09904v2
- Date: Wed, 20 Oct 2021 08:35:13 GMT
- Title: Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance
Action Space
- Authors: Maximilian Ulmer, Elie Aljalbout, Sascha Schwarz, and Sami Haddadin
- Abstract summary: Reinforcement Learning has led to promising results on a range of challenging decision-making tasks.
Fast human-like adaptive control methods can optimize complex robotic interactions, yet fail to integrate multimodal feedback needed for unstructured tasks.
- We propose to factor the learning problem into a hierarchical learning and adaptation architecture to get the best of both worlds.
- Score: 7.116986445066885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent agents must be able to think fast and slow to perform elaborate
manipulation tasks. Reinforcement Learning (RL) has led to many promising
results on a range of challenging decision-making tasks. However, in real-world
robotics, these methods still struggle, as they require large amounts of
expensive interactions and have slow feedback loops. On the other hand, fast
human-like adaptive control methods can optimize complex robotic interactions,
yet fail to integrate multimodal feedback needed for unstructured tasks. In
this work, we propose to factor the learning problem into a hierarchical learning
and adaptation architecture to get the best of both worlds. The framework
consists of two components, a slow reinforcement learning policy optimizing the
task strategy given multimodal observations, and a fast, real-time adaptive
control policy continuously optimizing the motion, stability, and effort of the
manipulator. We combine these components through a bio-inspired action space
that we call AFORCE. We demonstrate the new action space on a contact-rich
manipulation task on real hardware and evaluate its performance on three
simulated manipulation tasks. Our experiments show that AFORCE drastically
improves sample efficiency while reducing energy consumption and improving
safety.
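The abstract's two-level structure can be illustrated with a minimal sketch: a slow policy (e.g. an RL agent) outputs a desired pose, per-axis stiffness, and feed-forward force at a low rate, while a fast inner loop converts them into commands with a standard Cartesian impedance law. This is a generic variable-impedance interface in the spirit of AFORCE, not the paper's actual implementation; all function names, gains, and values below are illustrative assumptions.

```python
import numpy as np

def impedance_wrench(x, x_dot, action):
    """Fast inner loop: F = K (x_d - x) - D x_dot + F_ff.

    `action` is what a slow policy would emit: desired position x_d,
    per-axis stiffness K_diag, and feed-forward force F_ff.
    """
    x_d, K_diag, F_ff = action
    K = np.diag(K_diag)            # stiffness chosen by the slow policy
    D = 2.0 * np.sqrt(K)           # critical damping, unit mass assumed
    return K @ (x_d - x) - D @ x_dot + F_ff

# One slow-policy action, held fixed while the fast loop reacts
# to the current state at real-time rates.
action = (
    np.array([0.5, 0.0, 0.2]),        # desired Cartesian position x_d (m)
    np.array([400.0, 400.0, 100.0]),  # per-axis stiffness (N/m); low z for contact
    np.array([0.0, 0.0, -5.0]),       # feed-forward force, e.g. pressing down (N)
)

x = np.array([0.48, 0.01, 0.25])      # current end-effector position
x_dot = np.zeros(3)                   # current end-effector velocity
F = impedance_wrench(x, x_dot, action)
```

Decoupling the two rates is the key design choice: the RL policy only needs to pick a good impedance behavior every few hundred milliseconds, while the compliant inner loop handles contact transients, which is consistent with the sample-efficiency and safety gains the abstract reports.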
Related papers
- Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning [47.785786984974855]
We present a human-in-the-loop vision-based RL system that demonstrates impressive performance on a diverse set of dexterous manipulation tasks.
Our approach integrates demonstrations and human corrections, efficient RL algorithms, and other system-level design choices to learn policies.
We show that our method significantly outperforms imitation learning baselines and prior RL approaches, with an average 2x improvement in success rate and 1.8x faster execution.
arXiv Detail & Related papers (2024-10-29T08:12:20Z)
- Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition [10.072992621244042]
We propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL).
To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process.
We demonstrate that our method achieves significantly improved training efficiency in non-prehensile object pushing tasks.
arXiv Detail & Related papers (2023-11-19T10:19:22Z)
- REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation [61.7171775202833]
We introduce an efficient system for learning dexterous manipulation skills with reinforcement learning.
The main idea of our approach is the integration of recent advances in sample-efficient RL and replay buffer bootstrapping.
Our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy.
arXiv Detail & Related papers (2023-09-06T19:05:31Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Complex Locomotion Skill Learning via Differentiable Physics [30.868690308658174]
Differentiable physics enables efficient gradient-based optimization of neural network (NN) controllers.
We present a practical learning framework that outputs unified NN controllers capable of tasks with significantly improved complexity and diversity.
arXiv Detail & Related papers (2022-06-06T04:01:12Z) - Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC)
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z) - Assembly robots with optimized control stiffness through reinforcement
learning [3.4410212782758047]
We propose a methodology that uses reinforcement learning to achieve high performance in robots.
The proposed method ensures the online generation of stiffness matrices that help improve the performance of local trajectory optimization.
The effectiveness of the method was verified via experiments involving two contact-rich tasks.
arXiv Detail & Related papers (2020-02-27T15:54:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.