Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition
- URL: http://arxiv.org/abs/2311.11287v1
- Date: Sun, 19 Nov 2023 10:19:22 GMT
- Title: Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition
- Authors: Zihao Liu, Xing Liu, Yizhai Zhang, Zhengxiong Liu and Panfeng Huang
- Abstract summary: We propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL).
To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process.
We demonstrate that our method achieves high training efficiency in non-prehensile object pushing tasks.
- Score: 10.072992621244042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic manipulation holds the potential to replace humans in the execution
of tedious or dangerous tasks. However, control-based approaches are ill-suited:
open-world manipulation is difficult to describe formally, and existing learning
methods are inefficient, so applying manipulation across a wide range of
scenarios remains a significant challenge. In
this study, we propose a novel method for skill learning in robotic
manipulation called Tactile Active Inference Reinforcement Learning
(Tactile-AIRL), aimed at achieving efficient training. To enhance the
performance of reinforcement learning (RL), we introduce active inference,
which integrates model-based techniques and intrinsic curiosity into the RL
process. This integration improves the algorithm's training efficiency and
adaptability to sparse rewards. Additionally, we utilize a vision-based tactile
sensor to provide detailed perception for manipulation tasks. Finally, we
employ a model-based approach to imagine and plan appropriate actions through
free energy minimization. Simulation results demonstrate that our method
achieves high training efficiency in non-prehensile object pushing tasks. It
enables agents to excel in both dense and sparse reward tasks
with just a few interaction episodes, surpassing the SAC baseline. Furthermore,
we conduct physical experiments on a gripper screwing task using our method,
which showcases the algorithm's rapid learning capability and its potential for
practical applications.
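The planning step can be pictured as searching, over a learned dynamics model, for the action sequence that minimizes expected free energy: a pragmatic term pulling predicted states toward preferred (goal) observations, plus an epistemic term favoring states where the model is uncertain. The sketch below is a minimal illustration under assumed interfaces (a model.predict(state, action) returning a predictive mean and variance); it is not the paper's implementation.

```python
# Minimal sketch of action selection by expected free energy minimization
# over a learned model, in the spirit of Tactile-AIRL's planning step.
# The model interface and the quadratic goal preference are assumptions.
import numpy as np

def expected_free_energy(model, state, actions, goal):
    """Score a candidate action sequence: a pragmatic term (distance of the
    predicted trajectory from preferred observations) minus an epistemic
    term that favors uncertain, informative states (intrinsic curiosity)."""
    G, s = 0.0, state
    for a in actions:
        mu, var = model.predict(s, a)      # learned dynamics: mean, variance
        G += np.sum((mu - goal) ** 2)      # pragmatic value
        G -= np.sum(np.log(var + 1e-6))    # epistemic value (information gain)
        s = mu
    return G

def plan(model, state, goal, horizon=5, act_dim=2,
         n_candidates=64, n_iters=3, top_k=8, rng=None):
    """Cross-entropy-method search for the sequence minimizing expected free energy."""
    rng = rng or np.random.default_rng(0)
    mean, std = np.zeros((horizon, act_dim)), np.ones((horizon, act_dim))
    for _ in range(n_iters):
        cands = mean + std * rng.standard_normal((n_candidates, horizon, act_dim))
        scores = [expected_free_energy(model, state, c, goal) for c in cands]
        elites = cands[np.argsort(scores)[:top_k]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mean[0]  # execute the first action, then replan (receding horizon)
```

Run in a receding-horizon loop, this is one way "imagining and planning" through free energy minimization trades off reaching the goal against gathering information.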
Related papers
- Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning [47.785786984974855]
We present a human-in-the-loop vision-based RL system that demonstrates impressive performance on a diverse set of dexterous manipulation tasks.
Our approach integrates demonstrations and human corrections, efficient RL algorithms, and other system-level design choices to learn policies.
We show that our method significantly outperforms imitation learning baselines and prior RL approaches, with an average 2x improvement in success rate and 1.8x faster execution.
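One concrete system-level choice such human-in-the-loop pipelines often make is to keep demonstrations and corrections in a dedicated buffer and oversample them next to autonomous robot experience. The sketch below illustrates that idea under assumed data structures; it is not the paper's code.

```python
# Illustrative sketch: a replay buffer that mixes human-provided data
# (demonstrations, corrections) with autonomous robot experience.
import random

class MixedReplay:
    def __init__(self, demo_fraction=0.5):
        self.robot, self.human = [], []
        self.demo_fraction = demo_fraction

    def add(self, transition, from_human=False):
        (self.human if from_human else self.robot).append(transition)

    def sample(self, batch_size):
        n_h = min(int(batch_size * self.demo_fraction), len(self.human))
        batch = random.sample(self.human, n_h) if n_h else []
        n_r = min(batch_size - n_h, len(self.robot))
        return batch + (random.sample(self.robot, n_r) if n_r else [])
```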
arXiv Detail & Related papers (2024-10-29T08:12:20Z)
- Affordance-Guided Reinforcement Learning via Visual Prompting [51.361977466993345]
Keypoint-based Affordance Guidance for Improvements (KAGI) is a method leveraging rewards shaped by vision-language models (VLMs) for autonomous RL.
On real-world manipulation tasks specified by natural language descriptions, KAGI improves the sample efficiency of autonomous RL and enables successful task completion in 20K online fine-tuning steps.
arXiv Detail & Related papers (2024-07-14T21:41:29Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal, and it enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
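A hedged sketch of the central mechanism as stated in the abstract: the environment reward is discarded, and each human intervention is relabeled as a negative reward, so off-policy RL learns to avoid states that trigger takeovers. Function names and the transition layout are illustrative assumptions.

```python
# Sketch of intervention-signals-as-rewards; not the paper's code.
def rlif_reward(intervened: bool) -> float:
    # -1 whenever the expert takes over, 0 otherwise; standard off-policy RL
    # then learns to minimize the expected number of interventions.
    return -1.0 if intervened else 0.0

def relabel(transitions):
    """Replace environment rewards with intervention-based rewards."""
    return [(s, a, rlif_reward(info.get("intervened", False)), s2)
            for (s, a, _, s2, info) in transitions]
```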
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Exploiting Symmetry and Heuristic Demonstrations in Off-policy Reinforcement Learning for Robotic Manipulation [1.7901837062462316]
This paper aims to define and incorporate the natural symmetry present in physical robotic environments.
The proposed method is validated via two point-to-point reaching tasks of an industrial arm, with and without an obstacle.
A comparison study between the proposed method and a traditional off-policy reinforcement learning algorithm indicates an advantage in learning performance and potential for practical applications.
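One standard way to incorporate such a symmetry into off-policy RL, shown below as a minimal sketch, is to duplicate every stored transition under the environment's mirror map, doubling the effective data. The planar reflection operators are assumptions for a reaching task, not the paper's exact construction.

```python
# Sketch of symmetry-based data augmentation for an off-policy replay buffer.
import numpy as np

def mirror_state(s):
    return s * np.array([1.0, -1.0, 1.0, -1.0])  # flip y of [x, y, vx, vy]

def mirror_action(a):
    return a * np.array([1.0, -1.0])              # flip the matching action axis

def augment(transition):
    s, a, r, s2 = transition
    return [transition, (mirror_state(s), mirror_action(a), r, mirror_state(s2))]
```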
arXiv Detail & Related papers (2023-04-12T11:38:01Z)
- Demonstration-Guided Reinforcement Learning with Efficient Exploration for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method assigns higher value estimates to expert-like behaviors to facilitate productive interactions.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Maximum Entropy Model-based Reinforcement Learning [0.0]
This work connects exploration techniques and model-based reinforcement learning.
We have designed a novel exploration method that takes into account features of the model-based approach.
We also demonstrate through experiments that our method significantly improves the performance of the model-based algorithm Dreamer.
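A common way to realize a model-based exploration bonus, sketched below under an assumed ensemble interface, is to reward disagreement among learned dynamics models as a proxy for model uncertainty; this stands in for the paper's entropy-based objective rather than reproducing it.

```python
# Sketch of an ensemble-disagreement exploration bonus for model-based RL.
import numpy as np

def exploration_bonus(ensemble, state, action):
    preds = np.stack([m.predict(state, action) for m in ensemble])
    return preds.var(axis=0).mean()   # high disagreement -> unexplored dynamics

def shaped_reward(task_reward, ensemble, state, action, beta=0.1):
    return task_reward + beta * exploration_bonus(ensemble, state, action)
```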
arXiv Detail & Related papers (2021-12-02T13:07:29Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
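The interface change can be pictured as the policy outputting a primitive index plus its continuous arguments, with a hand-written controller executing each primitive for several low-level steps. The primitive names and signatures below are assumptions for illustration, not the RAPS library itself.

```python
# Sketch of a parameterized action-primitive interface in the spirit of RAPS.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Primitive:
    name: str
    n_args: int
    run: Callable   # run(robot, args) executes the low-level controller

PRIMITIVES: Sequence[Primitive] = (
    Primitive("reach", 3, lambda robot, args: robot.move_to(args)),        # xyz
    Primitive("grasp", 1, lambda robot, args: robot.close_gripper(args)),  # force
    Primitive("push",  2, lambda robot, args: robot.push(args)),           # dir, dist
)

def execute(robot, policy_output):
    """policy_output = (primitive index, flat argument vector)."""
    idx, args = policy_output
    prim = PRIMITIVES[idx]
    return prim.run(robot, args[: prim.n_args])
```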
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- An Empowerment-based Solution to Robotic Manipulation Tasks with Sparse Rewards [14.937474939057596]
It is important for robotic manipulators to learn to accomplish tasks even if they are only provided with very sparse instruction signals.
This paper proposes an intrinsic motivation approach that can be easily integrated into any standard reinforcement learning algorithm.
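Because the approach plugs into any standard RL algorithm, the integration point can be as simple as adding a scaled intrinsic term to the task reward; the sketch below assumes an external empowerment estimator and is purely illustrative.

```python
# Sketch of folding an intrinsic-motivation term into a standard RL reward.
def total_reward(extrinsic, empowerment_estimate, eta=0.05):
    # With sparse tasks, extrinsic is zero almost everywhere, so the
    # intrinsic term drives exploration until real reward is found.
    return extrinsic + eta * empowerment_estimate
```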
arXiv Detail & Related papers (2020-10-15T19:06:21Z)
- Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning [81.12201426668894]
We develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks.
We show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible.
We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.
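One common realization of reward-free skill discovery, sketched below, trains a discriminator q(z|s) to infer the active skill z from visited states and uses its log-probability as the only reward; the paper's exact objective may differ, and the resulting skills can then be sequenced, e.g., by model predictive control as the summary notes.

```python
# Sketch of a DIAYN-style diversity reward for reward-free skill learning.
import numpy as np

def diversity_reward(log_q_z_given_s, n_skills):
    # Reward states that make the current skill identifiable, measured
    # against a uniform prior over skills.
    return log_q_z_given_s - np.log(1.0 / n_skills)
```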
arXiv Detail & Related papers (2020-04-27T17:38:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.