Assembly robots with optimized control stiffness through reinforcement
learning
- URL: http://arxiv.org/abs/2002.12207v1
- Date: Thu, 27 Feb 2020 15:54:43 GMT
- Title: Assembly robots with optimized control stiffness through reinforcement
learning
- Authors: Masahide Oikawa, Kyo Kutsuzawa, Sho Sakaino, Toshiaki Tsuji
- Abstract summary: We propose a methodology that uses reinforcement learning to achieve high performance in robots.
The proposed method ensures the online generation of stiffness matrices that help improve the performance of local trajectory optimization.
The effectiveness of the method was verified via experiments involving two contact-rich tasks.
- Score: 3.4410212782758047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an increased demand for task automation in robots. Contact-rich
tasks, wherein multiple contact transitions occur in a series of operations,
are extensively being studied to realize high accuracy. In this study, we
propose a methodology that uses reinforcement learning (RL) to achieve high
performance in robots for the execution of assembly tasks that require precise
contact with objects without causing damage. The proposed method ensures the
online generation of stiffness matrices that help improve the performance of
local trajectory optimization. The method has the advantage of rapid response
owing to the short sampling time of the trajectory planning. The effectiveness of
the method was verified via experiments involving two contact-rich tasks. The
results indicate that the proposed method can be implemented in various
contact-rich manipulations. A demonstration video shows the performance.
(https://youtu.be/gxSCl7Tp4-0)
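The core idea, an RL policy generating stiffness gains online for an impedance-style controller, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the stiffness selection rule stands in for the learned policy, and all names (`impedance_force`, `select_stiffness`) and gain values are hypothetical.

```python
import numpy as np

def impedance_force(stiffness_diag, damping_diag, x, x_ref, v):
    """Cartesian impedance law: F = K (x_ref - x) - D v,
    with per-axis stiffness and damping gains chosen online."""
    K = np.diag(stiffness_diag)
    D = np.diag(damping_diag)
    return K @ (x_ref - x) - D @ v

def select_stiffness(contact_force, k_free=800.0, k_contact=100.0, threshold=5.0):
    """Toy stand-in for the learned policy: soften axes that sense
    large contact forces, stay stiff on free-space axes."""
    return np.where(np.abs(contact_force) > threshold, k_contact, k_free)

# One control step: the z-axis is in contact, so it gets softened.
contact_force = np.array([0.0, 0.0, 12.0])
k = select_stiffness(contact_force)
f = impedance_force(k, damping_diag=0.1 * np.sqrt(k),
                    x=np.array([0.0, 0.0, 0.05]),
                    x_ref=np.zeros(3),
                    v=np.zeros(3))
```

Choosing the stiffness per axis at every control step is what allows a short sampling time: the trajectory optimizer stays local and cheap while the gains adapt to contact transitions.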
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that spire outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Affordance-based Robot Manipulation with Flow Matching [6.863932324631107]
Our framework unifies affordance model learning and trajectory generation with flow matching for robot manipulation.
Our evaluation highlights that the proposed prompt tuning method for learning manipulation affordances with language prompts achieves competitive performance.
arXiv Detail & Related papers (2024-09-02T09:11:28Z) - Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning algorithm for a robot to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - Tactile Active Inference Reinforcement Learning for Efficient Robotic
Manipulation Skill Acquisition [10.072992621244042]
We propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL)
To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process.
We demonstrate that our method achieves significantly higher training efficiency in non-prehensile object pushing tasks.
arXiv Detail & Related papers (2023-11-19T10:19:22Z) - Demonstration-Guided Reinforcement Learning with Efficient Exploration
for Task Automation of Surgical Robot [54.80144694888735]
We introduce Demonstration-guided EXploration (DEX), an efficient reinforcement learning algorithm.
Our method assigns higher value estimates to expert-like behaviors to facilitate productive interactions.
Experiments on 10 surgical manipulation tasks from SurRoL, a comprehensive surgical simulation platform, demonstrate significant improvements.
arXiv Detail & Related papers (2023-02-20T05:38:54Z) - Leveraging Sequentiality in Reinforcement Learning from a Single
Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve challenging simulated tasks such as humanoid locomotion and stand-up with unprecedented sample efficiency.
arXiv Detail & Related papers (2022-11-09T10:28:40Z) - Accelerating Robotic Reinforcement Learning via Parameterized Action
Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
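The action-interface idea described above, a policy that outputs a primitive index plus continuous arguments which a library expands into low-level commands, might be sketched as follows. The primitive library, function names, and argument shapes here are illustrative assumptions, not the paper's actual library.

```python
import numpy as np

# Hypothetical primitive library: each primitive expands one
# (index, arguments) decision into a short sequence of end-effector deltas.
def lift(args):
    height, = args
    return [np.array([0.0, 0.0, height])]

def push(args):
    dx, dy = args
    # Split the push into two half-steps for smoother execution.
    step = np.array([dx / 2, dy / 2, 0.0])
    return [step, step]

PRIMITIVES = [lift, push]

def expand_action(primitive_idx, args):
    """Map the RL policy's (index, arguments) output to low-level commands."""
    return PRIMITIVES[primitive_idx](args)

steps = expand_action(1, (0.2, 0.0))  # the policy chose "push"
```

The policy then acts in the compact (index, arguments) space rather than at the raw control rate, which is the source of the reported efficiency gains.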
arXiv Detail & Related papers (2021-10-28T17:59:30Z) - Efficient Robotic Manipulation Through Offline-to-Online Reinforcement
Learning and Goal-Aware State Information [5.604859261995801]
We propose a unified offline-to-online RL framework that resolves the transition performance drop issue.
We introduce goal-aware state information to the RL agent, which can greatly reduce task complexity and accelerate policy learning.
Our framework achieves high training efficiency and strong performance compared with state-of-the-art methods in multiple robotic manipulation tasks.
arXiv Detail & Related papers (2021-10-21T05:34:25Z) - Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance
Action Space [7.116986445066885]
Reinforcement Learning has led to promising results on a range of challenging decision-making tasks.
Fast human-like adaptive control methods can optimize complex robotic interactions, yet fail to integrate multimodal feedback needed for unstructured tasks.
We propose to factor the learning problem in a hierarchical learning and adaptation architecture to get the best of both worlds.
arXiv Detail & Related papers (2021-10-19T12:09:02Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z) - Constrained-Space Optimization and Reinforcement Learning for Complex
Tasks [42.648636742651185]
Learning from Demonstration is increasingly used for transferring operator manipulation skills to robots.
This paper presents a constrained-space optimization and reinforcement learning scheme for managing complex tasks.
arXiv Detail & Related papers (2020-04-01T21:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.