Enhancing Tactile-based Reinforcement Learning for Robotic Control
- URL: http://arxiv.org/abs/2510.21609v1
- Date: Fri, 24 Oct 2025 16:15:05 GMT
- Title: Enhancing Tactile-based Reinforcement Learning for Robotic Control
- Authors: Elle Miller, Trevor McInroe, David Abel, Oisin Mac Aodha, Sethu Vijayakumar
- Abstract summary: We develop self-supervised learning (SSL) methodologies to more effectively harness tactile observations. We empirically demonstrate that sparse binary tactile signals are critical for dexterity. We release the Robot Tactile Olympiad (RoTO) benchmark to standardise and promote future research in tactile-based manipulation.
- Score: 32.565866574593635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving safe, reliable real-world robotic manipulation requires agents to evolve beyond vision and incorporate tactile sensing to overcome sensory deficits and reliance on idealised state information. Despite its potential, the efficacy of tactile sensing in reinforcement learning (RL) remains inconsistent. We address this by developing self-supervised learning (SSL) methodologies to more effectively harness tactile observations, focusing on a scalable setup of proprioception and sparse binary contacts. We empirically demonstrate that sparse binary tactile signals are critical for dexterity, particularly for interactions that proprioceptive control errors do not register, such as decoupled robot-object motions. Our agents achieve superhuman dexterity in complex contact tasks (ball bouncing and Baoding ball rotation). Furthermore, we find that decoupling the SSL memory from the on-policy memory can improve performance. We release the Robot Tactile Olympiad (RoTO) benchmark to standardise and promote future research in tactile-based manipulation. Project page: https://elle-miller.github.io/tactile_rl
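The observation setup the abstract describes (proprioception plus sparse binary contacts) and the decoupled SSL memory can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function names, the force threshold, and the buffer layout are all assumptions.

```python
import numpy as np

def build_observation(joint_pos, joint_vel, contact_forces, force_threshold=0.1):
    """Concatenate proprioception with sparse binary contact signals.

    Shapes, names, and the threshold are illustrative assumptions;
    the paper's actual observation layout may differ.
    """
    # Binarise raw contact forces: a pad reports 1 only when touched.
    contacts = (np.asarray(contact_forces) > force_threshold).astype(np.float32)
    return np.concatenate([
        np.asarray(joint_pos, dtype=np.float32),
        np.asarray(joint_vel, dtype=np.float32),
        contacts,
    ])

class DecoupledSSLBuffer:
    """Replay memory reserved for the SSL objective, kept separate from
    the on-policy rollout buffer (an assumption mirroring the abstract's
    finding that decoupling the two memories can improve performance)."""

    def __init__(self, capacity=10_000, seed=0):
        self.capacity = capacity
        self.storage = []
        self.rng = np.random.default_rng(seed)

    def add(self, obs):
        # Drop the oldest observation once the buffer is full.
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append(obs)

    def sample(self, batch_size):
        # Uniform random minibatch for the SSL loss.
        idx = self.rng.integers(0, len(self.storage), size=batch_size)
        return np.stack([self.storage[i] for i in idx])
```

The SSL loss itself (e.g. a reconstruction or contrastive objective over these observations) would draw its minibatches from `DecoupledSSLBuffer`, while the policy update consumes only the on-policy rollouts.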
Related papers
- Tactile Memory with Soft Robot: Robust Object Insertion via Masked Encoding and Soft Wrist [10.982180941605256]
We introduce Tactile Memory with Soft Robot (TaSo-bot), a system that integrates a soft wrist with retrieval-based control to enable safe and robust manipulation. The core of this system is the Masked Tactile Trajectory Transformer (MAT$^3$), which jointly models interactions between robot actions, tactile feedback, force-torque measurements, and proprioceptive signals. MAT$^3$ achieves higher success rates than the baselines over all conditions and shows a remarkable capability to adapt to unseen pegs and conditions.
arXiv Detail & Related papers (2026-01-27T07:04:01Z) - Feel the Force: Contact-Driven Learning from Humans [52.36160086934298]
Controlling fine-grained forces during manipulation remains a core challenge in robotics. We present FeelTheForce, a robot learning system that models human tactile behavior to learn force-sensitive manipulation. Our approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.
arXiv Detail & Related papers (2025-06-02T17:57:52Z) - Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation [34.47272224723296]
We present Taccel, a high-performance simulation platform that integrates IPC and ABD to model robots, tactile sensors, and objects with both accuracy and unprecedented speed. Unlike previous simulators that operate at sub-real-time speeds with limited parallelization, Taccel provides precise physics simulation and realistic tactile signals. These capabilities position Taccel as a powerful tool for scaling up tactile robotics research and development, potentially transforming how robots interact with and understand their physical environment.
arXiv Detail & Related papers (2025-04-17T12:57:11Z) - LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [72.81786007015471]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the problem. A CNN processes localized perception, while a graph neural network (GNN) facilitates robot communication. Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
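The GNN communication step in a loop like LPAC can be illustrated with a single round of mean-aggregation message passing between robots. The weights, nonlinearity, and aggregation rule here are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def gnn_round(features, adjacency, w_self, w_neigh):
    """One round of mean-aggregation message passing between robots.

    `features` is (num_robots, d); `adjacency` is a 0/1 matrix of
    communication links. Each robot combines its own features with
    the mean of its neighbours' features, then applies a shared
    linear map and a tanh nonlinearity (all illustrative choices).
    """
    deg = adjacency.sum(axis=1, keepdims=True)
    deg = np.maximum(deg, 1.0)  # avoid division by zero for isolated robots
    neigh_mean = (adjacency @ features) / deg
    return np.tanh(features @ w_self + neigh_mean @ w_neigh)
```

Because the same weights are shared by every robot and aggregation only uses local links, the update is decentralized: each robot could compute its own row from messages received from neighbours.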
arXiv Detail & Related papers (2024-01-10T00:08:00Z) - Robot Synesthesia: In-Hand Manipulation with Visuotactile Sensing [15.970078821894758]
We introduce a system that leverages visual and tactile sensory inputs to enable dexterous in-hand manipulation.
Robot Synesthesia is a novel point cloud-based tactile representation inspired by human tactile-visual synesthesia.
arXiv Detail & Related papers (2023-12-04T12:35:43Z) - TIAGo RL: Simulated Reinforcement Learning Environments with Tactile
Data for Mobile Robots [1.5193212081459284]
Deep Reinforcement Learning (DRL) has produced promising results for learning complex behavior in various domains.
We present our open-source reinforcement learning environments for the TIAGo service robot.
arXiv Detail & Related papers (2023-11-13T11:50:30Z) - MimicTouch: Leveraging Multi-modal Human Tactile Demonstrations for Contact-rich Manipulation [8.738889129462013]
"MimicTouch" is a novel framework for learning policies directly from demonstrations provided by human users with their hands. The key innovations are i) a human tactile data collection system that gathers a multi-modal tactile dataset for learning a human's tactile-guided control strategy, and ii) an imitation learning-based framework that learns this strategy from the collected data.
arXiv Detail & Related papers (2023-10-25T18:34:06Z) - Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
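A minimal sketch of categorical exploration for continuous control, assuming per-dimension discretisation into uniform bins (the paper's exact parameterisation may differ): the policy outputs logits over bins for each action dimension, and sampling from the resulting categorical distributions can express multimodal exploration that a unimodal Gaussian cannot.

```python
import numpy as np

def sample_categorical_action(logits, low, high, rng):
    """Sample a continuous action from per-dimension categorical distributions.

    `logits` is (num_dims, num_bins). Each action dimension's continuous
    range [low, high] is split into uniform bins; one bin is drawn per
    dimension and mapped back to its bin centre. Bin count and mapping
    are illustrative assumptions.
    """
    logits = np.asarray(logits, dtype=np.float64)
    # Numerically stable softmax over bins for each action dimension.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    num_dims, num_bins = probs.shape
    # Bin centres spaced uniformly over [low, high].
    centres = low + (np.arange(num_bins) + 0.5) * (high - low) / num_bins
    bins = np.array([rng.choice(num_bins, p=probs[d]) for d in range(num_dims)])
    return centres[bins]
```

With flat logits this explores all bins uniformly; as training sharpens the logits, the same head can concentrate mass on several distinct pushing directions at once.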
arXiv Detail & Related papers (2023-08-04T16:55:00Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks. Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
arXiv Detail & Related papers (2020-11-23T08:20:21Z) - OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.