The utility of tactile force to autonomous learning of in-hand
manipulation is task-dependent
- URL: http://arxiv.org/abs/2002.02418v1
- Date: Wed, 5 Feb 2020 06:24:40 GMT
- Title: The utility of tactile force to autonomous learning of in-hand
manipulation is task-dependent
- Authors: Romina Mir, Ali Marjaninejad, Francisco J. Valero-Cuevas
- Abstract summary: This paper evaluates the role of tactile information on autonomous learning of manipulation with a simulated 3-finger tendon-driven hand.
We compare the ability of the same learning algorithm to learn two manipulation tasks with three levels of tactile sensing.
We conclude that, in general, sensory input is useful to learning only when it is relevant to the task.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tactile sensors provide information that can be used to learn and execute
manipulation tasks. Different tasks, however, might require different levels of
sensory information, which in turn likely affect learning rates and
performance. This paper evaluates the role of tactile information on autonomous
learning of manipulation with a simulated 3-finger tendon-driven hand. We
compare the ability of the same learning algorithm (Proximal Policy
Optimization, PPO) to learn two manipulation tasks (rolling a ball about the
horizontal axis with and without rotational stiffness) with three levels of
tactile sensing: no sensing, 1D normal force, and 3D force vector.
Surprisingly, and contrary to recent work on manipulation, adding 1D
force-sensing did not always improve learning rates compared to no
sensing---likely depending on whether normal force is relevant to the task.
Nonetheless, even though 3D force-sensing increases the dimensionality of the
sensory input---which would in general hamper algorithm convergence---it
resulted in faster learning rates and better performance. We conclude that, in
general, sensory input is useful to learning only when it is relevant to the
task---as is the case of 3D force-sensing for in-hand manipulation against
gravity. Moreover, the utility of 3D force-sensing can even offset the added
computational cost of learning with higher-dimensional sensory input.
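The paper does not include code, but the experimental setup lends itself to a short illustration: the three sensing conditions differ only in which tactile signals are appended to the proprioceptive state before being passed, unchanged, to the same PPO learner. Below is a minimal, hypothetical sketch of such an observation builder; the joint counts, array layout, and the choice of the z-component as the normal force are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative observation builder for a simulated 3-finger tendon-driven hand.
# Dimensions and frame conventions are assumptions for exposition only.
N_FINGERS = 3

def build_observation(joint_angles, joint_velocities, fingertip_forces, sensing="none"):
    """Assemble the policy input for one of the three sensing levels.

    fingertip_forces : (3, 3) array of per-fingertip contact forces in the
        fingertip frame (rows: fingers; columns: x, y, z, with z taken here
        as the normal direction).
    sensing : "none"   -> proprioception only
              "normal" -> append one scalar normal force per fingertip (+3 dims)
              "3d"     -> append the full force vector per fingertip   (+9 dims)
    """
    obs = [np.asarray(joint_angles, dtype=np.float32).ravel(),
           np.asarray(joint_velocities, dtype=np.float32).ravel()]
    forces = np.asarray(fingertip_forces, dtype=np.float32).reshape(N_FINGERS, 3)
    if sensing == "normal":
        obs.append(forces[:, 2])      # 1D normal force per fingertip
    elif sensing == "3d":
        obs.append(forces.ravel())    # full 3D force vector per fingertip
    elif sensing != "none":
        raise ValueError(f"unknown sensing level: {sensing}")
    return np.concatenate(obs)

# Example with 9 joints: the observation grows from 18 to 21 to 27 dimensions.
q, dq = np.zeros(9), np.zeros(9)
f = np.random.uniform(-1.0, 1.0, size=(3, 3))
print(build_observation(q, dq, f, sensing="none").shape)    # (18,)
print(build_observation(q, dq, f, sensing="normal").shape)  # (21,)
print(build_observation(q, dq, f, sensing="3d").shape)      # (27,)
```

Feeding each variant to an off-the-shelf PPO implementation with identical hyperparameters would isolate the effect of the sensory input itself, which is the comparison the abstract describes.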
Related papers
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z) - Curriculum Is More Influential Than Haptic Information During Reinforcement Learning of Object Manipulation Against Gravity [0.0]
Learning to lift and rotate objects with the fingertips is necessary for autonomous in-hand dexterous manipulation.
We investigate the role of curriculum learning and haptic feedback in enabling the learning of dexterous manipulation.
arXiv Detail & Related papers (2024-07-13T19:23:11Z) - Learning In-Hand Translation Using Tactile Skin With Shear and Normal Force Sensing [43.269672740168396]
We introduce a sensor model for tactile skin that enables zero-shot sim-to-real transfer of ternary shear and binary normal forces.
We conduct extensive real-world experiments to assess how tactile sensing facilitates policy adaptation to various unseen object properties.
arXiv Detail & Related papers (2024-07-10T17:52:30Z) - VITaL Pretraining: Visuo-Tactile Pretraining for Tactile and Non-Tactile Manipulation Policies [8.187196813233362]
We show how we can incorporate tactile information into imitation learning platforms to improve performance on manipulation tasks.
We show that incorporating visuo-tactile pretraining improves imitation learning performance, not only for agents with tactile sensing but also for non-tactile policies.
arXiv Detail & Related papers (2024-03-18T15:56:44Z) - The Power of the Senses: Generalizable Manipulation from Vision and
Touch through Masked Multimodal Learning [60.91637862768949]
We propose Masked Multimodal Learning (M3L) to fuse visual and tactile information in a reinforcement learning setting.
M3L learns a policy and visual-tactile representations based on masked autoencoding.
We evaluate M3L on three simulated environments with both visual and tactile observations.
arXiv Detail & Related papers (2023-11-02T01:33:00Z) - SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z) - RMBench: Benchmarking Deep Reinforcement Learning for Robotic
Manipulator Control [47.61691569074207]
Reinforcement learning is applied to solve complex real-world tasks from high-dimensional sensory inputs.
Recent progress benefits from deep learning for representing raw sensory signals.
We present RMBench, the first benchmark for robotic manipulation.
arXiv Detail & Related papers (2022-10-20T13:34:26Z) - Vision-Based Manipulators Need to Also See from Their Hands [58.398637422321976]
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations.
We find that a hand-centric (eye-in-hand) perspective affords reduced observability, but it consistently improves training efficiency and out-of-distribution generalization.
arXiv Detail & Related papers (2022-03-15T18:46:18Z) - Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z) - Physics-Based Dexterous Manipulations with Estimated Hand Poses and
Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z) - Understanding Multi-Modal Perception Using Behavioral Cloning for
Peg-In-a-Hole Insertion Tasks [21.275342989110978]
In this paper, we investigate the merits of combining multiple sensor modalities to learn a controller for real-world assembly tasks.
We propose a multi-step-ahead loss function to improve the performance of the behavioral cloning method.
arXiv Detail & Related papers (2020-07-22T19:46:51Z)