Towards Human-level Dexterity via Robot Learning
- URL: http://arxiv.org/abs/2507.09117v1
- Date: Sat, 12 Jul 2025 02:22:55 GMT
- Title: Towards Human-level Dexterity via Robot Learning
- Authors: Gagan Khandate
- Abstract summary: Dexterous intelligence is a pinnacle of human physical intelligence and emergent higher-order cognitive skills. Millions of years were spent co-evolving the human brain and hands, including rich tactile sensing. This thesis explores a new paradigm of using visuo-tactile human demonstrations for dexterity, introducing corresponding imitation learning techniques.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dexterous intelligence -- the ability to perform complex interactions with multi-fingered hands -- is a pinnacle of human physical intelligence and emergent higher-order cognitive skills. However, in line with Moravec's paradox, dexterous intelligence in humans only appears simple superficially. Millions of years were spent co-evolving the human brain and hands, including rich tactile sensing. Achieving human-level dexterity with robotic hands has long been a fundamental goal in robotics and represents a critical milestone toward general embodied intelligence. In this pursuit, computational sensorimotor learning has made significant progress, enabling feats such as arbitrary in-hand object reorientation. However, we observe that achieving higher levels of dexterity requires overcoming fundamental limitations of computational sensorimotor learning. I develop robot learning methods for highly dexterous multi-fingered manipulation by directly addressing these limitations at their root cause. Chiefly, through key studies, this dissertation progressively builds an effective framework for reinforcement learning of dexterous multi-fingered manipulation skills. These methods adopt structured exploration, effectively overcoming the limitations of random exploration in reinforcement learning. The insights gained culminate in a highly effective reinforcement learning framework that incorporates sampling-based planning for direct exploration. Additionally, this thesis explores a new paradigm of using visuo-tactile human demonstrations for dexterity, introducing corresponding imitation learning techniques.
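The advantage of structured exploration over random exploration can be illustrated with a toy sketch. This is not the thesis's actual algorithm; it is a minimal, hypothetical example in the spirit of frontier-seeded (Go-Explore-style) exploration, where episodes restart from previously reached states rather than from scratch:

```python
import random

# Toy 1-D chain task: start at 0, try to reach state N. Pure random
# exploration from scratch rarely gets far, while restarting rollouts
# from the archive's frontier (a crude stand-in for structured,
# planner-seeded exploration) reaches the goal quickly.
# All names and constants here are hypothetical, for illustration only.

N = 50  # goal state

def rollout(start, steps=20):
    """Reflected random walk from `start`; returns the states visited."""
    s, visited = start, [start]
    for _ in range(steps):
        s = max(0, s + random.choice([-1, 1]))
        visited.append(s)
    return visited

def explore(seeded, episodes=200):
    archive = {0}  # states we already know how to reach
    best = 0
    for _ in range(episodes):
        start = max(archive) if seeded else 0  # frontier reset vs. scratch
        for s in rollout(start):
            archive.add(s)
            best = max(best, s)
        if best >= N:
            break
    return best

random.seed(0)
print(explore(seeded=False))  # random exploration: stuck near the start
print(explore(seeded=True))   # structured exploration: reaches the goal
```

With 20-step rollouts from state 0, random exploration can never pass state 20, whereas frontier-seeded exploration steadily extends its reach until it hits the goal, which is the basic intuition behind seeding reinforcement learning with sampling-based planning.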
Related papers
- Towards Human-level Intelligence via Human-like Whole-Body Manipulation [10.199110135230674]
We present Astribot Suite, a robot learning suite for whole-body manipulation aimed at general daily tasks across diverse environments. Our results show that Astribot's cohesive integration of embodiment, teleoperation interface, and learning pipeline marks a significant step towards real-world, general-purpose whole-body robotic manipulation.
arXiv Detail & Related papers (2025-07-23T02:23:41Z)
- Interactive Imitation Learning for Dexterous Robotic Manipulation: Challenges and Perspectives -- A Survey [0.8287206589886879]
Dexterous manipulation is a crucial yet highly complex challenge in humanoid robotics. This survey reviews existing learning-based methods for dexterous manipulation, spanning imitation learning, reinforcement learning, and hybrid approaches. A promising yet underexplored direction is interactive imitation learning, where human feedback actively refines a robot's behavior during training.
arXiv Detail & Related papers (2025-05-30T12:19:32Z)
- Dexterous Manipulation through Imitation Learning: A Survey [28.04590024211786]
Imitation learning (IL) offers an alternative by allowing robots to acquire dexterous manipulation skills directly from expert demonstrations. IL captures fine-grained coordination and contact dynamics while bypassing the need for explicit modeling and large-scale trial-and-error. Our goal is to offer researchers and practitioners a comprehensive introduction to this rapidly evolving domain.
arXiv Detail & Related papers (2025-04-04T15:14:38Z)
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
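The general "imitate, then reinforce" recipe that SPIRE builds on can be sketched in a few lines. This is a hypothetical toy example, not SPIRE's implementation: behavior cloning initializes a tabular Q-function from a handful of demonstrations, and Q-learning then refines it on states the demonstrations never covered.

```python
import random

# Illustrative sketch of combining imitation and reinforcement learning
# (not SPIRE's actual system): clone demos into a Q-table, then fine-tune
# with epsilon-greedy Q-learning. All names and values are hypothetical.

N_STATES, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

# 1) Imitation: demos only cover the first three states (action 1 = right).
demos = [(0, 1), (1, 1), (2, 1)]
for s, a in demos:
    Q[s][a] = 1.0  # crude behavior cloning: prefer the demonstrated action

def step(s, a):
    """Deterministic chain dynamics; reward 1 on reaching the goal."""
    s2 = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

# 2) Reinforcement: Q-learning extends the policy beyond the demos.
random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(300):
    s = 0
    for _ in range(20):
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # action 1 (move right) should dominate in every non-goal state
```

The demonstrations give the learner a head start toward the goal, and reinforcement learning fills in the states the demos never reached, which is the basic synergy such hybrid systems exploit.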
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
- MimicTouch: Leveraging Multi-modal Human Tactile Demonstrations for Contact-rich Manipulation [8.738889129462013]
MimicTouch is a novel framework for learning policies directly from demonstrations provided by human users with their hands. The key innovations are i) a human tactile data collection system that collects a multi-modal tactile dataset for learning the human's tactile-guided control strategy, and ii) an imitation learning-based framework for learning that strategy from such data.
arXiv Detail & Related papers (2023-10-25T18:34:06Z)
- CasIL: Cognizing and Imitating Skills via a Dual Cognition-Action Architecture [20.627616015484648]
Existing imitation learning approaches for robots still grapple with sub-optimal performance in complex tasks.
Heuristically, we extend the usual notion of action to a dual Cognition (high-level)-Action (low-level) architecture.
We propose a novel skill IL framework through human-robot interaction, called Cognition-Action-based Skill Imitation Learning (CasIL).
arXiv Detail & Related papers (2023-09-28T09:53:05Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- RoboPianist: Dexterous Piano Playing with Deep Reinforcement Learning [61.10744686260994]
We introduce RoboPianist, a system that enables simulated anthropomorphic hands to learn an extensive repertoire of 150 piano pieces.
We additionally introduce an open-sourced environment, benchmark of tasks, interpretable evaluation metrics, and open challenges for future study.
arXiv Detail & Related papers (2023-04-09T03:53:05Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- NeuroCERIL: Robotic Imitation Learning via Hierarchical Cause-Effect Reasoning in Programmable Attractor Neural Networks [2.0646127669654826]
We present NeuroCERIL, a brain-inspired neurocognitive architecture that uses a novel hypothetico-deductive reasoning procedure.
We show that NeuroCERIL can learn various procedural skills in a simulated robotic imitation learning domain.
We conclude that NeuroCERIL is a viable neural model of human-like imitation learning.
arXiv Detail & Related papers (2022-11-11T19:56:11Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proved to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.