LodeStar: Long-horizon Dexterity via Synthetic Data Augmentation from Human Demonstrations
- URL: http://arxiv.org/abs/2508.17547v1
- Date: Sun, 24 Aug 2025 22:57:16 GMT
- Title: LodeStar: Long-horizon Dexterity via Synthetic Data Augmentation from Human Demonstrations
- Authors: Weikang Wan, Jiawei Fu, Xiaodi Yuan, Yifeng Zhu, Hao Su
- Abstract summary: Long-horizon manipulation tasks require both physical dexterity and seamless sequencing of manipulation skills. We propose a learning framework and system, LodeStar, that automatically decomposes task demonstrations into semantically meaningful skills. Our approach significantly improves task performance and robustness compared to previous baselines.
- Score: 20.300415135664718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing robotic systems capable of robustly executing long-horizon manipulation tasks with human-level dexterity is challenging, as such tasks require both physical dexterity and seamless sequencing of manipulation skills while robustly handling environment variations. While imitation learning offers a promising approach, acquiring comprehensive datasets is resource-intensive. In this work, we propose LodeStar, a learning framework and system that automatically decomposes task demonstrations into semantically meaningful skills using off-the-shelf foundation models, and generates diverse synthetic demonstration datasets from a few human demos through reinforcement learning. These sim-augmented datasets enable robust skill training, with a Skill Routing Transformer (SRT) policy effectively chaining the learned skills together to execute complex long-horizon manipulation tasks. Experimental evaluations on three challenging real-world long-horizon dexterous manipulation tasks demonstrate that our approach significantly improves task performance and robustness compared to previous baselines. Videos are available at lodestar-robot.github.io.
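As a rough illustration, the pipeline the abstract describes (decompose demonstrations into skills, augment each skill with synthetic data in simulation, then chain skills with a routing policy) can be sketched as follows. All names here are hypothetical, not the authors' API, and the foundation-model segmentation and RL-based augmentation are replaced by trivial stand-ins:

```python
# Hypothetical sketch of a LodeStar-style pipeline; illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Skill:
    name: str
    demos: List[list] = field(default_factory=list)      # seed human demos
    synthetic: List[list] = field(default_factory=list)  # sim-augmented rollouts


def decompose(demo: list, boundaries: List[int]) -> List[list]:
    """Split one long-horizon demo into skill segments at given cut points.
    (The paper uses foundation models to find semantically meaningful cuts;
    here the boundaries are supplied by hand.)"""
    cuts = [0] + boundaries + [len(demo)]
    return [demo[a:b] for a, b in zip(cuts[:-1], cuts[1:])]


def augment(segment: list, n: int) -> List[list]:
    """Stand-in for RL-based synthetic data generation in simulation:
    simply replicates the segment n times."""
    return [list(segment) for _ in range(n)]


def route(skills: List[Skill]) -> List[str]:
    """Stand-in for the Skill Routing Transformer: returns the skill
    execution order (here, simply the given sequence)."""
    return [s.name for s in skills]


# Toy usage: one 9-step demo cut at steps 3 and 6 -> three skills.
demo = list(range(9))
segments = decompose(demo, [3, 6])
skills = [
    Skill(name=f"skill_{i}", demos=[seg], synthetic=augment(seg, 5))
    for i, seg in enumerate(segments)
]
plan = route(skills)
```

The point of the sketch is only the data flow: a handful of human demos seed each skill, simulation multiplies them, and a separate policy decides the skill sequence at execution time.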
Related papers
- Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids [61.033745979145536]
This work investigates the key challenges in applying reinforcement learning to solve a collection of contact-rich manipulation tasks on a humanoid embodiment. Our main contributions include an automated real-to-sim tuning module that brings the simulated environment closer to the real world. We show promising results on three humanoid dexterous manipulation tasks, with ablation studies on each technique.
arXiv Detail & Related papers (2025-02-27T18:59:52Z)
- SKIL: Semantic Keypoint Imitation Learning for Generalizable Data-efficient Manipulation [12.720334726151739]
We propose a framework which automatically obtains semantic keypoints with the help of vision foundation models. SKIL forms the descriptor of semantic keypoints that enables efficient imitation learning of complex robotic tasks. SKIL achieves a mean success rate of 70% with as few as 30 demonstrations.
arXiv Detail & Related papers (2025-01-24T11:11:53Z)
- Single-Shot Learning of Stable Dynamical Systems for Long-Horizon Manipulation Tasks [48.54757719504994]
This paper focuses on improving task success rates while reducing the amount of training data needed.
Our approach introduces a novel method that segments long-horizon demonstrations into discrete steps defined by waypoints and subgoals.
We validate our approach through both simulation and real-world experiments, demonstrating effective transfer from simulation to physical robotic platforms.
arXiv Detail & Related papers (2024-10-01T19:49:56Z)
- VITAL: Interactive Few-Shot Imitation Learning via Visual Human-in-the-Loop Corrections [10.49712834719005]
Imitation Learning (IL) has emerged as a powerful approach in robotics, allowing robots to acquire new skills by mimicking human actions. Despite its potential, the data collection process for IL remains a significant challenge due to logistical difficulties and high costs associated with obtaining high-quality demonstrations. We propose a large-scale data generation from a handful of demonstrations through data augmentation in simulation.
arXiv Detail & Related papers (2024-07-30T23:29:47Z)
- Benchmarking Offline Reinforcement Learning on Real-Robot Hardware [35.29390454207064]
Dexterous manipulation in particular remains an open problem in its general form.
We propose a benchmark including a large collection of data for offline learning from a dexterous manipulation platform on two tasks.
We evaluate prominent open-sourced offline reinforcement learning algorithms on the datasets and provide a reproducible experimental setup for offline reinforcement learning on real systems.
arXiv Detail & Related papers (2023-07-28T17:29:49Z)
- Hindsight States: Blending Sim and Real Task Elements for Efficient Reinforcement Learning [61.3506230781327]
In robotics, one approach to generate training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Hierarchical Few-Shot Imitation with Skill Transition Models [66.81252581083199]
Few-shot Imitation with Skill Transition Models (FIST) is an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks.
We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments.
arXiv Detail & Related papers (2021-07-19T15:56:01Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.