Geometrically-Aware One-Shot Skill Transfer of Category-Level Objects
- URL: http://arxiv.org/abs/2503.15371v1
- Date: Wed, 19 Mar 2025 16:10:17 GMT
- Title: Geometrically-Aware One-Shot Skill Transfer of Category-Level Objects
- Authors: Cristiana de Farias, Luis Figueredo, Riddhiman Laha, Maxime Adjigble, Brahim Tamadazte, Rustam Stolkin, Sami Haddadin, Naresh Marturi
- Abstract summary: We propose a new skill transfer framework, which enables a robot to transfer complex object manipulation skills and constraints from a single human demonstration. Our approach addresses the challenge of skill acquisition and task execution by deriving geometric representations from demonstrations focusing on object-centric interactions. We validate the effectiveness and adaptability of our approach through extensive experiments, demonstrating successful skill transfer and task execution in diverse real-world environments without requiring additional training.
- Score: 18.978751760636563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robotic manipulation of unfamiliar objects in new environments is challenging and requires extensive training or laborious pre-programming. We propose a new skill transfer framework, which enables a robot to transfer complex object manipulation skills and constraints from a single human demonstration. Our approach addresses the challenge of skill acquisition and task execution by deriving geometric representations from demonstrations focusing on object-centric interactions. By leveraging the Functional Maps (FM) framework, we efficiently map interaction functions between objects and their environments, allowing the robot to replicate task operations across objects of similar topologies or categories, even when they have significantly different shapes. Additionally, our method incorporates a Task-Space Imitation Algorithm (TSIA) which generates smooth, geometrically-aware robot paths to ensure the transferred skills adhere to the demonstrated task constraints. We validate the effectiveness and adaptability of our approach through extensive experiments, demonstrating successful skill transfer and task execution in diverse real-world environments without requiring additional training.
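At the heart of this transfer is the Functional Maps idea: a correspondence between two shapes is encoded as a small matrix acting on spectral (Laplace-Beltrami) coefficients, so a function defined on the source shape, such as a demonstrated interaction region, can be carried onto a target shape of different geometry. The sketch below is a minimal, illustrative rendering of that mechanism in NumPy; the random stand-in eigenbases, descriptors, and the plain least-squares fit are assumptions for the example, not the authors' actual pipeline.

```python
# Minimal sketch of function transfer with a functional map (FM), assuming
# precomputed spectral bases for a source and a target shape. All names and
# shapes are illustrative stand-ins.
import numpy as np

def fit_functional_map(Phi_src, Phi_tgt, F_src, F_tgt):
    """Least-squares estimate of the map C aligning descriptor functions
    expressed in the two spectral bases: C A ~= B."""
    A = np.linalg.pinv(Phi_src) @ F_src   # (k, d) source spectral coefficients
    B = np.linalg.pinv(Phi_tgt) @ F_tgt   # (k, d) target spectral coefficients
    # Solve min_C ||C A - B||_F^2  =>  C = B A^+  (a small k x k matrix)
    return B @ np.linalg.pinv(A)

def transfer_function(C, Phi_src, Phi_tgt, f_src):
    """Map a scalar function (e.g. a demonstrated interaction region)
    from the source shape onto the target shape."""
    a = np.linalg.pinv(Phi_src) @ f_src   # analysis: project into source basis
    return Phi_tgt @ (C @ a)              # synthesis: reconstruct on target

# Toy usage with random stand-ins for eigenbases and descriptors.
rng = np.random.default_rng(0)
n_src, n_tgt, k, d = 500, 650, 30, 10
Phi_src = rng.standard_normal((n_src, k))
Phi_tgt = rng.standard_normal((n_tgt, k))
F_src = rng.standard_normal((n_src, d))
F_tgt = rng.standard_normal((n_tgt, d))
C = fit_functional_map(Phi_src, Phi_tgt, F_src, F_tgt)
grasp_region_src = rng.random(n_src)    # demonstrated interaction function
grasp_region_tgt = transfer_function(C, Phi_src, Phi_tgt, grasp_region_src)
print(grasp_region_tgt.shape)           # (650,)
```

In practice the bases would come from the Laplace-Beltrami eigendecomposition of each mesh, and C would typically be regularized (e.g., via commutativity with the Laplacians) rather than fit by plain least squares.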
Related papers
- Few-Shot Vision-Language Action-Incremental Policy Learning [55.07841353049953]
Transformer-based robotic manipulation methods utilize multi-view spatial representations and language instructions to learn robot motion trajectories.
Existing methods lack the capability for continuous learning on new tasks with only a few demonstrations.
We develop a Task-prOmpt graPh evolutIon poliCy (TOPIC) to address these issues.
arXiv Detail & Related papers (2025-04-22T01:30:47Z)
- Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids [61.033745979145536]
This work investigates the key challenges in applying reinforcement learning to solve a collection of contact-rich manipulation tasks on a humanoid embodiment. Our main contributions include an automated real-to-sim tuning module that brings the simulated environment closer to the real world. We show promising results on three humanoid dexterous manipulation tasks, with ablation studies on each technique.
arXiv Detail & Related papers (2025-02-27T18:59:52Z)
- Semantic-Geometric-Physical-Driven Robot Manipulation Skill Transfer via Skill Library and Tactile Representation [6.324290412766366]
A skill library framework based on knowledge graphs endows robots with high-level skill awareness and spatial semantic understanding.
At the motion level, an adaptive trajectory transfer method is developed using the A* algorithm and the skill library.
At the physical level, we introduce an adaptive contour extraction and posture perception method based on tactile perception.
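The motion-level step above leans on A*, which at its core is best-first graph search with an admissible heuristic. A minimal grid version is sketched below; the 4-connected occupancy grid, unit step costs, and Manhattan heuristic are illustrative assumptions rather than this paper's formulation.

```python
# Generic grid A* of the kind an adaptive trajectory-transfer step could
# build on; grid, costs, and heuristic are assumptions, not the paper's.
import heapq
from itertools import count

def astar(grid, start, goal):
    """4-connected A* over a 0/1 occupancy grid; returns a cell path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()                        # tiebreaker keeps heap entries comparable
    frontier = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node == goal:                 # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_cost[node] + 1    # unit step cost
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(frontier, (ng + h(nb), next(tie), nb))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row via (1, 2)
```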
arXiv Detail & Related papers (2024-11-18T16:42:07Z)
- Twisting Lids Off with Two Hands [82.21668778600414]
We show how policies trained in simulation can be effectively and efficiently transferred to the real world.
Specifically, we consider the problem of twisting lids of various bottle-like objects with two hands.
This is the first sim-to-real RL system that enables such capabilities on bimanual multi-fingered hands.
arXiv Detail & Related papers (2024-03-04T18:59:30Z)
- Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks [37.73239471412444]
We introduce Active Task Randomization (ATR), an approach that learns robust skills through the unsupervised generation of training tasks.
ATR selects suitable tasks, which consist of an initial environment state and manipulation goal, for learning robust skills by balancing the diversity and feasibility of the tasks.
We demonstrate that the learned skills can be composed by a task planner to solve unseen sequential manipulation problems based on visual inputs.
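The diversity/feasibility balance driving task selection can be pictured as scoring each candidate task by a weighted sum of novelty relative to past tasks and predicted feasibility. The snippet below is a hypothetical stand-in for that idea; the embedding space, novelty measure, and feasibility model are all assumptions, not ATR's actual objective.

```python
# Hedged sketch of ATR-style task selection: trade off novelty against
# predicted feasibility. Scoring functions are hypothetical stand-ins.
import numpy as np

def select_task(candidates, seen_embeddings, feasibility, alpha=0.5):
    """Pick the candidate task embedding that best balances novelty
    against predicted feasibility (`feasibility` maps to [0, 1])."""
    def novelty(z):
        if not seen_embeddings:
            return 1.0
        d = min(np.linalg.norm(z - s) for s in seen_embeddings)
        return 1.0 - np.exp(-d)          # far from past tasks -> more diverse
    scores = [alpha * novelty(z) + (1 - alpha) * feasibility(z) for z in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(1)
cands = [rng.standard_normal(4) for _ in range(8)]        # candidate task params
seen = [rng.standard_normal(4) for _ in range(3)]         # previously trained tasks
feas = lambda z: 1.0 / (1.0 + np.exp(np.linalg.norm(z)))  # toy feasibility model
print(select_task(cands, seen, feas))
```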
arXiv Detail & Related papers (2022-11-11T11:24:55Z)
- Learning Tool Morphology for Contact-Rich Manipulation Tasks with Differentiable Simulation [27.462052737553055]
We present an end-to-end framework to automatically learn tool morphology for contact-rich manipulation tasks by leveraging differentiable physics simulators.
In our approach, we only need to define the objective with respect to task performance, and we learn a robust morphology by randomizing the task variations.
We demonstrate the effectiveness of our method for designing new tools in several scenarios such as winding ropes, flipping a box and pushing peas onto a scoop in simulation.
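The underlying loop is simple: differentiate a task loss with respect to tool shape parameters while randomizing the task each step, so the learned morphology is robust across variations. The toy sketch below substitutes an analytic stand-in "simulator" and finite-difference gradients for a differentiable physics engine; all names and functions are illustrative assumptions.

```python
# Toy sketch of learning tool morphology by descending a task loss over
# shape parameters under randomized task variations. The "simulator" is a
# hypothetical analytic stand-in, not a differentiable physics engine.
import numpy as np

def task_loss(theta, task):
    # Hypothetical stand-in: how far the tool with params `theta` leaves
    # the object from the goal in this task variation.
    return np.sum((np.tanh(theta) * task["scale"] - task["goal"]) ** 2)

def grad_fd(f, theta, eps=1e-5):
    """Central finite differences, standing in for simulator gradients."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
theta = rng.standard_normal(3)       # tool morphology parameters
for step in range(200):              # randomize the task each step
    task = {"scale": rng.uniform(0.5, 1.5), "goal": rng.uniform(-0.5, 0.5, 3)}
    theta -= 0.1 * grad_fd(lambda t: task_loss(t, task), theta)
print(theta)                         # morphology robust across task variations
```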
arXiv Detail & Related papers (2022-11-04T00:57:36Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Learning Relative Interactions through Imitation [0.0]
We show that a simple network, with relatively little training data, is able to reach very good performance on the fixed-pose task.
We also explore the effect of ambiguities in the sensor readings, in particular caused by symmetries in the target object, on the behaviour of the learned controller.
arXiv Detail & Related papers (2021-09-24T15:18:34Z)
- SKID RAW: Skill Discovery from Raw Trajectories [23.871402375721285]
It is desirable to only demonstrate full task executions instead of all individual skills.
We propose a novel approach that simultaneously learns to segment trajectories into recurring patterns and the skills needed to accomplish them.
The approach learns a skill conditioning that can be used to understand possible sequences of skills.
arXiv Detail & Related papers (2021-03-26T17:27:13Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
- Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees [63.31965375413414]
We propose to solve multi-task problems through learning structured policies from human demonstrations.
Our structured policy is inspired by RMPflow, a framework for combining subtask policies on different spaces.
We derive an end-to-end learning objective function that is suitable for the multi-task problem.
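RMPflow-style combination can be summarized as a metric-weighted least-squares resolution: each subtask supplies a desired acceleration and an importance metric in its own space, and these are pulled back through the subtask Jacobians and resolved jointly in configuration space. The sketch below shows that resolution only, omitting curvature terms and the transform-tree structure; it is an illustrative simplification, not the learned policy from the paper.

```python
# Minimal sketch of metric-weighted subtask-policy combination in the
# spirit of RMPflow; curvature terms and the tree structure are omitted.
import numpy as np

def combine_subtask_policies(jacobians, accels, metrics):
    """q_ddot = (sum J^T M J)^+ (sum J^T M a): resolve subtask accelerations
    jointly in configuration space, weighted by their metrics."""
    n = jacobians[0].shape[1]
    A = np.zeros((n, n))
    b = np.zeros(n)
    for J, a, M in zip(jacobians, accels, metrics):
        A += J.T @ M @ J      # pull back each metric to configuration space
        b += J.T @ M @ a      # pull back each weighted desired acceleration
    return np.linalg.pinv(A) @ b

# Toy example: a 3-DoF system with two 2-D subtasks pulling different ways.
J1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
J2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, -1.0])
M1, M2 = np.eye(2), 2.0 * np.eye(2)   # second subtask weighted more heavily
print(combine_subtask_policies([J1, J2], [a1, a2], [M1, M2]))
```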
arXiv Detail & Related papers (2020-12-24T22:46:22Z)
- Learning compositional models of robot skills for task and motion planning [39.36562555272779]
We learn to use sensorimotor primitives to solve complex long-horizon manipulation problems.
We use state-of-the-art methods for active learning and sampling.
We evaluate our approach both in simulation and in the real world by measuring the quality of the selected primitive actions.
arXiv Detail & Related papers (2020-06-08T20:45:34Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)