Robot Skill Generalization via Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models
- URL: http://arxiv.org/abs/2310.15059v1
- Date: Mon, 23 Oct 2023 16:03:23 GMT
- Title: Robot Skill Generalization via Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models
- Authors: Iman Nematollahi, Kirill Yankov, Wolfram Burgard, and Tim Welschehold
- Abstract summary: A long-standing challenge for a robotic manipulation system is adapting and generalizing its acquired motor skills to unseen environments.
We tackle this challenge by employing hybrid skill models that integrate the imitation and reinforcement paradigms.
We show that our method enables a robot to achieve significant zero-shot generalization to novel environments and to refine skills in the target environments faster than learning from scratch.
- Score: 21.13906762261418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A long-standing challenge for a robotic manipulation system operating in
real-world scenarios is adapting and generalizing its acquired motor skills to
unseen environments. We tackle this challenge by employing hybrid skill models
that integrate the imitation and reinforcement paradigms, exploring how the
learning and adaptation of a skill, along with its grounding in the scene
through a learned keypoint, can facilitate such generalization. To that end, we
develop the Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models
(KIS-GMM) approach, which learns to predict the reference of a dynamical system
within the scene as a 3D keypoint, leveraging visual observations obtained
through the robot's physical interactions during skill learning. Through
comprehensive evaluations in both simulated and real-world environments, we
show that our method enables a robot to achieve significant zero-shot
generalization to novel environments and to refine skills in the target
environments faster than learning from scratch. Importantly, this is achieved
without the need for new ground-truth data. Moreover, our method effectively
copes with scene displacements.
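
The abstract describes a GMM-encoded dynamical system whose reference point is grounded in the scene as a learned 3D keypoint. The sketch below is a rough illustration of that idea, not the paper's implementation: it uses Gaussian mixture regression over joint (position, velocity) components, expresses the robot's position relative to a predicted keypoint so the skill follows the keypoint when the scene moves, and assumes the keypoint is given (the paper learns it from visual observations). All class and parameter names are illustrative.

```python
import numpy as np

class GMMSkill:
    """Toy GMM dynamical system: maps position to a commanded velocity
    via Gaussian mixture regression, with the attractor anchored at a
    3D keypoint. Illustrative only; not the KIS-GMM implementation."""

    def __init__(self, priors, means, covs):
        # Each component models the joint (position, velocity) in R^6:
        # means[k] = [mu_x (3,), mu_v (3,)], covs[k] is 6x6.
        self.priors = np.asarray(priors)
        self.means = np.asarray(means)
        self.covs = np.asarray(covs)

    def velocity(self, x, keypoint):
        # Express the position relative to the keypoint so the learned
        # motion generalizes when the scene (and thus keypoint) moves.
        x_rel = x - keypoint
        weights, vels = [], []
        for pi, mu, cov in zip(self.priors, self.means, self.covs):
            mu_x, mu_v = mu[:3], mu[3:]
            Sxx, Svx = cov[:3, :3], cov[3:, :3]
            diff = x_rel - mu_x
            # Responsibility of this component for the current position.
            w = pi * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff))
            w /= np.sqrt(np.linalg.det(2 * np.pi * Sxx))
            weights.append(w)
            # Conditional mean of velocity given position (GMR).
            vels.append(mu_v + Svx @ np.linalg.solve(Sxx, diff))
        weights = np.asarray(weights)
        weights /= weights.sum()
        return sum(w * v for w, v in zip(weights, vels))

# One-component toy skill whose cross-covariance pulls the robot
# toward the keypoint (the dynamical system's reference).
skill = GMMSkill(
    priors=[1.0],
    means=[np.zeros(6)],
    covs=[np.block([[np.eye(3), -0.5 * np.eye(3)],
                    [-0.5 * np.eye(3), np.eye(3)]])],
)
x = np.array([0.2, 0.0, 0.1])    # current end-effector position
kp = np.array([0.5, 0.0, 0.3])   # predicted 3D keypoint
v = skill.velocity(x, kp)        # velocity points from x toward kp
```

Because the skill is parameterized relative to the keypoint, moving `kp` displaces the whole velocity field accordingly, which is the generalization mechanism the abstract refers to; in the paper, a reinforcement learner additionally refines the mixture parameters in the target environment.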
Related papers
- SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation [82.61572106180705]
This paper presents a unified approach using vision-language models (VLMs) to improve keypoint prediction across various garment categories.
We created a large-scale synthetic dataset using advanced simulation techniques, allowing scalable training without extensive real-world data.
Experimental results indicate that the VLM-based method significantly enhances keypoint detection accuracy and task success rates.
arXiv Detail & Related papers (2024-09-26T17:26:16Z)
- Generalized Robot Learning Framework [10.03174544844559]
We present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments.
We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots.
arXiv Detail & Related papers (2024-09-18T15:34:31Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- ArK: Augmented Reality with Knowledge Interactive Emergent Ability [115.72679420999535]
We develop an infinite agent that learns to transfer knowledge memory from general foundation models to novel domains.
The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK)
We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes.
arXiv Detail & Related papers (2023-05-01T17:57:01Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding [34.19666841489646]
We show how a robot can autonomously discover novel semantic classes and improve accuracy on known classes when exploring an unknown environment.
We develop a general framework for mapping and clustering that we then use to generate a self-supervised learning signal to update a semantic segmentation model.
In particular, we show how clustering parameters can be optimized during deployment and that fusion of multiple observation modalities improves novel object discovery compared to prior work.
arXiv Detail & Related papers (2022-06-21T18:41:51Z)
- SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework called SAGCI-system towards achieving the above four requirements.
Our system first takes the raw point clouds gathered by a camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment represented as a URDF.
The robot then uses interactive perception to interact with the environment, verifying and modifying the URDF online.
arXiv Detail & Related papers (2021-11-29T16:53:49Z)
- Robot Skill Adaptation via Soft Actor-Critic Gaussian Mixture Models [29.34375999491465]
A core challenge for an autonomous agent acting in the real world is to adapt its repertoire of skills to cope with its noisy perception and dynamics.
To scale learning of skills to long-horizon tasks, robots should be able to learn and later refine their skills in a structured manner.
We propose SAC-GMM, a novel hybrid approach that learns robot skills through a dynamical system and adapts the learned skills in their own trajectory distribution space.
arXiv Detail & Related papers (2021-11-25T15:36:11Z)
- Stronger Generalization Guarantees for Robot Learning by Combining Generative Models and Real-World Data [5.935761705025763]
We provide a framework for deriving generalization guarantees by leveraging a finite dataset of real-world environments.
We demonstrate our approach on two simulated systems with nonlinear/hybrid dynamics and rich sensing modalities.
arXiv Detail & Related papers (2021-11-16T20:13:10Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.