Learning robot motor skills with mixed reality
- URL: http://arxiv.org/abs/2203.11324v1
- Date: Mon, 21 Mar 2022 20:25:40 GMT
- Title: Learning robot motor skills with mixed reality
- Authors: Eric Rosen, Sreehari Rammohan, Devesh Jha
- Abstract summary: Mixed Reality (MR) has recently shown great success as an intuitive interface for enabling end-users to teach robots.
We propose a learning framework where end-users teach robots a) motion demonstrations, b) task constraints, c) planning representations, and d) object information.
We hypothesize that conveying this world knowledge will be intuitive with an MR interface, and that a sample-efficient motor skill learning framework will enable robots to effectively solve complex tasks.
- Score: 0.8121462458089141
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mixed Reality (MR) has recently shown great success as an intuitive interface
for enabling end-users to teach robots. Related works have used MR interfaces
to communicate robot intents and beliefs to a co-located human, as well as
developed algorithms for taking multi-modal human input and learning complex
motor behaviors. Even with these successes, enabling end-users to teach robots
complex motor tasks still poses a challenge because end-user communication is
highly task dependent and world knowledge is highly varied. We propose a
learning framework where end-users teach robots a) motion demonstrations, b)
task constraints, c) planning representations, and d) object information, all
of which are integrated into a single motor skill learning framework based on
Dynamic Movement Primitives (DMPs). We hypothesize that conveying this world
knowledge will be intuitive with an MR interface, and that a sample-efficient
motor skill learning framework which incorporates varied modalities of world
knowledge will enable robots to effectively solve complex tasks.
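The abstract's motor-skill representation, Dynamic Movement Primitives (DMPs), can be made concrete with a short sketch. The following Python snippet is a minimal, illustrative 1-D discrete DMP (canonical system, spring-damper transformation system, RBF forcing term learned from one demonstration). The class name DMP1D, the gains alpha=25, beta=alpha/4, alpha_x=3, and the basis-width heuristic are assumptions made for illustration only; they are not taken from the paper's implementation.
```python
# Minimal 1-D discrete DMP sketch (illustrative assumption, not the authors' code).
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, alpha_x=3.0):
        self.n_basis = n_basis
        self.alpha = alpha                # spring gain of the transformation system
        self.beta = alpha / 4.0           # damper gain (critically damped choice)
        self.alpha_x = alpha_x            # decay rate of the canonical system
        # RBF centers/widths spaced along the exponentially decaying phase variable x.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = n_basis ** 1.5 / self.c / alpha_x
        self.w = np.zeros(n_basis)
        self.y0, self.g, self.tau = 0.0, 1.0, 1.0

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from a single demonstrated 1-D trajectory."""
        y_demo = np.asarray(y_demo, dtype=float)
        self.y0, self.g = y_demo[0], y_demo[-1]
        self.tau = dt * len(y_demo)
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        t = np.arange(len(y_demo)) * dt
        x = np.exp(-self.alpha_x * t / self.tau)   # canonical phase along the demo
        # Forcing term the demonstration implies under the transformation system.
        f_target = self.tau ** 2 * ydd - self.alpha * (
            self.beta * (self.g - y_demo) - self.tau * yd)
        s = x * (self.g - self.y0)
        psi = np.exp(-self.h[None, :] * (x[:, None] - self.c[None, :]) ** 2)
        # Locally weighted regression: one scalar weight per basis function.
        self.w = (psi * (s * f_target)[:, None]).sum(axis=0) / (
            (psi * (s ** 2)[:, None]).sum(axis=0) + 1e-10)
        return self

    def rollout(self, dt, goal=None):
        """Integrate the DMP forward; a new goal reuses the learned movement shape."""
        g = self.g if goal is None else goal
        y, z, x = self.y0, 0.0, 1.0
        traj = []
        for _ in range(int(round(self.tau / dt))):
            psi = np.exp(-self.h * (x - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            zd = (self.alpha * (self.beta * (g - y) - z) + f) / self.tau
            yd = z / self.tau
            xd = -self.alpha_x * x / self.tau
            z += zd * dt
            y += yd * dt
            x += xd * dt
            traj.append(y)
        return np.array(traj)

# Example: encode a stand-in demonstration, then replay it toward a new goal.
demo = np.sin(np.linspace(0.0, np.pi / 2, 200))   # synthetic reach from 0 to 1
dmp = DMP1D().fit(demo, dt=0.005)
replay = dmp.rollout(dt=0.005, goal=1.5)          # same shape, scaled to goal 1.5
```
The rollout to a new goal illustrates why DMPs suit the paper's setting: one demonstration fixes the movement shape, while goals, constraints, and object information supplied through an MR interface could in principle reparameterize it.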
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, follow language instructions from people, and acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that spire outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
- Generalized Robot Learning Framework [10.03174544844559]
We present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments.
We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots.
arXiv Detail & Related papers (2024-09-18T15:34:31Z)
- Continual Skill and Task Learning via Dialogue [3.3511259017219297]
Continual and interactive robot learning is a challenging problem because the robot operates alongside human users.
We present a framework for robots to query and learn visuo-motor robot skills and task relevant information via natural language dialog interactions with human users.
arXiv Detail & Related papers (2024-09-05T01:51:54Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of traversing diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models [23.945922720555146]
We propose a system to achieve incremental learning of complex behavior from natural interaction.
We integrate the system in the robot cognitive architecture of the humanoid robot ARMAR-6.
arXiv Detail & Related papers (2023-09-08T13:29:05Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a practical sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Learning and Reasoning for Robot Dialog and Navigation Tasks [44.364322669414776]
We develop algorithms for robot task completion that exploit the complementary strengths of reinforcement learning and probabilistic reasoning techniques.
The robots learn from trial-and-error experiences to augment their declarative knowledge base.
We have implemented and evaluated the developed algorithms using mobile robots conducting dialog and navigation tasks.
arXiv Detail & Related papers (2020-05-20T03:20:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.