Growing from Exploration: A self-exploring framework for robots based on
foundation models
- URL: http://arxiv.org/abs/2401.13462v1
- Date: Wed, 24 Jan 2024 14:04:08 GMT
- Title: Growing from Exploration: A self-exploring framework for robots based on
foundation models
- Authors: Shoujie Li and Ran Yu and Tong Wu and JunWen Zhong and Xiao-Ping Zhang
and Wenbo Ding
- Abstract summary: We propose a framework named GExp, which enables robots to explore and learn autonomously without human intervention.
Inspired by the way that infants interact with the world, GExp encourages robots to understand and explore the environment with a series of self-generated tasks.
- Score: 13.250831101705694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent robots are the ultimate goal of the robotics field. Existing works
leverage learning-based or optimization-based methods to accomplish
human-defined tasks. However, the challenge of enabling robots to explore
various environments autonomously remains unresolved. In this work, we propose
a framework named GExp, which enables robots to explore and learn autonomously
without human intervention. To achieve this goal, we devise modules including
self-exploration, knowledge-base-building, and closed-loop feedback based on
foundation models. Inspired by the way that infants interact with the world,
GExp encourages robots to understand and explore the environment with a series
of self-generated tasks. During the process of exploration, the robot will
acquire skills from beneficial experiences that are useful in the future. GExp
provides robots with the ability to solve complex tasks through
self-exploration. GExp is independent of prior interactive knowledge and human
intervention, allowing it to adapt directly to new scenarios, unlike previous
studies that rely on in-context examples for few-shot learning.
In addition, we propose a workflow for deploying a real-world robot system
with self-learned skills as an embodied assistant.
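The abstract names three modules (self-exploration, knowledge-base building, and closed-loop feedback driven by foundation models) but gives no implementation detail here. The following is a minimal, hypothetical Python sketch of such a self-exploration loop; the model and environment interfaces and the propose_tasks/write_skill/verify/revise helpers are illustrative assumptions, not the authors' released code.

# Hypothetical sketch of a GExp-style self-exploration loop.
# The model/env interfaces and helper names below are illustrative assumptions,
# not the API released with the paper.
from dataclasses import dataclass, field


@dataclass
class Skill:
    task: str
    code: str           # executable plan or policy produced during exploration
    description: str    # natural-language summary stored for later retrieval


@dataclass
class KnowledgeBase:
    skills: list[Skill] = field(default_factory=list)

    def add(self, skill: Skill) -> None:
        self.skills.append(skill)

    def retrieve(self, task: str) -> list[Skill]:
        # Naive keyword retrieval; a real system would likely use embeddings.
        words = set(task.lower().split())
        return [s for s in self.skills if words & set(s.description.lower().split())]


def explore(model, env, kb: KnowledgeBase, rounds: int = 10) -> KnowledgeBase:
    """Self-exploration: propose tasks, attempt them, keep verified skills."""
    for _ in range(rounds):
        scene = env.observe()                                  # describe the environment
        tasks = model.propose_tasks(scene, kb.skills)          # self-generated tasks
        for task in tasks:
            plan = model.write_skill(task, kb.retrieve(task))  # reuse prior skills
            result = env.execute(plan)
            feedback = model.verify(task, result)              # closed-loop feedback
            if not feedback.success:                           # one retry with feedback
                plan = model.revise(plan, feedback)
                feedback = model.verify(task, env.execute(plan))
            if feedback.success:
                kb.add(Skill(task, plan, feedback.summary))    # grow the skill library
    return kb

The design choice reflected here is that only verified skills are stored, each with a natural-language description so later tasks can retrieve and reuse them, which is how the abstract describes capability growing without human intervention.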
Related papers
- Correspondence learning between morphologically different robots via
task demonstrations [2.1374208474242815]
We propose a method to learn correspondences among two or more robots that may have different morphologies.
A fixed-base manipulator robot with joint control and a differential drive mobile robot can be addressed within the proposed framework.
We provide a proof-of-concept realization of correspondence learning between a real manipulator robot and a simulated mobile robot.
arXiv Detail & Related papers (2023-10-20T12:42:06Z)
- Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real-world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (see the sketch after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task.
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
- Learning Locomotion Skills in Evolvable Robots [10.167123492952694]
We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
arXiv Detail & Related papers (2020-10-19T14:01:50Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- A Survey of Behavior Learning Applications in Robotics -- State of the Art and Perspectives [44.45953630612019]
Recent success of machine learning in many domains has been overwhelming.
We will give a broad overview of behaviors that have been learned and used on real robots.
arXiv Detail & Related papers (2019-06-05T07:54:33Z)
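The entry on learning reward functions by observing humans describes the reward as a distance to a goal in an embedding space trained with a time-contrastive objective. A minimal sketch of that reward computation is below, assuming an already-trained encoder phi; the function name and interface are illustrative, not the paper's released code.

# Minimal sketch: reward as negative distance to a goal image in a learned
# embedding space. `phi` is assumed to be an encoder trained with a
# time-contrastive objective; this is not the paper's implementation.
import torch


def embedding_reward(phi: torch.nn.Module,
                     obs: torch.Tensor,
                     goal: torch.Tensor) -> torch.Tensor:
    """Return the negative L2 distance between current and goal embeddings."""
    with torch.no_grad():
        z_obs = phi(obs.unsqueeze(0)).squeeze(0)    # embed current observation
        z_goal = phi(goal.unsqueeze(0)).squeeze(0)  # embed goal observation
    return -torch.norm(z_obs - z_goal, p=2)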
This list is automatically generated from the titles and abstracts of the papers on this site.