What AIs are not Learning (and Why): Bio-Inspired Foundation Models for Robots
- URL: http://arxiv.org/abs/2404.04267v9
- Date: Thu, 4 Jul 2024 15:12:26 GMT
- Title: What AIs are not Learning (and Why): Bio-Inspired Foundation Models for Robots
- Authors: Mark Stefik
- Abstract summary: Current smart robots are created using manual programming, mathematical models, planning frameworks, and reinforcement learning.
The high cost of bipedal multi-sensory robots ("bodies") is a significant obstacle to both research and deployment.
This paper focuses on what human-compatible service robots need to know.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is hard to make robots (including telerobots) that are useful, and harder to make autonomous robots that are robust and general. Current smart robots are created using manual programming, mathematical models, planning frameworks, and reinforcement learning. These methods do not lead to the leaps in performance and generality seen with deep learning, generative AI, and foundation models (FMs). Today's robots do not learn to provide home care, to be nursing assistants, or to do household chores nearly as well as people do. Addressing the aspirational opportunities of robot service applications requires improving how they are created. The high cost of bipedal multi-sensory robots ("bodies") is a significant obstacle for both research and deployment. A deeper issue is that mainstream FMs ("minds") do not support sensing, acting, and learning in context in the real world. They do not lead to robots that communicate well or collaborate. They do not lead to robots that try to learn by experimenting, by asking others, or by imitation learning as appropriate. They do not lead to robots that know enough to be deployed widely in service applications. This paper focuses on what human-compatible service robots need to know. It recommends developing experiential (aka "robotic") FMs for bootstrapping them.
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks in zero shot after pre-training, follow language instructions from people, and its ability to acquire new skills via fine-tuning.
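The $π_0$ summary above mentions a flow matching architecture for action generation. As a minimal, illustrative sketch of the flow-matching training objective in general (not the paper's actual VLM-based architecture), a model is trained to predict the velocity of a straight-line path from noise to data; the linear "network" and all names below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate(x0, x1, t):
    """Straight-line path from noise sample x0 to data sample x1 at time t in [0, 1]."""
    return (1.0 - t) * x0 + t * x1

def flow_matching_loss(W, x0, x1, t):
    """MSE between the predicted velocity and the target velocity x1 - x0."""
    xt = interpolate(x0, x1, t)
    # Stand-in "network": a linear map over the interpolated point and the time.
    inp = np.concatenate([xt, [t]])
    v_pred = W @ inp
    v_target = x1 - x0
    return float(np.mean((v_pred - v_target) ** 2))

dim = 4
W = rng.normal(size=(dim, dim + 1)) * 0.1
x0 = rng.normal(size=dim)          # noise sample
x1 = rng.normal(size=dim) + 2.0    # "data" sample (e.g., an action vector)
loss = flow_matching_loss(W, x0, x1, t=0.5)
print(round(loss, 4))
```

In a real system, W would be a neural network conditioned on vision and language inputs, and the loss would be minimized by gradient descent over many sampled (x0, x1, t) triples.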
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
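The hierarchical finding above can be illustrated with a toy sketch: a high-level policy emits subgoals at a coarse timescale, and a low-level policy tracks each subgoal with small corrective actions. Both policies here are hand-coded stand-ins (an assumption for illustration), not the benchmark's learned controllers:

```python
import numpy as np

def high_level(position, goal):
    """Pick a nearby subgoal in the direction of the final goal."""
    direction = np.sign(goal - position)
    return position + direction * min(2.0, abs(goal - position))

def low_level(position, subgoal, steps=4):
    """Track the subgoal with small corrective actions."""
    for _ in range(steps):
        position += 0.5 * (subgoal - position)
    return position

position, goal = 0.0, 10.0
for _ in range(20):                 # coarse high-level loop
    subgoal = high_level(position, goal)
    position = low_level(position, subgoal)
print(abs(goal - position) < 0.1)   # the hierarchy converges near the goal
```

The design point is the division of labor: the high level only needs to propose reachable subgoals, while robustness lives in the low-level tracker.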
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Growing from Exploration: A self-exploring framework for robots based on foundation models [13.250831101705694]
We propose a framework named GExp, which enables robots to explore and learn autonomously without human intervention.
Inspired by the way that infants interact with the world, GExp encourages robots to understand and explore the environment with a series of self-generated tasks.
arXiv Detail & Related papers (2024-01-24T14:04:08Z)
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR).
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
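The "do and undo" practice loop described above can be sketched in miniature: a forward policy practices the task while a backward policy resets the robot toward initial states, so training proceeds without manual resets. The 1-D state and hand-coded policies below are illustrative assumptions, not the MEDAL++ implementation:

```python
def forward_step(state):
    """'Do' policy: move toward the goal state (here, 10)."""
    return state + 1 if state < 10 else state

def backward_step(state):
    """'Undo' policy: move back toward the initial state (here, 0)."""
    return state - 1 if state > 0 else state

def practice(episodes, steps_per_episode):
    """Alternate forward and backward episodes, collecting transitions."""
    state, transitions = 0, []
    for ep in range(episodes):
        step = forward_step if ep % 2 == 0 else backward_step  # alternate do/undo
        for _ in range(steps_per_episode):
            nxt = step(state)
            transitions.append((state, nxt))
            state = nxt
    return state, transitions

final_state, data = practice(episodes=4, steps_per_episode=10)
print(final_state, len(data))  # ends back at the initial state, 40 transitions logged
```

In the full system, both policies are learned, and the reward is inferred from demonstrations rather than hand-specified as it implicitly is here.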
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- DayDreamer: World Models for Physical Robot Learning [142.11031132529524]
Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error to learn.
Many advances in robot learning rely on simulators.
In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without simulators.
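The core world-model idea behind Dreamer, learning a latent transition model from real interaction and then training the policy on "imagined" rollouts inside that model rather than on the real robot, can be sketched minimally. The linear dynamics and reward head below are illustrative stand-ins for learned neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, ACTION_DIM = 3, 2

# Learned components would normally be neural networks; linear stand-ins here.
A = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.5   # latent dynamics
B = rng.normal(size=(LATENT_DIM, ACTION_DIM)) * 0.5   # action effect
w_reward = rng.normal(size=LATENT_DIM)                # reward head

def imagine_rollout(z0, policy, horizon):
    """Roll the learned model forward without touching the real world."""
    z, total_reward = z0, 0.0
    for _ in range(horizon):
        a = policy(z)
        z = A @ z + B @ a          # predicted next latent state
        total_reward += float(w_reward @ z)
    return total_reward

zero_policy = lambda z: np.zeros(ACTION_DIM)
ret = imagine_rollout(rng.normal(size=LATENT_DIM), zero_policy, horizon=10)
print(ret)
```

Because imagined rollouts are cheap, the policy can be improved on thousands of such trajectories per real-world step, which is what makes learning online and without simulators feasible.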
arXiv Detail & Related papers (2022-06-28T17:44:48Z)
- Back to Reality for Imitation Learning [8.57914821832517]
Imitation learning, and robot learning in general, emerged due to breakthroughs in machine learning, rather than breakthroughs in robotics.
We believe that a better metric for real-world robot learning is time efficiency, which better models the true cost to humans.
arXiv Detail & Related papers (2021-11-25T02:03:52Z)
- Design and Development of Autonomous Delivery Robot [0.16863755729554888]
We present an autonomous mobile robot platform that delivers packages within the VNIT campus without any human intervention.
The entire pipeline of an autonomous robot working in outdoor environments is explained in this thesis.
arXiv Detail & Related papers (2021-03-16T17:57:44Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
arXiv Detail & Related papers (2020-08-24T18:04:50Z)
- A Survey of Behavior Learning Applications in Robotics -- State of the Art and Perspectives [44.45953630612019]
Recent success of machine learning in many domains has been overwhelming.
We will give a broad overview of behaviors that have been learned and used on real robots.
arXiv Detail & Related papers (2019-06-05T07:54:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.