Back to Reality for Imitation Learning
- URL: http://arxiv.org/abs/2111.12867v1
- Date: Thu, 25 Nov 2021 02:03:52 GMT
- Title: Back to Reality for Imitation Learning
- Authors: Edward Johns
- Abstract summary: Imitation learning, and robot learning in general, emerged due to breakthroughs in machine learning, rather than breakthroughs in robotics.
We believe that a better metric for real-world robot learning is time efficiency, which better models the true cost to humans.
- Score: 8.57914821832517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Imitation learning, and robot learning in general, emerged due to
breakthroughs in machine learning, rather than breakthroughs in robotics. As
such, evaluation metrics for robot learning are deeply rooted in those for
machine learning, and focus primarily on data efficiency. We believe that a
better metric for real-world robot learning is time efficiency, which better
models the true cost to humans. This is a call to arms to the robot learning
community to develop our own evaluation metrics, tailored towards the long-term
goals of real-world robotics.
Related papers
- Generalized Robot Learning Framework [10.03174544844559]
We present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments.
We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots.
arXiv Detail & Related papers (2024-09-18T15:34:31Z)
- IRASim: Learning Interactive Real-Robot Action Simulators [24.591694756757278]
We introduce a novel method, IRASim, to generate realistic videos of a robot arm that executes a given action trajectory.
To validate the effectiveness of our method, we create a new benchmark, IRASim Benchmark, based on three real-robot datasets.
Results show that IRASim outperforms all the baseline methods and is more preferable in human evaluations.
arXiv Detail & Related papers (2024-06-20T17:50:16Z)
- Advancing Household Robotics: Deep Interactive Reinforcement Learning for Efficient Training and Enhanced Performance [0.0]
Reinforcement learning, or RL, has emerged as a key robotics technology that enables robots to interact with their environment.
We present a novel method to preserve and reuse information and advice via Deep Interactive Reinforcement Learning.
arXiv Detail & Related papers (2024-05-29T01:46:50Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- DayDreamer: World Models for Physical Robot Learning [142.11031132529524]
Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error to learn.
Many advances in robot learning rely on simulators.
In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without simulators.
arXiv Detail & Related papers (2022-06-28T17:44:48Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not offer a favorable robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Continual Learning of Visual Concepts for Robots through Limited Supervision [9.89901717499058]
My research focuses on developing robots that continually learn in dynamic unseen environments/scenarios.
I develop machine learning models that produce state-of-the-art results on benchmark datasets.
arXiv Detail & Related papers (2021-01-26T01:26:07Z)
- A Survey of Behavior Learning Applications in Robotics -- State of the Art and Perspectives [44.45953630612019]
Recent success of machine learning in many domains has been overwhelming.
We will give a broad overview of behaviors that have been learned and used on real robots.
arXiv Detail & Related papers (2019-06-05T07:54:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.