Peer-Assisted Robotic Learning: A Data-Driven Collaborative Learning
Approach for Cloud Robotic Systems
- URL: http://arxiv.org/abs/2010.08303v1
- Date: Fri, 16 Oct 2020 10:52:54 GMT
- Title: Peer-Assisted Robotic Learning: A Data-Driven Collaborative Learning
Approach for Cloud Robotic Systems
- Authors: Boyi Liu, Lujia Wang, Xinquan Chen, Lexiong Huang, Cheng-Zhong Xu
- Abstract summary: Peer-Assisted Robotic Learning (PARL) in robotics is inspired by the peer-assisted learning in cognitive psychology and pedagogy.
Data and models are shared by robots with the cloud after local semantic computing and training.
Finally, models trained on this larger shared dataset in the cloud are fine-tuned for local robots.
- Score: 26.01178673629753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A technological revolution is occurring in the field of robotics,
driven by data-driven deep learning technology. However, building datasets for each local
robot is laborious. Meanwhile, data islands between local robots make data
unable to be utilized collaboratively. To address this issue, the work presents
Peer-Assisted Robotic Learning (PARL) in robotics, which is inspired by the
peer-assisted learning in cognitive psychology and pedagogy. PARL implements
data collaboration within the framework of cloud robotic systems. Both data and
models are shared by robots with the cloud after local semantic computing and
training. The cloud converges the data and performs augmentation, integration,
and transfer. Finally, models trained on this larger shared dataset in the
cloud are fine-tuned for local robots. Furthermore, we propose the DAT Network
(Data Augmentation and
Transferring Network) to implement the data processing in PARL. DAT Network can
realize the augmentation of data from multiple local robots. We conduct
experiments on a simplified self-driving task for robots (cars). The DAT Network
achieves a significant improvement in data augmentation for self-driving scenarios.
Along with this, the self-driving experimental results also demonstrate that
PARL is capable of improving learning effects with data collaboration of local
robots.
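The PARL loop described in the abstract (local training, cloud convergence and augmentation, fine-tuning back to local robots) can be sketched as follows. This is a minimal, illustrative toy, not the paper's implementation: the names (`LocalRobot`, `cloud_converge`, `cloud_augment`, `fine_tune`), the "model = mean of the data" stand-in for training, and the jitter stand-in for DAT-style augmentation are all hypothetical.

```python
# Hypothetical sketch of the PARL workflow; names and numerics are
# illustrative stand-ins, not the paper's actual method.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LocalRobot:
    name: str
    dataset: List[float] = field(default_factory=list)

    def train_locally(self) -> float:
        # Stand-in for local training: the "model" is the mean of local data.
        return sum(self.dataset) / len(self.dataset)


def cloud_converge(datasets: List[List[float]]) -> List[float]:
    # The cloud merges local datasets into one larger shared dataset.
    return [x for d in datasets for x in d]


def cloud_augment(shared: List[float]) -> List[float]:
    # Stand-in for DAT-style augmentation: append jittered copies.
    return shared + [x * 1.01 for x in shared]


def fine_tune(local_model: float, shared: List[float], alpha: float = 0.5) -> float:
    # Blend the local model with a model learned on the shared dataset.
    global_model = sum(shared) / len(shared)
    return (1 - alpha) * local_model + alpha * global_model


robots = [LocalRobot("r1", [1.0, 2.0]), LocalRobot("r2", [3.0, 4.0])]
shared = cloud_augment(cloud_converge([r.dataset for r in robots]))
tuned = [fine_tune(r.train_locally(), shared) for r in robots]
```

Each robot ends up with a model pulled toward knowledge aggregated from its peers, which is the collaborative effect the abstract claims PARL provides.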
Related papers
- Self Supervised Deep Learning for Robot Grasping [1.1433275758269863]
We propose a self-supervised robotic setup that will train a Convolutional Neural Network (CNN).
The robot will label and collect the data during the training process.
The robot will be trained on a large data set for several hundred hours and then the trained Neural Network can be mapped onto a larger grasping robot.
arXiv Detail & Related papers (2024-10-17T23:26:55Z) - RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins (early version) [25.298789781487084]
This paper introduces RoboTwin, a novel benchmark dataset combining real-world teleoperated data with synthetic data from digital twins.
We present an innovative approach to creating digital twins using AI-generated content, transforming 2D images into detailed 3D models.
Our key contributions are: 1) the RoboTwin benchmark dataset, 2) an efficient real-to-simulation pipeline, and 3) the use of language models for automatic expert-level data generation.
arXiv Detail & Related papers (2024-09-04T17:59:52Z) - CyberCortex.AI: An AI-based Operating System for Autonomous Robotics and Complex Automation [0.0]
We introduce CyberCortex.AI, a robotics OS designed to enable heterogeneous AI-based robotics and complex automation applications.
CyberCortex.AI is a decentralized, distributed OS that enables robots to communicate with each other, as well as with high-performance computing (HPC) systems in the cloud.
Sensory and control data from the robots is streamed to HPC systems for training AI algorithms, which are afterwards deployed on the robots.
arXiv Detail & Related papers (2024-09-02T13:14:50Z) - AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs enables instruction-following data-collection robots that align with human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z) - LPAC: Learnable Perception-Action-Communication Loops with Applications
to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the problem.
A convolutional neural network (CNN) processes localized perception; a graph neural network (GNN) facilitates inter-robot communication.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z) - Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline
Data in the Real World [38.54892412474853]
The Real Robot Challenge 2022 served as a bridge between the reinforcement learning and robotics communities.
We asked the participants to learn two dexterous manipulation tasks involving pushing, grasping, and in-hand orientation from provided real-robot datasets.
Extensive software documentation and an initial stage based on a simulation of the real setup made the competition particularly accessible.
arXiv Detail & Related papers (2023-08-15T12:40:56Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Scaling Robot Learning with Semantically Imagined Experience [21.361979238427722]
Recent advances in robot learning have shown promise in enabling robots to perform manipulation tasks.
One of the key contributing factors to this progress is the scale of robot data used to train the models.
We propose an alternative route and leverage text-to-image foundation models widely used in computer vision and natural language processing.
arXiv Detail & Related papers (2023-02-22T18:47:51Z) - RT-1: Robotics Transformer for Real-World Control at Scale [98.09428483862165]
We present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties.
We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks.
arXiv Detail & Related papers (2022-12-13T18:55:15Z) - GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.