Semi-supervised Gated Recurrent Neural Networks for Robotic Terrain
Classification
- URL: http://arxiv.org/abs/2011.11913v1
- Date: Tue, 24 Nov 2020 06:25:19 GMT
- Title: Semi-supervised Gated Recurrent Neural Networks for Robotic Terrain
Classification
- Authors: Ahmadreza Ahmadi, Tønnes Nygaard, Navinda Kottege, David Howard,
Nicolas Hudson
- Abstract summary: We show how highly capable machine learning techniques, namely gated recurrent neural networks, allow our target legged robot to correctly classify the terrain it traverses.
We show how raw unlabelled data can be used to significantly improve classification results in a semi-supervised model.
- Score: 4.703075836560585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Legged robots are popular candidates for missions in challenging terrains due
to the wide variety of locomotion strategies they can employ. Terrain
classification is a key enabling technology for autonomous legged robots, as it
allows the robot to harness its innate flexibility to adapt its behaviour
to the demands of the operating environment. In this paper, we show how
highly capable machine learning techniques, namely gated recurrent neural
networks, allow our target legged robot to correctly classify the terrain it
traverses in both supervised and semi-supervised fashions. Tests on a benchmark
data set show that our time-domain classifiers cope well with raw,
variable-length data using only a small number of labels, and perform at a
level far exceeding that of the frequency-domain classifiers. The classification
results on our own extended data set open up a range of high-performance
behaviours that are specific to those environments. Furthermore, we show how
raw unlabelled data can be used to significantly improve the classification results
in a semi-supervised model.
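The abstract describes recurrent classifiers that operate directly on raw, variable-length time-domain signals and additionally exploit unlabelled sequences. The following is a minimal PyTorch sketch of that kind of setup, not the paper's actual implementation: a GRU encoder over packed padded sequences for classification, plus a reconstruction decoder that supplies an auxiliary loss on unlabelled data as one plausible semi-supervised mechanism. All names, layer sizes, and the reconstruction-based objective are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence


class GRUTerrainClassifier(nn.Module):
    """GRU encoder over raw, variable-length sensor sequences with a
    classification head and a reconstruction head for unlabelled data
    (illustrative sketch, not the architecture from the paper)."""

    def __init__(self, n_features, n_classes, hidden_size=64, num_layers=2):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_size, num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)
        # Decoder and output projection used only for the unsupervised objective.
        self.decoder = nn.GRU(hidden_size, hidden_size, 1, batch_first=True)
        self.reconstruct = nn.Linear(hidden_size, n_features)

    def forward(self, x, lengths):
        # x: (batch, max_len, n_features); lengths: true length of each sequence.
        packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True,
                                      enforce_sorted=False)
        _, h_n = self.encoder(packed)      # h_n: (num_layers, batch, hidden_size)
        return self.classifier(h_n[-1])    # logits over terrain classes

    def reconstruction_loss(self, x, lengths):
        # Encode the raw signal, then try to reproduce it from the hidden states;
        # padded time steps are masked out of the squared error before averaging.
        enc, _ = self.encoder(x)
        dec, _ = self.decoder(enc)
        x_hat = self.reconstruct(dec)
        steps = torch.arange(x.size(1), device=x.device)
        mask = (steps[None, :] < lengths[:, None]).unsqueeze(-1).float()
        return ((x_hat - x) ** 2 * mask).sum() / mask.sum()


def semi_supervised_step(model, optimizer, labelled_batch, unlabelled_batch,
                         alpha=0.5):
    """One training step mixing a labelled and an unlabelled mini-batch."""
    (x_l, len_l, y_l), (x_u, len_u) = labelled_batch, unlabelled_batch
    optimizer.zero_grad()
    supervised = nn.functional.cross_entropy(model(x_l, len_l), y_l)
    unsupervised = model.reconstruction_loss(x_u, len_u)
    loss = supervised + alpha * unsupervised  # alpha weights the unlabelled term
    loss.backward()
    optimizer.step()
    return loss.item()
```

A classifier for, say, six proprioceptive channels and ten terrain classes would be built as `GRUTerrainClassifier(n_features=6, n_classes=10)`; the reconstruction objective (rather than, for example, pseudo-labelling) is only one common way to let unlabelled sequences shape the shared recurrent features.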
Related papers
- Loss Regularizing Robotic Terrain Classification [1.5728609542259502]
This paper proposes a new semi-supervised method for terrain classification of legged robots.
The proposed method uses a stacked Long Short-Term Memory architecture together with a new loss regularization.
arXiv Detail & Related papers (2024-03-20T15:57:44Z)
- AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction-following data collection robots that can align to human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z)
- Semi-Supervised Active Learning for Semantic Segmentation in Unknown Environments Using Informative Path Planning [27.460481202195012]
Self-supervised and fully supervised active learning methods have emerged to improve a robot's vision.
We propose a planning method for semi-supervised active learning of semantic segmentation.
We leverage an adaptive map-based planner guided towards the frontiers of unexplored space with high model uncertainty.
arXiv Detail & Related papers (2023-12-07T16:16:47Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insight is to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- RT-1: Robotics Transformer for Real-World Control at Scale [98.09428483862165]
We present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties.
We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks.
arXiv Detail & Related papers (2022-12-13T18:55:15Z)
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot [20.813028212068424]
We study different techniques that allow adapting an object segmentation model in the presence of novel objects or different domains.
We propose a pipeline for fast instance segmentation learning for robotic applications where data arrive in a stream.
We benchmark the proposed pipeline on two datasets and deploy it on a real robot, the iCub humanoid.
arXiv Detail & Related papers (2022-06-27T17:14:04Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Self-Supervised Drivable Area and Road Anomaly Segmentation using RGB-D Data for Robotic Wheelchairs [26.110522390201094]
We develop a pipeline that can automatically generate segmentation labels for drivable areas and road anomalies.
Our proposed automatic labeling pipeline achieves an impressive speed-up compared to manual labeling.
Our proposed self-supervised approach exhibits more robust and accurate results than the state-of-the-art traditional algorithms.
arXiv Detail & Related papers (2020-07-12T10:12:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.