STERLING: Self-Supervised Terrain Representation Learning from
Unconstrained Robot Experience
- URL: http://arxiv.org/abs/2309.15302v2
- Date: Fri, 20 Oct 2023 15:29:29 GMT
- Title: STERLING: Self-Supervised Terrain Representation Learning from
Unconstrained Robot Experience
- Authors: Haresh Karnan, Elvin Yang, Daniel Farkash, Garrett Warnell, Joydeep
Biswas, Peter Stone
- Abstract summary: We introduce Self-supervised TErrain Representation LearnING (STERLING)
STERLING is a novel approach for learning terrain representations that relies solely on easy-to-collect, unconstrained (e.g., non-expert) and unlabelled robot experience.
We evaluate STERLING features on the task of preference-aligned visual navigation and find that STERLING features perform on par with fully supervised approaches.
- Score: 43.49602846732077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Terrain awareness, i.e., the ability to identify and distinguish different
types of terrain, is a critical ability that robots must have to succeed at
autonomous off-road navigation. Current approaches that provide robots with
this awareness either rely on labeled data which is expensive to collect,
engineered features and cost functions that may not generalize, or expert human
demonstrations which may not be available. Towards endowing robots with terrain
awareness without these limitations, we introduce Self-supervised TErrain
Representation LearnING (STERLING), a novel approach for learning terrain
representations that relies solely on easy-to-collect, unconstrained (e.g.,
non-expert), and unlabelled robot experience, with no additional constraints on
data collection. STERLING employs a novel multi-modal self-supervision
objective through non-contrastive representation learning to learn relevant
terrain representations for terrain-aware navigation. Through physical robot
experiments in off-road environments, we evaluate STERLING features on the task
of preference-aligned visual navigation and find that STERLING features perform
on par with fully supervised approaches and outperform other state-of-the-art
methods with respect to preference alignment. Additionally, we perform a
large-scale experiment of autonomously hiking a 3-mile long trail which
STERLING completes successfully with only two manual interventions,
demonstrating its robustness to real-world off-road conditions.
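STERLING's key ingredient is a multi-modal, non-contrastive self-supervision objective. The sketch below illustrates that family of objectives with a VICReg-style loss tying visual and inertial embeddings of the same terrain patch together; the encoders, dimensions, and loss weights here are illustrative stand-ins, not the paper's actual architecture.

```python
# Minimal sketch of a non-contrastive multi-modal objective (VICReg-style).
# All shapes and weights are illustrative, not STERLING's actual design.
import torch
import torch.nn.functional as F

def variance_term(z, eps=1e-4):
    # Keep each embedding dimension near unit variance to avoid collapse.
    std = torch.sqrt(z.var(dim=0) + eps)
    return F.relu(1.0 - std).mean()

def covariance_term(z):
    # Push off-diagonal covariance between dimensions toward zero.
    n, d = z.shape
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (n - 1)
    return (cov - torch.diag(torch.diag(cov))).pow(2).sum() / d

def multimodal_loss(z_vis, z_imu, w_inv=25.0, w_var=25.0, w_cov=1.0):
    # Invariance: embeddings of the two modalities of one patch should agree,
    # without using negative pairs (hence "non-contrastive").
    inv = F.mse_loss(z_vis, z_imu)
    var = variance_term(z_vis) + variance_term(z_imu)
    cov = covariance_term(z_vis) + covariance_term(z_imu)
    return w_inv * inv + w_var * var + w_cov * cov

# Stand-in encoders and data (hypothetical shapes):
vision_enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 64))
imu_enc = torch.nn.Linear(200 * 6, 64)     # e.g., 200 IMU samples x 6 axes
patches = torch.randn(32, 3, 64, 64)       # terrain patches under the robot
imu = torch.randn(32, 200 * 6)             # time-aligned inertial signals
multimodal_loss(vision_enc(patches), imu_enc(imu)).backward()
```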
Related papers
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- Semi-Supervised Active Learning for Semantic Segmentation in Unknown Environments Using Informative Path Planning [27.460481202195012]
Self-supervised and fully supervised active learning methods have emerged to improve a robot's vision.
We propose a planning method for semi-supervised active learning of semantic segmentation.
We leverage an adaptive map-based planner guided towards the frontiers of unexplored space with high model uncertainty.
arXiv Detail & Related papers (2023-12-07T16:16:47Z)
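As a rough illustration of the uncertainty-guided frontier planning summarized above, the sketch below scores frontier cells by model uncertainty discounted by travel cost; the scoring function, weights, and map layout are hypothetical, not the paper's planner.

```python
# Hypothetical frontier scoring: prefer unexplored cells where the semantic
# model is uncertain, discounted by travel distance (weights are guesses).
import numpy as np

def select_goal(frontiers, uncertainty_map, robot_xy, alpha=1.0, beta=0.1):
    """frontiers: (N, 2) array of (x, y) cells on the explored/unexplored
    boundary; uncertainty_map[y, x]: per-cell model uncertainty."""
    scores = [alpha * uncertainty_map[y, x]
              - beta * np.linalg.norm(np.array([x, y]) - robot_xy)
              for (x, y) in frontiers]
    return frontiers[int(np.argmax(scores))]

# Toy usage:
unc = np.random.rand(100, 100)                    # e.g., MC-dropout variance
front = np.array([[10, 90], [80, 20], [50, 50]])  # candidate frontier cells
goal = select_goal(front, unc, robot_xy=np.array([5.0, 5.0]))
```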
- Fast Traversability Estimation for Wild Visual Navigation [17.015268056925745]
We propose Wild Visual Navigation (WVN), an online self-supervised learning system for traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field.
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
arXiv Detail & Related papers (2023-05-15T10:19:30Z)
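A minimal sketch of the kind of online self-supervision WVN's summary describes: proprioceptive velocity-tracking error labels the terrain the robot just drove over, supervising a small visual head. The feature source, label function, and gain `k` are assumptions, not the paper's exact formulation.

```python
# Sketch of online self-supervised traversability: how well the robot
# tracked its commanded velocity becomes the label for the visual features
# of the traversed patch (label function and gain are assumptions).
import torch

class TraversabilityHead(torch.nn.Module):
    def __init__(self, feat_dim=384):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 1), torch.nn.Sigmoid())

    def forward(self, feats):
        return self.mlp(feats).squeeze(-1)

def self_supervised_step(head, opt, feats, v_cmd, v_meas, k=2.0):
    label = torch.exp(-k * (v_cmd - v_meas).abs())  # in (0, 1]; 1 = tracked well
    loss = torch.nn.functional.binary_cross_entropy(head(feats), label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

head = TraversabilityHead()
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
feats = torch.randn(16, 384)                        # e.g., frozen ViT features
self_supervised_step(head, opt, feats, torch.ones(16), 0.7 * torch.ones(16))
```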
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
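To make the image-based task definition above concrete, here is a hedged sketch: each user-provided sub-task is represented by a binary success classifier trained on the example images, which then provides reward and sub-task advancement during real-world RL. The architecture and threshold are hypothetical stand-ins.

```python
# Sketch of "programming-free" task definition via image examples: one
# binary success classifier per sub-task provides reward and sub-task
# advancement (classifier architecture and threshold are hypothetical).
import torch

class SubtaskClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 5, stride=2), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(16, 1))

    def forward(self, img):                    # img: (B, 3, H, W)
        return torch.sigmoid(self.net(img)).squeeze(-1)

def step_reward(classifiers, stage, obs, advance_thresh=0.9):
    """Reward = current sub-task success probability; advance to the next
    sub-task when the classifier is confident the sub-goal is reached."""
    p = classifiers[stage](obs.unsqueeze(0)).item()
    if p > advance_thresh and stage + 1 < len(classifiers):
        stage += 1
    return p, stage

clfs = [SubtaskClassifier() for _ in range(3)]   # e.g., reach, grasp, insert
r, stage = step_reward(clfs, 0, torch.rand(3, 64, 64))
```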
- Towards self-attention based visual navigation in the real world [0.0]
Vision-guided navigation requires processing complex visual information to inform task-oriented decisions.
Deep Reinforcement Learning agents trained in simulation often exhibit unsatisfactory results when deployed in the real-world.
This is the first demonstration of a self-attention-based agent successfully trained to navigate a 3D action space using fewer than 4000 parameters.
arXiv Detail & Related papers (2022-09-15T04:51:42Z)
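A sketch of how a navigation policy can stay under the roughly 4000-parameter budget mentioned above using a single self-attention layer over image patches; the patch size, embedding width, and action count are guesses, not the paper's configuration.

```python
# Tiny self-attention policy sketch; dimensions are guesses chosen to stay
# well under 4000 parameters, not the paper's actual architecture.
import torch

class TinyAttentionPolicy(torch.nn.Module):
    def __init__(self, patch_dim=27, d=8, n_actions=3):
        super().__init__()
        self.q = torch.nn.Linear(patch_dim, d, bias=False)
        self.k = torch.nn.Linear(patch_dim, d, bias=False)
        self.head = torch.nn.Linear(patch_dim, n_actions)

    def forward(self, patches):                  # patches: (B, N, patch_dim)
        attn = torch.softmax(self.q(patches) @ self.k(patches).transpose(1, 2)
                             / self.k.out_features ** 0.5, dim=-1)
        pooled = (attn @ patches).mean(dim=1)    # attend, then average patches
        return self.head(pooled)

policy = TinyAttentionPolicy()
print(sum(p.numel() for p in policy.parameters()))  # 516 parameters here
```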
- Self-Reflective Terrain-Aware Robot Adaptation for Consistent Off-Road Ground Navigation [9.526796188292968]
Ground robots require the crucial capability of traversing unstructured and unprepared terrains to complete tasks in real-world robotics applications such as disaster response.
We propose a novel method of self-reflective terrain-aware adaptation for ground robots to generate consistent controls to navigate over unstructured off-road terrains.
arXiv Detail & Related papers (2021-11-12T14:32:22Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
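The summary above hinges on integrating sparse exteroceptive samples with proprioception; the sketch below shows one plausible two-branch fusion architecture. Input sizes and the network layout are assumptions, not the paper's exact design.

```python
# Sketch of fusing sparse terrain samples with proprioception for a
# locomotion policy (all sizes are illustrative assumptions).
import torch

class PerceptualLocomotionPolicy(torch.nn.Module):
    def __init__(self, n_height_samples=20, proprio_dim=36, n_joints=12):
        super().__init__()
        # Sparse terrain heights sampled around the robot's planned footholds.
        self.terrain_enc = torch.nn.Sequential(
            torch.nn.Linear(n_height_samples, 32), torch.nn.ELU())
        # Joint positions/velocities, base orientation, etc.
        self.proprio_enc = torch.nn.Sequential(
            torch.nn.Linear(proprio_dim, 64), torch.nn.ELU())
        self.policy = torch.nn.Sequential(
            torch.nn.Linear(32 + 64, 64), torch.nn.ELU(),
            torch.nn.Linear(64, n_joints))       # e.g., target joint positions

    def forward(self, heights, proprio):
        z = torch.cat([self.terrain_enc(heights),
                       self.proprio_enc(proprio)], dim=-1)
        return self.policy(z)

policy = PerceptualLocomotionPolicy()
action = policy(torch.randn(1, 20), torch.randn(1, 36))
```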
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
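The information bottleneck mentioned above can be made concrete with a variational sketch: the goal encoder outputs a Gaussian latent whose KL divergence to a unit prior is penalized, compressing the goal representation and allowing new goals to be sampled from the prior during exploration. Sizes and the KL weight below are illustrative, not the paper's values.

```python
# Sketch of an information-bottleneck-regularized goal latent (sizes and
# the KL weight are illustrative assumptions).
import torch

class GoalBottleneck(torch.nn.Module):
    def __init__(self, obs_dim=512, z_dim=32):
        super().__init__()
        self.mu = torch.nn.Linear(obs_dim, z_dim)
        self.log_var = torch.nn.Linear(obs_dim, z_dim)

    def forward(self, goal_feats):
        mu, log_var = self.mu(goal_feats), self.log_var(goal_feats)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparam.
        # KL( N(mu, sigma^2) || N(0, I) ), penalizing information in z.
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum(-1).mean()
        return z, kl

enc = GoalBottleneck()
z, kl = enc(torch.randn(8, 512))   # z would condition the policy
(1e-3 * kl).backward()             # added to policy/distance losses (weight is a guess)
```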
- Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
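A minimal sketch of the relabeling idea above: every autonomous trial is stored as a demonstration for any task whose success test it happens to satisfy, not only the attempted one. The `Trial` container and the success classifier below are hypothetical stand-ins.

```python
# Sketch of autonomous relabeling: a trial collected while attempting one
# task becomes a demonstration for whichever tasks it actually completed
# (the Trial container and success classifier are hypothetical).
from dataclasses import dataclass

@dataclass
class Trial:
    attempted_task: str
    observations: list
    actions: list
    final_obs: object = None

def relabel(trial, task_success, demo_buffers):
    """Add the trial to the demo buffer of every task whose success test its
    outcome satisfies, not just the task the robot attempted."""
    for task, succeeded in task_success(trial.final_obs).items():
        if succeeded:
            demo_buffers.setdefault(task, []).append(trial)
    return demo_buffers

# Toy usage with a stand-in success classifier:
buffers = relabel(
    Trial("pick_cup", observations=[], actions=[], final_obs="img"),
    lambda obs: {"pick_cup": False, "push_plate": True},
    {})
```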