Learning Off-Road Terrain Traversability with Self-Supervisions Only
- URL: http://arxiv.org/abs/2305.18896v1
- Date: Tue, 30 May 2023 09:51:27 GMT
- Authors: Junwon Seo, Sungdae Sim, and Inwook Shim
- Abstract summary: Estimating the traversability of terrain should be reliable and accurate in diverse conditions for autonomous driving in off-road environments.
We introduce a method for learning traversability from images that utilizes only self-supervision and no manual labels.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating the traversability of terrain should be reliable and accurate in
diverse conditions for autonomous driving in off-road environments. However,
learning-based approaches often yield unreliable results when confronted with
unfamiliar contexts, and it is challenging to obtain manual annotations
frequently for new circumstances. In this paper, we introduce a method for
learning traversability from images that utilizes only self-supervision and no
manual labels, enabling it to easily learn traversability in new circumstances.
To this end, we first generate self-supervised traversability labels from past
driving trajectories by labeling regions traversed by the vehicle as highly
traversable. Using the self-supervised labels, we then train a neural network
that identifies terrains that are safe to traverse from an image using a
one-class classification algorithm. Additionally, we compensate for the
limitations of self-supervised labels by incorporating self-supervised
visual representation learning. To conduct a comprehensive evaluation, we collect
data in a variety of driving environments and perceptual conditions and show
that our method produces reliable estimations in various environments. In
addition, the experimental results validate that our method outperforms other
self-supervised traversability estimation methods and achieves performance
comparable to that of supervised learning methods trained on manually labeled data.
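The two steps described above, deriving labels from regions the vehicle actually traversed and then fitting a one-class model on those positive-only labels, can be sketched roughly as follows. The pinhole projection setup, the centroid-based SVDD-style scorer, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of the two self-supervised steps: (1) project past driving
# trajectories into the image to obtain traversability labels, (2) score
# terrain with a toy one-class (SVDD-style) model fit on positives only.
# Camera parameters, names, and the scorer are illustrative assumptions.
import numpy as np

def project_trajectory_to_mask(traj_xyz, K, T_cam_from_world, img_hw, radius=3):
    """Mark image regions traversed by the vehicle as traversable (label 1)."""
    h, w = img_hw
    mask = np.zeros((h, w), dtype=np.uint8)
    # world -> camera frame
    pts = T_cam_from_world[:3, :3] @ traj_xyz.T + T_cam_from_world[:3, 3:4]
    pts = pts[:, pts[2] > 0.1]          # keep points in front of the camera
    uv = K @ pts
    uv = (uv[:2] / uv[2]).T             # perspective divide -> pixel coords
    for u, v in uv:
        u, v = int(round(u)), int(round(v))
        if 0 <= v < h and 0 <= u < w:   # paint a small footprint patch
            mask[max(0, v - radius):v + radius + 1,
                 max(0, u - radius):u + radius + 1] = 1
    return mask

def one_class_scores(feats_pos, feats_query):
    """Toy SVDD-style one-class scorer: distance to the centroid of features
    from traversed regions; a lower score means more likely traversable."""
    center = feats_pos.mean(axis=0)
    return np.linalg.norm(feats_query - center, axis=1)
```

In the paper the one-class objective is learned jointly with a neural image encoder; the centroid scorer above only mirrors the positive-only nature of the supervision signal.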
Related papers
- A review on discriminative self-supervised learning methods
Self-supervised learning has emerged as a method to extract robust features from unlabeled data.
This paper provides a review of discriminative approaches of self-supervised learning within the domain of computer vision.
(arXiv 2024-05-08)
- Variational Self-Supervised Contrastive Learning Using Beta Divergence
We present a contrastive self-supervised learning method that is robust to data noise, grounded in variational methods.
We demonstrate the effectiveness of the proposed method through rigorous experiments including linear evaluation and fine-tuning scenarios with multi-label datasets in the face understanding domain.
(arXiv 2023-09-05)
- Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short of that of their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
(arXiv 2023-04-25)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance on par with, or better than, fully supervised state-of-the-art approaches.
(arXiv 2023-03-17)
- Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
(arXiv 2023-03-03)
- Unsupervised Driving Event Discovery Based on Vehicle CAN-data
This work presents a simultaneous clustering and segmentation approach for vehicle CAN-data that identifies common driving events in an unsupervised manner.
We evaluate our approach with a dataset of real Tesla Model 3 vehicle CAN-data and a two-hour driving session that we annotated with different driving events.
(arXiv 2023-01-12)
- ScaTE: A Scalable Framework for Self-Supervised Traversability Estimation in Unstructured Environments
In this work, we introduce a scalable framework for learning self-supervised traversability.
We train a neural network that predicts the proprioceptive experience that a vehicle would undergo from 3D point clouds.
With driving data of various vehicles gathered from simulation and the real world, we show that our framework is capable of learning the self-supervised traversability of various vehicles.
(arXiv 2022-09-14)
- Pushing the Limits of Learning-based Traversability Analysis for Autonomous Driving on CPU
This paper proposes and evaluates a real-time machine learning-based Traversability Analysis method.
We show that integrating a new set of geometric and visual features and focusing on important implementation details enables a noticeable boost in performance and reliability.
The proposed approach has been compared with state-of-the-art Deep Learning approaches on a public dataset of outdoor driving scenarios.
(arXiv 2022-06-07)
- Multimodal Detection of Unknown Objects on Roads for Autonomous Driving
We propose a novel pipeline to detect unknown objects.
We make use of lidar and camera data by combining state-of-the art detection models in a sequential manner.
(arXiv 2022-05-03)
- BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments
We present an unsupervised adaptation approach for visual scene understanding in unstructured traffic environments.
Our method is designed for unstructured real-world scenarios with dense and heterogeneous traffic consisting of cars, trucks, two- and three-wheelers, and pedestrians.
(arXiv 2020-09-22)
- Learning Invariant Representations for Reinforcement Learning without Reconstruction
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.
Bisimulation metrics quantify behavioral similarity between states in continuous MDPs.
We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks.
(arXiv 2020-06-18)
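The bisimulation metric mentioned in the last entry combines an immediate-reward gap with a discounted Wasserstein gap between next-state distributions. A rough numpy sketch under a diagonal-Gaussian dynamics assumption (in the spirit of that paper's objective; names and the closed-form Wasserstein term are illustrative, not the authors' code):

```python
# Sketch of a bisimulation-style distance between two states, assuming
# learned Gaussian dynamics with diagonal covariance. Illustrative only.
import numpy as np

def w2_diag_gaussians(mu1, sig1, mu2, sig2):
    # 2-Wasserstein distance between diagonal Gaussians N(mu, diag(sig^2))
    return float(np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((sig1 - sig2) ** 2)))

def bisim_distance(r1, r2, mu1, sig1, mu2, sig2, gamma=0.99):
    # reward gap + discounted gap between (Gaussian) transition distributions
    return abs(r1 - r2) + gamma * w2_diag_gaussians(mu1, sig1, mu2, sig2)
```

Two states with identical rewards and dynamics have distance zero, which is what lets an encoder trained to match this distance discard task-irrelevant visual detail.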
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.