Self-Supervised Drivable Area and Road Anomaly Segmentation using RGB-D
Data for Robotic Wheelchairs
- URL: http://arxiv.org/abs/2007.05950v1
- Date: Sun, 12 Jul 2020 10:12:46 GMT
- Title: Self-Supervised Drivable Area and Road Anomaly Segmentation using RGB-D
Data for Robotic Wheelchairs
- Authors: Hengli Wang, Yuxiang Sun, Ming Liu
- Abstract summary: We develop a pipeline that can automatically generate segmentation labels for drivable areas and road anomalies.
Our proposed automatic labeling pipeline achieves an impressive speed-up compared to manual labeling.
Our proposed self-supervised approach exhibits more robust and accurate results than the state-of-the-art traditional algorithms.
- Score: 26.110522390201094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The segmentation of drivable areas and road anomalies is a critical
capability for achieving autonomous navigation with robotic wheelchairs. Recent
progress in deep learning-based semantic segmentation has produced effective
results. However, acquiring large-scale datasets with hand-labeled ground truth
is time-consuming and labor-intensive, which often makes deep learning-based
methods hard to deploy in practice. We address this problem for the task of
drivable area and road anomaly segmentation by proposing a self-supervised
learning approach. We develop a pipeline that automatically generates
segmentation labels for drivable areas and road anomalies, and then use these
labels to train RGB-D semantic segmentation neural networks that produce the
predicted labels. Experimental results show that our automatic labeling
pipeline achieves an impressive speed-up over manual labeling. In addition, our
self-supervised approach is more robust and accurate than both state-of-the-art
traditional algorithms and state-of-the-art self-supervised algorithms.
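The abstract describes the automatic labeling pipeline only at a high level. Purely as an illustration, and not as the authors' actual method, the sketch below shows one common way RGB-D self-labeling of this kind can be done: back-project the depth map into a point cloud, fit a ground plane with a small RANSAC loop, and label near-plane pixels as drivable area and low protrusions as road anomalies. All function names, thresholds, and camera intrinsics are assumptions made for this sketch.

```python
# Hypothetical sketch of an automatic RGB-D labeling step (NOT the paper's exact
# pipeline): RANSAC ground-plane fitting on the back-projected depth map, then
# geometric rules to assign drivable-area and road-anomaly labels.
import numpy as np


def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an HxWx3 point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)


def fit_ground_plane(points, iters=200, thresh=0.03, seed=0):
    """Tiny RANSAC loop: fit a plane n.p + d = 0 to the valid-depth points."""
    rng = np.random.default_rng(seed)
    pts = points.reshape(-1, 3)
    pts = pts[pts[:, 2] > 0]                      # keep pixels with valid depth only
    best_count, best_model = 0, None
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-8:                           # degenerate (collinear) sample
            continue
        n = n / norm
        d = -float(n @ sample[0])
        inliers = int((np.abs(pts @ n + d) < thresh).sum())
        if inliers > best_count:
            best_count, best_model = inliers, (n, d)
    return best_model


def auto_label(depth, fx, fy, cx, cy,
               ground_thresh=0.03, anomaly_max_height=0.40):
    """Return a per-pixel label map: 0 = unknown, 1 = drivable area, 2 = road anomaly."""
    pts = backproject(depth, fx, fy, cx, cy).reshape(-1, 3)
    n, d = fit_ground_plane(pts)
    if d < 0:                                     # orient the plane so the camera side is positive
        n, d = -n, -d
    signed = pts @ n + d                          # signed height of each point above the fitted ground
    labels = np.zeros(pts.shape[0], dtype=np.uint8)                        # 0 = unknown
    labels[np.abs(signed) < ground_thresh] = 1                             # on the plane -> drivable
    labels[(signed >= ground_thresh) & (signed < anomaly_max_height)] = 2  # low protrusion -> anomaly
    labels[depth.reshape(-1) <= 0] = 0                                     # invalid depth stays unknown
    return labels.reshape(depth.shape)
```

In a full self-supervised pipeline such as the one the paper describes, weak geometric labels like these would then serve as training targets for an RGB-D semantic segmentation network; the thresholds above would need tuning per sensor and scene, and are only meant to make the "automatic labeling" idea concrete.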
Related papers
- Surrogate Modeling of Trajectory Map-matching in Urban Road Networks using Transformer Sequence-to-Sequence Model [1.3812010983144802]
This paper introduces a deep-learning model, specifically the transformer-based encoder-decoder model, to perform as a surrogate for offline map-matching algorithms.
The model is trained and evaluated using GPS traces collected in Manhattan, New York.
arXiv Detail & Related papers (2024-04-18T18:39:23Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance comparable to, or better than, fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Leveraging Road Area Semantic Segmentation with Auxiliary Steering Task [0.0]
We propose a CNN-based method that can leverage the steering wheel angle information to improve the road area semantic segmentation.
We demonstrate the effectiveness of the proposed approach on two challenging data sets for autonomous driving.
arXiv Detail & Related papers (2022-12-19T13:25:09Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with its fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- A System-driven Automatic Ground Truth Generation Method for DL Inner-City Driving Corridor Detectors [0.0]
We propose an automatic labeling approach for semantic segmentation of the drivable ego corridor.
The proposed holistic approach could be used in an automated data loop, allowing continuous improvement of the dependent perception modules.
arXiv Detail & Related papers (2022-07-20T12:55:16Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Large Scale Autonomous Driving Scenarios Clustering with Self-supervised Feature Extraction [6.804209932400134]
This article proposes a comprehensive data clustering framework for a large set of vehicle driving data.
Our approach thoroughly considers the traffic elements, including both in-traffic agent objects and map information.
With newly designed, data-augmentation-based evaluation metrics for driving-data clustering, the accuracy assessment does not require a human-labeled dataset.
arXiv Detail & Related papers (2021-03-30T06:22:40Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria to quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
- Real-time Fusion Network for RGB-D Semantic Segmentation Incorporating Unexpected Obstacle Detection for Road-driving Images [13.3382165879322]
We propose a real-time fusion semantic segmentation network termed RFNet.
RFNet runs fast enough to satisfy the requirements of autonomous vehicle applications.
On Cityscapes, our method outperforms previous state-of-the-art semantic segmenters, with excellent accuracy and 22Hz inference speed.
arXiv Detail & Related papers (2020-02-24T22:17:25Z)