R2S100K: Road-Region Segmentation Dataset For Semi-Supervised Autonomous
Driving in the Wild
- URL: http://arxiv.org/abs/2308.06393v1
- Date: Fri, 11 Aug 2023 21:31:37 GMT
- Title: R2S100K: Road-Region Segmentation Dataset For Semi-Supervised Autonomous
Driving in the Wild
- Authors: Muhammad Atif Butt, Hassan Ali, Adnan Qayyum, Waqas Sultani, Ala
Al-Fuqaha, Junaid Qadir
- Abstract summary: The Road Region Segmentation dataset (R2S100K) is a large-scale dataset and benchmark for training and evaluating road segmentation.
R2S100K comprises 100K images extracted from a large and diverse set of video sequences covering more than 1,000 km of roadways.
We present an Efficient Data Sampling (EDS) based self-training framework to improve learning by leveraging unlabeled data.
- Score: 11.149480965148015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic understanding of roadways is a key enabling factor for safe
autonomous driving. However, existing autonomous driving datasets provide
well-structured urban roads while ignoring unstructured roadways containing
distress, potholes, water puddles, and various kinds of road patches such as
earthen and gravel surfaces. To this end, we introduce the Road Region
Segmentation dataset (R2S100K) -- a large-scale dataset and benchmark for
training and evaluating road segmentation in the aforementioned challenging
unstructured roadways. R2S100K
comprises 100K images extracted from a large and diverse set of video sequences
covering more than 1,000 km of roadways. Of these 100K privacy-respecting
images, 14,000 have fine pixel-level labeling of road regions, while the
remaining 86,000 unlabeled images can be leveraged through semi-supervised
learning methods. In addition, we present an Efficient Data Sampling (EDS)
based self-training framework that improves learning by leveraging unlabeled
data. Our experimental results demonstrate that the proposed method
significantly improves the generalizability of learning methods and reduces
the labeling cost for
semantic segmentation tasks. Our benchmark will be publicly available to
facilitate future research at https://r2s100k.github.io/.
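The abstract does not describe how the EDS-based self-training framework samples or pseudo-labels the 86,000 unlabeled images. As a rough, generic illustration of the self-training idea it builds on, the PyTorch sketch below (the model interface, loaders, confidence threshold, and schedule are assumptions, not the authors' implementation) pseudo-labels confident pixels in unlabeled images and retrains on the union of labeled and pseudo-labeled data.

```python
# Minimal self-training sketch for semi-supervised road segmentation.
# Illustrative only; the paper's EDS sampling strategy is not reproduced here.
import torch
import torch.nn.functional as F


def pseudo_label(model, unlabeled_loader, threshold=0.9, device="cuda"):
    """Run the current model over unlabeled images and keep only confident
    pixels as pseudo-labels; uncertain pixels get the ignore index 255.
    Assumes model(images) returns raw logits of shape (B, C, H, W)."""
    model.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            images = images.to(device)
            probs = torch.softmax(model(images), dim=1)
            conf, labels = probs.max(dim=1)          # per-pixel confidence and class
            labels[conf < threshold] = 255           # drop low-confidence pixels
            pseudo.append((images.cpu(), labels.cpu()))
    return pseudo


def train_epoch(model, batches, optimizer, device="cuda"):
    """One supervised pass over (image, mask) pairs; masks may mix human
    labels and pseudo-labels, with 255 treated as the ignore index."""
    model.train()
    for images, masks in batches:
        images, masks = images.to(device), masks.to(device)
        loss = F.cross_entropy(model(images), masks, ignore_index=255)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


def self_train(model, labeled_loader, unlabeled_loader, optimizer,
               rounds=3, epochs_per_round=5, device="cuda"):
    """Alternate between pseudo-labeling the unlabeled pool and retraining
    on the union of labeled and pseudo-labeled batches."""
    for _ in range(rounds):
        for _ in range(epochs_per_round):
            train_epoch(model, labeled_loader, optimizer, device)
        pseudo = pseudo_label(model, unlabeled_loader, device=device)
        for _ in range(epochs_per_round):
            train_epoch(model, list(labeled_loader) + pseudo, optimizer, device)
    return model
```

Pixels whose confidence falls below the threshold are mapped to the ignore index (255) so they contribute nothing to the cross-entropy loss, which is the usual way to keep noisy pseudo-labels from dominating training.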
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Probabilistic road classification in historical maps using synthetic data and deep learning [3.3755652248305004]
We introduce a novel framework that integrates deep learning with geoinformation, computer-based painting, and image processing methodologies.
This framework enables the extraction and classification of roads from historical maps using only road geometries.
Our method achieved completeness and correctness scores of over 94% and 92%, respectively, for road class 2.
arXiv Detail & Related papers (2024-10-03T06:43:09Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types and contains approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- Leveraging Road Area Semantic Segmentation with Auxiliary Steering Task [0.0]
We propose a CNN-based method that can leverage the steering wheel angle information to improve the road area semantic segmentation.
We demonstrate the effectiveness of the proposed approach on two challenging data sets for autonomous driving.
arXiv Detail & Related papers (2022-12-19T13:25:09Z)
- SPIN Road Mapper: Extracting Roads from Aerial Images via Spatial and Interaction Space Graph Reasoning for Autonomous Driving [64.10636296274168]
Road extraction is an essential step in building autonomous navigation systems.
Using convolutional neural networks (ConvNets) alone is not effective for this problem, as they are inefficient at capturing distant dependencies between road segments in the image.
We propose a Spatial and Interaction Space Graph Reasoning (SPIN) module which when plugged into a ConvNet performs reasoning over graphs constructed on spatial and interaction spaces projected from the feature maps.
arXiv Detail & Related papers (2021-09-16T03:52:17Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, images are collected at one frame every ten seconds across 32 different cities under varying weather conditions, time periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- Fusion of neural networks, for LIDAR-based evidential road mapping [3.065376455397363]
We introduce RoadSeg, a new convolutional architecture that is optimized for road detection in LIDAR scans.
RoadSeg is used to classify individual LIDAR points as either belonging to the road or not.
We then present an evidential road mapping algorithm that fuses consecutive road detection results (a generic fusion sketch appears after this list).
arXiv Detail & Related papers (2021-02-05T18:14:36Z)
- Convolutional Recurrent Network for Road Boundary Extraction [99.55522995570063]
We tackle the problem of drivable road boundary extraction from LiDAR and camera imagery.
We design a structured model where a fully convolutional network obtains deep features encoding the location and direction of road boundaries.
We showcase the effectiveness of our method on a large North American city where we obtain perfect topology of road boundaries 99.3% of the time.
arXiv Detail & Related papers (2020-12-21T18:59:12Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- Scribble-based Weakly Supervised Deep Learning for Road Surface Extraction from Remote Sensing Images [7.1577508803778045]
We propose a scribble-based weakly supervised road surface extraction method named ScRoadExtractor.
To propagate semantic information from sparse scribbles to unlabeled pixels, we introduce a road label propagation algorithm.
The proposal masks generated from the road label propagation algorithm are utilized to train a dual-branch encoder-decoder network.
arXiv Detail & Related papers (2020-10-25T12:40:30Z)
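The LIDAR evidential road-mapping entry above does not spell out its fusion rule in the abstract. As a generic illustration of evidential fusion over a binary road/not-road frame of discernment, the sketch below applies Dempster's rule of combination to two consecutive per-cell estimates; the mass values and function name are hypothetical and not taken from that paper.

```python
# Illustrative Dempster-Shafer fusion of two road-occupancy estimates for one cell.
# Frame of discernment: {ROAD, NOT_ROAD}; a mass function is (m_road, m_not_road, m_unknown).

def fuse_masses(m1, m2):
    """Combine two mass functions with Dempster's rule.
    Each argument is a tuple (m_road, m_not_road, m_unknown) summing to 1."""
    r1, n1, u1 = m1
    r2, n2, u2 = m2
    conflict = r1 * n2 + n1 * r2              # mass assigned to contradictory evidence
    norm = 1.0 - conflict
    if norm <= 0.0:
        return (0.0, 0.0, 1.0)                # total conflict: fall back to ignorance
    road = (r1 * r2 + r1 * u2 + u1 * r2) / norm
    not_road = (n1 * n2 + n1 * u2 + u1 * n2) / norm
    unknown = (u1 * u2) / norm
    return (road, not_road, unknown)


# Example: fuse two consecutive scans' estimates for the same grid cell.
scan_t = (0.6, 0.1, 0.3)    # fairly confident the cell is road
scan_t1 = (0.5, 0.2, 0.3)   # next scan broadly agrees
print(fuse_masses(scan_t, scan_t1))
```

Agreeing evidence reinforces the road hypothesis while conflicting evidence is renormalized away: the two example scans above, each only moderately confident the cell is road, fuse to a road mass of roughly 0.76.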