OFFSEG: A Semantic Segmentation Framework For Off-Road Driving
- URL: http://arxiv.org/abs/2103.12417v1
- Date: Tue, 23 Mar 2021 09:45:41 GMT
- Title: OFFSEG: A Semantic Segmentation Framework For Off-Road Driving
- Authors: Kasi Viswanath, Kartikeya Singh, Peng Jiang, Sujit P.B. and Srikanth
Saripalli
- Abstract summary: We propose a framework for off-road semantic segmentation called OFFSEG.
Off-road semantic segmentation is challenging due to the presence of uneven terrains, unstructured class boundaries, irregular features and strong textures.
- Score: 6.845371503461449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Off-road image semantic segmentation is challenging due to the presence of
uneven terrains, unstructured class boundaries, irregular features and strong
textures. These aspects affect the perception of the vehicle, from which the
information is used for path planning. Current off-road datasets exhibit
difficulties like class imbalance and understanding of varying environmental
topography. To overcome these issues, we propose a framework for off-road
semantic segmentation called OFFSEG that involves (i) a pooled-class
semantic segmentation with four classes (sky, traversable region,
non-traversable region and obstacle) using state-of-the-art deep learning
architectures, and (ii) a colour segmentation methodology to segment specific
sub-classes (grass, puddle, dirt, gravel, etc.) from the traversable region for
better scene understanding. The framework is evaluated on two off-road driving
datasets, namely RELLIS-3D and RUGD. We have also tested the proposed
framework on IISERB campus frames. The results show that OFFSEG achieves good
performance and also provides detailed information on the traversable region.
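The abstract's second stage, colour-based sub-class segmentation within the traversable region, can be sketched as nearest-prototype colour matching. This is an illustrative stand-in, not the paper's exact method: the class IDs, prototype colours, and the stubbed first-stage mask below are all assumptions for the sketch.

```python
import numpy as np

# Hypothetical class IDs for the pooled 4-class first stage
# (assumption; the paper's actual label mapping may differ).
SKY, TRAVERSABLE, NON_TRAVERSABLE, OBSTACLE = 0, 1, 2, 3

def subclass_by_colour(rgb, pooled_mask, prototypes):
    """Assign each traversable pixel to the sub-class whose RGB
    prototype colour is nearest (illustrative stand-in for the
    colour segmentation stage; prototypes are made-up values)."""
    h, w, _ = rgb.shape
    out = np.full((h, w), -1, dtype=np.int32)      # -1: not traversable
    names = list(prototypes)
    protos = np.array([prototypes[n] for n in names], dtype=np.float32)
    trav = pooled_mask == TRAVERSABLE
    px = rgb[trav].astype(np.float32)              # (N, 3) pixel colours
    # Euclidean distance from every traversable pixel to every prototype
    dist = np.linalg.norm(px[:, None, :] - protos[None, :, :], axis=2)
    out[trav] = dist.argmin(axis=1)                # index into `names`
    return out, names

# Toy frame: left half greenish "grass", right half brownish "dirt",
# with the (stubbed) first-stage network marking everything traversable.
rgb = np.zeros((4, 8, 3), dtype=np.uint8)
rgb[:, :4] = (60, 160, 60)
rgb[:, 4:] = (120, 90, 60)
pooled = np.full((4, 8), TRAVERSABLE)
subs, names = subclass_by_colour(
    rgb, pooled, {"grass": (50, 150, 50), "dirt": (130, 100, 70)})
```

In practice a colour-space better suited to illumination changes (e.g. HSV) and learned rather than hand-picked prototypes would be used; the sketch only shows how sub-class labels are confined to the traversable mask produced by the first stage.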
Related papers
- Leveraging Topology for Domain Adaptive Road Segmentation in Satellite and Aerial Imagery [9.23555285827483]
Road segmentation algorithms fail to generalize to new geographical locations.
Road skeleton is an auxiliary task to impose the topological constraints.
For self-training, we filter out the noisy pseudo-labels by using a connectivity-based pseudo-labels refinement strategy.
arXiv Detail & Related papers (2023-09-27T12:50:51Z)
- Towards Deeply Unified Depth-aware Panoptic Segmentation with Bi-directional Guidance Learning [63.63516124646916]
We propose a deeply unified framework for depth-aware panoptic segmentation.
We propose a bi-directional guidance learning approach to facilitate cross-task feature learning.
Our method sets the new state of the art for depth-aware panoptic segmentation on both Cityscapes-DVPS and SemKITTI-DVPS datasets.
arXiv Detail & Related papers (2023-07-27T11:28:33Z)
- Rethinking Range View Representation for LiDAR Segmentation [66.73116059734788]
"Many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments against effective learning from range view projections.
We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing.
We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts in the competing LiDAR semantic and panoptic segmentation benchmarks.
arXiv Detail & Related papers (2023-03-09T16:13:27Z)
- An Active and Contrastive Learning Framework for Fine-Grained Off-Road Semantic Segmentation [7.035838394813961]
Off-road semantic segmentation with fine-grained labels is necessary for autonomous vehicles to understand driving scenes.
Fine-grained semantic segmentation in off-road scenes usually has no unified category definition due to ambiguous natural environments.
This research proposes an active and contrastive learning-based method that does not rely on pixel-wise labels.
arXiv Detail & Related papers (2022-02-18T03:16:31Z)
- Visual Boundary Knowledge Translation for Foreground Segmentation [57.32522585756404]
We attempt to build models that explicitly account for visual boundary knowledge, in the hope of reducing the training effort of segmenting unseen categories.
With only tens of labeled samples as guidance, Trans-Net achieves results on par with fully supervised methods.
arXiv Detail & Related papers (2021-08-01T07:10:25Z)
- Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images [6.460167724233707]
We propose a bilateral awareness network (BANet) which contains a dependency path and a texture path.
BANet captures the long-range relationships and fine-grained details in VFR images.
Experiments conducted on three large-scale urban scene image segmentation datasets, i.e., the ISPRS Vaihingen, ISPRS Potsdam, and UAVid datasets, demonstrate the effectiveness of BANet.
arXiv Detail & Related papers (2021-06-23T13:57:36Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments [54.21959527308051]
We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images.
Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation.
We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation.
arXiv Detail & Related papers (2021-03-07T02:16:24Z)
- Fine-Grained Off-Road Semantic Segmentation and Mapping via Contrastive Learning [7.965964259208489]
Road detection or traversability analysis has been a key technique for a mobile robot to traverse complex off-road scenes.
Understanding scenes with fine-grained labels is needed for off-road robots, as scenes are very diverse.
This research proposes a contrastive learning-based method to achieve meaningful scene understanding for a robot traversing off-road terrain.
arXiv Detail & Related papers (2021-03-05T13:23:24Z)
- Low-latency Perception in Off-Road Dynamical Low Visibility Environments [0.9142067094647588]
This work proposes a perception system for autonomous vehicles and advanced driver assistance specialized on unpaved roads and off-road environments.
Almost 12,000 images of different unpaved and off-road environments were collected and labeled.
We used convolutional neural networks trained to segment obstacles and areas where the car can pass through.
arXiv Detail & Related papers (2020-12-23T22:54:43Z)
- BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments [54.22535063244038]
We present an unsupervised adaptation approach for visual scene understanding in unstructured traffic environments.
Our method is designed for unstructured real-world scenarios with dense and heterogeneous traffic consisting of cars, trucks, two-and three-wheelers, and pedestrians.
arXiv Detail & Related papers (2020-09-22T08:25:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.