Low-latency Perception in Off-Road Dynamical Low Visibility Environments
- URL: http://arxiv.org/abs/2012.13014v1
- Date: Wed, 23 Dec 2020 22:54:43 GMT
- Title: Low-latency Perception in Off-Road Dynamical Low Visibility Environments
- Authors: Nelson Alves, Marco Ruiz, Marco Reis, Tiago Cajahyba, Davi Oliveira,
Ana Barreto, Eduardo F. Simas Filho, Wagner L. A. de Oliveira, Leizer
Schnitman, Roberto L. S. Monteiro
- Abstract summary: This work proposes a perception system for autonomous vehicles and advanced driver assistance specialized in unpaved roads and off-road environments.
Almost 12,000 images of different unpaved and off-road environments were collected and labeled.
We used convolutional neural networks trained to segment obstacles and the areas where the car can pass through.
- Score: 0.9142067094647588
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work proposes a perception system for autonomous vehicles and
advanced driver assistance specialized in unpaved roads and off-road
environments. In this research, the authors have investigated the behavior of
Deep Learning algorithms applied to semantic segmentation of off-road
environments and unpaved roads under different adverse visibility conditions.
Almost 12,000 images of different unpaved and off-road environments were
collected and labeled, and an off-road proving ground was assembled exclusively
for this development. The proposed dataset also contains many adverse
situations such as rain, dust, and low light. To develop the system, we used
convolutional neural networks trained to segment obstacles and the areas where
the car can pass through. We developed a Configurable Modular Segmentation
Network (CMSNet) framework to help create different architecture arrangements
and test them on the proposed dataset. In addition, we ported some CMSNet
configurations by removing and fusing several layers using TensorRT, C++, and
CUDA to achieve embedded real-time inference and allow field tests. The main
contributions of this work are: a new dataset for unpaved roads and off-road
environments containing many adverse conditions such as night, rain, and dust;
the CMSNet framework; an investigation of the feasibility of applying deep
learning to detect regions where the vehicle can pass through when there is no
clear boundary of the track; a study of how our proposed segmentation
algorithms behave under different severity levels of visibility impairment; and
an evaluation of field tests carried out with semantic segmentation
architectures ported for real-time inference.
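The abstract describes two technical ingredients: a segmentation network assembled from a configuration (so that different architecture arrangements can be compared on the same dataset) and a deployment path that fuses layers for embedded real-time inference with TensorRT, C++, and CUDA. The sketch below is a minimal, hypothetical illustration of that workflow in PyTorch, not the authors' CMSNet code: the class name `ConfigurableSegNet`, the configuration keys, the 288x512 input size, and the `segnet.onnx` file name are assumptions made for the example. Exporting to ONNX and then building a TensorRT engine (for instance with `trtexec`) is one common route to the kind of layer-fused, real-time deployment the paper mentions.

```python
# Hypothetical sketch: config-driven encoder-decoder for traversability
# segmentation, plus ONNX export as a common first step toward a
# TensorRT engine. Not the authors' CMSNet implementation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 conv -> batch norm -> ReLU, the basic modular unit."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ConfigurableSegNet(nn.Module):
    """Encoder-decoder whose depth and width come from a config dict,
    mimicking the idea of swapping architecture arrangements."""
    def __init__(self, cfg):
        super().__init__()
        chs = cfg["encoder_channels"]          # e.g. [32, 64, 128]
        enc, in_ch = [], cfg.get("in_channels", 3)
        for out_ch in chs:                     # downsampling path
            enc += [conv_block(in_ch, out_ch), nn.MaxPool2d(2)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*enc)
        dec = []
        for out_ch in reversed(chs[:-1]):      # upsampling path
            dec += [nn.Upsample(scale_factor=2, mode="bilinear",
                                align_corners=False),
                    conv_block(in_ch, out_ch)]
            in_ch = out_ch
        dec += [nn.Upsample(scale_factor=2, mode="bilinear",
                            align_corners=False),
                nn.Conv2d(in_ch, cfg["num_classes"], kernel_size=1)]
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))

cfg = {"encoder_channels": [32, 64, 128], "num_classes": 2}
model = ConfigurableSegNet(cfg).eval()

# Export to ONNX; a TensorRT engine (with layer fusion) could then be
# built from this file, e.g. `trtexec --onnx=segnet.onnx`.
dummy = torch.randn(1, 3, 288, 512)            # assumed input resolution
torch.onnx.export(model, dummy, "segnet.onnx", opset_version=13,
                  input_names=["image"], output_names=["logits"])
```

Keeping the architecture choices in a plain configuration dictionary makes it straightforward to sweep several encoder depths and widths against the same dataset, which is the spirit of the "different architecture arrangements" the abstract refers to.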
Related papers
- Off-Road LiDAR Intensity Based Semantic Segmentation [11.684330305297523]
Learning-based LiDAR semantic segmentation utilizes machine learning techniques to automatically classify objects in LiDAR point clouds.
We address this problem by harnessing the LiDAR intensity parameter to enhance object segmentation in off-road environments.
Our approach was evaluated on the RELLIS-3D dataset and yielded promising results in a preliminary analysis, with improved mIoU for the classes "puddle" and "grass".
arXiv Detail & Related papers (2024-01-02T21:27:43Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety-critical nature of driving systems, no solution to the problem of adapting MOT to domain shift in test-time conditions had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without a single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- OFFSEG: A Semantic Segmentation Framework For Off-Road Driving [6.845371503461449]
We propose a framework for off-road semantic segmentation called OFFSEG.
Off-road semantic segmentation is challenging due to the presence of uneven terrains, unstructured class boundaries, irregular features and strong textures.
arXiv Detail & Related papers (2021-03-23T09:45:41Z)
- GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments [54.21959527308051]
We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images.
Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation.
We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation.
arXiv Detail & Related papers (2021-03-07T02:16:24Z)
- Fusion of neural networks, for LIDAR-based evidential road mapping [3.065376455397363]
We introduce RoadSeg, a new convolutional architecture that is optimized for road detection in LIDAR scans.
RoadSeg is used to classify individual LIDAR points as either belonging to the road, or not.
We then present an evidential road mapping algorithm that fuses consecutive road detection results.
arXiv Detail & Related papers (2021-02-05T18:14:36Z)
- BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments [54.22535063244038]
We present an unsupervised adaptation approach for visual scene understanding in unstructured traffic environments.
Our method is designed for unstructured real-world scenarios with dense and heterogeneous traffic consisting of cars, trucks, two-and three-wheelers, and pedestrians.
arXiv Detail & Related papers (2020-09-22T08:25:44Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
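Relating to the GANav entry above, which classifies groups of terrain classes by navigability level, the toy sketch below illustrates the general idea of collapsing fine-grained terrain labels into coarse navigability groups. It is a hypothetical illustration, not GANav's implementation; the class names, group ids, and the `to_navigability` helper are invented for the example.

```python
# Toy illustration (not GANav's code): remap fine-grained terrain labels
# into coarse navigability groups, the general idea behind group-wise
# navigability classes.
import numpy as np

# Hypothetical fine classes mapped to coarse navigability levels:
# 0 = non-navigable, 1 = rough but passable, 2 = smooth/preferred
GROUPS = {
    "tree": 0, "water": 0, "rock": 0,
    "grass": 1, "gravel": 1, "mud": 1,
    "dirt_road": 2, "asphalt": 2,
}
FINE_IDS = {name: i for i, name in enumerate(GROUPS)}

def to_navigability(label_map: np.ndarray) -> np.ndarray:
    """Map an HxW array of fine class ids to coarse group ids."""
    lut = np.array([GROUPS[name] for name in GROUPS], dtype=np.uint8)
    return lut[label_map]

# Example: a tiny 2x3 "prediction" expressed in fine class ids
pred = np.array([[FINE_IDS["grass"], FINE_IDS["tree"], FINE_IDS["dirt_road"]],
                 [FINE_IDS["mud"],   FINE_IDS["rock"], FINE_IDS["asphalt"]]])
print(to_navigability(pred))  # coarse groups: [[1 0 2] [1 0 2]]
```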
This list is automatically generated from the titles and abstracts of the papers on this site.