Fusion of neural networks, for LIDAR-based evidential road mapping
- URL: http://arxiv.org/abs/2102.03326v1
- Date: Fri, 5 Feb 2021 18:14:36 GMT
- Title: Fusion of neural networks, for LIDAR-based evidential road mapping
- Authors: Edouard Capellier, Franck Davoine, Veronique Cherfaoui, You Li
- Abstract summary: We introduce RoadSeg, a new convolutional architecture that is optimized for road detection in LIDAR scans.
RoadSeg is used to classify individual LIDAR points as either belonging to the road, or not.
We thus secondly present an evidential road mapping algorithm, that fuses consecutive road detection results.
- Score: 3.065376455397363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LIDAR sensors are commonly used to provide autonomous vehicles with 3D
representations of their environment. In ideal conditions, geometrical models
can detect the road in LIDAR scans, but at the cost of manually tuned
numerical constraints and a lack of flexibility. We instead propose an
evidential pipeline to accumulate road detection results obtained from neural
networks. First, we introduce RoadSeg, a new convolutional architecture
optimized for road detection in LIDAR scans. RoadSeg is used to classify
individual LIDAR points as either belonging to the road or not. Such
point-level classification results, however, need to be converted into a dense
representation that can be used by an autonomous vehicle. We therefore
present an evidential road mapping algorithm that fuses consecutive road
detection results. We benefit from a reinterpretation of logistic
classifiers, which can be seen as generating a collection of simple evidential
mass functions. An evidential grid map that depicts the road can then be
obtained by projecting the classification results from RoadSeg into grid
cells, and by handling moving objects via conflict analysis. The system was
trained and evaluated on real-life data, and a Python implementation maintains
a 10 Hz framerate. Since road labels were needed for training, a soft labelling
procedure relying on lane-level HD maps was used to generate coarse training and
validation sets; an additional test set was manually labelled for evaluation
purposes. To reach satisfactory results, the system fuses road detection
results obtained from three variants of RoadSeg, each processing different LIDAR
features.
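The core ideas of the abstract, turning a logistic classifier's output into a simple evidential mass function over the frame {road, not_road} and fusing consecutive detections while tracking conflict, can be sketched as follows. This is a minimal illustration, not the paper's exact decomposition: the commitment function `1 - exp(-|logit|)` and the function names are assumptions made for the example.

```python
import numpy as np

def simple_mass_from_logit(logit):
    """Build a simple mass function (m_road, m_not_road, m_unknown) over the
    frame {road, not_road} from a logistic classifier's raw logit.
    Assumption for this sketch: a logit of magnitude |s| commits evidence
    w = 1 - exp(-|s|) to the favoured hypothesis, leaving the remainder on
    the whole frame (ignorance)."""
    w = 1.0 - np.exp(-abs(logit))
    if logit >= 0:
        return np.array([w, 0.0, 1.0 - w])   # evidence for {road}
    return np.array([0.0, w, 1.0 - w])       # evidence for {not_road}

def dempster_fuse(m1, m2):
    """Fuse two mass functions (m_R, m_N, m_U) with Dempster's rule.
    Returns the fused mass and the conflict mass K; a high K in a grid
    cell signals disagreement between consecutive scans, e.g. a moving
    object passing through the cell."""
    mR1, mN1, mU1 = m1
    mR2, mN2, mU2 = m2
    K = mR1 * mN2 + mN1 * mR2                 # conflicting intersections
    mR = mR1 * mR2 + mR1 * mU2 + mU1 * mR2    # intersections yielding {road}
    mN = mN1 * mN2 + mN1 * mU2 + mU1 * mN2    # intersections yielding {not_road}
    mU = mU1 * mU2                            # both ignorant
    if K >= 1.0:                              # total conflict: fall back to ignorance
        return np.array([0.0, 0.0, 1.0]), K
    return np.array([mR, mN, mU]) / (1.0 - K), K
```

Fusing two agreeing detections sharpens the belief in {road} while leaving the conflict mass at zero; fusing a road detection with a not-road detection yields a large K, which the mapping stage can use to flag the cell instead of blindly accumulating it.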
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Probabilistic road classification in historical maps using synthetic data and deep learning [3.3755652248305004]
We introduce a novel framework that integrates deep learning with geoinformation, computer-based painting, and image processing methodologies.
This framework enables the extraction and classification of roads from historical maps using only road geometries.
Our method achieved completeness and correctness scores of over 94% and 92%, respectively, for road class 2.
arXiv Detail & Related papers (2024-10-03T06:43:09Z)
- LMT-Net: Lane Model Transformer Network for Automated HD Mapping from Sparse Vehicle Observations [11.395749549636868]
Lane Model Transformer Network (LMT-Net) is an encoder-decoder neural network architecture that performs polyline encoding and predicts lane pairs and their connectivity.
We evaluate the performance of LMT-Net on an internal dataset that consists of multiple vehicle observations as well as human annotations as Ground Truth (GT).
arXiv Detail & Related papers (2024-09-19T02:14:35Z)
- Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
arXiv Detail & Related papers (2024-08-03T02:57:37Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Decoupling the Curve Modeling and Pavement Regression for Lane Detection [67.22629246312283]
Curve-based lane representation is a popular approach in many lane detection methods.
We propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression.
arXiv Detail & Related papers (2023-09-19T11:24:14Z)
- Haul Road Mapping from GPS Traces [0.0]
This paper investigates the possibility of automatically deriving an accurate representation of the road network using GPS data available from haul trucks operating on site.
Based on shortcomings seen in all tested algorithms, a post-processing step is developed which geometrically analyses the created road map for artefacts typical of free-drive areas on mine sites.
arXiv Detail & Related papers (2022-06-27T04:35:06Z)
- Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles [1.3185526395713805]
An automatic lane-level road map generation system is proposed in this paper.
The proposed system has been tested on urban and expressway conditions in Hefei, China.
Our method can achieve excellent extraction and clustering effect, and the fitted lines can reach a high position accuracy with an error of less than 10 cm.
arXiv Detail & Related papers (2020-12-11T01:09:04Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle to curb distance in real time with mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.