Dense Road Surface Grip Map Prediction from Multimodal Image Data
- URL: http://arxiv.org/abs/2404.17324v1
- Date: Fri, 26 Apr 2024 11:10:24 GMT
- Title: Dense Road Surface Grip Map Prediction from Multimodal Image Data
- Authors: Jyri Maanpää, Julius Pesonen, Heikki Hyyti, Iaroslav Melekhov, Juho Kannala, Petri Manninen, Antero Kukko, Juha Hyyppä
- Abstract summary: We propose a method to predict a dense grip map from the area in front of the car, based on postprocessed multimodal sensor data.
We trained a convolutional neural network to predict pixelwise grip values from fused RGB camera, thermal camera, and LiDAR reflectance images.
- Score: 7.584488044752369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Slippery road weather conditions are prevalent in many regions and pose a regular risk to traffic. Still, there has been relatively little research on how autonomous vehicles could detect slippery driving conditions on the road in order to drive safely. In this work, we propose a method to predict a dense grip map of the area in front of the car, based on postprocessed multimodal sensor data. We trained a convolutional neural network to predict pixelwise grip values from fused RGB camera, thermal camera, and LiDAR reflectance images, based on weakly supervised ground truth from an optical road weather sensor. The experiments show that it is possible to predict dense grip values with good accuracy from the used data modalities, as the produced grip map follows both ground truth measurements and local weather conditions, such as snowy areas on the road. Models using only the RGB camera or LiDAR reflectance modality provided good baseline grip prediction accuracy, while models fusing the RGB camera, thermal camera, and LiDAR modalities improved the grip predictions significantly.
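The early-fusion idea in the abstract, concatenating registered RGB, thermal, and LiDAR reflectance images and predicting one grip value per pixel, can be sketched in miniature. Everything below (image size, channel counts, and the toy 1x1-convolution head) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

# Illustrative early-fusion sketch: all modalities are assumed to be
# registered to the same image grid. Shapes, channel counts, and the
# toy 1x1-convolution head are invented, not the paper's architecture.

H, W = 64, 64
rgb = np.random.rand(3, H, W)      # RGB camera, 3 channels
thermal = np.random.rand(1, H, W)  # thermal camera, 1 channel
lidar = np.random.rand(1, H, W)    # LiDAR reflectance, 1 channel

# Early fusion: stack modalities along the channel axis -> (5, H, W).
fused = np.concatenate([rgb, thermal, lidar], axis=0)

def conv1x1(x, weights, bias):
    """1x1 convolution: a per-pixel linear mix of the input channels."""
    # x: (C_in, H, W), weights: (C_out, C_in), bias: (C_out,)
    out = np.tensordot(weights, x, axes=([1], [0]))  # (C_out, H, W)
    return out + bias[:, None, None]

# Toy prediction head: map the 5 fused channels to one grip value per
# pixel, squashed into [0, 1] with a sigmoid (grip is a bounded quantity).
rng = np.random.default_rng(0)
w = rng.normal(size=(1, 5)) * 0.1
b = np.zeros(1)
logits = conv1x1(fused, w, b)[0]          # (H, W)
grip_map = 1.0 / (1.0 + np.exp(-logits))  # per-pixel grip in (0, 1)
```

A real model would replace the 1x1 head with a full encoder-decoder network, but the fusion step itself (channel-wise concatenation before the first convolution) is the same.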
Related papers
- Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems [0.9899633398596672]
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose the use of digital twins, built with the CARLA simulator, to generate a large dataset representative of a specific real-world camera.
arXiv Detail & Related papers (2024-07-11T10:41:20Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Lightweight Regression Model with Prediction Interval Estimation for Computer Vision-based Winter Road Surface Condition Monitoring [0.4972323953932129]
This paper proposes a deep learning regression model, SIWNet, capable of estimating road surface friction properties from camera images.
SIWNet extends the state of the art by including an uncertainty estimation mechanism in the architecture.
The model was trained and tested with the SeeingThroughFog dataset, which features corresponding road friction sensor readings and images from an instrumented vehicle.
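Prediction intervals of the kind the SIWNet summary describes are often obtained with quantile (pinball) losses. The following is a minimal numpy sketch under that assumption; the data, quantile levels, and loss formulation are invented for illustration and may differ from the paper's actual mechanism:

```python
import numpy as np

# Sketch of prediction-interval estimation via the pinball (quantile)
# loss, a common way to attach uncertainty bounds to a regressor.
# Targets and quantile levels here are made up for illustration.

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Fake friction targets in [0.1, 0.9] and two constant quantile
# predictions standing in for a lower/upper interval head.
rng = np.random.default_rng(0)
y = rng.uniform(0.1, 0.9, size=1000)

lo = np.full_like(y, 0.2)  # candidate 5th-percentile prediction
hi = np.full_like(y, 0.8)  # candidate 95th-percentile prediction

loss_lo = pinball_loss(y, lo, tau=0.05)
loss_hi = pinball_loss(y, hi, tau=0.95)

# Empirical coverage: fraction of targets inside [lo, hi].
coverage = np.mean((y >= lo) & (y <= hi))
```

Minimizing the two pinball losses during training pushes `lo` and `hi` toward the chosen quantiles, so the interval width itself becomes the uncertainty estimate.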
arXiv Detail & Related papers (2023-10-02T06:33:06Z)
- LOPR: Latent Occupancy PRediction using Generative Models [49.15687400958916]
LiDAR generated occupancy grid maps (L-OGMs) offer a robust bird's eye-view scene representation.
We propose a framework that decouples occupancy prediction into: representation learning and prediction within the learned latent space.
arXiv Detail & Related papers (2022-10-03T22:04:00Z)
- LiDAR Snowfall Simulation for Robust 3D Object Detection [116.10039516404743]
We propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds.
Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam.
We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data for training 3D object detection models that are robust to snowfall.
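The per-beam idea in this entry, sampling snow particles along each LiDAR beam and modifying the return, can be caricatured in a few lines. The particle rate, interception probability, and scattering rule below are invented; the paper's physically based model is far richer:

```python
import numpy as np

# Toy snowfall simulation for a single LiDAR beam: sample candidate
# snow particles along the beam and let the first interception produce
# an early, spurious return. All parameters here are illustrative.

rng = np.random.default_rng(42)

def simulate_beam(true_range, particles_per_m=0.02, hit_prob=0.3):
    """Return (measured_range, is_snow_return) for one beam."""
    # Number of snow particles along the beam, Poisson-distributed.
    n = rng.poisson(particles_per_m * true_range)
    dists = np.sort(rng.uniform(0.0, true_range, size=n))
    for d in dists:
        # Each particle intercepts the beam with some probability;
        # the first interception replaces the surface return.
        if rng.random() < hit_prob:
            return d, True
    # Otherwise the original surface return survives (in reality it
    # would also be attenuated; here we keep the range unchanged).
    return true_range, False

ranges = [simulate_beam(50.0) for _ in range(200)]
snow_hits = sum(1 for _, s in ranges if s)
```

Applying such a perturbation to every beam of a clear-weather point cloud yields the partially synthetic snowy training data the entry mentions.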
arXiv Detail & Related papers (2022-03-28T21:48:26Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- All-Weather Object Recognition Using Radar and Infrared Sensing [1.7513645771137178]
This thesis explores new sensing developments based on long wave polarised infrared (IR) imagery and imaging radar to recognise objects.
First, we developed a methodology based on Stokes parameters using polarised infrared data to recognise vehicles using deep neural networks.
Second, we explored the potential of using only the power spectrum captured by low-THz radar sensors to perform object recognition in a controlled scenario.
Last, we created a new large-scale dataset in the "wild" with many different weather scenarios showing radar robustness to detect vehicles in adverse weather.
arXiv Detail & Related papers (2020-10-30T14:16:39Z)
- Multimodal End-to-End Learning for Autonomous Steering in Adverse Road and Weather Conditions [0.0]
We extend the previous work on end-to-end learning for autonomous steering to operate in adverse real-life conditions with multimodal data.
We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the car steering wheel angle.
arXiv Detail & Related papers (2020-10-28T12:38:41Z)
- Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, which is expensive and suffers from large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle dataset contains 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.