Adverse Weather Conditions Augmentation of LiDAR Scenes with Latent Diffusion Models
- URL: http://arxiv.org/abs/2501.01761v1
- Date: Fri, 03 Jan 2025 11:26:29 GMT
- Title: Adverse Weather Conditions Augmentation of LiDAR Scenes with Latent Diffusion Models
- Authors: Andrea Matteazzi, Pascal Colling, Michael Arnold, Dietmar Tutsch
- Abstract summary: We propose a latent diffusion process consisting of an autoencoder and latent diffusion models.
We leverage clear-condition LiDAR scenes with a postprocessing step to improve the realism of the generated adverse-weather scenes.
- Abstract: LiDAR scenes constitute a fundamental data source for several autonomous driving applications. Despite the existence of several datasets, scenes from adverse weather conditions are rarely available. This limits the robustness of downstream machine learning models and restricts the reliability of autonomous driving systems in particular locations and seasons. Collecting feature-diverse scenes under adverse weather conditions is challenging due to seasonal limitations. Generative models are therefore essential, especially for generating adverse weather conditions for specific driving scenarios. In our work, we propose a latent diffusion process consisting of an autoencoder and latent diffusion models. Moreover, we leverage clear-condition LiDAR scenes with a postprocessing step to improve the realism of the generated adverse-weather scenes.
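The two-stage pipeline in the abstract (encode a clear scene, run diffusion in latent space, decode, then blend with the original clear scan) can be sketched as follows. This is a toy illustration under assumed names and shapes, not the authors' implementation: the real encoder, decoder, and diffusion model are learned networks, and `postprocess` here is just a hypothetical mask-based blend.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(scan):
    """Toy autoencoder encoder: average-pool a range image into a latent grid."""
    h, w = scan.shape
    return scan.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def decode(latent):
    """Toy decoder: upsample the latent back to range-image resolution."""
    return np.kron(latent, np.ones((4, 4)))

def diffuse_to_adverse(latent, steps=10):
    """Stand-in for the latent diffusion model: repeated noise injection."""
    z = latent.copy()
    for _ in range(steps):
        z = 0.9 * z + 0.1 * rng.normal(scale=0.05, size=z.shape)
    return z

def postprocess(generated, clear_scan, keep_mask):
    """Blend generated adverse-weather ranges with the clear scan, keeping
    the original geometry wherever the mask marks it as reliable."""
    return np.where(keep_mask, clear_scan, generated)

clear = rng.uniform(1.0, 80.0, size=(64, 1024))  # toy LiDAR range image (metres)
latent = encode(clear)
adverse = decode(diffuse_to_adverse(latent))
mask = clear < 20.0                              # e.g. trust near-range returns
augmented = postprocess(adverse, clear, mask)
assert augmented.shape == clear.shape
```

The blend in the final step mirrors the paper's idea of anchoring the generated adverse-weather scene to the real clear-condition scan to improve realism; the 20 m threshold is an arbitrary placeholder.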
Related papers
- Multiple weather images restoration using the task transformer and adaptive mixup strategy [14.986500375481546]
We introduce a novel multi-task severe weather removal model that can effectively handle complex weather conditions in an adaptive manner.
Our model incorporates a weather task sequence generator, enabling the self-attention mechanism to selectively focus on features specific to different weather types.
Our proposed model has achieved state-of-the-art performance on the publicly available dataset.
arXiv Detail & Related papers (2024-09-05T04:55:40Z) - PLT-D3: A High-fidelity Dynamic Driving Simulation Dataset for Stereo Depth and Scene Flow [0.0]
This paper introduces the Dynamic-weather Driving dataset: high-fidelity stereo depth and scene flow ground-truth data generated using Unreal Engine 5.
In particular, this dataset includes synchronized high-resolution stereo image sequences that replicate a wide array of dynamic weather scenarios.
Benchmarks have been established for several critical autonomous driving tasks using Unreal-D3 to measure and enhance the performance of state-of-the-art models.
arXiv Detail & Related papers (2024-06-11T19:21:46Z) - Real-Time Environment Condition Classification for Autonomous Vehicles [3.8514288339458718]
We train a deep learning model to identify outdoor weather and dangerous road conditions.
We achieve this by introducing an improved taxonomy and label hierarchy for a state-of-the-art adverse-weather dataset.
We train RECNet, a deep learning model for the classification of environment conditions from a single RGB frame.
arXiv Detail & Related papers (2024-05-29T17:29:55Z) - GenAD: Generalized Predictive Model for Autonomous Driving [75.39517472462089]
We introduce the first large-scale video prediction model in the autonomous driving discipline.
Our model, dubbed GenAD, handles the challenging dynamics in driving scenes with novel temporal reasoning blocks.
It can be adapted into an action-conditioned prediction model or a motion planner, holding great potential for real-world driving applications.
arXiv Detail & Related papers (2024-03-14T17:58:33Z) - Instructed Diffuser with Temporal Condition Guidance for Offline Reinforcement Learning [71.24316734338501]
We propose an effective temporally-conditional diffusion model coined Temporally-Composable Diffuser (TCD).
TCD extracts temporal information from interaction sequences and explicitly guides generation with temporal conditions.
Our method reaches or matches the best performance compared with prior SOTA baselines.
arXiv Detail & Related papers (2023-06-08T02:12:26Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method that renders foggy scenes and decomposes out the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, the dynamic generator consists of an emission model and a transition model to simultaneously encode the spatial physical structure and temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
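The emission/transition structure of such a dynamic generator can be illustrated with a toy state-space sketch. This is purely illustrative under assumed shapes: the paper's emission and transition models are learned, not hand-coded rules like the drift-and-clip below.

```python
import numpy as np

rng = np.random.default_rng(1)

def transition(state, drift=2):
    """Toy transition model: shift rain streaks down-frame between time steps
    and perturb their intensity, giving temporally continuous change."""
    return np.roll(state, drift, axis=0) * rng.uniform(0.95, 1.05, size=state.shape)

def emission(state):
    """Toy emission model: render the latent rain state as an additive
    rain layer with intensities clipped to [0, 1]."""
    return np.clip(state, 0.0, 1.0)

state = (rng.random((32, 32)) > 0.97).astype(float)  # sparse initial streaks
frames = []
for _ in range(5):                                   # unroll a short rain video
    frames.append(emission(state))
    state = transition(state)
assert len(frames) == 5
```

Unrolling the transition model while emitting a frame at each step is what lets a state-space generator encode both the spatial structure of streaks (in the state) and their continuous motion (in the transition).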
arXiv Detail & Related papers (2021-03-14T14:28:57Z) - ZeroScatter: Domain Transfer for Long Distance Imaging and Vision through Scattering Media [26.401067775059154]
We present ZeroScatter, a domain transfer method for converting RGB-only captures taken in adverse weather into clear daytime scenes.
We assess the proposed method using real-world captures; it outperforms existing monocular de-scattering approaches by 2.8 dB PSNR on controlled fog chamber measurements.
arXiv Detail & Related papers (2021-02-11T04:41:17Z) - Multimodal End-to-End Learning for Autonomous Steering in Adverse Road and Weather Conditions [0.0]
We extend the previous work on end-to-end learning for autonomous steering to operate in adverse real-life conditions with multimodal data.
We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the car steering wheel angle.
arXiv Detail & Related papers (2020-10-28T12:38:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.