Enhancing Autonomous Vehicle Perception in Adverse Weather through Image Augmentation during Semantic Segmentation Training
- URL: http://arxiv.org/abs/2408.07239v1
- Date: Wed, 14 Aug 2024 00:08:28 GMT
- Title: Enhancing Autonomous Vehicle Perception in Adverse Weather through Image Augmentation during Semantic Segmentation Training
- Authors: Ethan Kou, Noah Curran
- Abstract summary: We trained encoder-decoder UNet models to perform semantic segmentation, with and without image augmentations during training.
Models trained on weather data have significantly lower losses than those trained on augmented data in all conditions except for clear days.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Robust perception is crucial in autonomous vehicle navigation and localization. Visual processing tasks, like semantic segmentation, should work in varying weather conditions and during different times of day. Semantic segmentation assigns a class to each pixel, which is useful for locating overall features (1). Training a segmentation model requires large amounts of data, and the labeling process for segmentation data is especially tedious. Additionally, many large datasets include only images taken in clear weather. This is a problem because training a model exclusively on clear weather data hinders performance in adverse weather conditions like fog or rain. We hypothesize that, given a dataset of only clear-day images, applying image augmentation (such as random rain, fog, and brightness) during training allows for domain adaptation to diverse weather conditions. We used CARLA, a 3D realistic autonomous vehicle simulator, to collect 1200 images in clear weather composed of 29 classes from 10 different towns (2). We also collected 1200 images with random weather effects. We trained encoder-decoder UNet models to perform semantic segmentation. Applying augmentations significantly improved segmentation under weathered night conditions (p < 0.001). However, models trained on weather data have significantly lower losses than those trained on augmented data in all conditions except for clear days. This shows there is room for improvement in the domain adaptation approach. Future work should test more types of augmentations and also use real-life images instead of CARLA. Ideally, the augmented model meets or exceeds the performance of the weather model.
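The augmentation strategy described above, injecting synthetic fog and brightness changes into clear-weather training images, can be sketched with simple photometric transforms. The following is an illustrative pure-Python sketch, not the authors' implementation; in practice a library such as albumentations provides ready-made RandomRain, RandomFog, and RandomBrightnessContrast transforms.

```python
import random

def add_fog(pixel, density):
    # Blend each RGB channel toward white; denser fog washes out more color.
    return tuple(int(c + (255 - c) * density) for c in pixel)

def adjust_brightness(pixel, factor):
    # Scale each channel, clamping to the valid 8-bit range.
    return tuple(min(255, int(c * factor)) for c in pixel)

def augment(image, rng=random):
    """Randomly apply fog and brightness jitter to an image given as a
    list of rows of (r, g, b) tuples. The segmentation mask needs no
    change: photometric augmentations do not move any pixels, so the
    per-pixel class labels remain valid."""
    out = image
    if rng.random() < 0.5:
        density = rng.uniform(0.1, 0.5)
        out = [[add_fog(p, density) for p in row] for row in out]
    if rng.random() < 0.5:
        factor = rng.uniform(0.5, 1.5)
        out = [[adjust_brightness(p, factor) for p in row] for row in out]
    return out
```

Because only pixel intensities change, the same ground-truth mask can be reused for every augmented copy, which is what makes augmentation cheaper than collecting and labeling real adverse-weather data.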
Related papers
- WeatherProof: Leveraging Language Guidance for Semantic Segmentation in Adverse Weather [8.902960772665482]
We propose a method to infer semantic segmentation maps from images captured under adverse weather conditions.
We begin by examining existing models on images degraded by weather conditions such as rain, fog, or snow.
We propose WeatherProof, the first semantic segmentation dataset with accurate clear and adverse weather image pairs.
arXiv Detail & Related papers (2024-03-21T22:46:27Z)
- WeatherProof: A Paired-Dataset Approach to Semantic Segmentation in Adverse Weather [9.619700283574533]
We introduce a general paired-training method that leads to improved performance on images in adverse weather conditions.
We create the first semantic segmentation dataset with accurate clear and adverse weather image pairs.
We find that training on these paired clear and adverse weather frames which share an underlying scene results in improved performance on adverse weather data.
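One plausible reading of this paired-training idea (an illustrative sketch only, not necessarily the paper's actual loss) is to supervise both frames with the shared label and add a consistency term that pulls the two predictions together:

```python
def mse(a, b):
    # Mean squared error between two flat prediction vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def paired_training_loss(model, clear, adverse, label, consistency_weight=0.1):
    """Supervise predictions on both the clear and the adverse frame with
    the shared ground truth, plus a consistency penalty: both frames
    depict the same underlying scene, so the model should produce the
    same segmentation for each."""
    pred_clear = model(clear)
    pred_adverse = model(adverse)
    supervised = mse(pred_clear, label) + mse(pred_adverse, label)
    consistency = mse(pred_clear, pred_adverse)
    return supervised + consistency_weight * consistency
```

The consistency term is what exploits the pairing: it penalizes weather-induced disagreement even on pixels where the supervised loss is already small.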
arXiv Detail & Related papers (2023-12-15T04:57:54Z)
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample specific weather prior extracted by CLIP image encoder together with the distribution specific information learned by a set of parameters.
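As an illustrative sketch only (the module's actual design is not specified here), such conditioning can be pictured as FiLM-style modulation: the sample-specific CLIP embedding predicts a per-feature scale and shift through learned, distribution-specific parameters:

```python
def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def condition_features(features, clip_embedding, scale_w, shift_w):
    """Modulate a feature vector with a scale and shift derived from the
    sample-specific CLIP image embedding. scale_w and shift_w stand in
    for the learned, distribution-specific parameters; with zero weights
    the features pass through unchanged."""
    scale = 1.0 + dot(clip_embedding, scale_w)
    shift = dot(clip_embedding, shift_w)
    return [f * scale + shift for f in features]
```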
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
- Counting Crowds in Bad Weather [68.50690406143173]
We propose a method for robust crowd counting in adverse weather scenarios.
Our model learns effective features and adaptive queries to account for large appearance variations.
Experimental results show that the proposed algorithm is effective in counting crowds under different weather types on benchmark datasets.
arXiv Detail & Related papers (2023-06-02T00:00:09Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Efficient Semantic Segmentation on Edge Devices [7.5562201794440185]
This project analyzes current semantic segmentation models to explore the feasibility of applying these models for emergency response during catastrophic events.
We compare the performance of real-time semantic segmentation models with non-real-time counterparts constrained by aerial images under oppositional settings.
Furthermore, we train several models on the Flood-Net dataset, containing UAV images captured after Hurricane Harvey, and benchmark their execution on special classes such as flooded buildings vs. non-flooded buildings or flooded roads vs. non-flooded roads.
arXiv Detail & Related papers (2022-12-28T04:13:11Z)
- Synthetic Data for Object Classification in Industrial Applications [53.180678723280145]
In object classification, capturing a large number of images per object and in different conditions is not always possible.
This work explores the creation of artificial images using a game engine to cope with limited data in the training dataset.
arXiv Detail & Related papers (2022-12-09T11:43:04Z)
- An Efficient Domain-Incremental Learning Approach to Drive in All Weather Conditions [8.436505917796174]
Deep neural networks enable impressive visual perception performance for autonomous driving.
However, they are prone to forgetting previously learned information when adapting to different weather conditions.
We propose DISC -- Domain Incremental through Statistical Correction -- a simple zero-forgetting approach which can incrementally learn new tasks.
arXiv Detail & Related papers (2022-04-19T11:39:20Z)
- TransWeather: Transformer-based Restoration of Images Degraded by Adverse Weather Conditions [77.20136060506906]
We propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder.
TransWeather achieves significant improvements over the All-in-One network across multiple test datasets.
It is validated on real world test images and found to be more effective than previous methods.
arXiv Detail & Related papers (2021-11-29T18:57:09Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Weather and Light Level Classification for Autonomous Driving: Dataset, Baseline and Active Learning [0.6445605125467573]
We build a new dataset for weather (fog, rain, and snow) classification and light level (bright, moderate, and low) classification.
Each image has three labels corresponding to weather, light level, and street type.
We implement an active learning framework to reduce the dataset's redundancy and find the optimal set of frames for training a model.
arXiv Detail & Related papers (2021-04-28T22:53:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.