Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for
Video Adverse Weather Removal
- URL: http://arxiv.org/abs/2403.07684v1
- Date: Tue, 12 Mar 2024 14:21:30 GMT
- Title: Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for
Video Adverse Weather Removal
- Authors: Yijun Yang, Hongtao Wu, Angelica I. Aviles-Rivero, Yulun Zhang, Jing
Qin, Lei Zhu
- Abstract summary: We introduce test-time adaptation into adverse weather removal in videos.
We propose the first framework that integrates test-time adaptation into the iterative diffusion reverse process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world vision tasks frequently suffer from the appearance of unexpected
adverse weather conditions, including rain, haze, snow, and raindrops. In the
last decade, convolutional neural networks and vision transformers have yielded
outstanding results in removing a single weather type from videos. However, owing to the
absence of appropriate adaptation, most of them fail to generalize to other
weather conditions. Although ViWS-Net was proposed to remove adverse weather
conditions in videos with a single set of pre-trained weights, it is strongly
biased toward the weather seen at train time and degrades on unseen weather at
test time. In this work, we introduce test-time adaptation into
adverse weather removal in videos, and propose the first framework that
integrates test-time adaptation into the iterative diffusion reverse process.
Specifically, we devise a diffusion-based network with a novel temporal noise
model to efficiently explore frame-correlated information in degraded video
clips at the training stage. During the inference stage, we introduce a proxy
task named Diffusion Tubelet Self-Calibration to learn the prior distribution
of the test video stream and optimize the model by approximating the temporal
noise model for online adaptation. Experimental results on benchmark datasets
demonstrate that our Test-Time Adaptation method with a Diffusion-based
network (Diff-TTA) outperforms state-of-the-art methods in restoring videos
degraded by seen weather conditions. Its generalization capability is also
validated on unseen weather conditions in both synthesized and real-world
videos.
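The key idea in the abstract — adapting the model online, inside the iterative diffusion reverse process, via a self-supervised proxy loss — can be sketched as follows. This is a minimal, illustrative sketch, not the authors' implementation: the toy denoiser, the perturbation-consistency proxy loss (a stand-in for Diffusion Tubelet Self-Calibration), and the simplified deterministic reverse update are all assumptions.

```python
# Hedged sketch: test-time adaptation woven into a diffusion reverse
# process, in the spirit of Diff-TTA. All names and the proxy loss
# are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for the diffusion-based restoration network."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x_t, t):
        # Predict the noise component at timestep t (t is unused in
        # this toy network but kept for interface fidelity).
        return self.net(x_t)

def reverse_with_tta(model, x_T, alphas, lr=1e-4):
    """Run the reverse diffusion chain; at each step, first adapt the
    model online with a self-supervised proxy loss, then denoise."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x_t = x_T
    for t in reversed(range(len(alphas))):
        # --- proxy task (placeholder for Tubelet Self-Calibration):
        # the noise estimate should be stable under a small
        # perturbation of the current sample.
        x_in = x_t.detach()
        eps_a = model(x_in, t)
        eps_b = model(x_in + 0.01 * torch.randn_like(x_in), t)
        proxy_loss = (eps_a - eps_b).pow(2).mean()
        opt.zero_grad()
        proxy_loss.backward()
        opt.step()
        # --- one simplified deterministic reverse step with the
        # freshly adapted weights.
        with torch.no_grad():
            eps = model(x_t, t)
            a = alphas[t]
            x_t = (x_t - (1 - a).sqrt() * eps) / a.sqrt()
    return x_t

torch.manual_seed(0)
model = TinyDenoiser()
alphas = torch.linspace(0.99, 0.9, 4)
restored = reverse_with_tta(model, torch.randn(1, 3, 8, 8), alphas)
print(restored.shape)  # torch.Size([1, 3, 8, 8])
```

The point of the structure is that adaptation and sampling interleave: the gradient step on the proxy loss happens before each reverse step, so later denoising steps already benefit from what earlier steps learned about the incoming video stream.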
Related papers
- Semi-Supervised Video Desnowing Network via Temporal Decoupling Experts and Distribution-Driven Contrastive Regularization
We present a new paradigm for video desnowing in a semi-supervised spirit to involve unlabeled real data for the generalizable snow removal.
Specifically, we construct a real-world dataset with 85 snowy videos, and then present a Semi-supervised Video Desnowing Network (SemiVDN) equipped by a novel Distribution-driven Contrastive Regularization.
The elaborated contrastive regularizations mitigate the distribution gap between the synthetic and real data, and consequently maintain the desired snow-invariant background details.
arXiv Detail & Related papers (2024-10-10T13:31:42Z)
- WeatherProof: A Paired-Dataset Approach to Semantic Segmentation in Adverse Weather
We introduce a general paired-training method that leads to improved performance on images in adverse weather conditions.
We create the first semantic segmentation dataset with accurate clear and adverse weather image pairs.
We find that training on these paired clear and adverse weather frames which share an underlying scene results in improved performance on adverse weather data.
arXiv Detail & Related papers (2023-12-15T04:57:54Z)
- Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation
We propose the first framework for restoring videos from all adverse weather conditions by developing a video adverse-weather-component suppression network (ViWS-Net).
Our ViWS-Net outperforms current state-of-the-art methods in terms of restoring videos degraded by any weather condition.
arXiv Detail & Related papers (2023-09-24T17:13:55Z)
- Robust Monocular Depth Estimation under Challenging Conditions
State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
arXiv Detail & Related papers (2023-08-18T17:59:01Z)
- Sit Back and Relax: Learning to Drive Incrementally in All Weather Conditions
In autonomous driving scenarios, current object detection models show strong performance when tested in clear weather.
We propose Domain-Incremental Learning through Activation Matching (DILAM) to adapt only the affine parameters of a clear weather pre-trained network to different weather conditions.
Our memory bank is extremely lightweight, since affine parameters account for less than 2% of the parameters of a typical object detector.
arXiv Detail & Related papers (2023-05-30T11:37:41Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Mutual-GAN: Towards Unsupervised Cross-Weather Adaptation with Mutual Information Constraint
Convolutional neural networks (CNNs) have proven successful for semantic segmentation, a core task of emerging industrial applications such as autonomous driving.
In practical applications, outdoor weather and illumination are changeable, e.g., cloudy or nighttime conditions, which causes a significant drop in the segmentation accuracy of a CNN trained only on daytime data.
We propose a novel generative adversarial network (namely Mutual-GAN) to alleviate the accuracy decline when a daytime-trained network is applied to videos captured under adverse weather conditions.
arXiv Detail & Related papers (2021-06-30T11:44:22Z)
- Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method achieves a frame-level AUROC of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z)
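Among the summaries above, the DILAM entry rests on one concrete mechanism: keeping the backbone frozen and storing only the affine parameters of its normalization layers in a per-weather memory bank, which is why the bank is so lightweight. A minimal sketch of that idea, assuming a toy backbone and hypothetical names throughout (this is not the authors' code):

```python
# Hedged sketch of affine-only domain adaptation in the spirit of
# DILAM: a memory bank of BatchNorm states, one entry per weather
# domain, swapped into a frozen backbone at test time.
import copy
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)

def snapshot_norm(model):
    """Deep-copy the state of every BatchNorm layer (affine
    weight/bias plus running statistics), keyed by module name."""
    return {name: copy.deepcopy(m.state_dict())
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}

def load_norm(model, entry):
    """Plug a stored set of BatchNorm parameters back into the model."""
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            m.load_state_dict(entry[name])

bank = {"clear": snapshot_norm(backbone)}   # clear-weather plugin
backbone[1].weight.data.add_(1.0)           # pretend adaptation shifted the affine terms
bank["fog"] = snapshot_norm(backbone)       # store the adapted plugin
load_norm(backbone, bank["clear"])          # swap back for clear weather
print(backbone[1].weight.data[0].item())    # 1.0 (default affine weight restored)
```

Since only normalization states are stored, each bank entry is tiny relative to the conv weights, which matches the "affine parameters account for less than 2%" observation in the summary above.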
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.