Video Adverse-Weather-Component Suppression Network via Weather
Messenger and Adversarial Backpropagation
- URL: http://arxiv.org/abs/2309.13700v1
- Date: Sun, 24 Sep 2023 17:13:55 GMT
- Title: Video Adverse-Weather-Component Suppression Network via Weather
Messenger and Adversarial Backpropagation
- Authors: Yijun Yang, Angelica I. Aviles-Rivero, Huazhu Fu, Ye Liu, Weiming
Wang, Lei Zhu
- Abstract summary: We propose the first framework for restoring videos from all adverse weather conditions by developing a video adverse-weather-component suppression network (ViWS-Net)
Our ViWS-Net outperforms current state-of-the-art methods in terms of restoring videos degraded by any weather condition.
- Score: 45.184188689391775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although convolutional neural networks (CNNs) have been proposed to remove
adverse weather conditions in single images using a single set of pre-trained
weights, they fail to restore weather videos due to the absence of temporal
information. Furthermore, existing methods for removing adverse weather
conditions (e.g., rain, fog, and snow) from videos can only handle one type of
adverse weather. In this work, we propose the first framework for restoring
videos from all adverse weather conditions by developing a video
adverse-weather-component suppression network (ViWS-Net). To achieve this, we
first devise a weather-agnostic video transformer encoder with multiple
transformer stages. Moreover, we design a long short-term temporal modeling
mechanism for the weather messenger that fuses adjacent input video frames at
an early stage and learns weather-specific information. We further introduce a
weather discriminator with gradient reversal that adversarially predicts
weather types, preserving weather-invariant common information while
suppressing weather-specific information in pixel features. Finally, we develop a messenger-driven
video transformer decoder to retrieve the residual weather-specific feature,
which is spatiotemporally aggregated with hierarchical pixel features and
refined to predict the clean target frame of input videos. Experimental
results, on benchmark datasets and real-world weather videos, demonstrate that
our ViWS-Net outperforms current state-of-the-art methods in terms of restoring
videos degraded by any weather condition.
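The adversarial backpropagation the abstract describes hinges on a gradient reversal layer (the abstract's "gradient reversion"): an identity map in the forward pass whose backward pass flips and scales the gradient, so the shared encoder learns to fool the weather discriminator and its features become weather-invariant. Below is a minimal, dependency-free sketch of that idea; the `GradientReversal` class, its `lam` parameter, and the hand-rolled backward pass are illustrative assumptions, not the authors' implementation.

```python
class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in backward.

    During backpropagation the reversed gradient pushes the shared encoder
    to *confuse* the weather discriminator, so the discriminator's
    minimization becomes an adversarial maximization for the encoder.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength (often annealed during training)

    def forward(self, x):
        return x  # activations pass through unchanged

    def backward(self, grad_from_discriminator):
        # Flip the sign: the encoder receives the negated, scaled
        # discriminator gradient instead of the true one.
        return [-self.lam * g for g in grad_from_discriminator]


grl = GradientReversal(lam=0.5)
features = [0.2, -1.3, 0.7]
assert grl.forward(features) == features       # identity forward
print(grl.backward([1.0, -2.0, 0.5]))          # -> [-0.5, 1.0, -0.25]
```

In frameworks with autograd (e.g., PyTorch), the same idea is typically implemented as a custom autograd function so the reversal happens transparently inside a single backward call.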
Related papers
- MWFormer: Multi-Weather Image Restoration Using Degradation-Aware Transformers [44.600209414790854]
Restoring images captured under adverse weather conditions is a fundamental task for many computer vision applications.
We propose a multi-weather Transformer, or MWFormer, that aims to solve multiple weather-induced degradations using a single architecture.
We show that MWFormer achieves significant performance improvements compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-26T08:47:39Z)
- Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal [53.15046196592023]
We introduce test-time adaptation into adverse weather removal in videos.
We propose the first framework that integrates test-time adaptation into the iterative diffusion reverse process.
arXiv Detail & Related papers (2024-03-12T14:21:30Z)
- Always Clear Days: Degradation Type and Severity Aware All-In-One Adverse Weather Removal [8.58670633761819]
All-in-one adverse weather removal is an emerging topic in image restoration that aims to restore multiple weather degradations with a unified model.
We propose a degradation type and severity aware model, called UtilityIR, for blind all-in-one bad weather image restoration.
arXiv Detail & Related papers (2023-10-27T17:29:55Z)
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample-specific weather prior extracted by the CLIP image encoder with the distribution-specific information learned by a set of parameters.
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- TransWeather: Transformer-based Restoration of Images Degraded by Adverse Weather Conditions [77.20136060506906]
We propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder.
TransWeather achieves significant improvements over the All-in-One network across multiple test datasets.
It is validated on real world test images and found to be more effective than previous methods.
arXiv Detail & Related papers (2021-11-29T18:57:09Z)
- RiWNet: A moving object instance segmentation Network being Robust in adverse Weather conditions [13.272209740926156]
We focus on a new possibility: improving a network's resilience to weather interference through its structural design.
We propose a novel FPN structure called RiWFPN with a progressive top-down interaction and attention refinement module.
We extend SOLOV2 to capture temporal information in video to learn motion information, and propose a moving object instance segmentation network with RiWFPN called RiWNet.
arXiv Detail & Related papers (2021-09-04T08:55:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.