FIFO: Learning Fog-invariant Features for Foggy Scene Segmentation
- URL: http://arxiv.org/abs/2204.01587v1
- Date: Mon, 4 Apr 2022 15:33:42 GMT
- Title: FIFO: Learning Fog-invariant Features for Foggy Scene Segmentation
- Authors: Sohyun Lee, Taeyoung Son, Suha Kwak
- Abstract summary: We propose a new method for learning semantic segmentation models robust against fog.
Its key idea is to consider the fog condition of an image as its style and close the gap between images with different fog conditions.
Our method substantially outperforms previous work on three real foggy image datasets.
- Score: 14.932318540666548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust visual recognition under adverse weather conditions is of great
importance in real-world applications. In this context, we propose a new method
for learning semantic segmentation models robust against fog. Its key idea is
to consider the fog condition of an image as its style and close the gap
between images with different fog conditions in neural style spaces of a
segmentation model. In particular, since the neural style of an image is in
general affected by factors other than fog as well, we introduce a fog-pass
filter module that learns to extract a fog-relevant factor from the style.
Optimizing the fog-pass filter and the segmentation model alternately closes
the style gap between different fog conditions gradually and, as a consequence,
lets the model learn fog-invariant features. Our method substantially outperforms
previous work on three real foggy image datasets. Moreover, it improves
performance on both foggy and clear weather images, while existing methods
often degrade performance on clear scenes.
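As a rough illustration of the mechanism described in the abstract, the PyTorch sketch below treats the Gram matrix of a feature map as the neural style and trains a small fog-pass filter to map it to a fog factor; matching the fog factors of a clear/foggy pair pushes the backbone toward fog-invariant features. The layer sizes, the L1 loss form, and the toy usage are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of FIFO's key idea: fog condition as "style"
# (Gram matrix of features), with a fog-pass filter extracting a
# fog-relevant factor from that style. Sizes/losses are assumed.
import torch
import torch.nn as nn

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a feature map: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def upper_triangle(gram: torch.Tensor) -> torch.Tensor:
    """Flatten the upper triangle (Gram is symmetric) into a vector."""
    c = gram.shape[-1]
    idx = torch.triu_indices(c, c)
    return gram[:, idx[0], idx[1]]

class FogPassFilter(nn.Module):
    """Maps a flattened Gram matrix to a low-dimensional 'fog factor'."""
    def __init__(self, in_dim: int, fog_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, fog_dim),
        )

    def forward(self, gram_vec: torch.Tensor) -> torch.Tensor:
        return self.net(gram_vec)

def fog_style_matching_loss(feat_a, feat_b, fog_filter):
    """Pull the fog factors of two fog conditions together; gradients
    flow into the segmentation backbone, encouraging fog invariance."""
    fa = fog_filter(upper_triangle(gram_matrix(feat_a)))
    fb = fog_filter(upper_triangle(gram_matrix(feat_b)))
    return (fa - fb).abs().mean()

# Toy usage: backbone features of the same scene, clear vs. foggy.
if __name__ == "__main__":
    c = 32
    filt = FogPassFilter(in_dim=c * (c + 1) // 2)
    clear = torch.randn(2, c, 48, 48)
    foggy = torch.randn(2, c, 48, 48)
    print(fog_style_matching_loss(clear, foggy, filt).item())
```

In the paper's alternating scheme, the fog-pass filter and the segmentation model are optimized in turn; this sketch only shows the style-matching side of that loop.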
Related papers
- D2SL: Decouple Defogging and Semantic Learning for Foggy Domain-Adaptive Segmentation [0.8261182037130406]
We propose a novel training framework, Decouple Defogging and Semantic learning, called D2SL.
We introduce a domain-consistent transfer strategy to establish a connection between defogging and segmentation tasks.
We design a real fog transfer strategy to improve defogging effects by fully leveraging the fog priors from real foggy images.
arXiv Detail & Related papers (2024-04-07T04:55:58Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper tackles a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers image-level darkening and model-level adaptation under a unified framework, as sketched below.
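To make the min-max idea concrete, here is a toy PyTorch loop under assumed forms: a darkening module is optimized to minimize day-night feature similarity, while the task model is optimized to maximize it. The multiplicative darkening, cosine similarity, and tiny networks are stand-ins, not the paper's actual components.

```python
# Hedged sketch of a similarity min-max loop (all modules are toys).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())   # toy feature extractor
darken = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())  # toy darkening module

opt_model = torch.optim.Adam(model.parameters(), lr=1e-4)
opt_darken = torch.optim.Adam(darken.parameters(), lr=1e-4)

day = torch.rand(4, 3, 64, 64)

for step in range(2):
    # Min step: the darkening module reduces day-night feature similarity.
    night = day * darken(day)                 # multiplicative darkening (assumed form)
    sim = F.cosine_similarity(model(day), model(night)).mean()
    opt_darken.zero_grad(); sim.backward(); opt_darken.step()

    # Max step: the model recovers similarity on the (fixed) darkened images.
    night = (day * darken(day)).detach()
    sim = F.cosine_similarity(model(day), model(night)).mean()
    opt_model.zero_grad(); (-sim).backward(); opt_model.step()
```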
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module that lets the network handle different weather conditions adaptively.
This module integrates the sample-specific weather prior extracted by the CLIP image encoder with distribution-specific information learned by a set of parameters.
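A hedged sketch of how such a module might look: a per-sample embedding from a frozen image encoder (standing in for CLIP's image encoder) is fused with a learned set of weather tokens and used to modulate restoration features FiLM-style. All names, sizes, and the attention-style fusion are assumptions; in practice the embedding would come from a frozen CLIP model (e.g., clip.load("ViT-B/32") and encode_image).

```python
# Sketch of a CLIP-style weather-prior module (all details assumed).
import torch
import torch.nn as nn

class WeatherPriorModule(nn.Module):
    def __init__(self, embed_dim=512, feat_ch=64, n_weather_types=4):
        super().__init__()
        # Distribution-specific information: one learned token per weather type.
        self.weather_tokens = nn.Parameter(torch.randn(n_weather_types, embed_dim))
        self.to_scale = nn.Linear(embed_dim, feat_ch)
        self.to_shift = nn.Linear(embed_dim, feat_ch)

    def forward(self, feat, clip_embed):
        # Attend from the sample-specific embedding to the learned tokens.
        attn = torch.softmax(clip_embed @ self.weather_tokens.t(), dim=-1)  # (B, T)
        prior = clip_embed + attn @ self.weather_tokens                      # (B, D)
        scale = self.to_scale(prior)[:, :, None, None]
        shift = self.to_shift(prior)[:, :, None, None]
        return feat * (1 + scale) + shift   # modulate restoration features

feat = torch.randn(2, 64, 32, 32)
clip_embed = torch.randn(2, 512)   # stand-in for a frozen CLIP image embedding
print(WeatherPriorModule()(feat, clip_embed).shape)  # torch.Size([2, 64, 32, 32])
```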
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
- Counting Crowds in Bad Weather [68.50690406143173]
We propose a method for robust crowd counting in adverse weather scenarios.
Our model learns effective features and adaptive queries to account for large appearance variations.
Experimental results show that the proposed algorithm is effective in counting crowds under different weather types on benchmark datasets.
arXiv Detail & Related papers (2023-06-02T00:00:09Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method that renders foggy scenes and decomposes the fog-free background from the participating media.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
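The decomposition idea can be sketched with a toy volume-rendering routine in which scene density and a separate scattering (fog) density are composited jointly along a ray; zeroing the fog density then renders the fog-free background. The closed-form fields below are stand-ins for the paper's learned representation and physics-inspired losses.

```python
# Toy fog-aware volume rendering: scene + scattering media on one ray.
import torch

def composite(sigma_scene, color_scene, sigma_fog, color_fog, deltas):
    """Alpha-composite two media along a ray.
    sigma_*: (N,) densities; color_*: (N, 3); deltas: (N,) step sizes."""
    sigma = sigma_scene + sigma_fog
    alpha = 1.0 - torch.exp(-sigma * deltas)                          # (N,)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha]), 0)[:-1]
    w = trans * alpha
    # Each medium contributes in proportion to its share of the density.
    share_scene = sigma_scene / sigma.clamp(min=1e-8)
    mixed = share_scene[:, None] * color_scene + (1 - share_scene)[:, None] * color_fog
    return (w[:, None] * mixed).sum(0)

n = 64
deltas = torch.full((n,), 0.1)
sigma_scene = torch.zeros(n); sigma_scene[40] = 50.0    # an opaque surface
color_scene = torch.tensor([0.2, 0.6, 0.2]).expand(n, 3)
sigma_fog = torch.full((n,), 0.3)                       # homogeneous fog
color_fog = torch.tensor([0.8, 0.8, 0.8]).expand(n, 3)

foggy = composite(sigma_scene, color_scene, sigma_fog, color_fog, deltas)
clear = composite(sigma_scene, color_scene, torch.zeros(n), color_fog, deltas)
print(foggy, clear)   # dropping the scattering term yields the fog-free view
```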
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Structure Representation Network and Uncertainty Feedback Learning for Dense Non-Uniform Fog Removal [64.77435210892041]
We introduce a structure-representation network with uncertainty feedback learning.
Specifically, we extract the feature representations from a pre-trained Vision Transformer (DINO-ViT) module to recover the background information.
To handle the intractability of estimating the atmospheric light colors, we exploit the grayscale version of our input image.
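A minimal sketch of these two ideas, with tiny CNNs standing in for both the dehazing network and the frozen structure encoder (in practice the latter could be a DINO-ViT loaded via torch.hub.load('facebookresearch/dino', 'dino_vits16')): the network sees only the grayscale input, so no atmospheric light color needs to be estimated, and a feature-consistency term ties the prediction to the input's structure. Everything here is an illustrative assumption.

```python
# Hedged sketch: grayscale input + frozen structure-feature consistency.
import torch
import torch.nn as nn

def to_grayscale(rgb: torch.Tensor) -> torch.Tensor:
    """(B, 3, H, W) -> (B, 1, H, W) with standard luminance weights."""
    w = torch.tensor([0.299, 0.587, 0.114], device=rgb.device).view(1, 3, 1, 1)
    return (rgb * w).sum(1, keepdim=True)

structure_encoder = nn.Sequential(           # stand-in for a frozen DINO-ViT
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 32, 3, padding=1))
for p in structure_encoder.parameters():
    p.requires_grad_(False)

dehaze = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

foggy = torch.rand(2, 3, 64, 64)
pred = dehaze(to_grayscale(foggy))           # grayscale input: no light color to estimate
# Structure consistency: the prediction should keep the input's scene layout.
loss = (structure_encoder(pred) - structure_encoder(foggy)).abs().mean()
print(loss.item())
```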
arXiv Detail & Related papers (2022-10-06T17:10:57Z)
- Cloud removal Using Atmosphere Model [7.259230333873744]
Cloud removal is an essential task in remote sensing data analysis.
We propose to use a scattering model for temporal sequences of images of any scene within a low-rank and sparse modeling framework.
We develop a semi-realistic simulation method to produce cloud cover so that various methods can be quantitatively analysed.
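The low-rank-plus-sparse idea admits a compact numpy sketch: stack a co-registered temporal sequence as columns of a matrix M and alternate a truncated-SVD step with a thresholding step, splitting M into a low-rank scene component and a sparse cloud component. This GoDec-style alternation is a generic stand-in, not the paper's scattering-model formulation.

```python
# Generic low-rank + sparse decomposition for a cloudy image sequence.
import numpy as np

def lowrank_sparse(M, rank=1, sparsity_thresh=0.15, iters=20):
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank step: truncated SVD of the cloud-free residual.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep only large residuals (candidate clouds).
        R = M - L
        S = np.where(np.abs(R) > sparsity_thresh, R, 0.0)
    return L, S

rng = np.random.default_rng(0)
scene = rng.random(400)                      # one flattened 20x20 scene
frames = np.tile(scene[:, None], (1, 8))     # 8 repeated observations
frames[:50, 3] = 1.0                         # a bright cloud in frame 3
L, S = lowrank_sparse(frames)
print(np.abs(L[:, 3] - scene).mean())        # scene recovered under the cloud
```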
arXiv Detail & Related papers (2022-10-05T01:29:19Z)
- Leveraging Scale-Invariance and Uncertainity with Self-Supervised Domain Adaptation for Semantic Segmentation of Foggy Scenes [4.033107207078282]
FogAdapt is a novel approach for domain adaptation of semantic segmentation for dense foggy scenes.
FogAdapt significantly outperforms the current state-of-the-art in semantic segmentation of foggy images.
arXiv Detail & Related papers (2022-01-07T18:29:58Z)
- Both Style and Fog Matter: Cumulative Domain Adaptation for Semantic Foggy Scene Understanding [63.99301797430936]
We propose a new pipeline that cumulatively adapts the style factor, the fog factor, and the dual factor (style and fog).
Specifically, we devise a unified framework that disentangles the style factor and the fog factor separately, and then the dual factor, from images in different domains.
Our method achieves the state-of-the-art performance on three benchmarks and shows generalization ability in rainy and snowy scenes.
arXiv Detail & Related papers (2021-12-01T13:21:20Z)
- Multi-Model Learning for Real-Time Automotive Semantic Foggy Scene Understanding via Domain Adaptation [17.530091734327296]
We propose an efficient end-to-end automotive semantic scene understanding approach that is robust to foggy weather conditions.
Our approach incorporates RGB colour, depth and luminance images via distinct encoders with dense connectivity.
Our model achieves comparable performance to contemporary approaches at a fraction of the overall model complexity.
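A hedged sketch of the multi-encoder layout described above: separate small encoders for RGB, depth, and luminance whose outputs are concatenated for a per-pixel classifier. Channel counts, the fusion point, and the reduction of dense connectivity to simple concatenation are all illustrative assumptions.

```python
# Toy multi-modal segmentation: distinct encoders per input modality.
import torch
import torch.nn as nn

def encoder(in_ch: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

class MultiModalSeg(nn.Module):
    def __init__(self, n_classes: int = 19):
        super().__init__()
        self.rgb_enc = encoder(3)
        self.depth_enc = encoder(1)
        self.lum_enc = encoder(1)
        self.head = nn.Conv2d(3 * 32, n_classes, 1)   # fuse and classify per pixel

    def forward(self, rgb, depth, lum):
        fused = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth),
                           self.lum_enc(lum)], dim=1)
        return self.head(fused)

rgb = torch.rand(1, 3, 64, 64)
depth = torch.rand(1, 1, 64, 64)
lum = rgb.mean(1, keepdim=True)                # luminance proxy from RGB
print(MultiModalSeg()(rgb, depth, lum).shape)  # (1, 19, 64, 64)
```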
arXiv Detail & Related papers (2020-12-09T21:04:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.