FREST: Feature RESToration for Semantic Segmentation under Multiple Adverse Conditions
- URL: http://arxiv.org/abs/2407.13437v1
- Date: Thu, 18 Jul 2024 12:07:02 GMT
- Title: FREST: Feature RESToration for Semantic Segmentation under Multiple Adverse Conditions
- Authors: Sohyun Lee, Namyup Kim, Sungyeon Kim, Suha Kwak
- Abstract summary: FREST is a novel feature restoration framework for source-free domain adaptation (SFDA) of semantic segmentation to adverse conditions.
FREST achieves state-of-the-art results on two public benchmarks for SFDA to adverse conditions.
It shows superior generalization ability on unseen datasets.
- Score: 35.243694861973715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust semantic segmentation under adverse conditions is crucial in real-world applications. To address this challenging task in practical scenarios where labeled normal-condition images are not accessible during training, we propose FREST, a novel feature restoration framework for source-free domain adaptation (SFDA) of semantic segmentation to adverse conditions. FREST alternates two steps: (1) learning a condition embedding space that isolates condition information from the features, and (2) restoring features of adverse-condition images in the learned condition embedding space. By alternating these two steps, FREST gradually restores features in which the effect of adverse conditions is reduced. FREST achieves state-of-the-art results on two public benchmarks (ACDC and RobotCar) for SFDA to adverse conditions. Moreover, it shows superior generalization ability on unseen datasets.
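The alternating scheme can be pictured with a short sketch. The module names (`encoder`, `cond_head`, `seg_head`) and the exact restoration objective below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of FREST's alternation (illustrative; not the authors' code).
import torch
import torch.nn.functional as F

def frest_step(encoder, cond_head, seg_head, batch, opt_cond, opt_enc):
    img, pseudo_label, cond_label = batch  # adverse image, pseudo GT, condition id

    # Step 1: learn a condition embedding space that separates condition
    # information (fog, night, rain, ...) from the segmentation features.
    feat = encoder(img).detach()                    # features frozen in this step
    cond_logits = cond_head(feat.mean(dim=(2, 3)))  # pool to [B, num_conditions]
    loss_cond = F.cross_entropy(cond_logits, cond_label)
    opt_cond.zero_grad(); loss_cond.backward(); opt_cond.step()

    # Step 2: restore features in the learned condition space, pushing them
    # away from their adverse condition while keeping them useful for
    # segmentation (hypothetical restoration objective and weight).
    feat = encoder(img)
    cond_logits = cond_head(feat.mean(dim=(2, 3)))
    loss_restore = -F.cross_entropy(cond_logits, cond_label)
    loss_seg = F.cross_entropy(seg_head(feat), pseudo_label, ignore_index=255)
    opt_enc.zero_grad()
    (loss_seg + 0.1 * loss_restore).backward()
    opt_enc.step()
```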
Related papers
- Physically Feasible Semantic Segmentation [58.17907376475596]
State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion.
Our method, Physically Feasible Semantic Segmentation (PhyFea), extracts explicit physical constraints that govern spatial class relations.
PhyFea yields significant mIoU improvements over each state-of-the-art network it is applied to.
arXiv Detail & Related papers (2024-08-26T22:39:08Z)
- Test-Time Training for Semantic Segmentation with Output Contrastive Loss [12.535720010867538]
Deep learning-based segmentation models have achieved impressive performance on public benchmarks, but generalizing well to unseen environments remains a major challenge.
This paper introduces Output Contrastive Loss (OCL), known for its capability to learn robust and generalized representations, to stabilize the adaptation process.
Our method excels even when applied to models initially pre-trained using domain adaptation methods on test domain data, showcasing its resilience and adaptability.
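As a rough illustration of a contrastive loss in the output space, the sketch below treats per-pixel output distributions of two augmented views as positive pairs; the pixel sampling and temperature are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an output-space contrastive loss (illustrative only).
import torch
import torch.nn.functional as F

def output_contrastive_loss(logits_a, logits_b, temperature=0.1):
    """logits_a, logits_b: [N, C] logits at N sampled pixel locations under
    two augmented views of the same image. Matching pixels are positives;
    all other sampled pixels act as negatives."""
    za = F.normalize(F.softmax(logits_a, dim=1), dim=1)
    zb = F.normalize(F.softmax(logits_b, dim=1), dim=1)
    sim = za @ zb.t() / temperature                   # [N, N] similarities
    target = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(sim, target)
```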
arXiv Detail & Related papers (2023-11-14T03:13:47Z)
- Contrastive Model Adaptation for Cross-Condition Robustness in Semantic Segmentation [58.17907376475596]
We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method, CMA, leverages paired normal- and adverse-condition images to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse benchmarks.
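Unlike the output-space loss above, this objective contrasts intermediate features across conditions. A sketch under the assumption that spatially corresponding patch features from the normal and adverse images form positive pairs (not CMA's exact loss):

```python
# Illustrative condition-invariant contrastive loss over cross-condition pairs.
import torch
import torch.nn.functional as F

def cross_condition_infonce(feat_normal, feat_adverse, tau=0.07):
    """feat_normal, feat_adverse: [N, D] features of spatially corresponding
    patches from a normal image and its adverse-condition counterpart."""
    fn = F.normalize(feat_normal, dim=1)
    fa = F.normalize(feat_adverse, dim=1)
    logits = fa @ fn.t() / tau           # adverse patches attend to normal ones
    pos = torch.arange(fa.size(0), device=fa.device)
    return F.cross_entropy(logits, pos)  # corresponding patch is the positive
```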
arXiv Detail & Related papers (2023-03-09T11:48:29Z)
- VBLC: Visibility Boosting and Logit-Constraint Learning for Domain Adaptive Semantic Segmentation under Adverse Conditions [31.992504022101215]
Generalizing models trained on normal visual conditions to target domains under adverse conditions is demanding in practical systems.
We propose Visibility Boosting and Logit-Constraint learning (VBLC), tailored for superior normal-to-adverse adaptation.
VBLC explores the potential of dispensing with reference images while resolving mixtures of adverse conditions simultaneously.
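One plausible reading of logit-constraint learning is to keep logit norms bounded during self-training so that confidence on noisy pseudo-labels cannot grow without limit. The sketch below shows that idea with an assumed fixed-norm rescaling; the constant and exact form are not taken from the paper.

```python
# Hedged sketch of a logit-constraint term for self-training (assumed form).
import torch
import torch.nn.functional as F

def logit_constrained_ce(logits, pseudo_label, kappa=10.0):
    """logits: [B, C, H, W]; pseudo_label: [B, H, W], 255 = ignore.
    Rescales each pixel's logit vector to a fixed L2 norm `kappa` before
    cross-entropy, so the loss cannot be lowered by inflating logits."""
    norm = logits.norm(p=2, dim=1, keepdim=True).clamp_min(1e-6)
    return F.cross_entropy(kappa * logits / norm, pseudo_label, ignore_index=255)
```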
arXiv Detail & Related papers (2022-11-22T13:16:41Z)
- Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions [78.71745819446176]
Refign is a generic extension to self-training-based UDA methods which leverages cross-domain correspondences.
Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism.
The approach introduces no extra training parameters, incurs minimal computational overhead (during training only), and can be used as a drop-in extension to improve any given self-training-based UDA method.
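A schematic of the two steps with hypothetical inputs: a dense flow from the matching network warps the normal-condition prediction into the adverse view, and the adverse prediction is corrected only where matching confidence is high. This is a sketch of the idea, not the paper's adaptive label correction mechanism.

```python
# Illustrative sketch of align-then-refine (names and inputs are assumptions).
import torch
import torch.nn.functional as F

def refine_pseudo_label(pred_adverse, pred_normal, flow, match_conf, thresh=0.5):
    """pred_*: [B, C, H, W] softmax maps; flow: [B, 2, H, W] dense normal-to-
    adverse correspondences in pixels; match_conf: [B, 1, H, W] in [0, 1]."""
    _, _, h, w = pred_normal.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=flow.device),
                            torch.arange(w, device=flow.device), indexing="ij")
    coords = torch.stack((xs, ys)).float().unsqueeze(0) + flow  # [B, 2, H, W]
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0           # x to [-1, 1]
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0           # y to [-1, 1]
    # Step 1 (align): warp the normal-condition prediction into the adverse view.
    warped = F.grid_sample(pred_normal, coords.permute(0, 2, 3, 1),
                           align_corners=True)
    # Step 2 (refine): trust the warped prediction only where matching is
    # confident; elsewhere keep the adverse prediction.
    keep = (match_conf > thresh).float()
    refined = keep * warped + (1.0 - keep) * pred_adverse
    return refined.argmax(dim=1)  # corrected pseudo-label map [B, H, W]
```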
arXiv Detail & Related papers (2022-07-14T11:30:38Z)
- Maximum Spatial Perturbation Consistency for Unpaired Image-to-Image Translation [56.44946660061753]
This paper proposes a universal regularization technique called maximum spatial perturbation consistency (MSPC).
MSPC enforces that a spatial perturbation function T and the translation operator G commute (i.e., TG = GT).
Our method outperforms the state-of-the-art methods on most I2I benchmarks.
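The commutativity constraint translates directly into a consistency loss: apply a spatial perturbation T before and after the translator G and penalize the difference. In the sketch below a fixed horizontal flip stands in for the learned maximum perturbation, which the paper optimizes adversarially.

```python
# Minimal sketch of the MSPC commutativity penalty: T(G(x)) should equal G(T(x)).
import torch
import torch.nn.functional as F

def mspc_loss(G, x):
    """G: image-to-image translator; x: [B, C, H, W] source images.
    A horizontal flip serves as the spatial perturbation T purely for
    illustration; MSPC actually seeks the *maximum* perturbation."""
    T = lambda img: torch.flip(img, dims=[3])
    return F.l1_loss(T(G(x)), G(T(x)))
```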
arXiv Detail & Related papers (2022-03-23T19:59:04Z)
- Exploiting Negative Learning for Implicit Pseudo Label Rectification in Source-Free Domain Adaptive Semantic Segmentation [12.716865774780704]
State-of-the-art methods for source-free domain adaptation (SFDA) are subject to strict constraints.
PR-SFDA achieves 49.0 mIoU, which is very close to that of state-of-the-art counterparts.
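Negative learning can be sketched as follows: rather than asserting that a noisy pseudo-label is correct, the model is penalized for putting probability on a class the pixel is believed not to belong to. How complementary labels are sampled is an assumption here.

```python
# Hedged sketch of a negative-learning loss for pseudo-label rectification.
import torch

def negative_learning_loss(probs, complementary_label, eps=1e-6):
    """probs: [N, C] per-pixel class probabilities; complementary_label: [N]
    holds, for each pixel, a class it is believed NOT to belong to."""
    p_neg = probs.gather(1, complementary_label.unsqueeze(1)).squeeze(1)
    return -torch.log((1.0 - p_neg).clamp_min(eps)).mean()
```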
arXiv Detail & Related papers (2021-06-23T02:20:31Z)
- Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
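The phase-preserving criterion can be illustrated with a Fourier-domain penalty that lets amplitude (appearance) change while keeping phase (structure) fixed; the exact loss in the paper may differ.

```python
# Illustrative phase-consistency penalty between an image and its translation.
import torch

def phase_consistency_loss(x, y):
    """x, y: [B, C, H, W] image before and after domain translation."""
    phase_x = torch.angle(torch.fft.fft2(x))
    phase_y = torch.angle(torch.fft.fft2(y))
    # Compare phases on the unit circle to avoid wrap-around at +/- pi.
    return (1.0 - torch.cos(phase_x - phase_y)).mean()
```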
arXiv Detail & Related papers (2020-04-10T06:58:03Z)