Refign: Align and Refine for Adaptation of Semantic Segmentation to
Adverse Conditions
- URL: http://arxiv.org/abs/2207.06825v3
- Date: Mon, 3 Jul 2023 19:10:55 GMT
- Title: Refign: Align and Refine for Adaptation of Semantic Segmentation to
Adverse Conditions
- Authors: David Bruggemann, Christos Sakaridis, Prune Truong, Luc Van Gool
- Abstract summary: Refign is a generic extension to self-training-based UDA methods which leverages cross-domain correspondences.
Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism.
The approach introduces no extra training parameters, incurs minimal computational overhead (during training only), and can be used as a drop-in extension to improve any given self-training-based UDA method.
- Score: 78.71745819446176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the scarcity of dense pixel-level semantic annotations for images
recorded in adverse visual conditions, there has been a keen interest in
unsupervised domain adaptation (UDA) for the semantic segmentation of such
images. UDA adapts models trained on normal conditions to the target
adverse-condition domains. Meanwhile, multiple datasets with driving scenes
provide corresponding images of the same scenes across multiple conditions,
which can serve as a form of weak supervision for domain adaptation. We propose
Refign, a generic extension to self-training-based UDA methods which leverages
these cross-domain correspondences. Refign consists of two steps: (1) aligning
the normal-condition image to the corresponding adverse-condition image using
an uncertainty-aware dense matching network, and (2) refining the adverse
prediction with the normal prediction using an adaptive label correction
mechanism. We design custom modules to streamline both steps and set the new
state of the art for domain-adaptive semantic segmentation on several
adverse-condition benchmarks, including ACDC and Dark Zurich. The approach
introduces no extra training parameters, minimal computational overhead --
during training only -- and can be used as a drop-in extension to improve any
given self-training-based UDA method. Code is available at
https://github.com/brdav/refign.
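The two Refign steps described in the abstract can be sketched in simplified form. The snippet below is an illustrative sketch only, not the actual Refign implementation: it assumes the normal-condition prediction has already been warped onto the adverse image by the alignment step, and it replaces Refign's adaptive label correction with a simple confidence-based blend (the function name and `alpha` parameter are hypothetical).

```python
import numpy as np

def refine_labels(adverse_probs, warped_normal_probs, alpha=0.5):
    """Blend the adverse-condition prediction with the (already aligned)
    normal-condition prediction, trusting the normal prediction more
    where the adverse prediction is uncertain.

    adverse_probs, warped_normal_probs: arrays of shape (H, W, C)
    containing per-pixel class probabilities.
    """
    # Per-pixel confidence of the adverse prediction.
    conf = adverse_probs.max(axis=-1, keepdims=True)          # (H, W, 1)
    # Blend weight: low adverse confidence -> lean on the normal prediction.
    w = alpha * (1.0 - conf)
    refined = (1.0 - w) * adverse_probs + w * warped_normal_probs
    # Renormalize so each pixel remains a probability distribution.
    return refined / refined.sum(axis=-1, keepdims=True)
```

In this toy form, a confidently predicted adverse pixel is left mostly unchanged, while an uncertain one is pulled toward the warped normal-condition prediction.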
Related papers
- Semi-supervised Domain Adaptive Medical Image Segmentation through
Consistency Regularized Disentangled Contrastive Learning [11.049672162852733]
In this work, we investigate relatively less explored semi-supervised domain adaptation (SSDA) for medical image segmentation.
We propose a two-stage training process: first, an encoder is pre-trained in a self-learning paradigm using a novel domain-content disentangled contrastive learning (CL) along with a pixel-level feature consistency constraint.
We experimentally validate that our proposed method can easily be extended to UDA settings, further demonstrating the effectiveness of the proposed strategy.
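The pixel-level feature consistency constraint mentioned above can be illustrated with a minimal sketch: a cosine-distance loss between the pixel-wise feature maps of two views of the same image. The function name and exact formulation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def pixel_consistency_loss(feat_a, feat_b, eps=1e-12):
    """Mean cosine distance between two pixel-wise feature maps of the
    same image (e.g. two augmented views). feat_a, feat_b: (H, W, D)."""
    a = feat_a / (np.linalg.norm(feat_a, axis=-1, keepdims=True) + eps)
    b = feat_b / (np.linalg.norm(feat_b, axis=-1, keepdims=True) + eps)
    # Per-pixel cosine similarity, turned into a distance in [0, 2].
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))
```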
arXiv Detail & Related papers (2023-07-06T06:13:22Z) - Condition-Invariant Semantic Segmentation [77.10045325743644]
We implement Condition-Invariant Semantic Segmentation (CISS) on the current state-of-the-art domain adaptation architecture.
Our method achieves the second-best performance on the normal-to-adverse Cityscapes→ACDC benchmark.
CISS is shown to generalize well to domains unseen during training, such as BDD100K-night and ACDC-night.
arXiv Detail & Related papers (2023-05-27T03:05:07Z) - Regularizing Self-training for Unsupervised Domain Adaptation via
Structural Constraints [14.593782939242121]
We propose to incorporate structural cues from auxiliary modalities, such as depth, to regularise conventional self-training objectives.
Specifically, we introduce a contrastive pixel-level objectness constraint that pulls the pixel representations within a region of an object instance closer.
We show that our regularizer significantly improves top performing self-training methods in various UDA benchmarks for semantic segmentation.
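The objectness constraint above can be illustrated with a minimal sketch that pulls pixel features within each object instance toward the instance's mean feature (a prototype). For clarity, this replaces the paper's contrastive formulation with a pull-only term; all names here are hypothetical.

```python
import numpy as np

def objectness_pull_loss(features, instance_ids):
    """Pull each pixel feature toward the mean feature (prototype) of its
    object instance. features: (N, D) pixel embeddings; instance_ids: (N,)."""
    loss, n_instances = 0.0, 0
    for inst in np.unique(instance_ids):
        feats = features[instance_ids == inst]
        proto = feats.mean(axis=0)
        # Mean squared distance of the instance's pixels to its prototype.
        loss += float(np.mean(np.sum((feats - proto) ** 2, axis=1)))
        n_instances += 1
    return loss / n_instances
```

Minimizing this term makes the representations within one instance more similar; the full contrastive version would additionally push representations of different instances apart.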
arXiv Detail & Related papers (2023-04-29T00:12:26Z) - Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation [58.17907376475596]
We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method -- CMA -- leverages such image pairs to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks.
arXiv Detail & Related papers (2023-03-09T11:48:29Z) - I2F: A Unified Image-to-Feature Approach for Domain Adaptive Semantic
Segmentation [55.633859439375044]
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work.
The key idea for tackling this problem is to perform image-level and feature-level adaptation jointly.
This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation.
arXiv Detail & Related papers (2023-01-03T15:19:48Z) - Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
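The pseudo-label generation with high-entropy suppression described above can be sketched as follows. This is an illustrative simplification under assumed names, not the paper's actual procedure: it assigns argmax labels and ignores a fixed fraction of the most uncertain pixels.

```python
import numpy as np

def pseudo_labels_with_entropy_filter(probs, suppress_frac=0.5):
    """Assign per-pixel pseudo labels, marking the highest-entropy
    fraction of pixels as ignored (-1). probs: (N, C) class probabilities."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)   # (N,)
    labels = probs.argmax(axis=-1)
    # Suppress the `suppress_frac` most uncertain pixels.
    thresh = np.quantile(entropy, 1.0 - suppress_frac)
    labels[entropy > thresh] = -1
    return labels
```

Pixels marked -1 would simply be excluded from the self-training loss.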
arXiv Detail & Related papers (2022-03-29T17:50:22Z) - Style Mixing and Patchwise Prototypical Matching for One-Shot
Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that effectively relieves the computational burden of existing approaches.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2021-12-09T02:47:46Z) - Edge-preserving Domain Adaptation for semantic segmentation of Medical
Images [0.0]
Domain adaptation is a technique to address the lack of massive amounts of labeled data in unseen environments.
We propose a model that adapts between domains using cycle-consistent loss while maintaining edge details of the original images.
We demonstrate the effectiveness of our algorithm by comparing it to other approaches on two eye fundus vessels segmentation datasets.
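The combination of cycle-consistent translation and edge preservation described above can be sketched with two simple loss terms. This is an assumed minimal formulation (L1 cycle loss plus a finite-difference edge comparison), not the paper's exact model.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """L1 cycle-consistency: translating A->B->A should reconstruct x."""
    return float(np.mean(np.abs(x - x_reconstructed)))

def edge_map(img):
    """Simple finite-difference edge magnitude of a grayscale image."""
    gx = np.abs(np.diff(img, axis=1))[:-1, :]   # horizontal gradients
    gy = np.abs(np.diff(img, axis=0))[:, :-1]   # vertical gradients
    return gx + gy

def edge_preserving_loss(src, translated):
    """Penalize differences between the edge maps of the source image
    and its translated counterpart, so vessel boundaries survive translation."""
    return float(np.mean(np.abs(edge_map(src) - edge_map(translated))))
```

Note that a pure brightness shift leaves the edge term unchanged, so the translator is free to adapt appearance while keeping structure.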
arXiv Detail & Related papers (2021-11-18T18:14:33Z) - Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful, but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
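The phase-preserving criterion above can be made concrete with a small sketch: compare the Fourier phase of an image before and after translation. The function name and the choice of mean absolute wrapped phase difference are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def phase_consistency_loss(src, mapped):
    """Mean absolute Fourier-phase difference between an image and its
    translated version; a phase-preserving map keeps this near zero."""
    phase_src = np.angle(np.fft.fft2(src))
    phase_map = np.angle(np.fft.fft2(mapped))
    # Wrap the phase difference into [-pi, pi] before averaging.
    diff = np.angle(np.exp(1j * (phase_src - phase_map)))
    return float(np.mean(np.abs(diff)))
```

A useful sanity check is that amplitude-only changes (e.g. uniform brightness scaling) leave the Fourier phase, and hence this loss, unaffected.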
arXiv Detail & Related papers (2020-04-10T06:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.