Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation
- URL: http://arxiv.org/abs/2303.05194v3
- Date: Thu, 17 Aug 2023 12:24:32 GMT
- Title: Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation
- Authors: David Bruggemann, Christos Sakaridis, Tim Brödermann, Luc Van Gool
- Abstract summary: We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method -- CMA -- leverages such image pairs to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks.
- Score: 58.17907376475596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard unsupervised domain adaptation methods adapt models from a source to
a target domain using labeled source data and unlabeled target data jointly. In
model adaptation, on the other hand, access to the labeled source data is
prohibited, i.e., only the source-trained model and unlabeled target data are
available. We investigate normal-to-adverse condition model adaptation for
semantic segmentation, whereby image-level correspondences are available in the
target domain. The target set consists of unlabeled pairs of adverse- and
normal-condition street images taken at GPS-matched locations. Our method --
CMA -- leverages such image pairs to learn condition-invariant features via
contrastive learning. In particular, CMA encourages features in the embedding
space to be grouped according to their condition-invariant semantic content and
not according to the condition under which respective inputs are captured. To
obtain accurate cross-domain semantic correspondences, we warp the normal image
to the viewpoint of the adverse image and leverage warp-confidence scores to
create robust, aggregated features. With this approach, we achieve
state-of-the-art semantic segmentation performance for model adaptation on
several normal-to-adverse adaptation benchmarks, such as ACDC and Dark Zurich.
We also evaluate CMA on a newly procured adverse-condition generalization
benchmark and report favorable results compared to standard unsupervised domain
adaptation methods, despite the comparative handicap of CMA due to source data
inaccessibility. Code is available at https://github.com/brdav/cma.
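To make the mechanism described above concrete, the following is a minimal, illustrative sketch, not the released implementation (see the repository linked above for that). It assumes pre-extracted adverse-image features, normal-image features already warped into the adverse viewpoint, per-pixel pseudo-labels, and a warp-confidence map, and shows an InfoNCE-style contrastive loss over confidence-weighted, class-aggregated features. All tensor names, the number of classes, and the temperature value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_adaptation_loss(feat_adverse,        # (C, H, W) features from the adverse image
                                feat_normal_warped,  # (C, H, W) normal features warped to the adverse view
                                pseudo_labels,       # (H, W) class indices predicted for the adverse image
                                warp_conf,           # (H, W) warp-confidence scores in [0, 1]
                                num_classes=19,      # assumed (Cityscapes-style label set)
                                temperature=0.1):    # assumed temperature
    """Illustrative InfoNCE-style loss: for each class, the confidence-weighted mean of the
    warped normal features is the positive for the corresponding adverse-feature prototype,
    while prototypes of the other classes act as negatives."""
    protos_adv, protos_ref = [], []
    for c in range(num_classes):
        mask = (pseudo_labels == c).float() * warp_conf      # down-weight unreliable warps
        if mask.sum() < 1e-6:
            continue
        w = mask / mask.sum()
        protos_adv.append((feat_adverse * w).sum(dim=(1, 2)))        # (C,)
        protos_ref.append((feat_normal_warped * w).sum(dim=(1, 2)))  # (C,)
    if len(protos_adv) < 2:
        return feat_adverse.new_zeros(())
    z_adv = F.normalize(torch.stack(protos_adv), dim=1)   # (K, C)
    z_ref = F.normalize(torch.stack(protos_ref), dim=1)   # (K, C)
    logits = z_adv @ z_ref.t() / temperature               # (K, K) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```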
Related papers
- Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the best of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group of samples is treated with tailored learning objectives.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
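As a rough illustration of the memory bank-based MMD alignment described above (not the DaC implementation), the sketch below computes an RBF-kernel Maximum Mean Discrepancy between a batch of target-specific features and source-like features drawn from a memory bank. The feature dimension, bank size, and kernel bandwidth are assumptions.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD between feature sets x (N, D) and y (M, D) with an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)               # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Usage sketch: align target-specific features with source-like features stored in a
# (bank_size, D) memory bank that would be updated elsewhere during training.
feat_target_specific = torch.randn(32, 256)        # assumed batch of target-specific features
memory_bank_source_like = torch.randn(512, 256)    # assumed memory bank of source-like features
loss_mmd = rbf_mmd(feat_target_specific, memory_bank_source_like)
```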
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions [78.71745819446176]
Refign is a generic extension to self-training-based UDA methods which leverages cross-domain correspondences.
Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism.
The approach introduces no extra training parameters and only minimal computational overhead during training, and it can be used as a drop-in extension to improve any given self-training-based UDA method.
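A highly simplified sketch of the refinement step is given below; it is not the Refign implementation. It assumes the normal-condition prediction has already been warped into the adverse view by the dense matching network, and blends the two softmax outputs per pixel according to a matching-confidence map. The blending rule and tensor names are illustrative assumptions.

```python
import torch

def refine_adverse_prediction(prob_adverse,        # (K, H, W) softmax prediction on the adverse image
                              prob_normal_warped,  # (K, H, W) normal prediction warped to the adverse view
                              match_conf):         # (H, W) alignment confidence in [0, 1]
    """Illustrative per-pixel blend: trust the warped normal-condition prediction where the
    dense matching is confident, and fall back to the adverse prediction elsewhere."""
    w = match_conf.unsqueeze(0)                     # (1, H, W), broadcasts over classes
    refined = w * prob_normal_warped + (1.0 - w) * prob_adverse
    pseudo_labels = refined.argmax(dim=0)           # (H, W) refined pseudo-labels for self-training
    return refined, pseudo_labels
```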
arXiv Detail & Related papers (2022-07-14T11:30:38Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data in order to use its distribution information for alignment with the target data.
In many real-world scenarios, however, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- S4T: Source-free domain adaptation for semantic segmentation via self-supervised selective self-training [14.086066389856173]
We focus on source-free domain adaptation for semantic segmentation, wherein a source model must adapt itself to a new target domain given only unlabeled target data.
We propose Self-Supervised Selective Self-Training (S4T), a source-free adaptation algorithm that first uses the model's pixel-level predictive consistency across diverse views of each target image along with model confidence to classify pixel predictions as either reliable or unreliable.
S4T matches or improves upon the state-of-the-art in source-free adaptation on 3 standard benchmarks for semantic segmentation within a single epoch of adaptation.
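The following is a minimal sketch of one plausible reliable/unreliable selection rule in the spirit of the description above; it is not the S4T implementation. It assumes softmax predictions from several augmented views that have already been aligned back to the original image geometry, and a confidence threshold chosen purely for illustration.

```python
import torch

def reliability_mask(view_probs,          # (V, K, H, W) softmax outputs from V aligned views
                     conf_threshold=0.9):  # assumed confidence cutoff
    """Illustrative selection rule: a pixel prediction is 'reliable' if all views agree on the
    argmax class and the mean confidence across views exceeds a threshold."""
    labels = view_probs.argmax(dim=1)                                       # (V, H, W)
    consistent = (labels == labels[0:1]).all(dim=0)                         # (H, W) all views agree
    confident = view_probs.max(dim=1).values.mean(dim=0) > conf_threshold   # (H, W)
    return consistent & confident                                           # (H, W) boolean mask
```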
arXiv Detail & Related papers (2021-07-21T15:18:01Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in the sense of practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformation.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Consistency Regularization with High-dimensional Non-adversarial Source-guided Perturbation for Unsupervised Domain Adaptation in Segmentation [15.428323201750144]
BiSIDA employs consistency regularization to efficiently exploit information from the unlabeled target dataset.
BiSIDA achieves a new state of the art on two commonly used synthetic-to-real domain adaptation benchmarks.
arXiv Detail & Related papers (2020-09-18T03:26:44Z)
- Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation [6.320141734801679]
We propose a novel approach that exploits the scale-invariance property of semantic segmentation models for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, regardless of the scale of objects and stuff (given the context), the semantic labeling should remain unchanged.
We show that this constraint is violated on images of the target domain and can hence be used to transfer labels between differently scaled patches.
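A minimal sketch of a scale-invariance constraint in this spirit is shown below; it is not the paper's algorithm. It assumes a segmentation model that returns logits at the input resolution, and it enforces consistency between the full-resolution prediction and the upsampled prediction on a downscaled copy of the image; the scale factor and the choice of KL loss are assumptions.

```python
import torch
import torch.nn.functional as F

def scale_consistency_loss(model, image, scale=0.5):
    """Illustrative scale-invariance constraint: the prediction on a downscaled image,
    resized back up, should match the full-resolution prediction."""
    logits_full = model(image)                                              # (B, K, H, W)
    image_small = F.interpolate(image, scale_factor=scale, mode='bilinear',
                                align_corners=False)
    logits_small = model(image_small)
    logits_up = F.interpolate(logits_small, size=logits_full.shape[-2:],
                              mode='bilinear', align_corners=False)
    # Use the detached full-resolution prediction as a soft target for the rescaled one.
    target = logits_full.softmax(dim=1).detach()
    return F.kl_div(logits_up.log_softmax(dim=1), target, reduction='batchmean')
```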
arXiv Detail & Related papers (2020-07-28T19:40:45Z)