Generalize then Adapt: Source-Free Domain Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2108.11249v1
- Date: Wed, 25 Aug 2021 14:18:59 GMT
- Title: Generalize then Adapt: Source-Free Domain Adaptive Semantic Segmentation
- Authors: Jogendra Nath Kundu, Akshay Kulkarni, Amit Singh, Varun Jampani, R. Venkatesh Babu
- Abstract summary: Prior arts assume concurrent access to both labeled source and unlabeled target, making them unsuitable for scenarios demanding source-free adaptation.
In this work, we enable source-free DA by partitioning the task into two: a) source-only domain generalization and b) source-free target adaptation.
We introduce a novel conditional prior-enforcing auto-encoder that discourages spatial irregularities, thereby enhancing the pseudo-label quality.
- Score: 78.38321096371106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (DA) has gained substantial interest in
semantic segmentation. However, almost all prior arts assume concurrent access
to both labeled source and unlabeled target, making them unsuitable for
scenarios demanding source-free adaptation. In this work, we enable source-free
DA by partitioning the task into two: a) source-only domain generalization and
b) source-free target adaptation. Towards the former, we provide theoretical
insights to develop a multi-head framework trained with a virtually extended
multi-source dataset, aiming to balance generalization and specificity. Towards
the latter, we utilize the multi-head framework to extract reliable target
pseudo-labels for self-training. Additionally, we introduce a novel conditional
prior-enforcing auto-encoder that discourages spatial irregularities, thereby
enhancing the pseudo-label quality. Experiments on the standard
GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes benchmarks show our superiority
even against non-source-free prior art. Further, we show our compatibility
with online adaptation enabling deployment in a sequentially changing
environment.
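The abstract's second stage relies on extracting reliable target pseudo-labels from the multi-head framework for self-training. The paper does not give its selection rule here, so the following is only a minimal sketch of one common recipe: ensemble the per-head softmax maps, take the argmax, and mark low-confidence pixels with an ignore index. The function name, threshold, and ignore value of 255 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_pseudo_labels(head_probs, conf_thresh=0.9, ignore_index=255):
    """Average per-head softmax maps and keep only confident pixels.

    head_probs: array of shape (num_heads, C, H, W) holding per-class
    probabilities from each segmentation head.
    Returns an (H, W) label map; unreliable pixels get `ignore_index`.
    """
    mean_probs = head_probs.mean(axis=0)   # (C, H, W) head ensemble
    labels = mean_probs.argmax(axis=0)     # (H, W) hard labels
    confidence = mean_probs.max(axis=0)    # (H, W) peak probability
    labels[confidence < conf_thresh] = ignore_index
    return labels

# toy example: 2 heads, 3 classes, 2x2 image
probs = np.random.dirichlet(np.ones(3), size=(2, 2, 2)).transpose(0, 3, 1, 2)
pseudo = extract_pseudo_labels(probs, conf_thresh=0.5)
```

Pixels left at the ignore index would simply be excluded from the self-training loss.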
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- Continual Source-Free Unsupervised Domain Adaptation [37.060694803551534]
Existing Source-free Unsupervised Domain Adaptation approaches exhibit catastrophic forgetting.
We propose a Continual SUDA (C-SUDA) framework to cope with the challenge of SUDA in a continual learning setting.
arXiv Detail & Related papers (2023-04-14T20:11:05Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
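The DaC summary mentions aligning source-like and target-specific samples with a memory bank-based Maximum Mean Discrepancy (MMD) loss. As a point of reference only, here is a minimal NumPy sketch of the standard biased squared-MMD estimate with an RBF kernel; the memory bank, feature extractor, and bandwidth choice in the actual method are not shown.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # pairwise squared distances between rows of x (N, D) and y (M, D)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(source_like, target_specific, sigma=1.0):
    """Biased estimate of squared MMD between two feature sets."""
    k_ss = rbf_kernel(source_like, source_like, sigma).mean()
    k_tt = rbf_kernel(target_specific, target_specific, sigma).mean()
    k_st = rbf_kernel(source_like, target_specific, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```

Minimizing this quantity pulls the two empirical feature distributions together; it is zero when they coincide.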
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Feed-Forward Latent Domain Adaptation [17.71179872529747]
We study a new highly-practical problem setting that enables resource-constrained edge devices to adapt a pre-trained model to their local data distributions.
Considering limitations of edge devices, we aim to only use a pre-trained model and adapt it in a feed-forward way, without using back-propagation and without access to the source data.
Our solution is to meta-learn a network capable of embedding the mixed-relevance target dataset and dynamically adapting inference for target examples using cross-attention.
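The feed-forward adaptation summary above hinges on cross-attention from target examples over an embedded support set. The shapes and the single-head, unprojected form below are illustrative assumptions to show the mechanism, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query_feats, support_feats, support_vals):
    """Adapt each target query by attending over an embedded support set.

    query_feats:   (Q, D) features of target examples
    support_feats: (S, D) keys from the mixed-relevance support set
    support_vals:  (S, D) values carrying the adaptation signal
    Returns (Q, D) adapted features.
    """
    d = query_feats.shape[-1]
    scores = query_feats @ support_feats.T / np.sqrt(d)  # (Q, S)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ support_vals
```

Because adaptation is a single forward pass through this attention, no back-propagation or source data is needed at deployment time, matching the setting described.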
arXiv Detail & Related papers (2022-07-15T17:37:42Z)
- Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
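The insight above is a mixup between original samples and their translated generic counterparts. Standard mixup draws a coefficient from a Beta distribution and takes a convex combination; the alpha value and function below are illustrative defaults, not the paper's exact formulation.

```python
import numpy as np

def mixup(original, translated, alpha=0.3, rng=None):
    """Convex combination of an original sample and its translated counterpart."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # lam in (0, 1)
    return lam * original + (1 - lam) * translated
```

Intermediate samples like these are intended to retain the discriminability of the original domain while gaining the transferability of the generic one.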
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Amplitude Spectrum Transformation for Open Compound Domain Adaptive Semantic Segmentation [62.68759523116924]
Open compound domain adaptation (OCDA) has emerged as a practical adaptation setting.
We propose a novel feature space Amplitude Spectrum Transformation (AST).
arXiv Detail & Related papers (2022-02-09T05:40:34Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.