Source-Free Adaptation to Measurement Shift via Bottom-Up Feature
Restoration
- URL: http://arxiv.org/abs/2107.05446v1
- Date: Mon, 12 Jul 2021 14:21:14 GMT
- Title: Source-Free Adaptation to Measurement Shift via Bottom-Up Feature
Restoration
- Authors: Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf
- Abstract summary: Source-free domain adaptation (SFDA) aims to adapt a model trained on labelled data in a source domain to unlabelled data in a target domain without access to the source-domain data during adaptation.
We propose Feature Restoration (FR), which seeks to extract features with the same semantics from the target domain as were previously extracted from the source.
We additionally propose Bottom-Up Feature Restoration (BUFR), a bottom-up training scheme for FR which boosts performance by preserving learnt structure in the later layers of a network.
- Score: 6.9871848733878155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Source-free domain adaptation (SFDA) aims to adapt a model trained on
labelled data in a source domain to unlabelled data in a target domain without
access to the source-domain data during adaptation. Existing methods for SFDA
leverage entropy-minimization techniques which: (i) apply only to
classification; (ii) destroy model calibration; and (iii) rely on the source
model achieving a good level of feature-space class-separation in the target
domain. We address these issues for a particularly pervasive type of domain
shift called measurement shift, characterized by a change in measurement system
(e.g. a change in sensor or lighting). In the source domain, we store a
lightweight and flexible approximation of the feature distribution under the
source data. In the target domain, we adapt the feature-extractor such that the
approximate feature distribution under the target data realigns with that saved
on the source. We call this method Feature Restoration (FR) as it seeks to
extract features with the same semantics from the target domain as were
previously extracted from the source. We additionally propose Bottom-Up Feature
Restoration (BUFR), a bottom-up training scheme for FR which boosts performance
by preserving learnt structure in the later layers of a network. Through
experiments we demonstrate that BUFR often outperforms existing SFDA methods in
terms of accuracy, calibration, and data efficiency, while being less reliant
on the performance of the source model in the target domain.
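The abstract describes FR as saving a lightweight approximation of the source feature distribution and then adapting the feature-extractor so that the target feature distribution realigns with it. A minimal sketch of one way to implement this idea, assuming soft-binned per-dimension marginal histograms and a symmetric-KL alignment loss (the names `soft_histogram`, `fr_loss`, and the `bandwidth` parameter are illustrative, not the paper's API):

```python
import torch


def soft_histogram(feats, bin_centers, bandwidth=0.1):
    """Differentiable (soft-binned) marginal histogram per feature dimension.

    feats:       (batch, dim) feature activations
    bin_centers: (bins,) shared bin centres
    Returns (dim, bins) histograms, each normalised to sum to 1.
    """
    # A Gaussian kernel gives each activation a soft membership in every bin,
    # which keeps the histogram differentiable w.r.t. the features.
    d = feats.unsqueeze(-1) - bin_centers          # (batch, dim, bins)
    w = torch.exp(-0.5 * (d / bandwidth) ** 2)     # soft bin counts
    hist = w.sum(dim=0)                            # (dim, bins)
    return hist / hist.sum(dim=-1, keepdim=True)


def fr_loss(target_feats, source_hists, bin_centers):
    """Symmetric-KL alignment between target histograms and stored source ones."""
    t = soft_histogram(target_feats, bin_centers) + 1e-8
    s = source_hists + 1e-8
    kl_st = (s * (s / t).log()).sum(-1).mean()
    kl_ts = (t * (t / s).log()).sum(-1).mean()
    return kl_st + kl_ts
```

Under this reading, BUFR would minimise such a loss while unfreezing the feature-extractor's blocks one at a time from the bottom up, so the later layers, and hence their learnt structure, stay fixed for as long as possible.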
Related papers
- Memory-Efficient Pseudo-Labeling for Online Source-Free Universal Domain Adaptation using a Gaussian Mixture Model [3.1265626879839923]
Universal domain adaptation (UniDA) has gained attention for addressing the possibility of an additional category (label) shift between the source and target domain.
We propose a novel method that continuously captures the distribution of known classes in the feature space using a Gaussian mixture model (GMM).
Our approach not only achieves state-of-the-art results in all experiments on the DomainNet dataset but also significantly outperforms the existing methods on the challenging VisDA-C dataset.
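The summary above hinges on continuously tracking class-conditional feature distributions. A hedged sketch of the idea with a single diagonal Gaussian per class updated online, a deliberate simplification of the GMM described above (`ClassGaussians` and its parameters are illustrative names, not the paper's implementation):

```python
import numpy as np


class ClassGaussians:
    """Running per-class diagonal-Gaussian estimates of feature distributions.

    A simplified stand-in for the paper's GMM: one diagonal Gaussian per
    known class, updated online from (pseudo-)labelled features.
    """

    def __init__(self, num_classes, dim, momentum=0.9):
        self.mu = np.zeros((num_classes, dim))
        self.var = np.ones((num_classes, dim))
        self.m = momentum

    def update(self, feats, labels):
        # Exponential moving average of per-class mean and variance.
        for c in np.unique(labels):
            f = feats[labels == c]
            self.mu[c] = self.m * self.mu[c] + (1 - self.m) * f.mean(0)
            self.var[c] = self.m * self.var[c] + (1 - self.m) * f.var(0)

    def log_prob(self, x):
        # Diagonal-Gaussian log density per class, shape (n, num_classes).
        d = x[:, None, :] - self.mu[None]
        return -0.5 * ((d ** 2 / self.var[None])
                       + np.log(2 * np.pi * self.var[None])).sum(-1)
```

These densities could then score how well a target sample matches each known class, e.g. to reject samples from an unknown category.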
arXiv Detail & Related papers (2024-07-19T11:13:31Z) - Continual Source-Free Unsupervised Domain Adaptation [37.060694803551534]
Existing Source-free Unsupervised Domain Adaptation approaches exhibit catastrophic forgetting.
We propose a Continual SUDA (C-SUDA) framework to cope with the challenge of SUDA in a continual learning setting.
arXiv Detail & Related papers (2023-04-14T20:11:05Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
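The MMD loss mentioned above measures the distance between two feature distributions via kernel mean embeddings. A minimal sketch with a single RBF kernel (`mmd_rbf` and `sigma` are illustrative; the paper's memory-bank variant aggregates features across batches, which is omitted here):

```python
import torch


def mmd_rbf(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel.

    x: (n, d) source-like features, y: (m, d) target-specific features.
    MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    """
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()
```

Minimising this quantity w.r.t. the feature-extractor pulls the two sets of features toward the same distribution; it is zero when the two distributions match.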
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object
Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z) - Source Data-absent Unsupervised Domain Adaptation through Hypothesis
Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labelled source domain to a new unlabelled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a classification model trained on the source domain is available, instead of access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
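Several of the methods above, including SHOT-style source hypothesis transfer, rely on entropy-based objectives of the kind the main abstract critiques. A hedged sketch of a typical information-maximization loss (`info_max_loss` is an illustrative name): it rewards confident per-sample predictions while the diversity term discourages collapsing all samples onto one class.

```python
import torch
import torch.nn.functional as F


def info_max_loss(logits):
    """Information-maximization objective: conditional entropy of the
    predictions minus the entropy of the mean prediction (diversity term)."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + 1e-8)).sum(1).mean()    # per-sample entropy
    p_mean = p.mean(0)
    div = -(p_mean * torch.log(p_mean + 1e-8)).sum()  # marginal entropy
    return ent - div
```

As the main abstract notes, objectives like this apply only to classification and can harm calibration, which is precisely what FR/BUFR are designed to avoid.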
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.