On-target Adaptation
- URL: http://arxiv.org/abs/2109.01087v1
- Date: Thu, 2 Sep 2021 17:04:18 GMT
- Title: On-target Adaptation
- Authors: Dequan Wang, Shaoteng Liu, Sayna Ebrahimi, Evan Shelhamer, Trevor
Darrell
- Abstract summary: Domain adaptation seeks to mitigate the shift between training on the source domain and testing on the target domain.
Most adaptation methods rely on the source data by joint optimization over source data and target data.
We show significant improvement by on-target adaptation, which learns the representation purely from target data.
- Score: 82.77980951331854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation seeks to mitigate the shift between training on the
\emph{source} domain and testing on the \emph{target} domain. Most adaptation
methods rely on the source data by joint optimization over source data and
target data. Source-free methods replace the source data with a source model by
fine-tuning it on target. Either way, the majority of the parameter updates for
the model representation and the classifier are derived from the source, and
not the target. However, target accuracy is the goal, and so we argue for
optimizing as much as possible on the target data. We show significant
improvement by on-target adaptation, which learns the representation purely
from target data while taking only the source predictions for supervision. In
the long-tailed classification setting, we show further improvement by
on-target class distribution learning, which learns the (im)balance of classes
from target data.
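The abstract describes a simple recipe: a frozen source model supplies pseudo-labels on unlabeled target data, and a separate model learns its representation and classifier from target data alone. The sketch below is a minimal reading of that description, not the authors' released pipeline; the model, loader, and hyperparameter names are illustrative placeholders.

```python
# Minimal sketch of on-target adaptation as described in the abstract:
# the source model is frozen and only provides predictions (pseudo-labels);
# the target model is optimized purely on target data.
import torch
import torch.nn.functional as F

def on_target_adaptation(source_model, target_model, target_loader,
                         epochs=10, lr=1e-3, device="cuda"):
    source_model.eval().to(device)    # source weights stay frozen
    target_model.train().to(device)   # representation learned on target only
    opt = torch.optim.SGD(target_model.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for x, _ in target_loader:    # target labels are never used
            x = x.to(device)
            with torch.no_grad():
                # source predictions are the only form of supervision
                pseudo = source_model(x).argmax(dim=1)
            loss = F.cross_entropy(target_model(x), pseudo)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return target_model
```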
Related papers
- Mitigating the Bias in the Model for Continual Test-Time Adaptation [32.33057968481597]
Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.
We find that a model shows highly biased predictions as it constantly adapts to the changing distribution of the target data.
This paper mitigates this issue to improve performance in the CTA scenario.
arXiv Detail & Related papers (2024-03-02T23:37:16Z)
- MetaAdapt: Domain Adaptive Few-Shot Misinformation Detection via Meta Learning [10.554043875365155]
We propose MetaAdapt, a meta learning based approach for domain adaptive few-shot misinformation detection.
In particular, we train the initial model with multiple source tasks and compute their similarity scores to the meta task.
As such, MetaAdapt can learn how to adapt the misinformation detection model and exploit the source data for improved performance in the target domain.
arXiv Detail & Related papers (2023-05-22T04:00:38Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
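As an illustration of the Maximum Mean Discrepancy term named above, a minimal single-bandwidth RBF-kernel version is sketched below. The split into source-like and target-specific features, the memory bank, and all names are assumptions for the example, not the DaC code.

```python
import torch

def rbf_mmd(feats_a, feats_b, sigma=1.0):
    """Squared MMD (biased estimator) with an RBF kernel between two feature batches.

    feats_a: e.g. source-like features drawn from a memory bank
    feats_b: e.g. target-specific features from the current batch
    A single kernel bandwidth is a simplification for illustration.
    """
    def kernel(x, y):
        d2 = torch.cdist(x, y) ** 2            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))

    k_aa = kernel(feats_a, feats_a).mean()
    k_bb = kernel(feats_b, feats_b).mean()
    k_ab = kernel(feats_a, feats_b).mean()
    return k_aa + k_bb - 2 * k_ab

# usage sketch: align the current target-specific features against
# "source-like" features stored in a memory bank
# mmd_loss = rbf_mmd(memory_bank_features, batch_features)
```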
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
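The summary does not say how uncertainty is measured; one common proxy, used here purely as an illustration and not necessarily the paper's estimator, is predictive entropy, which can down-weight unreliable pseudo-labels during adaptation.

```python
import math
import torch
import torch.nn.functional as F

def entropy_weighted_pseudo_loss(source_logits, target_logits):
    """Weight each pseudo-label loss by (1 - normalized entropy) of the
    source prediction, so confident predictions guide adaptation more.
    A generic sketch of uncertainty guidance, not the cited method."""
    probs = F.softmax(source_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    weight = 1.0 - entropy / math.log(probs.size(1))   # in [0, 1]
    pseudo = probs.argmax(dim=1)
    loss = F.cross_entropy(target_logits, pseudo, reduction="none")
    return (weight * loss).mean()
```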
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using any source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
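A minimal version of this thresholded-confidence idea is sketched below; choosing the threshold so that the fraction of labeled validation examples above it matches validation accuracy is one natural reading of the summary, not a verbatim reproduction of ATC.

```python
import numpy as np

def atc_style_estimate(val_conf, val_correct, target_conf):
    """Estimate target accuracy from model confidences only.

    val_conf: max softmax confidence on labeled (source) validation data
    val_correct: 0/1 correctness of those validation predictions
    target_conf: max softmax confidence on unlabeled target data
    """
    val_acc = val_correct.mean()
    # threshold such that the fraction of validation examples above it
    # roughly equals validation accuracy
    threshold = np.quantile(val_conf, 1.0 - val_acc)
    # predicted target accuracy: fraction of target examples above threshold
    return (target_conf > threshold).mean()
```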
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Unsupervised Adaptation of Semantic Segmentation Models without Source Data [14.66682099621276]
We consider the novel problem of unsupervised domain adaptation of source models for semantic segmentation, without access to the source data.
We propose a self-training approach to extract the knowledge from the source model.
Our framework is able to achieve significant performance gains compared to directly applying the source model on the target data.
arXiv Detail & Related papers (2021-12-04T15:13:41Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting in which only a classification model trained on the source data is available, instead of access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)