Pareto Domain Adaptation
- URL: http://arxiv.org/abs/2112.04137v2
- Date: Thu, 9 Dec 2021 06:31:10 GMT
- Title: Pareto Domain Adaptation
- Authors: Fangrui Lv, Jian Liang, Kaixiong Gong, Shuang Li, Chi Harold Liu, Han
Li, Di Liu, Guoren Wang
- Abstract summary: Domain adaptation (DA) attempts to transfer the knowledge from a labeled source domain to an unlabeled target domain.
We propose a new approach to control the overall optimization direction, aiming to cooperatively optimize all training objectives.
- Score: 35.48609986914723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) attempts to transfer the knowledge from a labeled
source domain to an unlabeled target domain that follows a different distribution
from the source. To achieve this, DA methods include a source classification
objective to extract the source knowledge and a domain alignment objective to
diminish the domain shift, ensuring knowledge transfer. Typically, previous DA
methods adopt weight hyper-parameters to linearly combine the training
objectives to form an overall objective. However, the gradient directions of
these objectives may conflict with each other due to domain shift. Under such
circumstances, the linear optimization scheme might decrease the overall
objective value at the expense of damaging one of the training objectives,
leading to restricted solutions. In this paper, we rethink the optimization
scheme for DA from a gradient-based perspective. We propose a Pareto Domain
Adaptation (ParetoDA) approach to control the overall optimization direction,
aiming to cooperatively optimize all training objectives. Specifically, to
reach a desirable solution on the target domain, we design a surrogate loss
mimicking target classification. To improve target-prediction accuracy to
support the mimicking, we propose a target-prediction refining mechanism which
exploits domain labels via Bayes' theorem. On the other hand, since prior
knowledge of weighting schemes for objectives is often unavailable to guide
optimization to approach the optimal solution on the target domain, we propose
a dynamic preference mechanism to dynamically guide our cooperative
optimization by the gradient of the surrogate loss on a held-out unlabeled
target dataset. Extensive experiments on image classification and semantic
segmentation benchmarks demonstrate the effectiveness of ParetoDA.
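The gradient-based idea behind cooperative optimization can be sketched with a min-norm combination of two objective gradients, in the style of MGDA-type multi-objective solvers. This is an illustrative sketch, not the authors' exact ParetoDA algorithm; the function name `min_norm_direction` and the toy gradients are assumptions for demonstration.

```python
import numpy as np

def min_norm_direction(g_cls, g_align):
    """Min-norm convex combination of two objective gradients (MGDA-style).

    Returns d = a*g_cls + (1-a)*g_align with a in [0, 1] chosen to
    minimize ||d||^2. The min-norm point of the convex hull has a
    non-negative inner product with both gradients, so a small step
    along -d does not increase either objective.
    """
    diff = g_cls - g_align
    denom = diff @ diff
    if denom == 0.0:  # gradients coincide; any weighting gives the same d
        return g_cls.copy()
    # Closed-form minimizer of ||a*g_cls + (1-a)*g_align||^2 over a in [0, 1]
    a = np.clip((g_align - g_cls) @ g_align / denom, 0.0, 1.0)
    return a * g_cls + (1.0 - a) * g_align

# Conflicting gradients: a fixed linear weighting could make progress on one
# objective at the expense of the other, while d descends on both.
g_cls = np.array([1.0, 0.0])
g_align = np.array([-0.8, 1.0])
d = min_norm_direction(g_cls, g_align)
assert d @ g_cls >= 0.0 and d @ g_align >= 0.0  # descent direction for both
```

ParetoDA additionally steers the choice among Pareto-improving directions using the surrogate-loss gradient on a held-out unlabeled target set, which this two-gradient sketch does not model.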
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify target classes absent from the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning [26.544837987747766]
We propose an end-to-end source-free domain adaptation semantic segmentation method via Importance-Aware and Prototype-Contrast learning.
The proposed IAPC framework effectively extracts domain-invariant knowledge from the well-trained source model and learns domain-specific knowledge from the unlabeled target domain.
arXiv Detail & Related papers (2023-06-02T15:09:19Z)
- Label Alignment Regularization for Distribution Shift [63.228879525056904]
Recent work has highlighted the label alignment property (LAP) in supervised learning, where the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
We propose a regularization method for unsupervised domain adaptation that encourages alignment between the predictions in the target domain and its top singular vectors.
We report improved performance over domain adaptation baselines in well-known tasks such as MNIST-USPS domain adaptation and cross-lingual sentiment analysis.
arXiv Detail & Related papers (2022-11-27T22:54:48Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amounts of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Revisiting Deep Subspace Alignment for Unsupervised Domain Adaptation [42.16718847243166]
Unsupervised domain adaptation (UDA) aims to transfer and adapt knowledge from a labeled source domain to an unlabeled target domain.
Traditionally, subspace-based methods form an important class of solutions to this problem.
This paper revisits the use of subspace alignment for UDA and proposes a novel adaptation algorithm that consistently leads to improved generalization.
arXiv Detail & Related papers (2022-01-05T20:16:38Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- MetaAlign: Coordinating Domain Alignment and Classification for Unsupervised Domain Adaptation [84.90801699807426]
This paper proposes an effective meta-optimization based strategy dubbed MetaAlign.
We treat the domain alignment objective and the classification objective as the meta-train and meta-test tasks in a meta-learning scheme.
Experimental results demonstrate the effectiveness of our proposed method on top of various alignment-based baseline approaches.
arXiv Detail & Related papers (2021-03-25T03:16:05Z)
- Domain Adaptation by Class Centroid Matching and Local Manifold Self-Learning [8.316259570013813]
We propose a novel domain adaptation approach, which can thoroughly explore the data distribution structure of the target domain.
We regard the samples within the same cluster in the target domain as a whole rather than as individuals, and assign pseudo-labels to the target cluster by class centroid matching.
An efficient iterative optimization algorithm is designed to solve the objective function of our proposal with theoretical convergence guarantee.
arXiv Detail & Related papers (2020-03-20T16:59:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.