Domain-shift adaptation via linear transformations
- URL: http://arxiv.org/abs/2201.05282v1
- Date: Fri, 14 Jan 2022 02:49:03 GMT
- Title: Domain-shift adaptation via linear transformations
- Authors: Roberto Vega, Russell Greiner
- Abstract summary: A predictor, $f_A$, learned with data from a source domain (A) might not be accurate on a target domain (B) when their distributions are different.
We propose an approach to project the source and target domains into a lower-dimensional, common space.
We show the effectiveness of our approach on simulated data and on binary digit classification tasks, obtaining accuracy improvements of up to 48% when correcting for the domain shift in the data.
- Score: 11.541238742226199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A predictor, $f_A : X \to Y$, learned with data from a source domain (A)
might not be accurate on a target domain (B) when their distributions are
different. Domain adaptation aims to reduce the negative effects of this
distribution mismatch. Here, we analyze the case where $P_A(Y\ |\ X) \neq
P_B(Y\ |\ X)$, $P_A(X) \neq P_B(X)$, but $P_A(Y) = P_B(Y)$, and where there exist
affine transformations of $X$ that make all distributions equivalent. We
propose an approach to project the source and target domains into a
lower-dimensional, common space, by (1) projecting the domains into the
eigenvectors of the empirical covariance matrices of each domain, then (2)
finding an orthogonal matrix that minimizes the maximum mean discrepancy
between the projections of both domains. For arbitrary affine transformations,
there is an inherent unidentifiability problem when performing unsupervised
domain adaptation that can be alleviated in the semi-supervised case. We show
the effectiveness of our approach in simulated data and in binary digit
classification tasks, obtaining accuracy improvements of up to 48% when correcting
for the domain shift in the data.
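
The two-step construction above maps directly to code. Below is a minimal sketch, not the authors' implementation: it assumes an RBF kernel for the MMD, parameterizes the orthogonal matrix as the matrix exponential of a skew-symmetric matrix, and uses a generic derivative-free optimizer; all names are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def top_eigenvectors(X, k):
    """Top-k eigenvectors of the empirical covariance matrix of X."""
    C = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]          # largest-eigenvalue vectors first

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared MMD between samples X and Y (RBF kernel)."""
    def gram(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

def align_domains(X_src, X_tgt, k):
    # Step 1: project each domain onto the eigenvectors of its own
    # empirical covariance matrix.
    Z_src = X_src @ top_eigenvectors(X_src, k)
    Z_tgt = X_tgt @ top_eigenvectors(X_tgt, k)

    # Step 2: find an orthogonal matrix minimizing the MMD between the
    # projections. expm(S - S.T) is orthogonal for any square S, so we
    # can optimize the upper triangle of S without explicit constraints.
    def orthogonal(theta):
        S = np.zeros((k, k))
        S[np.triu_indices(k, 1)] = theta
        return expm(S - S.T)

    objective = lambda theta: rbf_mmd2(Z_src, Z_tgt @ orthogonal(theta))
    theta0 = np.zeros(k * (k - 1) // 2)
    result = minimize(objective, theta0, method="Nelder-Mead")
    return Z_src, Z_tgt @ orthogonal(result.x)
```

Note that this parameterization only reaches rotations (determinant $+1$); covering reflections would require an explicit sign flip, a limitation reminiscent of the identifiability issue noted in the abstract.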
Related papers
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Multi-step domain adaptation by adversarial attack to $\mathcal{H}\Delta\mathcal{H}$-divergence [73.89838982331453]
In unsupervised domain adaptation settings, we demonstrate that replacing the source domain with adversarial examples improves accuracy on the target domain.
We conducted a range of experiments and achieved improvement in accuracy on Digits and Office-Home datasets.
arXiv Detail & Related papers (2022-07-18T21:24:05Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path
and Beyond [20.518134448156744]
Gradual domain adaptation (GDA) assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target.
We prove a significantly improved generalization bound of $\widetilde{O}\left(\varepsilon_0+O\left(\sqrt{\log(T)/n}\right)\right)$, where $\Delta$ is the average distributional distance between consecutive domains.
arXiv Detail & Related papers (2022-04-18T07:39:23Z) - Domain Adaptation for Time-Series Classification to Mitigate Covariate
Shift [3.071136270246468]
This paper proposes a novel supervised domain adaptation method based on two steps.
First, we search for an optimal class-dependent transformation from the source to the target domain, using only a few samples.
Second, we use embedding similarity techniques to select the corresponding transformation at inference.
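
Taken at face value, those two steps admit a short sketch along the following lines (entirely hypothetical names; the least-squares fit per class and the nearest-prototype selection rule are our assumptions, not necessarily the paper's exact choices):

```python
import numpy as np

def fit_class_maps(X_src, y_src, X_tgt, y_tgt):
    """Fit one least-squares linear map per class from source to target features."""
    maps = {}
    for c in np.unique(y_src):
        A, B = X_src[y_src == c], X_tgt[y_tgt == c]
        n = min(len(A), len(B))  # naive in-order pairing of the few labeled samples
        W, *_ = np.linalg.lstsq(A[:n], B[:n], rcond=None)
        maps[c] = W
    return maps

def select_and_transform(x, maps, prototypes):
    """Apply each class map to x; keep the map whose output embedding lands
    closest to that class's target-domain prototype."""
    best = min(maps, key=lambda c: np.linalg.norm(x @ maps[c] - prototypes[c]))
    return x @ maps[best], best
```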
arXiv Detail & Related papers (2022-04-07T10:27:14Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
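
As a rough illustration of how such a minibatch estimate can work, the sketch below approximates each marginal representation density by a uniform mixture of the encoder's per-sample Gaussians; this mixture construction is our assumption, not necessarily the paper's exact estimator.

```python
import numpy as np

def log_gauss(z, mu, var):
    """Log-density of a diagonal Gaussian N(mu, diag(var)) at point z."""
    return -0.5 * np.sum((z - mu) ** 2 / var + np.log(2 * np.pi * var))

def log_marginal(z, mus, vars_):
    """Log-density of the minibatch marginal, approximated as a uniform
    mixture of the encoder's per-sample Gaussians (logsumexp for stability)."""
    comp = np.array([log_gauss(z, m, v) for m, v in zip(mus, vars_)])
    mx = comp.max()
    return mx + np.log(np.mean(np.exp(comp - mx)))

def minibatch_kl(z_src, mus_src, vars_src, mus_tgt, vars_tgt):
    """Monte-Carlo estimate of KL(p_src(z) || p_tgt(z)): average the
    log-density ratio over representations z sampled from the source."""
    return np.mean([log_marginal(z, mus_src, vars_src)
                    - log_marginal(z, mus_tgt, vars_tgt)
                    for z in z_src])
```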
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z)
- Sparsely-Labeled Source Assisted Domain Adaptation [64.75698236688729]
This paper proposes a novel Sparsely-Labeled Source Assisted Domain Adaptation (SLSA-DA) algorithm.
Due to the label scarcity problem, the projected clustering is conducted on both the source and target domains.
arXiv Detail & Related papers (2020-05-08T15:37:35Z)
- Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift [20.533804144992207]
Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting.
We propose a new assumption, generalized label shift ($GLS$), to improve robustness against mismatched label distributions.
Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches.
arXiv Detail & Related papers (2020-03-10T00:35:23Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses the state of the art on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)