Moment-Based Domain Adaptation: Learning Bounds and Algorithms
- URL: http://arxiv.org/abs/2004.10618v1
- Date: Wed, 22 Apr 2020 14:59:08 GMT
- Title: Moment-Based Domain Adaptation: Learning Bounds and Algorithms
- Authors: Werner Zellinger
- Abstract summary: This thesis contributes to the mathematical foundation of domain adaptation as an emerging field in machine learning.
In contrast to classical statistical learning, the framework of domain adaptation takes into account deviations between probability distributions in the training and application setting.
- Score: 1.827510863075184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This thesis contributes to the mathematical foundation of domain adaptation
as an emerging field in machine learning. In contrast to classical statistical
learning, the framework of domain adaptation takes into account deviations
between probability distributions in the training and application setting.
Domain adaptation applies to a wider range of applications, as future samples
often follow a distribution that differs from that of the training samples.
A decisive point is the generality of the assumptions about the similarity of
the distributions. Therefore, in this thesis we study domain adaptation
problems under similarity assumptions as weak as can be modelled by finitely
many moments.
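As a hedged illustration of the moment-based similarity assumption, the deviation between training and application domains can be measured by comparing finitely many empirical moments of the feature distributions. The sketch below uses plain NumPy; the function name, the unweighted sum, and the choice of central moments are illustrative assumptions, not the thesis's exact definition.

```python
import numpy as np

def moment_distance(source, target, k=3):
    """Sum of L2 distances between the first k empirical (central) moments
    of two samples -- a sketch of a moment-based domain discrepancy."""
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    dist = np.linalg.norm(mu_s - mu_t)            # first moment: means
    for order in range(2, k + 1):
        m_s = ((source - mu_s) ** order).mean(axis=0)
        m_t = ((target - mu_t) ** order).mean(axis=0)
        dist += np.linalg.norm(m_s - m_t)         # higher central moments
    return dist

rng = np.random.default_rng(0)
same_dist = moment_distance(rng.normal(size=(2000, 4)), rng.normal(size=(2000, 4)))
shifted = moment_distance(rng.normal(size=(2000, 4)), rng.normal(2.0, 1.0, size=(2000, 4)))
```

Identically distributed samples yield a distance near zero, while a mean-shifted target domain yields a clearly larger value; such a term can serve as a regularizer that aligns source and target feature distributions up to k moments.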
Related papers
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z) - Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z) - f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences.
arXiv Detail & Related papers (2021-06-21T18:21:09Z) - Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - A Brief Review of Domain Adaptation [1.2043574473965317]
This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain.
It presents some successful shallow and deep domain adaptation approaches that aim to deal with domain adaptation problems.
arXiv Detail & Related papers (2020-10-07T07:05:32Z) - Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z) - Adversarial Weighting for Domain Adaptation in Regression [4.34858896385326]
We present a novel instance-based approach to handle regression tasks in the context of supervised domain adaptation.
We develop an adversarial network algorithm which learns both the source weighting scheme and the task in one feed-forward gradient descent.
arXiv Detail & Related papers (2020-06-15T09:44:04Z) - On generalization in moment-based domain adaptation [1.8047694351309205]
Domain adaptation algorithms are designed to minimize the misclassification risk of a discriminative model for a target domain with little training data.
Standard approaches measure the adaptation discrepancy based on distance measures between the empirical probability distributions in the source and target domain.
arXiv Detail & Related papers (2020-02-19T16:05:27Z) - Incremental Unsupervised Domain-Adversarial Training of Neural Networks [17.91571291302582]
In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution as the test samples.
Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively.
Our results show a clear improvement over the non-incremental case on several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
arXiv Detail & Related papers (2020-01-13T09:54:35Z) - Domain Adaptation: Learning Bounds and Algorithms [80.85426994513541]
We introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions.
We derive novel generalization bounds for domain adaptation for a wide family of loss functions.
We also present a series of novel adaptation bounds for large classes of regularization-based algorithms.
arXiv Detail & Related papers (2009-02-19T18:42:16Z)
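The discrepancy distance in the last entry measures how much two domains disagree on expected losses over pairs of hypotheses. The sketch below computes it empirically for a finite hypothesis class; the threshold classifiers and squared loss are illustrative assumptions, chosen only to keep the supremum computable by enumeration.

```python
import itertools
import numpy as np

def discrepancy(xs, xt, hypotheses, loss):
    # Empirical discrepancy distance: the largest gap, over all hypothesis
    # pairs (h, h'), between the mean loss disagreement on the source
    # sample xs and on the target sample xt.
    best = 0.0
    for h, h2 in itertools.product(hypotheses, repeat=2):
        ls = np.mean([loss(h(x), h2(x)) for x in xs])
        lt = np.mean([loss(h(x), h2(x)) for x in xt])
        best = max(best, abs(ls - lt))
    return best

# Illustrative finite class: two threshold classifiers; squared loss.
hypotheses = [lambda x, t=t: float(x > t) for t in (0.0, 1.0)]
sq = lambda a, b: (a - b) ** 2
```

Identical samples give a discrepancy of zero, while a target sample concentrated where the hypotheses disagree (here, between the two thresholds) gives a strictly positive value, which is exactly the quantity the adaptation bounds of that paper control.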
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.