A survey on domain adaptation theory: learning bounds and theoretical
guarantees
- URL: http://arxiv.org/abs/2004.11829v6
- Date: Wed, 13 Jul 2022 20:21:42 GMT
- Title: A survey on domain adaptation theory: learning bounds and theoretical
guarantees
- Authors: Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, Younès Bennani
- Abstract summary: The main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning.
In this sub-field, the data distribution is assumed to change across the training and the test data, while the learning task remains the same.
We provide a first up-to-date description of existing results related to the domain adaptation problem.
- Score: 17.71634393160982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: All well-known machine learning algorithms, whether supervised or
semi-supervised, work well only under a common assumption: the training and
test data follow the same distribution. When the distribution changes, most
statistical models must be rebuilt from newly collected data, which for some
applications can be costly or impossible to obtain. It has therefore become
necessary to develop approaches that reduce the need and the effort of
obtaining new labeled samples by exploiting data available in related areas
and reusing them across similar fields. This has given rise to a new machine
learning framework known as transfer learning: a learning setting inspired by
the ability of human beings to extrapolate knowledge across tasks in order to
learn more efficiently. Despite the large number of different transfer
learning scenarios, the main objective of this survey is to provide an
overview of the state-of-the-art theoretical results in a specific, and
arguably the most popular, sub-field of transfer learning called domain
adaptation. In this sub-field, the data distribution is assumed to change
between the training and the test data, while the learning task remains the
same. We provide a first up-to-date description of existing results related
to the domain adaptation problem, covering learning bounds based on different
statistical learning frameworks.
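For orientation, a representative example of the kind of learning bound the survey covers (a classical known result, not restated from this abstract) is the bound of Ben-David et al.: for any hypothesis h in a class H,

    \epsilon_T(h) \;\le\; \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,
    \qquad \lambda = \min_{h' \in \mathcal{H}} \big[ \epsilon_S(h') + \epsilon_T(h') \big],

where \epsilon_S and \epsilon_T denote the source and target risks, d_{\mathcal{H}\Delta\mathcal{H}} measures the divergence between the source and target marginal distributions \mathcal{D}_S and \mathcal{D}_T, and \lambda is the joint error of the best hypothesis in the class. Bounds of this form, and their extensions to other discrepancy measures and learning frameworks, are the main objects the survey reviews.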
Related papers
- Domain Generalization through Meta-Learning: A Survey [6.524870790082051]
Deep neural networks (DNNs) have revolutionized artificial intelligence but often perform poorly when faced with out-of-distribution (OOD) data.
This survey paper delves into the realm of meta-learning with a focus on its contribution to domain generalization.
arXiv Detail & Related papers (2024-04-03T14:55:17Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Algorithms and Theory for Supervised Gradual Domain Adaptation [19.42476993856205]
We study the problem of supervised gradual domain adaptation, where labeled data from shifting distributions are available to the learner along the trajectory.
Under this setting, we provide the first generalization upper bound on the learning error under mild assumptions.
Our results are algorithm agnostic for a range of loss functions, and only depend linearly on the averaged learning error across the trajectory.
arXiv Detail & Related papers (2022-04-25T13:26:11Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences (a standard form of this characterization is recalled after the related-papers list below).
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
- A Concise Review of Transfer Learning [1.5771347525430772]
Transfer learning aims to boost the performance of a target learner by leveraging data from a related source domain.
Traditional machine learning and data mining techniques assume that the training and testing data come from the same feature space and follow the same distribution.
arXiv Detail & Related papers (2021-04-05T20:34:55Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains (a minimal code sketch of this idea appears after the related-papers list below).
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
- A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks [1.1802674324027231]
Self-supervised pre-training for transfer learning is becoming an increasingly popular technique to improve state-of-the-art results using unlabeled data.
We provide an overview of the taxonomy for self-supervised learning and transfer learning, and highlight some prominent methods for designing pre-training tasks across different domains.
arXiv Detail & Related papers (2020-07-01T22:55:48Z)
- Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
arXiv Detail & Related papers (2020-06-30T04:39:36Z)
- Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks [27.44348371795822]
We develop a statistical minimax framework to characterize the limits of transfer learning.
We derive a lower bound on the target generalization error achievable by any algorithm, as a function of the number of labeled source and target samples.
arXiv Detail & Related papers (2020-06-16T22:49:26Z)
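Background for the f-Domain-Adversarial Learning entry above: the variational characterization of f-divergences it refers to is, in its standard form,

    D_f(P \,\|\, Q) = \sup_{T} \; \mathbb{E}_{x \sim P}[T(x)] - \mathbb{E}_{x \sim Q}[f^{*}(T(x))],

where f^{*} is the convex conjugate of f and the supremum ranges over measurable functions T, in practice parameterized by a neural network. Replacing the supremum with a trained discriminator is the standard device for estimating such discrepancies adversarially; the specific discrepancy measure and the generalization bound built on it are stated in the paper itself.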
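To make the adaptive risk minimization (ARM) entry above concrete, here is a minimal sketch of the "learning to adapt" idea in a context-network style; the module names, sizes, and architecture are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of "learning to adapt" in the spirit of adaptive risk
# minimization (ARM); illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextNet(nn.Module):
    """Summarizes an unlabeled batch from one domain into a context vector."""
    def __init__(self, in_dim, ctx_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, ctx_dim))

    def forward(self, x):                # x: (batch, in_dim)
        return self.net(x).mean(dim=0)   # average over the batch -> (ctx_dim,)

class Predictor(nn.Module):
    """Classifies each input given the shared batch context."""
    def __init__(self, in_dim, ctx_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + ctx_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x, ctx):
        ctx = ctx.expand(x.shape[0], -1)            # share context across the batch
        return self.net(torch.cat([x, ctx], dim=1))

def arm_training_step(ctx_net, pred_net, opt, x_dom, y_dom):
    """Adapt to one training domain's batch via the context (inputs only),
    then optimize the post-adaptation loss end to end. `opt` is assumed to
    optimize the parameters of both networks."""
    ctx = ctx_net(x_dom)                            # no labels used for adaptation
    loss = F.cross_entropy(pred_net(x_dom, ctx), y_dom)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At test time the same context network is run on an unlabeled batch from the shifted domain, so adaptation requires no target labels, which is the sense in which the models are optimized for adaptation to shift.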
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.