Domain Adaptation: Learning Bounds and Algorithms
- URL: http://arxiv.org/abs/0902.3430v3
- Date: Thu, 30 Nov 2023 22:47:15 GMT
- Title: Domain Adaptation: Learning Bounds and Algorithms
- Authors: Yishay Mansour, Mehryar Mohri, Afshin Rostamizadeh
- Abstract summary: We introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions.
We derive novel generalization bounds for domain adaptation for a wide family of loss functions.
We also present a series of novel adaptation bounds for large classes of regularization-based algorithms.
- Score: 80.85426994513541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the general problem of domain adaptation which arises in
a variety of applications where the distribution of the labeled sample
available somewhat differs from that of the test data. Building on previous
work by Ben-David et al. (2007), we introduce a novel distance between
distributions, discrepancy distance, that is tailored to adaptation problems
with arbitrary loss functions. We give Rademacher complexity bounds for
estimating the discrepancy distance from finite samples for different loss
functions. Using this distance, we derive novel generalization bounds for
domain adaptation for a wide family of loss functions. We also present a series
of novel adaptation bounds for large classes of regularization-based
algorithms, including support vector machines and kernel ridge regression based
on the empirical discrepancy. This motivates our analysis of the problem of
minimizing the empirical discrepancy for various loss functions for which we
also give novel algorithms. We report the results of preliminary experiments
that demonstrate the benefits of our discrepancy minimization algorithms for
domain adaptation.
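For reference, the discrepancy distance at the center of the paper can be stated compactly. Writing $\mathcal{L}_D(h', h) = \mathbb{E}_{x \sim D}[L(h'(x), h(x))]$ for the expected loss between two hypotheses $h, h' \in H$ under a distribution $D$, the discrepancy between the target distribution $P$ and the source distribution $Q$ is (up to minor notational differences from the paper)

```latex
% Discrepancy distance tailored to a loss function L and hypothesis set H
\mathrm{disc}_L(P, Q) \;=\; \max_{h,\, h' \in H}
  \bigl|\, \mathcal{L}_P(h', h) - \mathcal{L}_Q(h', h) \,\bigr|
```

As a minimal illustration of how this quantity can be estimated from finite samples, the sketch below computes the empirical discrepancy for the 0-1 loss over a small finite class of threshold classifiers. This is a hypothetical toy example, not the paper's code; the function names and data are assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): empirical discrepancy distance
# for the 0-1 loss over a finite class of 1-D threshold classifiers
# h_t(x) = 1[x >= t]. All names and data below are illustrative.

def predictions(thresholds, x):
    # Row i holds the labels assigned by threshold classifier h_{t_i}.
    return (x[None, :] >= thresholds[:, None]).astype(int)

def empirical_discrepancy(x_src, x_tgt, thresholds):
    p_src = predictions(thresholds, x_src)  # shape (|H|, n_src)
    p_tgt = predictions(thresholds, x_tgt)  # shape (|H|, n_tgt)
    # Mean 0-1 disagreement of every hypothesis pair (h, h') on each sample.
    dis_src = (p_src[:, None, :] != p_src[None, :, :]).mean(axis=2)
    dis_tgt = (p_tgt[:, None, :] != p_tgt[None, :, :]).mean(axis=2)
    # disc(P, Q) = max over pairs |L_src(h, h') - L_tgt(h, h')|.
    return np.abs(dis_src - dis_tgt).max()

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, 200)   # source sample
x_tgt = rng.normal(0.5, 1.0, 200)   # covariate-shifted target sample
print(empirical_discrepancy(x_src, x_tgt, np.linspace(-2.0, 2.0, 21)))
```

For richer losses and infinite hypothesis sets, exact maximization is no longer tractable this way; the paper's Rademacher complexity bounds quantify how well the empirical discrepancy computed on finite samples approximates the true one.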
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
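For context, the $(L_0, L_1)$-smoothness in the title is, in its standard form (Zhang et al., 2020), the generalized smoothness condition for twice-differentiable $f$ shown below; the paper may work with an equivalent first-order variant.

```latex
% (L_0, L_1)-smoothness: local smoothness may grow with the gradient norm
\|\nabla^2 f(x)\| \;\le\; L_0 + L_1\, \|\nabla f(x)\|
```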
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Function Extrapolation with Neural Networks and Its Application for Manifolds [1.4579344926652844]
We train a neural network to incorporate prior knowledge of a function.
By carefully analyzing the problem, we obtain a bound on the error over the extrapolation domain.
arXiv Detail & Related papers (2024-05-17T06:15:26Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of the arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- Best-Effort Adaptation [62.00856290846247]
We present a new theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights.
We show how these bounds can guide the design of learning algorithms that we discuss in detail.
We report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms.
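In standard notation, the sample reweighting methods analyzed there minimize a weighted empirical risk of the generic form below (the paper's precise objective and constraints may differ); the quoted bounds hold uniformly over the weight vector $\mathbf{w}$.

```latex
% Generic weighted empirical risk over a labeled sample (x_1, y_1), ..., (x_m, y_m)
\widehat{R}_{\mathbf{w}}(h) \;=\; \sum_{i=1}^{m} w_i\, L\bigl(h(x_i), y_i\bigr),
\qquad w_i \ge 0, \quad \sum_{i=1}^{m} w_i = 1
```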
arXiv Detail & Related papers (2023-05-10T00:09:07Z)
- Error-Aware Spatial Ensembles for Video Frame Interpolation [50.63021118973639]
Video frame interpolation (VFI) algorithms have improved considerably in recent years due to unprecedented progress in both data-driven algorithms and their implementations.
Recent research has introduced advanced motion estimation or novel warping methods as the means to address challenging VFI scenarios.
This work introduces such a solution. By closely examining the correlation between optical flow and interpolation error (IE), the paper proposes novel error prediction metrics that partition the middle frame into distinct regions corresponding to different IE levels.
arXiv Detail & Related papers (2022-07-25T16:15:38Z)
- Domain Generalization via Domain-based Covariance Minimization [4.414778226415752]
We propose a novel variance measurement for multiple domains so as to minimize the difference between conditional distributions across domains.
We show that on small-scale datasets we achieve better quantitative results, indicating better generalization performance on unseen test datasets.
arXiv Detail & Related papers (2021-10-12T19:30:15Z)
- f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences.
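The variational characterization referred to above is, in its standard form (Nguyen et al., 2010), the Fenchel-dual representation of an f-divergence, where $f^*$ is the convex conjugate of $f$; equality holds when $T$ ranges over all measurable functions, and restricting $T$ to a hypothesis class yields the lower bound exploited by adversarial training.

```latex
% Variational (Fenchel-dual) characterization of an f-divergence
D_f(P \,\|\, Q) \;=\; \sup_{T}\;
  \mathbb{E}_{x \sim P}\bigl[T(x)\bigr] - \mathbb{E}_{x \sim Q}\bigl[f^{*}(T(x))\bigr]
```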
arXiv Detail & Related papers (2021-06-21T18:21:09Z)
- Discrepancy-Based Active Learning for Domain Adaptation [7.283533791778357]
The goal of the paper is to design active learning strategies which lead to domain adaptation under an assumption of domain shift.
We derive bounds for such active learning strategies in terms of Rademacher average and localized discrepancy for general loss functions.
We provide improved versions of the algorithms to address the case of large data sets.
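For reference, the Rademacher average appearing in these bounds is the standard empirical complexity measure, where the $\sigma_i$ are i.i.d. uniform $\{-1, +1\}$ variables:

```latex
% Empirical Rademacher complexity of a class H on a sample S = (x_1, ..., x_m)
\widehat{\mathfrak{R}}_S(H) \;=\; \mathbb{E}_{\boldsymbol{\sigma}}
  \Bigl[ \sup_{h \in H} \frac{1}{m} \sum_{i=1}^{m} \sigma_i\, h(x_i) \Bigr]
```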
arXiv Detail & Related papers (2021-03-05T15:36:48Z)
- Adversarial Weighting for Domain Adaptation in Regression [4.34858896385326]
We present a novel instance-based approach to handle regression tasks in the context of supervised domain adaptation.
We develop an adversarial network algorithm which learns both the source weighting scheme and the task in one feed-forward gradient descent.
arXiv Detail & Related papers (2020-06-15T09:44:04Z)