Best-Effort Adaptation
- URL: http://arxiv.org/abs/2305.05816v1
- Date: Wed, 10 May 2023 00:09:07 GMT
- Title: Best-Effort Adaptation
- Authors: Pranjal Awasthi, Corinna Cortes, Mehryar Mohri
- Abstract summary: We present a new theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights.
We show how these bounds can guide the design of learning algorithms that we discuss in detail.
We report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms.
- Score: 62.00856290846247
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a problem of best-effort adaptation motivated by several
applications and considerations. It consists of determining an accurate
predictor for a target domain for which only a moderate number of labeled
samples is available, while leveraging information from another domain for
which substantially more labeled samples are at one's disposal. We present a
new and
general discrepancy-based theoretical analysis of sample reweighting methods,
including bounds holding uniformly over the weights. We show how these bounds
can guide the design of learning algorithms that we discuss in detail. We
further show that our learning guarantees and algorithms provide improved
solutions for standard domain adaptation problems, for which few or no labeled
samples are available from the target domain. Finally, we report the results of
a series of experiments demonstrating the effectiveness of our best-effort
adaptation and domain adaptation algorithms, as well as comparisons with
several baselines. We also discuss how our analysis can benefit the design of
principled solutions for fine-tuning.
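As a concrete illustration of this setup (a minimal sketch, not the authors' algorithm), sample reweighting amounts to weighted empirical risk minimization over the pooled source and target samples: each source example receives a nonnegative weight, and the paper's bounds hold uniformly over the choice of those weights. The function name, the ridge model, and the fixed uniform source weights below are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_weighted_predictor(Xs, ys, Xt, yt, q):
    """Weighted ERM over pooled source and target samples.

    Xs, ys: abundant labeled source data.
    Xt, yt: the moderate labeled target sample.
    q: nonnegative per-source-example weights; choosing these is the
       crux of reweighting methods (here they are simply given).
    """
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    # Target examples keep unit weight; source examples are reweighted by q.
    w = np.concatenate([q, np.ones(len(yt))])
    model = Ridge(alpha=1.0)
    model.fit(X, y, sample_weight=w)
    return model

# Synthetic example: a slightly shifted source domain.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(1000, 5))
ys = Xs @ np.ones(5) + rng.normal(size=1000)
Xt = rng.normal(0.5, 1.0, size=(50, 5))
yt = Xt @ np.ones(5) + rng.normal(size=50)
q = np.full(len(ys), 0.1)  # fixed uniform weights, purely for illustration
predictor = fit_weighted_predictor(Xs, ys, Xt, yt, q)
```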
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on the equivalence of existing baseline-correction methods in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Globally-Optimal Greedy Experiment Selection for Active Sequential Estimation [1.1530723302736279]
We study the problem of active sequential estimation, which involves adaptively selecting experiments for sequentially collected data.
The goal is to design experiment selection rules for more accurate model estimation.
We propose a class of greedy experiment selection methods and provide a statistical analysis of the maximum likelihood estimator.
arXiv Detail & Related papers (2024-02-13T17:09:29Z)
- Variational Disentanglement for Domain Generalization [68.85458536180437]
We propose to tackle the problem of domain generalization with an effective framework named the Variational Disentanglement Network (VDN).
VDN is capable of disentangling domain-specific features from task-specific features, where the task-specific features are expected to generalize better to unseen but related test data.
arXiv Detail & Related papers (2021-09-13T09:55:32Z)
- Adaptive Sampling for Minimax Fair Classification [40.936345085421955]
We propose an adaptive sampling algorithm based on the principle of optimism, and derive theoretical bounds on its performance.
By deriving algorithm-independent lower bounds for a specific class of problems, we show that the performance achieved by our adaptive scheme cannot be improved in general.
arXiv Detail & Related papers (2021-03-01T04:58:27Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms outperform the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- A Theory of Multiple-Source Adaptation with Limited Target Labeled Data [66.53679520072978]
We show that a new family of algorithms based on model selection ideas benefits from very favorable guarantees in this scenario.
We also report the results of several experiments with our algorithms that demonstrate their practical effectiveness.
arXiv Detail & Related papers (2020-07-19T19:34:48Z)
- Adversarial Weighting for Domain Adaptation in Regression [4.34858896385326]
We present a novel instance-based approach to handle regression tasks in the context of supervised domain adaptation.
We develop an adversarial network algorithm that learns both the source weighting scheme and the task in a single feed-forward gradient descent.
arXiv Detail & Related papers (2020-06-15T09:44:04Z)
- Domain Adaptation: Learning Bounds and Algorithms [80.85426994513541]
We introduce a novel distance between distributions, the discrepancy distance, which is tailored to adaptation problems with arbitrary loss functions.
We derive novel generalization bounds for domain adaptation for a wide family of loss functions.
We also present a series of novel adaptation bounds for large classes of regularization-based algorithms.
arXiv Detail & Related papers (2009-02-19T18:42:16Z)
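For reference, the discrepancy distance introduced in that last entry is standardly defined as follows, for a hypothesis set $H$, a loss function $L$, and distributions $P$ and $Q$ (notation ours):

```latex
\mathrm{disc}_L(P, Q) \;=\;
  \sup_{h, h' \in H}
  \Bigl|\, \mathbb{E}_{x \sim P}\bigl[L(h'(x), h(x))\bigr]
         - \mathbb{E}_{x \sim Q}\bigl[L(h'(x), h(x))\bigr] \Bigr|
```

Because it is parameterized by both the loss and the hypothesis set, it yields adaptation guarantees for arbitrary loss functions, which is the property the summary above refers to.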
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.