Automatic Cross-Domain Transfer Learning for Linear Regression
- URL: http://arxiv.org/abs/2005.04088v1
- Date: Fri, 8 May 2020 15:05:37 GMT
- Title: Automatic Cross-Domain Transfer Learning for Linear Regression
- Authors: Liu Xinshun, He Xin, Mao Hui, Liu Jing, Lai Weizhong, Ye Qingwen
- Abstract summary: This paper helps to extend the capability of transfer learning for linear regression problems.
For normal datasets, we assume that some latent domain information is available for transfer learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning research attempts to make model induction transferable
across different domains. Such methods assume that the domain to which each
instance belongs is known. This paper extends the capability of transfer
learning for linear regression problems to situations where the domain
information is uncertain or unknown; in fact, the
framework can be extended to classification problems. For normal datasets, we
assume that some latent domain information is available for transfer learning.
The instances in each domain are modelled by different regression parameters.
We infer this domain information from the distribution of the regression
coefficients relating the explanatory variable $x$ to the response variable
$y$, based on a Dirichlet process, which is more reasonable than assuming
fixed, known domain labels. As a result, we
transfer not only variable $x$ as usual but also variable $y$, which is
challenging since the testing data have no response values. Previous work
mainly addresses the problem via pseudo-labelling based on transductive
learning, which introduces serious bias. We provide a novel framework for analysing the
problem and considering this general situation: the joint distribution of
variable $x$ and variable $y$. Furthermore, our method controls the bias well
compared with previous work. We perform linear regression on the new feature
space that consists of the different latent domains and the target domain,
which is derived from the testing data. The experimental results show that the proposed model
performs well on real datasets.
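To make the modelling step concrete, here is a minimal sketch, not the authors' exact algorithm: a truncated Dirichlet-process mixture over the joint distribution of $(x, y)$ assigns each training instance to a latent domain, and a separate linear regression is then fit per inferred domain. The toy data, the component cap, and the scikit-learn classes are illustrative assumptions.

```python
# Minimal sketch, NOT the paper's exact algorithm: infer latent domains from
# the joint distribution of (x, y) with a truncated Dirichlet-process mixture,
# then fit one linear regression per inferred domain.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Toy data: two latent domains with different regression coefficients.
x1 = rng.normal(0.0, 1.0, (200, 1))
y1 = 3.0 * x1[:, 0] + rng.normal(0.0, 0.1, 200)
x2 = rng.normal(4.0, 1.0, (200, 1))
y2 = -2.0 * x2[:, 0] + rng.normal(0.0, 0.1, 200)
X = np.vstack([x1, x2])
y = np.concatenate([y1, y2])

# A truncated DP prior lets the data decide how many of the candidate
# components are actually used; we cluster the joint (x, y), not x alone.
dp = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
)
domains = dp.fit_predict(np.column_stack([X, y]))

# One linear regression per inferred latent domain.
models = {k: LinearRegression().fit(X[domains == k], y[domains == k])
          for k in np.unique(domains)}
print({k: m.coef_.round(2) for k, m in models.items()})
```

At test time the response is unknown, which is exactly the difficulty the paper addresses; a naive fallback would assign test points by the $x$-marginal of each component.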
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
Built upon a proposed selective balanced sampling strategy, our method TALLY achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another; a toy sketch of this mixing idea follows this entry.
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
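A toy, hypothetical sketch of the representation-mixing idea in the entry above: assuming features are already disentangled into a semantic part and a domain-nuisance part (how TALLY obtains that split is not shown here), an augmented example combines one example's semantics with another's nuisances. All names and shapes are illustrative.

```python
import numpy as np

def mix_representations(sem_a: np.ndarray, nuis_b: np.ndarray) -> np.ndarray:
    """Pair the semantic part of example A with the domain-associated
    nuisance part of example B to synthesize a new training example."""
    return np.concatenate([sem_a, nuis_b])

# Hypothetical disentangled features: first half semantic, second half nuisance.
feat_a = np.array([0.9, 0.1, 0.3, 0.7])  # example from domain 1
feat_b = np.array([0.2, 0.8, 0.6, 0.4])  # example from domain 2
half = feat_a.size // 2
augmented = mix_representations(feat_a[:half], feat_b[half:])
print(augmented)  # semantics of A, nuisances of B
```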
- The Power and Limitation of Pretraining-Finetuning for Linear Regression under Covariate Shift [127.21287240963859]
We investigate a transfer learning approach with pretraining on the source data and finetuning based on the target data.
For a large class of linear regression instances, transfer learning with $O(N^2)$ source data is as effective as supervised learning with $N$ target data; a toy illustration of the pretrain-then-finetune recipe follows this entry.
arXiv Detail & Related papers (2022-08-03T05:59:49Z)
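A toy illustration of the pretrain-then-finetune recipe for linear regression under covariate shift. The ridge penalty centred at the pretrained weights is one common way to finetune and is an assumption here, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source and target share the true regressor but differ in covariate
# distribution (covariate shift): plentiful source, scarce target.
w_true = np.array([2.0, -1.0])
X_src = rng.normal(0.0, 1.0, (1000, 2))
y_src = X_src @ w_true + rng.normal(0.0, 0.1, 1000)
X_tgt = rng.normal(3.0, 1.0, (20, 2))
y_tgt = X_tgt @ w_true + rng.normal(0.0, 0.1, 20)

# Pretrain: ordinary least squares on the source data.
w_pre, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Finetune: least squares on the target data with a ridge penalty centred
# at the pretrained weights, i.e. minimize ||Xw - y||^2 + lam * ||w - w_pre||^2.
lam = 1.0
A = X_tgt.T @ X_tgt + lam * np.eye(2)
b = X_tgt.T @ y_tgt + lam * w_pre
w_finetuned = np.linalg.solve(A, b)
print(w_finetuned)
```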
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning will lose certain information about class due to this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- Quantifying and Improving Transferability in Domain Generalization [53.16289325326505]
Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world.
We formally define transferability that one can quantify and compute in domain generalization.
We propose a new algorithm for learning transferable features and test it over various benchmark datasets.
arXiv Detail & Related papers (2021-06-07T14:04:32Z)
- An Improved Transfer Model: Randomized Transferable Machine [32.50263074872975]
We propose a new transfer model called the Randomized Transferable Machine (RTM) to handle small divergences between domains.
Specifically, we work on the new source and target data produced by existing feature-based transfer methods.
In principle, the more corruptions are made, the higher the probability that the new target data can be covered by the constructed source-data populations; a minimal sketch of this idea follows this entry.
arXiv Detail & Related papers (2020-11-27T09:37:01Z)
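A minimal sketch of the corruption idea in the entry above (the corruption types and rates are illustrative assumptions, not the paper's construction): randomly corrupting the source data many times widens the constructed source population, making it likelier to cover the target data.

```python
import numpy as np

rng = np.random.default_rng(2)

def corrupt(X: np.ndarray, n_corruptions: int, noise: float = 0.3) -> np.ndarray:
    """Stack the source data with randomly corrupted copies of itself."""
    copies = [X]
    for _ in range(n_corruptions):
        copies.append(X + rng.normal(0.0, noise, X.shape))  # additive noise
        copies.append(X * rng.binomial(1, 0.9, X.shape))    # random feature dropout
    return np.vstack(copies)

X_src = rng.normal(0.0, 1.0, (500, 4))
population = corrupt(X_src, n_corruptions=10)  # 500 * (1 + 2*10) rows
print(population.shape)
```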
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when adapting the model.
This work tackles a practical setting where only a trained source model is available and asks how such a model can be used effectively, without the source data, to solve UDA problems; a minimal sketch of this source-free setting follows this entry.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
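A minimal sketch of the source-free setting in the entry above, assuming entropy minimization as the adaptation objective (a common choice for this setting, not necessarily the paper's full method): the trained source hypothesis (the classifier head) stays frozen while only the feature extractor adapts to unlabeled target data.

```python
import torch
import torch.nn as nn

# Frozen source hypothesis (classifier head); only the features adapt.
feature_extractor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
classifier_head = nn.Linear(32, 4)  # pretend this was trained on the source
for p in classifier_head.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(feature_extractor.parameters(), lr=1e-3)
x_target = torch.randn(64, 16)  # unlabeled target batch (toy data)

for _ in range(10):
    probs = classifier_head(feature_extractor(x_target)).softmax(dim=1)
    # Encourage confident predictions on the target without any labels.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
```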
This list is automatically generated from the titles and abstracts of the papers in this site.