A Class of Geometric Structures in Transfer Learning: Minimax Bounds and
Optimality
- URL: http://arxiv.org/abs/2202.11685v1
- Date: Wed, 23 Feb 2022 18:47:53 GMT
- Title: A Class of Geometric Structures in Transfer Learning: Minimax Bounds and
Optimality
- Authors: Xuhui Zhang and Jose Blanchet and Soumyadip Ghosh and Mark S.
Squillante
- Abstract summary: We exploit the geometric structure of the source and target domains for transfer learning.
Our proposed estimator outperforms state-of-the-art transfer learning methods in both moderate- and high-dimensional settings.
- Score: 5.2172436904905535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of transfer learning, observing that previous efforts to
understand its information-theoretic limits do not fully exploit the geometric
structure of the source and target domains. In contrast, our study first
illustrates the benefits of incorporating a natural geometric structure within
a linear regression model, which corresponds to the generalized eigenvalue
problem formed by the Gram matrices of both domains. We next establish a
finite-sample minimax lower bound, propose a refined model interpolation
estimator that enjoys a matching upper bound, and then extend our framework to
multiple source domains and generalized linear models. Surprisingly, as long as
information is available on the distance between the source and target
parameters, negative transfer does not occur. Simulation studies show that our
proposed interpolation estimator outperforms state-of-the-art transfer learning
methods in both moderate- and high-dimensional settings.
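The geometric structure referenced above is the generalized eigenvalue problem Sigma_S w = lambda Sigma_T w formed by the Gram matrices Sigma_S and Sigma_T of the source and target designs. The sketch below illustrates one estimator in this spirit; the variance proxies, the weight rule, and the inputs delta (an assumed known bound on ||beta_S - beta_T||) and sigma (noise level) are illustrative assumptions, not the authors' exact construction.

```python
# Minimal sketch of a model-interpolation estimator for linear-regression
# transfer learning. Illustrative only; not the paper's exact estimator.
import numpy as np
from scipy.linalg import eigh

def gram_geometry(X_S, X_T):
    """Generalized eigenvalue problem Sigma_S w = lambda * Sigma_T w,
    formed by the normalized Gram matrices of the two designs."""
    Sigma_S = X_S.T @ X_S / len(X_S)
    Sigma_T = X_T.T @ X_T / len(X_T)
    # eigh(A, B) solves the symmetric-definite problem A w = lambda B w
    return eigh(Sigma_S, Sigma_T)

def interpolation_estimator(X_S, y_S, X_T, y_T, delta, sigma=1.0):
    """Convex combination of per-domain OLS fits; alpha minimizes a quadratic
    upper bound on target risk given ||beta_S - beta_T|| <= delta."""
    beta_S = np.linalg.lstsq(X_S, y_S, rcond=None)[0]
    beta_T = np.linalg.lstsq(X_T, y_T, rcond=None)[0]
    # Variance proxies for the OLS estimates: sigma^2 * tr((X'X)^{-1}).
    v_S = sigma**2 * np.trace(np.linalg.inv(X_S.T @ X_S))
    v_T = sigma**2 * np.trace(np.linalg.inv(X_T.T @ X_T))
    # argmin over alpha of alpha^2 * (v_S + delta^2) + (1 - alpha)^2 * v_T
    alpha = v_T / (v_T + v_S + delta**2)
    return alpha * beta_S + (1.0 - alpha) * beta_T
```

In this bound the weight alpha shrinks toward zero as delta grows, so the combined estimator is never worse than the target-only fit, consistent with the no-negative-transfer claim.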
Related papers
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Transfer learning with affine model transformation [18.13383101189326]
This paper presents a general class of transfer learning regression methods called affine model transfer.
It is shown that the affine model transfer broadly encompasses various existing methods, including the most common procedure based on neural feature extractors.
arXiv Detail & Related papers (2022-10-18T10:50:24Z)
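One simple linear instance of an affine transfer is f_T(x) = a + b * f_S(x) + w'x, where the source predictor f_S is kept frozen. The sketch below fits such a model by least squares; the feature construction is an illustrative assumption, not the paper's general formulation.

```python
# Minimal linear instance of affine model transfer (illustrative assumption).
import numpy as np

def affine_transfer_fit(f_source, X_T, y_T):
    """Least-squares fit of f_T(x) = a + b * f_S(x) + w'x on target data,
    with the source predictor f_source kept frozen."""
    fs = np.asarray(f_source(X_T)).reshape(-1, 1)   # frozen source predictions
    Z = np.hstack([np.ones((len(X_T), 1)), fs, X_T])
    coef, *_ = np.linalg.lstsq(Z, y_T, rcond=None)

    def predict(X):
        fz = np.asarray(f_source(X)).reshape(-1, 1)
        return np.hstack([np.ones((len(X), 1)), fz, X]) @ coef

    return predict
```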
- Structured Optimal Variational Inference for Dynamic Latent Space Models [16.531262817315696]
We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products plus the intercept of the latent positions.
To balance posterior inference and computational scalability, we consider a structured mean-field variational inference framework.
arXiv Detail & Related papers (2022-09-29T22:10:42Z)
- On Hypothesis Transfer Learning of Functional Linear Models [8.557392136621894]
We study transfer learning (TL) for functional linear regression (FLR) under the Reproducing Kernel Hilbert Space (RKHS) framework.
We measure the similarity across tasks using the RKHS distance, allowing the type of information being transferred to be tied to the properties of the imposed RKHS.
Two algorithms are proposed: one conducts the transfer when positive sources are known, while the other leverages aggregation to achieve robust transfer without prior information about the sources.
arXiv Detail & Related papers (2022-06-09T04:50:16Z)
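The aggregation idea can be sketched in a finite-dimensional linear analogue: fit one offset-style candidate per source plus a target-only baseline, then combine them with exponential weights computed on held-out target data. The offset construction and the parameters lam and eta are illustrative assumptions; the paper's actual estimators operate in an RKHS.

```python
# Sketch of aggregation-based robust transfer (linear analogue, not the
# paper's RKHS construction).
import numpy as np

def offset_candidate(X_s, y_s, X_t, y_t, lam=1e-2):
    """Ridge fit on one source, then a ridge correction on target residuals."""
    d = X_s.shape[1]
    beta_s = np.linalg.solve(X_s.T @ X_s + lam * np.eye(d), X_s.T @ y_s)
    resid = y_t - X_t @ beta_s
    delta = np.linalg.solve(X_t.T @ X_t + lam * np.eye(d), X_t.T @ resid)
    return beta_s + delta

def aggregate(candidates, X_val, y_val, eta=1.0):
    """Exponential-weights aggregation by held-out target squared error."""
    losses = np.array([np.mean((y_val - X_val @ b) ** 2) for b in candidates])
    w = np.exp(-eta * (losses - losses.min()))
    w /= w.sum()
    return sum(wi * b for wi, b in zip(w, candidates))
```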
- Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore this phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of the data distributions, and the model's generalisation capabilities.
Our analysis focuses on how nonlinear feature transformations, which map the original data into higher- and possibly infinite-dimensional spaces, influence the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Inverse Learning of Symmetries [71.62109774068064]
We learn the symmetry transformation with a model consisting of two latent subspaces.
Our approach is based on the deep information bottleneck in combination with a continuous mutual information regulariser.
Our model outperforms state-of-the-art methods on artificial and molecular datasets.
arXiv Detail & Related papers (2020-02-07T13:48:52Z)