On Hypothesis Transfer Learning of Functional Linear Models
- URL: http://arxiv.org/abs/2206.04277v4
- Date: Thu, 22 Feb 2024 21:09:01 GMT
- Title: On Hypothesis Transfer Learning of Functional Linear Models
- Authors: Haotian Lin, Matthew Reimherr
- Abstract summary: We study transfer learning (TL) for functional linear regression (FLR) under the Reproducing Kernel Hilbert Space (RKHS) framework.
We measure the similarity across tasks using the RKHS distance, allowing the type of information transferred to be tied to the properties of the imposed RKHS.
Two algorithms are proposed: one conducts the transfer when positive sources are known, while the other leverages aggregation to achieve robust transfer without prior information about the sources.
- Score: 8.557392136621894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study transfer learning (TL) for functional linear regression
(FLR) under the Reproducing Kernel Hilbert Space (RKHS) framework, observing
that the TL techniques developed for existing high-dimensional linear regression
are not compatible with truncation-based FLR methods, as functional data are
intrinsically infinite-dimensional and generated by smooth underlying
processes. We measure the similarity across tasks via the RKHS distance, allowing
the type of information transferred to be tied to the properties of the imposed
RKHS. Building on the hypothesis offset transfer learning paradigm, two
algorithms are proposed: one conducts the transfer when positive sources are
known, while the other leverages aggregation techniques to achieve robust
transfer without prior information about the sources. We establish lower bounds
for this learning problem and show the proposed algorithms enjoy a matching
asymptotic upper bound. These analyses provide statistical insights into
factors that contribute to the dynamics of the transfer. We also extend the
results to functional generalized linear models. The effectiveness of the
proposed algorithms is demonstrated on extensive synthetic data as well as a
financial data application.
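To make the offset-transfer idea concrete, here is a minimal sketch in which discretized functional covariates and plain ridge regression stand in for the paper's RKHS estimator: a coefficient function is first fit on the (large) source sample, and only the offset between target and source coefficients is then fit on the target residuals with heavier shrinkage. All names (fit_flr_ridge, offset_transfer, lam_off, ...) are illustrative assumptions, not the authors' implementation, and the aggregation step used when helpful sources are unknown is omitted.

```python
# Hedged sketch of hypothesis offset transfer for functional linear regression.
# Curves are discretized on a common grid; plain ridge regression stands in for
# the paper's RKHS (kernel ridge) estimator. Names are illustrative only.
import numpy as np

def fit_flr_ridge(X, y, lam, dt):
    """Ridge estimate of the coefficient function beta on the grid.
    Model: y_i ~ dt * sum_t X_i(t) * beta(t), i.e. y ~ (X * dt) @ beta."""
    Z = X * dt                                   # quadrature weight folded into design
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

def offset_transfer(X_src, y_src, X_tgt, y_tgt, dt, lam_src=1e-2, lam_off=1e-1):
    """Two-step transfer: fit on the source, then fit only the offset
    beta_tgt - beta_src on the target residuals (stronger shrinkage)."""
    beta_src = fit_flr_ridge(X_src, y_src, lam_src, dt)
    resid = y_tgt - (X_tgt * dt) @ beta_src          # what the source model misses
    offset = fit_flr_ridge(X_tgt, resid, lam_off, dt)
    return beta_src + offset

# Toy usage: Brownian-motion-like curves on [0, 1], a shared coefficient plus a shift.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 100); dt = grid[1] - grid[0]
def curves(n): return np.cumsum(rng.normal(size=(n, grid.size)), axis=1) * np.sqrt(dt)
beta_true = np.sin(2 * np.pi * grid)
X_src, X_tgt = curves(500), curves(30)
y_src = (X_src * dt) @ beta_true + 0.1 * rng.normal(size=500)
y_tgt = (X_tgt * dt) @ (beta_true + 0.2 * grid) + 0.1 * rng.normal(size=30)
beta_hat = offset_transfer(X_src, y_src, X_tgt, y_tgt, dt)
```

The heavier penalty on the offset encodes the assumption that the target coefficient lies close to the source one in the chosen norm, which is the role the RKHS distance plays in the paper's analysis.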
Related papers
- Adaptive debiased SGD in high-dimensional GLMs with streaming data [4.704144189806667]
We introduce a novel approach to online inference in high-dimensional generalized linear models.
Our method operates in a single-pass mode, significantly reducing both time and space complexity.
We demonstrate that our method, termed the Approximated Debiased Lasso (ADL), not only mitigates the need for the bounded individual probability condition but also significantly improves numerical performance.
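As a loose illustration of the single-pass streaming setting only (not the ADL estimator itself), the following hedged sketch runs one pass of lasso-penalized proximal SGD for logistic regression; the step-size schedule and all names are assumptions.

```python
# Hedged sketch: single-pass, lasso-penalized SGD for a logistic-regression GLM.
# Illustrates the streaming, one-pass setting; it is not the Approximated
# Debiased Lasso (ADL) estimator from the paper.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_sparse_logistic(stream, p, lam=0.01, lr=0.1):
    """One pass over (x, y) pairs; the proximal step keeps the iterate sparse."""
    beta = np.zeros(p)
    for i, (x, y) in enumerate(stream, start=1):
        step = lr / np.sqrt(i)                              # decaying step size
        grad = (1.0 / (1.0 + np.exp(-x @ beta)) - y) * x    # logistic-loss gradient
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy stream: sparse ground truth, 2000 observations seen exactly once.
rng = np.random.default_rng(1)
p = 200
beta_true = np.zeros(p); beta_true[:5] = 1.0
def stream():
    for _ in range(2000):
        x = rng.normal(size=p)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-x @ beta_true)))
        yield x, y
beta_hat = online_sparse_logistic(stream(), p)
```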
arXiv Detail & Related papers (2024-05-28T15:36:48Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\left(\ln(T) / T^{1 - \frac{1}{\alpha}}\right)$.
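A minimal sketch of the server-side federated AdaGrad update described above, ignoring the over-the-air (analog aggregation and channel noise) aspects the paper actually analyzes; all names and constants are illustrative.

```python
# Hedged sketch of a federated AdaGrad round: clients send local gradients,
# the server averages them and applies the AdaGrad rule. Over-the-air effects
# are ignored; names are illustrative.
import numpy as np

def federated_adagrad(model, client_grads, state, lr=0.1, eps=1e-8):
    """One communication round. `state` accumulates squared aggregated gradients."""
    g = np.mean(client_grads, axis=0)            # server-side aggregation
    state += g ** 2                              # AdaGrad accumulator
    return model - lr * g / (np.sqrt(state) + eps), state

# Toy usage: 5 clients, each reporting a noisy gradient of ||model||^2 / 2.
rng = np.random.default_rng(2)
model, state = rng.normal(size=10), np.zeros(10)
for _ in range(100):
    grads = [model + 0.1 * rng.normal(size=10) for _ in range(5)]
    model, state = federated_adagrad(model, grads, state)
```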
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Resource-Adaptive Newton's Method for Distributed Learning [16.588456212160928]
This paper introduces a novel and efficient algorithm called RANL, which overcomes the limitations of Newton's method.
Unlike traditional first-order methods, RANL exhibits remarkable independence from the condition number of the problem.
arXiv Detail & Related papers (2023-08-20T04:01:30Z) - Understanding Augmentation-based Self-Supervised Representation Learning
via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z) - Distributed Gradient Descent for Functional Learning [9.81463654618448]
We propose a novel distributed gradient descent functional learning (DGDFL) algorithm to tackle functional data across numerous local machines (processors) in the framework of reproducing kernel Hilbert space.
Under mild conditions, confidence-based optimal learning rates of DGDFL are obtained without the saturation boundary on the regularity index suffered in previous works in functional regression.
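A hedged sketch of the divide-and-conquer idea: each machine runs kernel gradient descent on its local functional data and the predictions of the local estimators are averaged. The Gaussian kernel on L2 distances between discretized curves is an assumed stand-in, not the kernel or algorithmic details of DGDFL.

```python
# Hedged sketch of distributed gradient descent in an RKHS over functional inputs.
import numpy as np

def gram(A, B, dt, sigma=1.0):
    """Gaussian kernel between discretized curves, using L2(dt) distances."""
    d2 = dt * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def local_kgd(X, y, dt, lr=0.5, iters=200):
    """Gradient descent on the local least-squares loss in the RKHS."""
    K = gram(X, X, dt)
    alpha = np.zeros(len(y))
    for _ in range(iters):
        alpha -= lr * (K @ alpha - y) / len(y)   # functional gradient step
    return alpha

def dgdfl_predict(machines, X_new, dt):
    """Average the local kernel estimators evaluated at new curves."""
    preds = [gram(X_new, X, dt) @ local_kgd(X, y, dt) for X, y in machines]
    return np.mean(preds, axis=0)

# Toy usage: two machines, curves on a length-50 grid (all illustrative).
rng = np.random.default_rng(4)
dt = 1.0 / 50
def make(n):
    X = np.cumsum(rng.normal(size=(n, 50)), axis=1) * np.sqrt(dt)
    return X, np.sin(dt * X.sum(axis=1))
machines = [make(40), make(40)]
y_hat = dgdfl_predict(machines, make(10)[0], dt)
```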
arXiv Detail & Related papers (2023-05-12T12:15:42Z) - Learning Functional Transduction [9.926231893220063]
We show that transductive regression principles can be meta-learned through gradient descent to form efficient in-context neural approximators.
We demonstrate the benefit of our meta-learned transductive approach to model complex physical systems influenced by varying external factors with little data.
arXiv Detail & Related papers (2023-02-01T09:14:28Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Offline Reinforcement Learning with Differentiable Function
Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims to optimize decision-making strategies using historical data, has been extensively applied in real-life applications.
We take a step by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
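A rough sketch of pessimistic fitted Q-learning on an offline dataset, using linear features as a stand-in for the differentiable function class studied in the paper; the coverage-based uncertainty penalty and all constants are illustrative assumptions.

```python
# Hedged sketch: pessimistic fitted Q-iteration with linear features.
import numpy as np

def pessimistic_fqi(phi, rewards, phi_next_all, gamma=0.99,
                    iters=50, beta=1.0, reg=1.0):
    """phi: (n, d) features of observed (s, a) pairs.
    phi_next_all: (n, A, d) features of each next state paired with every action."""
    n, d = phi.shape
    Lam = phi.T @ phi + reg * np.eye(d)              # coverage (Gram) matrix
    Lam_inv = np.linalg.inv(Lam)
    w = np.zeros(d)
    for _ in range(iters):
        # Pessimistic next-state value: max over actions of (Q - uncertainty bonus).
        q_next = phi_next_all @ w
        bonus = beta * np.sqrt(np.einsum('nad,de,nae->na',
                                         phi_next_all, Lam_inv, phi_next_all))
        v_next = np.max(q_next - bonus, axis=1)
        targets = rewards + gamma * v_next
        w = Lam_inv @ (phi.T @ targets)              # ridge-regression refit
    return w
```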
arXiv Detail & Related papers (2022-10-03T07:59:42Z) - Nonlinear Level Set Learning for Function Approximation on Sparse Data
with Applications to Parametric Differential Equations [6.184270985214254]
"Nonlinear Level set Learning" (NLL) approach is presented for the pointwise prediction of functions which have been sparsely sampled.
The proposed algorithm effectively reduces the input dimension to the theoretical lower bound with minor accuracy loss.
Experiments and applications are presented which compare this modified NLL with the original NLL and the Active Subspaces (AS) method.
arXiv Detail & Related papers (2021-04-29T01:54:05Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
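To illustrate the CP-decomposed parameterization, here is a hedged sketch of a Rank-R layer for matrix-shaped inputs, where each hidden unit's weight matrix is a sum of R rank-1 factors so inputs never need to be vectorized; shapes and names are assumptions, not the authors' implementation.

```python
# Hedged sketch of a Rank-R, CP-factorized layer for matrix-shaped inputs:
# each hidden unit's weight is sum_r a_r b_r^T, so <W, X> = sum_r a_r^T X b_r.
import numpy as np

rng = np.random.default_rng(3)

class RankRLayer:
    def __init__(self, in_shape, hidden, rank):
        I, J = in_shape
        self.A = rng.normal(scale=0.1, size=(hidden, rank, I))   # mode-1 factors
        self.B = rng.normal(scale=0.1, size=(hidden, rank, J))   # mode-2 factors
        self.bias = np.zeros(hidden)

    def forward(self, X):
        """X: (batch, I, J) multilinear inputs, no vectorization needed."""
        z = np.einsum('hri,nij,hrj->nh', self.A, X, self.B) + self.bias
        return np.maximum(z, 0.0)                                # ReLU

# Toy usage on hyperspectral-style 7x7 patches (illustrative sizes).
layer = RankRLayer(in_shape=(7, 7), hidden=16, rank=3)
out = layer.forward(rng.normal(size=(32, 7, 7)))                 # -> (32, 16)
```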
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.