Online Regularized Learning Algorithm for Functional Data
- URL: http://arxiv.org/abs/2211.13549v1
- Date: Thu, 24 Nov 2022 11:56:10 GMT
- Title: Online Regularized Learning Algorithm for Functional Data
- Authors: Yuan Mao and Zheng-Chu Guo
- Abstract summary: This paper considers an online regularized learning algorithm for functional linear models in reproducing kernel Hilbert spaces.
It shows that the convergence rates of both prediction error and estimation error with constant step-size are competitive with those in the literature.
- Score: 2.5382095320488673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, functional linear models have attracted growing attention in
statistics and machine learning, with the aim of recovering the slope function
or its functional predictor. This paper considers an online regularized learning
algorithm for functional linear models in reproducing kernel Hilbert spaces.
Convergence analyses of the excess prediction error and estimation error are
provided with polynomially decaying step-size and constant step-size,
respectively. Fast convergence rates can be derived via a capacity dependent
analysis. By introducing an explicit regularization term, we uplift the
saturation boundary of unregularized online learning algorithms when the
step-size decays polynomially, and establish fast convergence rates of
estimation error without capacity assumption. However, it remains an open
problem to obtain capacity independent convergence rates for the estimation
error of the unregularized online learning algorithm with decaying step-size.
We also show that the convergence rates of both prediction error and estimation
error with constant step-size are competitive with those in the literature.
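To make the recursion concrete, the following is a minimal NumPy sketch of one plausible form of the online regularized update for a functional linear model, with the slope function represented on a grid and the hypothesis space induced by a Gaussian kernel. The kernel, grid discretization, step-size schedule, and all constants are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Sketch: online regularized gradient descent for the functional linear
# model Y = <beta, X>_{L^2} + noise, with beta estimated in an RKHS with
# kernel K. The RKHS gradient of the regularized squared loss at step t is
#   (<beta_t, X_t> - Y_t) * (L_K X_t) + lam * beta_t,
# where (L_K X)(s) = int K(s, u) X(u) du, discretized below on a grid.
rng = np.random.default_rng(0)
m = 100
grid = np.linspace(0.0, 1.0, m)
dx = grid[1] - grid[0]                                        # quadrature weight
Kmat = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / 0.05)   # assumed kernel

beta_true = np.sin(2 * np.pi * grid)          # hypothetical slope function
beta = np.zeros(m)                            # online iterate beta_t
lam, eta0, theta = 1e-3, 0.5, 0.6             # regularization and step-size

for t in range(1, 20001):
    # one functional covariate per step, drawn as a smooth random curve
    X = sum(rng.standard_normal() * np.cos(k * np.pi * grid) / k
            for k in range(1, 6))
    Y = dx * beta_true @ X + 0.1 * rng.standard_normal()
    residual = dx * beta @ X - Y              # <beta_t, X_t>_{L^2} - Y_t
    grad = residual * dx * (Kmat @ X) + lam * beta
    beta -= eta0 * t ** (-theta) * grad       # polynomially decaying step-size

print("L2 error:", np.sqrt(dx * np.sum((beta - beta_true) ** 2)))
```

Setting `lam = 0` recovers the unregularized online algorithm whose saturation boundary the explicit regularization term is designed to lift; a constant step-size variant replaces the decaying schedule with a fixed `eta`.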
Related papers
- Multivariate Probabilistic Time Series Forecasting with Correlated Errors [17.212396544233307]
We introduce a plug-and-play method that learns the covariance structure of errors over multiple steps for autoregressive models.
We evaluate our method on probabilistic models built on RNNs and Transformer architectures.
arXiv Detail & Related papers (2024-02-01T20:27:19Z)
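As a generic illustration of the idea summarized above (not the paper's specific plug-and-play method), the sketch below fits an AR(1) model, estimates the joint covariance of its multi-step forecast errors from rolling residuals, and samples correlated forecast paths. The simulated series, horizon, and sample sizes are assumptions for illustration.

```python
import numpy as np

# Illustration: model the joint covariance of multi-step forecast errors
# of an autoregressive model, rather than treating horizons independently.
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):                        # simulate an AR(1) series
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

phi = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])     # least-squares AR(1) fit
H = 5                                          # forecast horizon

errs = []                                      # rolling H-step error vectors
for t in range(len(y) - H):
    preds = [phi ** h * y[t] for h in range(1, H + 1)]
    errs.append(y[t + 1 : t + 1 + H] - preds)
Sigma = np.cov(np.array(errs).T)               # correlated error covariance

# probabilistic forecast: sample correlated future paths from the last point
point = np.array([phi ** h * y[-1] for h in range(1, H + 1)])
paths = rng.multivariate_normal(point, Sigma, size=1000)
print("90% band at horizon 5:", np.percentile(paths[:, -1], [5, 95]))
```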
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
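For orientation, here is a minimal dense-covariance EKF baseline for online learning of a tiny network from streaming data; the paper's contribution is to replace the O(d^2) dense posterior covariance `P` below with a low-rank-plus-diagonal factorization. The network architecture, finite-difference Jacobian, and noise constants are illustrative assumptions.

```python
import numpy as np

def net(theta, x, h=8):
    # tiny one-hidden-layer network R -> R, parameters packed into theta
    W1, b1 = theta[:h], theta[h:2 * h]
    W2, b2 = theta[2 * h:3 * h], theta[3 * h]
    return W2 @ np.tanh(W1 * x + b1) + b2

def jacobian(theta, x, eps=1e-6):
    # finite-difference Jacobian of the scalar output w.r.t. theta
    f0 = net(theta, x)
    g = np.empty_like(theta)
    for i in range(theta.size):
        tp = theta.copy()
        tp[i] += eps
        g[i] = (net(tp, x) - f0) / eps
    return g

rng = np.random.default_rng(1)
h = 8
d = 3 * h + 1
theta = 0.5 * rng.standard_normal(d)   # posterior mean over parameters
P = np.eye(d)                          # dense covariance (low-rank in the paper)
R, Q = 0.05, 1e-6                      # observation noise var, parameter drift

for t in range(5000):
    x = rng.uniform(-3.0, 3.0)
    y = np.sin(x) + np.sqrt(R) * rng.standard_normal()
    P += Q * np.eye(d)                       # predict: random-walk parameters
    Hj = jacobian(theta, x)                  # linearize observation model
    S = Hj @ P @ Hj + R                      # innovation variance (scalar)
    K = P @ Hj / S                           # Kalman gain, shape (d,)
    theta = theta + K * (y - net(theta, x))  # deterministic mean update
    P = P - np.outer(K, Hj @ P)              # covariance update
```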
- Reinforcement Learning with Function Approximation: From Linear to Nonlinear [4.314956204483073]
This paper reviews recent results on error analysis for reinforcement learning algorithms in linear or nonlinear approximation settings.
We discuss various properties related to approximation error and present concrete conditions on transition probability and reward function.
arXiv Detail & Related papers (2023-02-20T00:31:18Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Learning Asynchronous and Error-prone Longitudinal Data via Functional Calibration [4.446626375802735]
We propose a new functional calibration approach to efficiently learn longitudinal covariate processes based on functional data with measurement error.
For regression with time-invariant coefficients, our estimator is root-n consistent and asymptotically normal; for time-varying coefficient models, our estimator attains the optimal convergence rate.
The feasibility and usability of the proposed methods are verified by simulations and an application to the Study of Women's Health Across the Nation.
arXiv Detail & Related papers (2022-09-28T03:27:31Z)
- Capacity dependent analysis for functional online learning algorithms [8.748563565641279]
This article provides convergence analysis of online gradient descent algorithms for functional linear models.
We show that the capacity assumption can alleviate the saturation of the convergence rate as the regularity of the target function increases.
arXiv Detail & Related papers (2022-09-25T11:21:18Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Error Bounds of the Invariant Statistics in Machine Learning of Ergodic Itô Diffusions [8.627408356707525]
We study the theoretical underpinnings of machine learning of ergodic Itô diffusions.
We deduce a linear dependence of the errors of one-point and two-point invariant statistics on the error in the learning of the drift and diffusion coefficients.
arXiv Detail & Related papers (2021-05-21T02:55:59Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
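A minimal sketch of the setting studied above: constant-stepsize SGD, with tail averaging of the iterates, for streaming linear regression; no explicit regularization is used. The dimension, step-size, and noise level are illustrative assumptions (the step-size is kept small relative to 1/d so the stochastic recursion stays stable).

```python
import numpy as np

# Sketch: constant-stepsize SGD with tail averaging for streaming
# linear regression, one fresh sample per step.
rng = np.random.default_rng(0)
d, T, eta = 50, 50000, 0.01                   # eta below ~1/d for stability
w_star = rng.standard_normal(d) / np.sqrt(d)  # ground-truth regressor
w = np.zeros(d)
tail_sum, n_tail = np.zeros(d), 0

for t in range(T):
    x = rng.standard_normal(d)
    y = x @ w_star + 0.1 * rng.standard_normal()
    w -= eta * (x @ w - y) * x                # plain SGD step, constant eta
    if t >= T // 2:                           # average the second half
        tail_sum += w
        n_tail += 1

w_bar = tail_sum / n_tail
print("last iterate error:", np.linalg.norm(w - w_star))
print("tail average error:", np.linalg.norm(w_bar - w_star))
```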
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
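One concrete member of this class of methods is TD(0) with linear function approximation for policy evaluation; below is a minimal sketch on a hypothetical random-walk chain. The chain, reward, features, and step-size schedule are illustrative assumptions.

```python
import numpy as np

# Sketch: TD(0) with linear function approximation, which converges to the
# projected fixed point of the Bellman operator in the feature subspace.
rng = np.random.default_rng(0)
n_states, d, gamma = 10, 4, 0.9
Phi = rng.standard_normal((n_states, d))   # feature map: state -> R^d
w = np.zeros(d)                            # value estimate V(s) ~ Phi[s] @ w

s = 0
for t in range(1, 100001):
    # hypothetical random-walk policy: step left or right, wrapping around
    s_next = (s + rng.choice([-1, 1])) % n_states
    r = 1.0 if s_next == 0 else 0.0        # reward for reaching state 0
    td_err = r + gamma * Phi[s_next] @ w - Phi[s] @ w
    w += (0.5 / t ** 0.6) * td_err * Phi[s]
    s = s_next

print("estimated values:", Phi @ w)
```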
- Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level, and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)