Learning Operators by Regularized Stochastic Gradient Descent with Operator-valued Kernels
- URL: http://arxiv.org/abs/2504.18184v1
- Date: Fri, 25 Apr 2025 08:57:38 GMT
- Title: Learning Operators by Regularized Stochastic Gradient Descent with Operator-valued Kernels
- Authors: Jia-Qi Yang, Lei Shi
- Abstract summary: We investigate regularized stochastic gradient descent (SGD) algorithms for estimating nonlinear operators from a Polish space to a separable Hilbert space. We present a new technique for deriving bounds with high probability for general SGD schemes.
- Score: 5.663076715852465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates regularized stochastic gradient descent (SGD) algorithms for estimating nonlinear operators from a Polish space to a separable Hilbert space. We assume that the regression operator lies in a vector-valued reproducing kernel Hilbert space induced by an operator-valued kernel. Two significant settings are considered: an online setting with polynomially decaying step sizes and regularization parameters, and a finite-horizon setting with constant step sizes and regularization parameters. We introduce regularity conditions on the structure and smoothness of the target operator and the input random variables. Under these conditions, we provide a dimension-free convergence analysis for the prediction and estimation errors, deriving both expectation and high-probability error bounds. Our analysis demonstrates that these convergence rates are nearly optimal. Furthermore, we present a new technique for deriving bounds with high probability for general SGD schemes, which also ensures almost-sure convergence. Finally, we discuss potential extensions to more general operator-valued kernels and the encoder-decoder framework.
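To fix ideas, here is a minimal sketch of the kind of regularized SGD iteration in a vector-valued RKHS that the abstract describes. It assumes a separable operator-valued kernel K(x, x') = k(x, x') * I with a scalar Gaussian kernel, Euclidean inputs and outputs, and polynomially decaying step sizes and regularization parameters; the class name `RegularizedKernelSGD` and all constants are illustrative, not taken from the paper.

```python
import numpy as np


def rbf_kernel(x, xp, gamma=1.0):
    """Scalar Gaussian kernel; the operator-valued kernel is k(x, x') * Identity."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(xp)) ** 2))


class RegularizedKernelSGD:
    """Online regularized SGD in a vector-valued RKHS (illustrative sketch).

    The iterate is stored in its kernel expansion
        f_t(x) = sum_i c_i * k(x_i, x),   c_i in the output space,
    and updated as
        f_{t+1} = (1 - eta_t * lam_t) * f_t - eta_t * (f_t(x_t) - y_t) * k(x_t, .),
    with polynomially decaying step size eta_t and regularization lam_t.
    """

    def __init__(self, eta0=0.5, lam0=0.1, step_decay=0.5, reg_decay=0.25, gamma=1.0):
        self.eta0, self.lam0 = eta0, lam0
        self.step_decay, self.reg_decay = step_decay, reg_decay
        self.gamma = gamma
        self.centers, self.coefs = [], []  # kernel expansion of the current iterate
        self.t = 0

    def predict(self, x):
        if not self.centers:
            return 0.0
        k_vals = np.array([rbf_kernel(c, x, self.gamma) for c in self.centers])
        return k_vals @ np.array(self.coefs)

    def partial_fit(self, x, y):
        self.t += 1
        eta = self.eta0 * self.t ** (-self.step_decay)  # polynomially decaying step size
        lam = self.lam0 * self.t ** (-self.reg_decay)   # polynomially decaying regularization
        residual = self.predict(x) - np.asarray(y, dtype=float)
        self.coefs = [(1.0 - eta * lam) * c for c in self.coefs]  # shrinkage from the ridge term
        self.centers.append(np.asarray(x, dtype=float))
        self.coefs.append(-eta * residual)


# Usage: stream a few (input, output) pairs and predict at a new point.
model = RegularizedKernelSGD()
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.standard_normal(2)
    model.partial_fit(x, np.array([np.sin(x[0]), np.cos(x[1])]))
print(model.predict(np.zeros(2)))
```

In the online setting the decay exponents play the role of the polynomially decaying schedules; the finite-horizon setting of the paper would instead fix eta and lam as constants chosen from the sample size.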
Related papers
- Learning Operators with Stochastic Gradient Descent in General Hilbert Spaces [5.663076715852465]
This study investigates leveraging stochastic gradient descent (SGD) to learn operators between general Hilbert spaces. We establish upper bounds for convergence rates of the SGD algorithm and conduct a minimax lower bound analysis. Applying our analysis to operator learning problems based on vector-valued and real-valued reproducing kernel Hilbert spaces yields new convergence results.
arXiv Detail & Related papers (2024-02-07T09:31:01Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
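To make the linear-interpolation idea concrete, below is a small sketch of a lookahead-style averaging wrapper around plain gradient steps: run a few fast inner updates, then pull the iterate part of the way toward them. It illustrates the general stabilization principle rather than the paper's exact algorithm; the function name and constants are hypothetical.

```python
import numpy as np


def lookahead_interpolation(grad, theta0, inner_lr=0.05, alpha=0.5,
                            inner_steps=5, outer_steps=200):
    """Lookahead-style linear interpolation: a few fast gradient steps,
    then average the slow iterate toward the fast one (a nonexpansive
    averaging step). Illustrative sketch only."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(outer_steps):
        fast = theta.copy()
        for _ in range(inner_steps):       # fast inner optimizer
            fast -= inner_lr * grad(fast)
        theta += alpha * (fast - theta)    # linear interpolation of slow and fast iterates
    return theta


# Usage: minimize the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
print(lookahead_interpolation(lambda x: x, theta0=np.array([3.0, -2.0])))
```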
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow one to trade off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression [1.7227952883644062]
This paper studies the convergence performance of divide-and-conquer estimators in the scenario that the target function does not reside in the underlying kernel space.
As a decomposition-based scalable approach, the divide-and-conquer estimators of functional linear regression can substantially reduce the algorithmic complexities in time and memory.
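As a rough picture of the divide-and-conquer recipe mentioned above, the sketch below splits the sample into blocks, fits an independent kernel ridge estimator on each block, and averages their predictions. It illustrates the generic decomposition-based estimator rather than the functional linear regression setting of the paper; all names and constants are assumptions.

```python
import numpy as np


def gram(X1, X2, gamma=1.0):
    """Gaussian kernel Gram matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def divide_and_conquer_krr(X, y, X_test, n_blocks=4, lam=1e-2, gamma=1.0):
    """Fit kernel ridge regression independently on each data block and
    average the block predictions (illustrative divide-and-conquer sketch)."""
    preds = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        K = gram(Xb, Xb, gamma)
        alpha = np.linalg.solve(K + lam * len(yb) * np.eye(len(yb)), yb)
        preds.append(gram(X_test, Xb, gamma) @ alpha)
    return np.mean(preds, axis=0)


# Usage on a toy one-dimensional regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(400)
print(divide_and_conquer_krr(X, y, X_test=np.array([[0.0], [0.5]])))
```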
arXiv Detail & Related papers (2022-11-20T12:29:06Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
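A hedged sketch of the kernel-based policy-evaluation setting: the value function is represented in a scalar RKHS over a set of anchor states and updated with semi-gradient TD(0). The referenced paper analyzes a regularized kernel estimator rather than this incremental rule, and every name and constant here is an assumption.

```python
import numpy as np


def kernel_td0(transitions, anchors, gamma=0.9, alpha=0.1, bandwidth=1.0):
    """Semi-gradient TD(0) with a kernel-expansion value function
    V(s) = sum_i w_i * k(anchor_i, s). Illustrative sketch only."""
    anchors = np.asarray(anchors, dtype=float)
    w = np.zeros(len(anchors))

    def feats(s):
        # Gaussian kernel evaluations against the anchor states.
        return np.exp(-np.sum((anchors - np.asarray(s)) ** 2, axis=1) / (2 * bandwidth ** 2))

    for s, r, s_next in transitions:          # stream of (state, reward, next_state)
        td_error = r + gamma * feats(s_next) @ w - feats(s) @ w
        w += alpha * td_error * feats(s)      # TD(0) update on the kernel coefficients
    return w


# Usage: a random-walk Markov reward process on the real line.
rng = np.random.default_rng(0)
states = np.cumsum(0.1 * rng.standard_normal(500))
transitions = [([s], -abs(s), [s2]) for s, s2 in zip(states[:-1], states[1:])]
print(kernel_td0(transitions, anchors=[[-1.0], [0.0], [1.0]]))
```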
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Convergence Rates for Learning Linear Operators from Noisy Data [6.4423565043274795]
We study the inverse problem of learning a linear operator on a space from its noisy pointwise evaluations on random input data.
We establish posterior contraction rates with respect to a family of Bochner norms as the number of data tends to infinity and derive related lower bounds on the estimation error.
These convergence rates highlight and quantify the difficulty of learning unbounded linear operators in comparison with the learning of bounded or compact ones.
arXiv Detail & Related papers (2021-08-27T22:09:53Z) - Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they are run with a constant step size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
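For concreteness, a minimal sketch of constant-step-size SGDA on a toy smooth saddle-point problem follows; the noise model and all constants are assumptions, and the code illustrates the generic descent-ascent scheme rather than the paper's precise sampling paradigm.

```python
import numpy as np


def sgda(grad_x, grad_y, x0, y0, step=0.05, iters=500, noise=0.01, seed=0):
    """Stochastic gradient descent-ascent with a constant step size:
    descend in the primal variable x, ascend in the dual variable y,
    using noisy gradients as a stand-in for minibatch stochasticity."""
    x, y = np.asarray(x0, dtype=float).copy(), np.asarray(y0, dtype=float).copy()
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        gx = grad_x(x, y) + noise * rng.standard_normal(x.shape)
        gy = grad_y(x, y) + noise * rng.standard_normal(y.shape)
        x -= step * gx   # descent step
        y += step * gy   # ascent step
    return x, y


# Usage: the saddle problem min_x max_y 0.5*||x||^2 + <x, y> - 0.5*||y||^2,
# whose unique solution is (0, 0); with a constant step size the iterates
# settle in a noise-dominated neighborhood of it.
x_star, y_star = sgda(lambda x, y: x + y, lambda x, y: x - y,
                      x0=np.ones(3), y0=-np.ones(3))
print(x_star, y_star)
```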
arXiv Detail & Related papers (2021-06-30T18:32:46Z) - Sparse Representations of Positive Functions via First and Second-Order Pseudo-Mirror Descent [15.340540198612823]
We consider expected risk problems when the range of the estimator is required to be nonnegative.
We develop first and second-order variants of stochastic approximation mirror descent employing pseudo-gradients.
Experiments demonstrate favorable performance on inhomogeneous Poisson process intensity estimation in practice.
arXiv Detail & Related papers (2020-11-13T21:54:28Z) - Non-asymptotic Optimal Prediction Error for Growing-dimensional Partially Functional Linear Models [0.951828574518325]
We show the rate-optimal upper and lower bounds of the prediction error.
An exact upper bound for the excess prediction risk is shown in a non-asymptotic form.
We derive the non-asymptotic minimax lower bound under the regularity assumption of the Kullback-Leibler divergence of the models.
arXiv Detail & Related papers (2020-09-10T08:49:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.