Random Gradient-Free Optimization in Infinite Dimensional Spaces
- URL: http://arxiv.org/abs/2512.20566v1
- Date: Tue, 23 Dec 2025 18:09:49 GMT
- Title: Random Gradient-Free Optimization in Infinite Dimensional Spaces
- Authors: Caio Lins Peixoto, Daniel Csillag, Bernardo F. P. da Costa, Yuri F. Saporito,
- Abstract summary: We propose a random gradient-free method for optimization in infinite dimensional Hilbert spaces. Our framework requires only the computation of directional derivatives and a pre-basis for the Hilbert space domain. We showcase the use of our method to solve partial differential equations à la physics informed neural networks (PINNs).
- Score: 3.8031924942083517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a random gradient-free method for optimization in infinite dimensional Hilbert spaces, applicable to functional optimization in diverse settings. Though such problems are often solved through finite-dimensional gradient descent over a parametrization of the functions, such as neural networks, an interesting alternative is to instead perform gradient descent directly in the function space by leveraging its Hilbert space structure, thus enabling provable guarantees and fast convergence. However, infinite-dimensional gradients are often hard to compute in practice, hindering the applicability of such methods. To overcome this limitation, our framework requires only the computation of directional derivatives and a pre-basis for the Hilbert space domain, i.e., a linearly-independent set whose span is dense in the Hilbert space. This fully resolves the tractability issue, as pre-bases are much more easily obtained than full orthonormal bases or reproducing kernels -- which may not even exist -- and individual directional derivatives can be easily computed using forward-mode scalar automatic differentiation. We showcase the use of our method to solve partial differential equations à la physics informed neural networks (PINNs), where it effectively enables provable convergence.
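The abstract describes the method only at a high level: sample directions related to a pre-basis, query directional derivatives via forward-mode AD, and descend in function space. The exact update rule and step-size choices are in the full paper; the snippet below is only a minimal, hypothetical sketch of that flavor, assuming a monomial pre-basis for L^2([0, 1]), a toy least-squares functional, and a naive random-coordinate update. None of these choices are taken from the paper.

```python
# Minimal sketch (not the authors' exact algorithm): random gradient-free
# descent over a truncated pre-basis, with directional derivatives obtained
# from forward-mode automatic differentiation (jax.jvp).
import jax
import jax.numpy as jnp

# Pre-basis: monomials 1, x, x^2, ... on [0, 1] -- a linearly independent set
# whose span is dense in L^2([0, 1]).
N_BASIS = 8
xs = jnp.linspace(0.0, 1.0, 128)                    # quadrature grid
phi = jnp.stack([xs ** k for k in range(N_BASIS)])  # (N_BASIS, 128)

def u_from_coeffs(c):
    """Candidate function on the grid: u = sum_k c_k * phi_k."""
    return c @ phi

def objective(c):
    """Toy functional: discretized squared L^2 distance to a target."""
    target = jnp.sin(2.0 * jnp.pi * xs)
    return jnp.mean((u_from_coeffs(c) - target) ** 2)

def step(c, key, lr=0.5):
    """One illustrative update: pick a random pre-basis direction, measure the
    directional derivative with forward-mode AD, and move against it."""
    k = jax.random.randint(key, (), 0, N_BASIS)
    direction = jnp.zeros(N_BASIS).at[k].set(1.0)    # move along phi_k
    _, dJ = jax.jvp(objective, (c,), (direction,))   # scalar directional derivative
    return c - lr * dJ * direction

key = jax.random.PRNGKey(0)
c = jnp.zeros(N_BASIS)
for _ in range(2000):
    key, sub = jax.random.split(key)
    c = step(c, sub)
print("final objective:", float(objective(c)))
```

The point of the sketch is the query model: only scalar directional derivatives (one `jax.jvp` call per step) and a pre-basis are needed, never a full infinite-dimensional gradient or an orthonormal basis.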
Related papers
- Group Orthogonalized Policy Optimization: Group Policy Optimization as Orthogonal Projection in Hilbert Space [0.0]
We present a new alignment algorithm for large language models derived from the geometry of Hilbert function spaces. GOPO lifts alignment into the Hilbert space $L^2(\pi_k)$ of square-integrable functions. Because group-normalized advantages sum to zero, the Lagrange multiplier enforcing probability conservation vanishes exactly.
arXiv Detail & Related papers (2026-02-24T12:59:32Z) - Gradient-Based Non-Linear Inverse Learning [2.6149030745627644]
We study statistical inverse learning in the context of nonlinear inverse problems under random design. We employ gradient descent (GD) and stochastic gradient descent (SGD) with mini-batching, both using constant step sizes. Our analysis derives convergence rates for both algorithms under classical a priori assumptions on the smoothness of the target function.
arXiv Detail & Related papers (2024-12-21T22:38:17Z) - Kernel Operator-Theoretic Bayesian Filter for Nonlinear Dynamical Systems [25.922732994397485]
We propose a machine-learning alternative based on a functional Bayesian perspective for operator-theoretic modeling.
This formulation is carried out directly in an infinite-dimensional space of linear operators, or Hilbert space, with the universal approximation property.
We demonstrate that this practical approach can obtain accurate results and outperform finite-dimensional Koopman decomposition.
arXiv Detail & Related papers (2024-10-31T20:31:31Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Random Smoothing Regularization in Kernel Gradient Descent Learning [24.383121157277007]
We present a framework for random smoothing regularization that can adaptively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.
Our estimator can adapt to the structural assumptions of the underlying data and avoid the curse of dimensionality.
arXiv Detail & Related papers (2023-05-05T13:37:34Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Experimental Design for Linear Functionals in Reproducing Kernel Hilbert
Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees (a plain conditional-gradient step is sketched after this list).
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.