Polyharmonic Spline Packages: Composition, Efficient Procedures for Computation and Differentiation
- URL: http://arxiv.org/abs/2512.16718v1
- Date: Thu, 18 Dec 2025 16:21:09 GMT
- Title: Polyharmonic Spline Packages: Composition, Efficient Procedures for Computation and Differentiation
- Authors: Yuriy N. Bakhvalov
- Abstract summary: This paper proposes a cascade architecture built from packages of polyharmonic splines. Efficient matrix procedures are presented for forward computation and end-to-end differentiation through the cascade.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a previous paper it was shown that a machine learning regression problem can be solved within the framework of random function theory, with the optimal kernel analytically derived from symmetry and indifference principles and coinciding with a polyharmonic spline. However, a direct application of that solution is limited by O(N^3) computational cost and by a breakdown of the original theoretical assumptions when the input space has excessive dimensionality. This paper proposes a cascade architecture built from packages of polyharmonic splines that simultaneously addresses scalability and is theoretically justified for problems with unknown intrinsic low dimensionality. Efficient matrix procedures are presented for forward computation and end-to-end differentiation through the cascade.
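As a concrete illustration of the baseline the abstract refers to, the sketch below fits a single polyharmonic spline by a dense linear solve; the O(N^3) cost of that solve is exactly the scalability bottleneck the cascade architecture is designed to avoid. This is a generic textbook construction in Python/NumPy, not the paper's package or cascade code; the kernel order, affine polynomial tail, and test data are illustrative assumptions.

```python
# Minimal single polyharmonic-spline fit, showing the O(N^3) dense solve
# that limits direct application of the kernel solution at scale.
import numpy as np

def phs_kernel(r):
    return r ** 3  # polyharmonic kernel of odd order (no log term needed)

def fit_phs(X, y):
    """Fit s(x) = sum_i w_i * phi(|x - x_i|) + c^T [1, x] by a dense solve."""
    N, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = phs_kernel(r)                      # N x N kernel matrix
    P = np.hstack([np.ones((N, 1)), X])    # affine polynomial block
    K = np.block([[A, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([y, np.zeros(d + 1)])
    sol = np.linalg.solve(K, rhs)          # O(N^3): the scalability bottleneck
    return sol[:N], sol[N:]

def eval_phs(X_train, w, c, X_new):
    r = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return phs_kernel(r) @ w + np.hstack([np.ones((len(X_new), 1)), X_new]) @ c

# Usage: fit a noisy 2-D surface on 200 points, evaluate at 50 new points.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.01 * rng.standard_normal(200)
w, c = fit_phs(X, y)
pred = eval_phs(X, w, c, rng.uniform(-1, 1, (50, 2)))
```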
Related papers
- Symplectic Optimization on Gaussian States [0.0]
We introduce a symplectic optimization framework to solve the bosonic ground-state problem. The framework provides a foundation for large-scale approximate treatments of weakly non-quadratic interactions.
arXiv Detail & Related papers (2026-01-28T18:31:50Z)
- Accelerated training of Gaussian processes using banded square exponential covariances [0.0]
We propose a novel approach to computationally efficient GP training based on the observation that square-exponential (SE) covariance matrices contain several off-diagonal entries extremely close to zero. We construct a principled procedure to eliminate those entries to produce a banded-matrix approximation to the original covariance, whose inverse and determinant can be computed at a reduced computational cost.
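A minimal sketch of the banding idea summarized above, assuming a 1-D input grid and SciPy's banded Cholesky routines. The thresholding rule and jitter are illustrative, not the authors' principled elimination procedure; naive thresholding can break positive definiteness, which is why a small diagonal jitter is added here.

```python
# Threshold small off-diagonals of an SE covariance into banded storage,
# then solve and take the log-determinant at a cost set by the bandwidth.
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

def se_cov(x, lengthscale=0.2, noise=1e-4):
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale ** 2)
    return K + noise * np.eye(len(x))  # jitter keeps the banded matrix PD

def to_upper_banded(K, tol=1e-8):
    """Pack K into LAPACK upper-banded storage, dropping entries below tol."""
    i, j = np.nonzero(np.abs(K) > tol)
    bw = int(np.max(j - i))                 # farthest significant off-diagonal
    ab = np.zeros((bw + 1, len(K)))
    for u in range(bw + 1):                 # u-th superdiagonal
        ab[bw - u, u:] = np.diag(K, u)
    return ab

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 500))
y = np.sin(x) + 0.05 * rng.standard_normal(500)

ab = to_upper_banded(se_cov(x))
cf = cholesky_banded(ab)                    # banded Cholesky factor
alpha = cho_solve_banded((cf, False), y)    # solves K alpha = y cheaply
logdet = 2.0 * np.log(cf[-1]).sum()         # from the factor's diagonal
print(logdet)
```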
arXiv Detail & Related papers (2026-01-26T22:35:20Z)
- Solving a Machine Learning Regression Problem Based on the Theory of Random Functions [0.0]
This paper studies a machine learning regression problem as a multivariate approximation problem, using the framework of the theory of random functions. An ab initio derivation of a regression method is proposed, starting from postulates of indifference.
arXiv Detail & Related papers (2025-12-14T15:12:18Z)
- A Representer Theorem for Hawkes Processes via Penalized Least Squares Minimization [31.876688992403647]
The representer theorem is a cornerstone of kernel methods, which aim to estimate latent functions in reproducing kernel Hilbert spaces. We show that a novel form of representer theorem emerges: a family of transformed kernels can be defined via a system of simultaneous integral equations. Remarkably, the dual coefficients are all analytically fixed to unity, obviating the need to solve a costly optimization problem to obtain them.
arXiv Detail & Related papers (2025-10-10T02:00:56Z)
- Gaussian process surrogate with physical law-corrected prior for multi-coupled PDEs defined on irregular geometry [3.3798563347021093]
Parametric partial differential equations (PDEs) are fundamental mathematical tools for modeling complex physical systems. We propose a novel physical law-corrected prior Gaussian process (LC-prior GP) surrogate modeling framework.
arXiv Detail & Related papers (2025-09-01T02:40:32Z)
- Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We demonstrate the performance of the algorithm on two PDEs: a linear equation governing slightly compressible fluid flow in porous media, and the wave equation. Our results show that, in the considered scenarios, the number of grid points is reduced by up to a factor of 10 while preserving the dynamics of the modeled variable at the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z)
- Neural Chaos: A Spectral Stochastic Neural Operator [0.0]
Polynomial Chaos Expansion (PCE) is widely recognized as a go-to method for constructing varying solutions in both intrusive and non-intrusive ways. We propose an algorithm that identifies neural network (NN) basis functions in a purely data-driven manner. We demonstrate the effectiveness of the proposed scheme through several numerical examples.
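For context on the classical baseline this entry builds on, here is a hedged sketch of a non-intrusive PCE fit in one stochastic dimension with a fixed Hermite basis; the paper's contribution is to replace such fixed polynomial bases with learned NN basis functions, which this sketch does not implement. The model function and expansion order are illustrative choices.

```python
# Non-intrusive PCE by least squares: u(xi) ~ sum_k c_k He_k(xi), with
# probabilists' Hermite polynomials (orthogonal under the Gaussian measure).
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)                            # random input samples
u = np.exp(0.5 * xi) + 0.01 * rng.standard_normal(2000)   # noisy model output

order = 6
Phi = hermevander(xi, order)                  # design matrix He_0..He_6
coeffs, *_ = np.linalg.lstsq(Phi, u, rcond=None)

u_hat = Phi @ coeffs                          # PCE surrogate evaluations
print("relative L2 error:", np.linalg.norm(u_hat - u) / np.linalg.norm(u))
```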
arXiv Detail & Related papers (2025-02-17T14:30:46Z)
- Structured Regularization for Constrained Optimization on the SPD Manifold [1.1126342180866644]
We introduce a class of structured regularizers, based on symmetric gauge functions, which allow for solving constrained optimization on the SPD manifold with faster unconstrained methods.
We show that our structured regularizers can be chosen to preserve or induce desirable structure, in particular convexity and "difference of convex" structure.
arXiv Detail & Related papers (2024-10-12T22:11:22Z)
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems [53.03951222945921]
We analyze smoothed (perturbed) policies, adding controlled random perturbations to the direction used by the linear oracle. Our main contribution is a generalization bound that decomposes the excess risk into perturbation bias, statistical estimation error, and optimization error. We illustrate the scope of the results on applications such as vehicle scheduling, highlighting how smoothing enables both tractable training and controlled generalization.
arXiv Detail & Related papers (2024-07-24T12:00:30Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Relative Pose from SIFT Features [50.81749304115036]
We derive a new linear constraint relating the unknown elements of the fundamental matrix and the orientation and scale.
The proposed constraint is tested on a number of problems in a synthetic environment and on publicly available real-world datasets comprising more than 80,000 image pairs.
arXiv Detail & Related papers (2022-03-15T14:16:39Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
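To make the last entry concrete, the sketch below applies a conformal-symplectic-style Euler step to the dissipative Hamiltonian system x' = p, p' = -grad f(x) - gamma*p, which yields a momentum-like optimizer. The step size, friction coefficient, and quadratic test function are illustrative assumptions, not values from the paper.

```python
# One step of a conformal symplectic Euler scheme for a dissipative
# Hamiltonian system: contract the momentum, apply the gradient kick,
# then drift the position with the updated momentum.
import numpy as np

def dissipative_symplectic_step(x, p, grad_f, h=0.1, gamma=1.0):
    p = np.exp(-gamma * h) * p - h * grad_f(x)  # friction contraction + kick
    x = x + h * p                               # position drift
    return x, p

grad_f = lambda x: 2.0 * x        # gradient of f(x) = |x|^2
x, p = np.array([3.0, -2.0]), np.zeros(2)
for _ in range(200):
    x, p = dissipative_symplectic_step(x, p, grad_f)
print(x)  # converges toward the minimizer at the origin
```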