Featured Reproducing Kernel Banach Spaces for Learning and Neural Networks
- URL: http://arxiv.org/abs/2602.07141v1
- Date: Fri, 06 Feb 2026 19:29:08 GMT
- Title: Featured Reproducing Kernel Banach Spaces for Learning and Neural Networks
- Authors: Isabel de la Higuera, Francisco Herrera, M. Victoria Velasco
- Abstract summary: Reproducing kernel Hilbert spaces provide a foundational framework for kernel-based learning. Many modern learning models, including fixed-architecture neural networks equipped with non-quadratic norms, naturally give rise to non-Hilbertian geometries. We develop a functional-analytic framework for learning in Banach spaces based on the notion of featured reproducing kernel Banach spaces.
- Score: 3.483960518158563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reproducing kernel Hilbert spaces provide a foundational framework for kernel-based learning, where regularization and interpolation problems admit finite-dimensional solutions through classical representer theorems. Many modern learning models, however -- including fixed-architecture neural networks equipped with non-quadratic norms -- naturally give rise to non-Hilbertian geometries that fall outside this setting. In Banach spaces, continuity of point-evaluation functionals alone is insufficient to guarantee feature representations or kernel-based learning formulations. In this work, we develop a functional-analytic framework for learning in Banach spaces based on the notion of featured reproducing kernel Banach spaces. We identify the precise structural conditions under which feature maps, kernel constructions, and representer-type results can be recovered beyond the Hilbertian regime. Within this framework, supervised learning is formulated as a minimal-norm interpolation or regularization problem, and existence results together with conditional representer theorems are established. We further extend the theory to vector-valued featured reproducing kernel Banach spaces and show that fixed-architecture neural networks naturally induce special instances of such spaces. This provides a unified function-space perspective on kernel methods and neural networks and clarifies when kernel-based learning principles extend beyond reproducing kernel Hilbert spaces.
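By way of contrast with the Banach-space setting developed in the paper, the classical Hilbertian case is easy to sketch: the RKHS representer theorem reduces minimal-norm interpolation to a finite linear system in the kernel matrix. Below is a minimal numpy sketch; the kernel choice, toy data, and small ridge term are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkhs_interpolant(X, y, gamma=1.0, reg=1e-8):
    """Minimal-RKHS-norm interpolant of (X, y).

    By the classical representer theorem the solution is
    f(x) = sum_i alpha_i k(x, x_i) with K alpha = y.
    A tiny ridge term keeps K numerically invertible.
    """
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, gamma) @ alpha

# Usage: interpolate samples of a smooth 1D function.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 1))
y = np.sin(3 * X[:, 0])
f = rkhs_interpolant(X, y)
print("max residual:", np.max(np.abs(f(X) - y)))  # near zero
```

The paper's point is precisely that this reduction is not automatic once the Hilbert norm is replaced by a Banach norm; extra structural conditions (the "featured" kernel structure) are needed.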
Related papers
- Notes on Kernel Methods in Machine Learning [0.8435614464136675]
We develop the theory of positive definite kernels, reproducing kernel Hilbert spaces (RKHS), and Hilbert-Schmidt operators. We also introduce kernel density estimation, kernel embeddings of distributions, and the Maximum Mean Discrepancy (MMD); a schematic MMD estimator is sketched after this entry. The exposition is designed to serve as a foundation for more advanced topics, including Gaussian processes, kernel Bayesian inference, and functional analytic approaches to modern machine learning.
arXiv Detail & Related papers (2025-11-18T13:29:07Z)
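Since the entry above introduces the Maximum Mean Discrepancy, here is a small self-contained sketch of the standard (biased) MMD² estimator with a Gaussian kernel; the kernel width and toy data are assumptions for illustration only.

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    """Gaussian kernel Gram matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X ~ P and Y ~ Q:
    mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    return (gaussian_gram(X, X, gamma).mean()
            + gaussian_gram(Y, Y, gamma).mean()
            - 2.0 * gaussian_gram(X, Y, gamma).mean())

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 2))  # sample from P = N(0, I)
Z = rng.normal(0.0, 1.0, size=(500, 2))  # independent sample from P
Y = rng.normal(0.5, 1.0, size=(500, 2))  # sample from a shifted Q
print(mmd2_biased(X, Z))  # same distribution: close to zero
print(mmd2_biased(X, Y))  # different distributions: clearly larger
```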
- Geometric Neural Process Fields [58.77241763774756]
Geometric Neural Process Fields (G-NPF) is a probabilistic framework for neural radiance fields that explicitly captures uncertainty. Building on these bases, we design a hierarchical latent variable model, allowing G-NPF to integrate structural information across multiple spatial levels. Experiments on novel-view synthesis for 3D scenes, as well as 2D image and 1D signal regression, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2025-02-04T14:17:18Z)
- Novel Kernel Models and Exact Representor Theory for Neural Networks Beyond the Over-Parameterized Regime [52.00917519626559]
This paper presents two models of neural networks and their training, applicable to networks of arbitrary width, depth and topology.
We also present an exact, novel representor theory for layer-wise neural network training with unregularized gradient descent, in terms of a local-extrinsic neural kernel (LeNK). A generic empirical-kernel sketch (not the paper's LeNK) follows this entry.
This representor theory gives insight into the role of higher-order statistics in neural network training and the effect of kernel evolution in neural-network kernel models.
arXiv Detail & Related papers (2024-05-24T06:30:36Z)
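The LeNK construction above is specific to that paper. As a generic illustration of how a network at fixed parameters induces a kernel, here is a minimal numpy sketch of the standard empirical neural tangent kernel, K(x, x') = ⟨∇θ f(x), ∇θ f(x')⟩, for a one-hidden-layer ReLU network with gradients written out by hand. All names and sizes are illustrative assumptions.

```python
import numpy as np

def ntk_features(x, W, b, v):
    """Gradient of f(x) = v @ relu(W @ x + b) w.r.t. all parameters,
    flattened into one vector (the 'tangent feature' of x)."""
    pre = W @ x + b                  # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)       # ReLU activations
    gate = (pre > 0).astype(float)   # ReLU derivative
    grad_v = act                     # df/dv
    grad_W = np.outer(v * gate, x)   # df/dW
    grad_b = v * gate                # df/db
    return np.concatenate([grad_v, grad_W.ravel(), grad_b])

def empirical_ntk(X, W, b, v):
    """K[i, j] = <grad f(x_i), grad f(x_j)> at the given parameters."""
    feats = np.stack([ntk_features(x, W, b, v) for x in X])
    return feats @ feats.T

rng = np.random.default_rng(2)
m, d = 64, 3                         # hidden width, input dimension
W = rng.normal(size=(m, d)) / np.sqrt(d)
b = rng.normal(size=m) * 0.1
v = rng.normal(size=m) / np.sqrt(m)
X = rng.normal(size=(5, d))
K = empirical_ntk(X, W, b, v)
print(K.shape, np.all(np.linalg.eigvalsh(K) >= -1e-10))  # PSD Gram
```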
- Neural reproducing kernel Banach spaces and representer theorems for deep networks [14.902126718612648]
We show that deep neural networks define suitable reproducing kernel Banach spaces. We derive representer theorems that justify the finite architectures commonly employed in applications.
arXiv Detail & Related papers (2024-03-13T17:51:02Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
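In schematic notation (ours, not the paper's), the power-series representation of the network around the initial weights and its first-order truncation look as follows:

```latex
% Schematic power-series (Taylor) expansion of the network output
% around the initial weights \theta_0; notation ours, not the paper's.
f_{\theta}(x) \;=\; \sum_{k=0}^{\infty} \frac{1}{k!}\,
    D_{\theta}^{k} f_{\theta_0}(x)\big[\theta-\theta_0\big]^{\otimes k},
\qquad \theta \text{ near } \theta_0 .
% Truncating at k = 1 gives the linearized (kernel-regime) model:
f_{\theta}(x) \;\approx\; f_{\theta_0}(x)
    + \big\langle \nabla_{\theta} f_{\theta_0}(x),\, \theta-\theta_0 \big\rangle .
```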
- A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the mean-field (MF) limit that exhibit distinct behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z)
- Understanding neural networks with reproducing kernel Banach spaces [20.28372804772848]
Characterizing function spaces corresponding to neural networks can provide a way to understand their properties.
We prove a representer theorem for a wide class of reproducing kernel Banach spaces.
For a suitable class of ReLU activation functions, the norm in the corresponding kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure.
arXiv Detail & Related papers (2021-09-20T17:32:30Z)
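The norm characterization in the entry above fits the familiar infinite-width picture, sketched here in our own schematic notation: functions are integral superpositions of ridge units, the norm is the least total variation of a representing measure, and the inverse Radon transform mentioned above is what makes that infimum explicit for suitable ReLU-type activations.

```latex
% Schematic infinite-width representation behind such RKBS results
% (our notation, not the paper's exact statement): f is a
% superposition of ridge units, and the norm is the least total
% variation of a representing measure \mu.
f(x) \;=\; \int_{\mathbb{S}^{d-1}\times\mathbb{R}}
    \sigma\big(\langle w, x\rangle - b\big)\, d\mu(w,b),
\qquad
\|f\| \;=\; \inf_{\mu \,:\, f = f_\mu} \|\mu\|_{\mathrm{TV}} .
```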
- Analysis of Regularized Learning in Banach Spaces for Linear-functional Data [6.396892356366013]
This article studies the theory of regularized learning in Banach spaces for linear-functional data. Regularized learning minimizes a regularized empirical risk over a Banach space; a schematic formulation follows this entry.
arXiv Detail & Related papers (2021-09-07T15:51:12Z)
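The minimization described above can be written schematically (our notation) as an empirical risk over a Banach space plus a norm penalty, with the linear-functional data entering through continuous functionals λᵢ:

```latex
% Schematic regularized-learning problem over a Banach space B
% (our notation): data enter through continuous linear functionals
% \lambda_i, L is a loss, and R is an increasing regularizer.
\min_{f \in \mathcal{B}} \;
    \frac{1}{n}\sum_{i=1}^{n} L\big(\lambda_i(f),\, y_i\big)
    \;+\; R\big(\|f\|_{\mathcal{B}}\big)
```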
- Complexity-based speciation and genotype representation for neuroevolution [81.21462458089142]
This paper introduces a speciation principle for neuroevolution where evolving networks are grouped into species based on the number of hidden neurons.
The proposed speciation principle is employed in several techniques designed to promote and preserve diversity within species and in the ecosystem as a whole.
arXiv Detail & Related papers (2020-10-11T06:26:56Z)
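A minimal sketch of the grouping step described above, under the assumption that a network's species key is simply its hidden-neuron count; `population` and `hidden_size` are hypothetical placeholders for whatever representation the algorithm uses.

```python
from collections import defaultdict

def speciate(population, hidden_size):
    """Group networks into species by number of hidden neurons.

    `population` is any iterable of network objects and `hidden_size`
    is a function returning a network's hidden-neuron count (both are
    placeholders, not names from the paper).
    """
    species = defaultdict(list)
    for net in population:
        species[hidden_size(net)].append(net)
    return dict(species)

# Toy usage with networks represented as dicts.
population = [{"hidden": 4}, {"hidden": 7}, {"hidden": 4}]
print(speciate(population, lambda net: net["hidden"]))
# {4: [{'hidden': 4}, {'hidden': 4}], 7: [{'hidden': 7}]}
```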
- Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks [61.07202852469595]
We present Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks.
Our method achieves state-of-the-art results, outperforming recent neural network-based techniques and widely used Poisson Surface Reconstruction.
arXiv Detail & Related papers (2020-06-24T14:54:59Z)
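The random-feature kernels mentioned above can be illustrated generically: sample random ReLU units and approximate the infinite-width kernel by an inner product of finite feature maps. The sampling distribution and sizes below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def relu_random_features(X, n_features=2048, seed=0):
    """Monte Carlo feature map for an infinite-width ReLU kernel:
    phi(x)_j = relu(<w_j, x> + b_j) / sqrt(n_features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_features))
    b = rng.normal(size=n_features)
    return np.maximum(X @ W + b, 0.0) / np.sqrt(n_features)

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 3))
Phi = relu_random_features(X)
K_approx = Phi @ Phi.T   # approximates E_w[relu(w.x) relu(w.x')]
print(K_approx.shape)    # (10, 10) kernel matrix
```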
- Banach Space Representer Theorems for Neural Networks and Ridge Splines [17.12783792226575]
We develop a variational framework to understand the properties of the functions learned by neural networks fit to data.
We derive a representer theorem showing that finite-width, single-hidden-layer neural networks are solutions to inverse problems.
arXiv Detail & Related papers (2020-06-10T02:57:37Z)
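Schematically, representer theorems of the kind in the last entry state that the variational problem is solved by a finite-width single-hidden-layer network, possibly plus a low-order polynomial term; in our (assumed) notation:

```latex
% Schematic form of the representer-theorem solutions (our notation):
% a finite-width single-hidden-layer network plus a low-order
% polynomial (here affine) term, with width K bounded by the number
% of data constraints.
f(x) \;=\; \sum_{j=1}^{K} v_j\,
    \sigma\big(\langle w_j, x\rangle - b_j\big) \;+\; c^{\top}x + c_0 .
```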