CoVariance Filters and Neural Networks over Hilbert Spaces
- URL: http://arxiv.org/abs/2509.13178v2
- Date: Wed, 17 Sep 2025 14:12:40 GMT
- Title: CoVariance Filters and Neural Networks over Hilbert Spaces
- Authors: Claudio Battiloro, Andrea Cavallo, Elvin Isufi
- Abstract summary: CoVariance Neural Networks (VNNs) perform graph convolutions on the empirical covariance matrix of signals defined over finite-dimensional Hilbert spaces. We take a first step by introducing a novel convolutional learning framework for signals defined over infinite-dimensional Hilbert spaces.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CoVariance Neural Networks (VNNs) perform graph convolutions on the empirical covariance matrix of signals defined over finite-dimensional Hilbert spaces, motivated by robustness and transferability properties. Yet, little is known about how these arguments extend to infinite-dimensional Hilbert spaces. In this work, we take a first step by introducing a novel convolutional learning framework for signals defined over infinite-dimensional Hilbert spaces, centered on the (empirical) covariance operator. We constructively define Hilbert coVariance Filters (HVFs) and design Hilbert coVariance Networks (HVNs) as stacks of HVF filterbanks with nonlinear activations. We propose a principled discretization procedure, and we prove that empirical HVFs can recover the Functional PCA (FPCA) of the filtered signals. We then describe the versatility of our framework with examples ranging from multivariate real-valued functions to reproducing kernel Hilbert spaces. Finally, we validate HVNs on both synthetic and real-world time-series classification tasks, showing robust performance compared to MLP and FPCA-based classifiers.
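The abstract's construction can be made concrete in the discretized case: a coVariance filter becomes a polynomial in the empirical covariance matrix, an HVN layer stacks a bank of such filters with a pointwise nonlinearity, and FPCA falls out of the covariance eigendecomposition. The following is a minimal NumPy sketch under that assumption; the function names (`hilbert_covariance_filter`, `hvn_layer`) and parameter choices are illustrative, not taken from the paper's code.

```python
import numpy as np

def hilbert_covariance_filter(C, x, taps):
    """Discretized coVariance filter H(C) x = sum_k taps[k] * C^k x,
    i.e. a polynomial in the empirical covariance matrix C."""
    y, Ckx = np.zeros_like(x), x.copy()
    for h in taps:
        y += h * Ckx          # accumulate h_k * C^k x
        Ckx = C @ Ckx         # advance to C^(k+1) x
    return y

def hvn_layer(C, X, bank, sigma=np.tanh):
    """One HVN-style layer: a bank of covariance filters followed by a
    pointwise nonlinearity. X is (n_samples, d); bank is (n_filters, K+1)."""
    feats = np.stack([[hilbert_covariance_filter(C, x, taps) for taps in bank]
                      for x in X])
    return sigma(feats)       # (n_samples, n_filters, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 32))            # discretized functional signals
C = np.cov(X, rowvar=False)                   # empirical covariance operator
bank = 0.1 * rng.standard_normal((4, 3))      # 4 filters, polynomial order 2
Z = hvn_layer(C, X, bank)
print(Z.shape)                                # (300, 4, 32)

# FPCA connection: eigenvectors of C give discretized principal components,
# and projecting the centered signals onto them yields FPCA scores.
_, evecs = np.linalg.eigh(C)                  # ascending eigenvalue order
scores = (X - X.mean(0)) @ evecs[:, ::-1][:, :5]
print(scores.shape)                           # (300, 5)
```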
Related papers
- RKHS Representation of Algebraic Convolutional Filters with Integral Operators [111.57971404925486]
In this paper, we develop a theory showing that the range of integral operators naturally induces RKHS convolutional signal models. We show that filtering with integral operators corresponds to iterated box products, giving rise to a unital kernel algebra. Our results establish precise connections between eigendecompositions and RKHS representations in graphon signal processing, extend naturally to directed graphons, and enable novel spatial-spectral localization results.
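As a rough numerical illustration of the correspondence between filtering with integral operators and iterated box products, the sketch below discretizes a symmetric kernel on a grid, applies a polynomial filter in the induced integral operator, and checks that the same output is obtained from kernels built by box products. The kernel choice and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

n = 200
u = (np.arange(n) + 0.5) / n                                 # uniform grid on [0, 1]
W = np.minimum.outer(u, u) * (1 - np.maximum.outer(u, u))    # symmetric kernel W(u, v)

def apply_integral_operator(K, x):
    """(T_K x)(u) = int_0^1 K(u, v) x(v) dv, via a Riemann sum."""
    return (K @ x) / len(x)

def box_product(K1, K2):
    """Kernel of the composition T_{K1} T_{K2}: int K1(u, s) K2(s, v) ds."""
    return (K1 @ K2) / K1.shape[0]

h = [0.5, 0.3, 0.2]                                          # filter taps h_0, h_1, h_2
x = np.sin(2 * np.pi * u)

# Filtering as a polynomial in the integral operator: y = sum_k h_k T_W^k x.
y, Tkx = np.zeros(n), x.copy()
for hk in h:
    y += hk * Tkx
    Tkx = apply_integral_operator(W, Tkx)

# The same filter via iterated box products (the k = 0 term is the identity).
K = h[1] * W + h[2] * box_product(W, W)
y_box = h[0] * x + apply_integral_operator(K, x)
print(np.allclose(y, y_box))                                 # True
```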
arXiv Detail & Related papers (2026-02-22T08:28:34Z)
- NeuraLSP: An Efficient and Rigorous Neural Left Singular Subspace Preconditioner for Conjugate Gradient Methods [49.84495044725856]
NeuraLSP is a novel neural preconditioner paired with a new loss metric. Our method exhibits both theoretical guarantees and empirical robustness to rank inflation, achieving up to a 53% speedup.
arXiv Detail & Related papers (2026-01-28T02:15:16Z)
- The Vekua Layer: Exact Physical Priors for Implicit Neural Representations via Generalized Analytic Functions [0.0]
Implicit Neural Representations (INRs) have emerged as a powerful paradigm for parameterizing physical fields. We introduce a differentiable spectral method grounded in the theory of Generalized Analytic Functions. We show that our method can effectively act as a physics-informed spectral filter.
arXiv Detail & Related papers (2025-12-11T21:57:21Z)
- Functional Adjoint Sampler: Scalable Sampling on Infinite Dimensional Spaces [22.412483650808728]
We present an optimal control-based diffusion sampler for infinite-dimensional function spaces. We show that it achieves superior transition path sampling performance across synthetic potentials and real molecular systems.
arXiv Detail & Related papers (2025-11-09T05:51:03Z)
- Projective Kolmogorov Arnold Neural Networks (P-KANs): Entropy-Driven Functional Space Discovery for Interpretable Machine Learning [0.0]
Kolmogorov-Arnold Networks (KANs) relocate learnable nonlinearities from nodes to edges. Current KANs suffer from fundamental inefficiencies due to redundancy in high-dimensional spline parameter spaces. We introduce Projective Kolmogorov-Arnold Networks (P-KANs), a novel training framework that guides edge function discovery.
arXiv Detail & Related papers (2025-09-24T12:15:37Z)
- Contraction, Criticality, and Capacity: A Dynamical-Systems Perspective on Echo-State Networks [13.857230672081489]
We present a unified, dynamical-systems treatment that weaves together functional analysis, random attractor theory and recent neuroscientific findings. First, we prove that the Echo-State Property (wash-out of initial conditions) together with global Lipschitz dynamics necessarily yields the Fading-Memory Property. Second, employing a Stone-Weierstrass strategy we give a streamlined proof that ESNs with nonlinear reservoirs and linear read-outs are dense in the Banach space of causal, time-invariant fading-memory filters. Third, we quantify computational resources via the memory-capacity spectrum, and show how...
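As a quick illustration of the first point, namely the Echo-State Property as wash-out of initial conditions under globally Lipschitz, contractive dynamics, the sketch below runs a random tanh reservoir with spectral norm below one from two different initial states driven by the same input; the states coincide after a transient. Reservoir size and scaling are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 200

# Random reservoir scaled so its largest singular value is 0.9: with a
# 1-Lipschitz tanh this makes the state map a global contraction, a standard
# sufficient condition for the Echo-State Property.
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)
W_in = rng.standard_normal(n)

def run_reservoir(u, x0):
    """Leakless ESN state update x_{t+1} = tanh(W x_t + W_in u_t)."""
    x = x0.copy()
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
    return x

u = rng.standard_normal(T)                      # one shared input sequence
xa = run_reservoir(u, rng.standard_normal(n))   # two different initial states
xb = run_reservoir(u, rng.standard_normal(n))
print(np.linalg.norm(xa - xb))                  # ~1e-8: initial conditions washed out
```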
arXiv Detail & Related papers (2025-07-24T14:41:18Z)
- A Non-negative VAE: the Generalized Gamma Belief Network [49.970917207211556]
The gamma belief network (GBN) has demonstrated its potential for uncovering multi-layer interpretable latent representations in text data.
We introduce the generalized gamma belief network (Generalized GBN) in this paper, which extends the original linear generative model to a more expressive non-linear generative model.
We also propose an upward-downward Weibull inference network to approximate the posterior distribution of the latent variables.
arXiv Detail & Related papers (2024-08-06T18:18:37Z)
- New Equivalences Between Interpolation and SVMs: Kernels and Structured Features [22.231455330003328]
We present a new and flexible analysis framework for proving SVP in an arbitrary reproducing kernel Hilbert space with a flexible class of generative models for the labels.
We show that SVP occurs in many interesting settings not covered by prior work, and we leverage these results to prove novel generalization results for kernel SVM classification.
arXiv Detail & Related papers (2023-05-03T17:52:40Z)
- Tangent Bundle Convolutional Learning: from Manifolds to Cellular Sheaves and Back [84.61160272624262]
We define tangent bundle filters and tangent bundle neural networks (TNNs) based on a convolution operation over the tangent bundle of Riemannian manifolds.
Tangent bundle filters admit a spectral representation that generalizes the ones of scalar manifold filters, graph filters and standard convolutional filters in continuous time.
We numerically evaluate the effectiveness of the proposed architecture on various learning tasks.
arXiv Detail & Related papers (2023-03-20T17:57:15Z)
- A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers.
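For concreteness, here is a sketch of the convex-potential-type residual layer that SLL generalizes, z = x - (2/||W||_2^2) W σ(Wᵀx + b), which is 1-Lipschitz for ReLU. The exact SLL scaling derived from the paper's SDP condition is not reproduced here; this is only an illustrative baseline.

```python
import numpy as np

def convex_potential_layer(x, W, b):
    """Residual layer z = x - (2 / ||W||_2^2) * W @ relu(W.T @ x + b).
    With ReLU this convex-potential form is 1-Lipschitz; SLL replaces the
    global 2/||W||_2^2 step by a diagonal scaling from an SDP condition."""
    step = 2.0 / np.linalg.norm(W, 2) ** 2
    return x - step * W @ np.maximum(W.T @ x + b, 0.0)

# Empirical nonexpansiveness check on a random pair of inputs.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((32, 32)), rng.standard_normal(32)
x, y = rng.standard_normal(32), rng.standard_normal(32)
dz = np.linalg.norm(convex_potential_layer(x, W, b) - convex_potential_layer(y, W, b))
print(dz <= np.linalg.norm(x - y) + 1e-9)   # True: the layer does not expand distances
```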
arXiv Detail & Related papers (2023-03-06T14:31:09Z)
- Unified Fourier-based Kernel and Nonlinearity Design for Equivariant Networks on Homogeneous Spaces [52.424621227687894]
We introduce a unified framework for group equivariant networks on homogeneous spaces.
We take advantage of the sparsity of Fourier coefficients of the lifted feature fields.
We show that other methods treating features as the Fourier coefficients in the stabilizer subgroup are special cases of our activation.
arXiv Detail & Related papers (2022-06-16T17:59:01Z)
- Continuous Generative Neural Networks: A Wavelet-Based Architecture in Function Spaces [1.7205106391379021]
We study Continuous Generative Neural Networks (CGNNs), namely generative models in the continuous setting.
The architecture is inspired by DCGAN, with one fully connected layer, several convolutional layers and nonlinear activation functions.
We present conditions on the convolutional filters and on the nonlinearity that guarantee that a CGNN is injective.
arXiv Detail & Related papers (2022-05-29T11:06:29Z)
- Global convergence of ResNets: From finite to infinite width using linear parameterization [0.0]
We study Residual Networks (ResNets) in which the residual block has linear parametrization while still being nonlinear.
In the infinite-width limit, we prove a local Polyak-Lojasiewicz inequality, retrieving the lazy regime.
Our analysis leads to a practical and quantified recipe.
arXiv Detail & Related papers (2021-12-10T13:38:08Z)
- Non-parametric Active Learning and Rate Reduction in Many-body Hilbert Space with Rescaled Logarithmic Fidelity [4.781805457699204]
In quantum and quantum-inspired machine learning, the very first step is to embed the data in a quantum space known as Hilbert space.
We propose the rescaled logarithmic fidelity (RLF) and a non-parametric active learning scheme in the quantum space, which we name RLF-NAL.
Our results imply that the machine learning in the Hilbert space complies with the principles of maximal coding rate reduction.
arXiv Detail & Related papers (2021-07-01T03:13:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.