The Vekua Layer: Exact Physical Priors for Implicit Neural Representations via Generalized Analytic Functions
- URL: http://arxiv.org/abs/2512.11138v1
- Date: Thu, 11 Dec 2025 21:57:21 GMT
- Title: The Vekua Layer: Exact Physical Priors for Implicit Neural Representations via Generalized Analytic Functions
- Authors: Vladimer Khasia
- Abstract summary: Implicit Neural Representations (INRs) have emerged as a powerful paradigm for parameterizing physical fields. We introduce a differentiable spectral method grounded in the classical theory of Generalized Analytic Functions. We show that our method can effectively act as a physics-informed spectral filter.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) have emerged as a powerful paradigm for parameterizing physical fields, yet they often suffer from spectral bias and the computational expense of non-convex optimization. We introduce the Vekua Layer (VL), a differentiable spectral method grounded in the classical theory of Generalized Analytic Functions. By restricting the hypothesis space to the kernel of the governing differential operator -- specifically utilizing Harmonic and Fourier-Bessel bases -- the VL transforms the learning task from iterative gradient descent to a strictly convex least-squares problem solved via linear projection. We evaluate the VL against Sinusoidal Representation Networks (SIRENs) on homogeneous elliptic Partial Differential Equations (PDEs). Our results demonstrate that the VL achieves machine precision ($\text{MSE} \approx 10^{-33}$) on exact reconstruction tasks and exhibits superior stability in the presence of incoherent sensor noise ($\text{MSE} \approx 0.03$), effectively acting as a physics-informed spectral filter. Furthermore, we show that the VL enables "holographic" extrapolation of global fields from partial boundary data via analytic continuation, a capability absent in standard coordinate-based approximations.
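The mechanism described in the abstract, restricting the hypothesis space to the kernel of the governing operator and fitting coefficients with a single convex least-squares solve, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy rather than the authors' implementation: only the harmonic (Laplace) case is shown, and the helper name `harmonic_basis`, the degree cutoff, the sensor layout, and the noise level are all invented for illustration.
```python
# Illustrative sketch (not the authors' code): fit a harmonic field by linear
# least squares over a truncated harmonic basis. Every basis function
# r^n cos(n*theta), r^n sin(n*theta) satisfies Laplace's equation, so the fitted
# field is exactly harmonic regardless of the recovered coefficients.
import numpy as np

def harmonic_basis(x, y, max_degree):
    """Evaluate the harmonic basis {1, r^n cos(n t), r^n sin(n t)} at points (x, y)."""
    r = np.hypot(x, y)
    t = np.arctan2(y, x)
    cols = [np.ones_like(r)]
    for n in range(1, max_degree + 1):
        cols.append(r**n * np.cos(n * t))
        cols.append(r**n * np.sin(n * t))
    return np.stack(cols, axis=-1)          # (num_points, 2*max_degree + 1)

rng = np.random.default_rng(0)

# Synthetic "sensor" data: noisy samples of a known harmonic function,
# u(x, y) = Re[(x + i y)^3] = x^3 - 3 x y^2, on the square [-1, 1]^2.
xs, ys = rng.uniform(-1.0, 1.0, size=(2, 200))
u_true = xs**3 - 3.0 * xs * ys**2
u_obs = u_true + 0.01 * rng.standard_normal(u_true.shape)

# "Training" is a single convex least-squares solve (a linear projection),
# in place of iterative gradient descent on a coordinate network.
A = harmonic_basis(xs, ys, max_degree=5)
coeffs, *_ = np.linalg.lstsq(A, u_obs, rcond=None)

# The same coefficients evaluate the field anywhere in the domain,
# including away from the sensors.
xq, yq = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
u_hat = harmonic_basis(xq.ravel(), yq.ravel(), max_degree=5) @ coeffs
u_exact = xq.ravel()**3 - 3.0 * xq.ravel() * yq.ravel()**2

print("max sensor residual:", np.abs(A @ coeffs - u_obs).max())
print("max grid error vs. exact harmonic:", np.abs(u_hat - u_exact).max())
```
Because every basis function lies in the kernel of the Laplacian, the fit satisfies the PDE by construction, and evaluating the same coefficients on the dense query grid is the toy analogue of the extrapolation behaviour described in the abstract.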
Related papers
- SpectraKAN: Conditioning Spectral Operators [21.190440188964452]
We introduce SpectraKAN, a neural operator that conditions the spectral operator on the input itself, turning it into an input-conditioned integral operator. This is achieved by extracting a compact global representation from the spatio-temporal history and using it to modulate a multi-scale trunk via single-query cross-attention (a rough sketch of this conditioning follows below). Across diverse PDE benchmarks, SpectraKAN achieves state-of-the-art performance, reducing RMSE by up to 49% over strong baselines, with particularly large gains on challenging spatio-temporal prediction tasks.
arXiv Detail & Related papers (2026-02-05T01:30:25Z)
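The single-query cross-attention conditioning mentioned in the SpectraKAN summary above can be sketched roughly as follows. The shapes, the FiLM-style scale-and-shift modulation, and all variable names are assumptions made for illustration, not the paper's actual architecture.
```python
# Rough sketch (assumed, not SpectraKAN's actual layers): a single learned query
# cross-attends over a token summary of the input history, and the resulting
# vector modulates trunk features with a scale and a shift.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, num_tokens, num_features = 32, 16, 64

history_tokens = rng.standard_normal((num_tokens, d))   # encoded input history
query = rng.standard_normal((1, d))                     # single learned query
W_k, W_v = rng.standard_normal((2, d, d))
W_scale, W_shift = rng.standard_normal((2, d, num_features))

# Single-query cross-attention: one global summary vector of the history.
keys, values = history_tokens @ W_k, history_tokens @ W_v
attn = softmax(query @ keys.T / np.sqrt(d))              # (1, num_tokens)
summary = attn @ values                                  # (1, d)

# Modulate trunk features by the summary vector (scale and shift).
trunk_features = rng.standard_normal((num_features,))
modulated = (1.0 + summary @ W_scale).ravel() * trunk_features + (summary @ W_shift).ravel()
print(modulated.shape)  # (64,)
```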
- Hessian Spectral Analysis at Foundation Model Scale [1.9244735303181757]
We show that faithful spectral analysis of the true Hessian is tractable at frontier scale. We produce the first large-scale spectral density estimates beyond the sub-10B regime (a generic sketch of such spectral estimation follows below).
arXiv Detail & Related papers (2026-01-31T16:57:06Z)
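Spectral estimates for large Hessians are commonly built from Hessian-vector products via Lanczos-type methods. The sketch below shows that generic recipe on a small explicit matrix standing in for a matrix-free Hessian-vector product; it is one standard approach, not necessarily the method used in the paper above.
```python
# Generic sketch of Lanczos-based spectral estimation (not necessarily the
# paper's method): Ritz values of a small tridiagonal matrix built from
# matrix-vector products approximate extreme eigenvalues of the full matrix.
import numpy as np

def lanczos(matvec, dim, num_steps, rng):
    """Return the tridiagonal coefficients (alphas, betas) of the Lanczos recurrence."""
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    vs, alphas, betas = [v], [], []
    for j in range(num_steps):
        w = matvec(vs[-1])
        alpha = vs[-1] @ w
        w = w - alpha * vs[-1] - (betas[-1] * vs[-2] if betas else 0.0)
        # Full reorthogonalization for numerical stability on small problems.
        for u in vs:
            w -= (u @ w) * u
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12 or j == num_steps - 1:
            break
        betas.append(beta)
        vs.append(w / beta)
    return np.array(alphas), np.array(betas)

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
H = (M + M.T) / np.sqrt(2 * n)                 # stand-in for a true Hessian
alphas, betas = lanczos(lambda x: H @ x, n, 40, rng)
T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
ritz = np.linalg.eigvalsh(T)
print("largest Ritz value vs. true top eigenvalue:",
      ritz.max(), np.linalg.eigvalsh(H).max())
```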
- SFO: Learning PDE Operators via Spectral Filtering [25.390393080966422]
We introduce a neural operator that parameterizes integral kernels using the Universal Spectral Basis (USB). By learning only the spectral coefficients of rapidly decaying eigenvalues, SFO achieves a highly efficient representation.
arXiv Detail & Related papers (2026-01-23T10:45:52Z)
- Analysis of Fourier Neural Operators via Effective Field Theory [11.824913874212802]
We present a systematic effective field theory analysis of FNOs in an infinite-dimensional function space. We show that nonlinear activations inevitably couple low-frequency inputs to high-frequency modes that are otherwise discarded by spectral truncation (a numerical illustration of this coupling follows below). Our results quantify how nonlinearity enables neural operators to capture non-trivial features and explain why scale-invariant activations and residual connections enhance feature learning in FNOs.
arXiv Detail & Related papers (2025-07-29T14:10:46Z)
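The frequency-coupling effect noted in the entry above is easy to demonstrate numerically: a pointwise nonlinearity applied to a band-limited signal produces energy at frequencies that a spectral truncation would discard. The snippet is a generic illustration, not the paper's field-theoretic calculation; the signal and the cutoff are arbitrary choices.
```python
# Generic illustration: a pointwise nonlinearity applied to a band-limited
# signal generates Fourier modes above the original band, i.e. the modes a
# spectral truncation (as in an FNO layer) would otherwise discard.
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
signal = np.sin(3 * x) + 0.5 * np.cos(5 * x)      # band-limited: modes 3 and 5 only

def band_energy(u, cutoff=8):
    spectrum = np.abs(np.fft.rfft(u)) / len(u)
    return spectrum[:cutoff].sum(), spectrum[cutoff:].sum()

low_in, high_in = band_energy(signal)
low_out, high_out = band_energy(np.tanh(signal))   # pointwise nonlinearity

print(f"input : low-band {low_in:.4f}, high-band {high_in:.2e}")
print(f"output: low-band {low_out:.4f}, high-band {high_out:.2e}")
# The high-band energy of tanh(signal) is orders of magnitude above the input's.
```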
- DimINO: Dimension-Informed Neural Operator Learning [41.37905663176428]
DimINO is a framework inspired by dimensional analysis. It can be seamlessly integrated into existing neural operator architectures. It achieves up to a 76.3% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- On the Identification and Optimization of Nonsmooth Superposition Operators in Semilinear Elliptic PDEs [3.045851438458641]
We study an infinite-dimensional optimization problem that aims to identify the Nemytskii operator in the nonlinear part of a prototypical semilinear elliptic partial differential equation (PDE); such a model problem is written out below.
In contrast to previous works, we consider this identification problem in a low-regularity regime in which the function inducing the Nemytskii operator is a priori only known to be an element of $H^1_{\mathrm{loc}}(\mathbb{R})$.
arXiv Detail & Related papers (2023-06-08T13:33:20Z)
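For orientation, a prototypical semilinear elliptic model problem of the kind referred to above can be written as follows; the boundary condition and the right-hand side $g$ are illustrative, and the paper's precise setting may differ.
```latex
% Prototypical semilinear elliptic model problem; the homogeneous Dirichlet
% boundary condition and the right-hand side g are illustrative choices.
-\Delta u + f(u) = g \quad \text{in } \Omega,
\qquad
u = 0 \quad \text{on } \partial\Omega
```
The Nemytskii (superposition) operator induced by $f$ acts pointwise, $(f(u))(x) = f(u(x))$, and it is this scalar function $f$ that the identification problem seeks to recover.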
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization; its form is written out below.
arXiv Detail & Related papers (2022-01-25T17:13:56Z)
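For context on the distribution $p_q$ referenced above: for an entropy-regularized objective of the form $\mathcal{L}(q) = F(q) + \lambda\,\mathbb{E}_q[\log q]$, the associated proximal Gibbs distribution is, up to notation, the following. This is paraphrased from the standard formulation rather than quoted from the paper.
```latex
% Proximal Gibbs distribution for L(q) = F(q) + lambda * E_q[log q];
% \delta F / \delta q denotes the first variation of F. Notation is paraphrased.
p_q(x) \;\propto\; \exp\!\left( -\frac{1}{\lambda}\, \frac{\delta F(q)}{\delta q}(x) \right)
```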
- Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks [0.0]
We study the dynamics of a neural network in function space when optimizing the mean squared error via gradient flow.
We show that the network learns eigenfunctions of an integral operator $T_{K^\infty}$ determined by the Neural Tangent Kernel (NTK); the operator is written out below.
We conclude that damped deviations offers a simple and unifying perspective on the dynamics when optimizing the squared error.
arXiv Detail & Related papers (2022-01-12T23:28:41Z)
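The integral operator $T_{K^\infty}$ mentioned above is, in generic notation, the operator induced by the infinite-width Neural Tangent Kernel $K^\infty$ on the input distribution $\rho$; normalization conventions in the paper may differ.
```latex
% NTK-induced integral operator; rho is the input distribution and K^\infty the
% infinite-width Neural Tangent Kernel. Normalization may differ from the paper.
(T_{K^\infty} f)(x) \;=\; \int K^\infty(x, x')\, f(x')\, \mathrm{d}\rho(x')
```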
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm (the underlying generalized eigenvector problem is stated below).
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
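The generalized eigenvector problem behind scaled-gradient methods of this kind can be stated generically as maximizing a generalized Rayleigh quotient; the paper's specific stochastic, scaled update is not reproduced here.
```latex
% Generic top generalized eigenvector problem for symmetric A and symmetric
% positive-definite B; maximizers satisfy A x = rho(x) B x.
\max_{x \neq 0}\; \rho(x) \;=\; \frac{x^\top A x}{x^\top B x},
\qquad A x = \rho(x)\, B x \ \text{ at maximizers}
```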
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method maps SIVI's evidence lower bound (ELBO) to a form that admits efficient gradient estimation.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective [58.33065918353532]
We consider a back-projection (BP) based fidelity term as an alternative to the common least squares (LS) term; both terms are written out below.
We show that using the BP term, rather than the LS term, requires fewer iterations of optimization algorithms.
arXiv Detail & Related papers (2020-05-03T00:58:23Z)
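For a linear observation model $y \approx Ax$, the two fidelity terms compared in the entry above have, up to constants and weightings, the following standard forms, with $A^{\dagger}$ the Moore-Penrose pseudoinverse; this is paraphrased rather than quoted from the paper.
```latex
% Least-squares vs. back-projection fidelity for y ~ A x; A^\dagger is the
% Moore-Penrose pseudoinverse. Constants and weightings may differ in the paper.
\ell_{\mathrm{LS}}(x) \;=\; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2,
\qquad
\ell_{\mathrm{BP}}(x) \;=\; \tfrac{1}{2}\,\lVert A^{\dagger}\,(A x - y) \rVert_2^2
```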
This list is automatically generated from the titles and abstracts of the papers listed on this site.