Variational (Energy-Based) Spectral Learning: A Machine Learning Framework for Solving Partial Differential Equations
- URL: http://arxiv.org/abs/2601.02492v1
- Date: Mon, 05 Jan 2026 19:03:58 GMT
- Title: Variational (Energy-Based) Spectral Learning: A Machine Learning Framework for Solving Partial Differential Equations
- Authors: M. M. Hammad
- Abstract summary: We introduce variational spectral learning (VSL), a machine learning framework for solving partial differential equations (PDEs). VSL offers a principled bridge between variational PDE theory, spectral discretization, and contemporary machine learning practice.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce variational spectral learning (VSL), a machine learning framework for solving partial differential equations (PDEs) that operates directly in the coefficient space of spectral expansions. VSL offers a principled bridge between variational PDE theory, spectral discretization, and contemporary machine learning practice. The core idea is to recast a given PDE \[ \mathcal{L}u = f \quad \text{in} \quad Q=\Omega\times(0,T), \] together with boundary and initial conditions, into differentiable space-time energies built from strong-form least-squares residuals and weak (Galerkin) formulations. The solution is represented as a finite spectral expansion \[ u_N(x,t)=\sum_{n=1}^{N} c_n\,\phi_n(x,t), \] where $\phi_n$ are tensor-product Chebyshev bases in space and time, with Dirichlet-satisfying spatial modes enforcing homogeneous boundary conditions analytically. This yields a compact linear parameterization in the coefficient vector $\mathbf{c}$, while all PDE complexity is absorbed into the variational energy. We show how to construct strong-form and weak-form space-time functionals, augment them with initial-condition and Tikhonov regularization terms, and minimize the resulting objective with gradient-based optimization. In practice, VSL is implemented in TensorFlow using automatic differentiation and Keras cosine-decay-with-restarts learning-rate schedules, enabling robust optimization of moderately sized coefficient vectors. Numerical experiments on benchmark elliptic and parabolic problems, including one- and two-dimensional Poisson, diffusion, and Burgers-type equations, demonstrate that VSL attains accuracy comparable to classical spectral collocation with Crank-Nicolson time stepping, while providing a differentiable objective suitable for modern optimization tooling.
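To make the energy construction concrete, the strong-form space-time objective described above plausibly takes the following shape (the weights $\lambda_{\mathrm{IC}}$ and $\lambda_{\mathrm{reg}}$ and the squared-residual form are standard choices, stated here as an illustrative assumption rather than the paper's exact functional):
\[
\mathcal{E}(\mathbf{c}) \;=\; \int_Q \bigl(\mathcal{L}u_N - f\bigr)^2\,dx\,dt \;+\; \lambda_{\mathrm{IC}} \int_\Omega \bigl(u_N(x,0) - u_0(x)\bigr)^2\,dx \;+\; \lambda_{\mathrm{reg}}\,\lVert\mathbf{c}\rVert_2^2 .
\]
Because $u_N$ is linear in $\mathbf{c}$, this energy is quadratic in $\mathbf{c}$ whenever $\mathcal{L}$ is linear, and merely smooth (but still differentiable) for Burgers-type nonlinearities, which is what makes gradient-based minimization a natural fit.

The following minimal sketch illustrates the same recipe in TensorFlow for the stationary 1-D Poisson problem $-u'' = f$ on $[-1,1]$ with homogeneous Dirichlet conditions (no initial-condition term is needed here). The problem choice, mode count, regularization weight, and schedule constants are illustrative assumptions, not the paper's reference implementation.

# Minimal VSL-style sketch: spectral coefficients trained by minimizing a
# strong-form least-squares residual energy with TensorFlow AD. Assumed
# setup: -u''(x) = pi^2 sin(pi x) on [-1, 1], exact solution u = sin(pi x).
import numpy as np
import tensorflow as tf

K, N = 128, 16                                    # collocation points, modes
x_np = np.cos(np.pi * (np.arange(K) + 0.5) / K)   # interior Chebyshev nodes
x = tf.constant(x_np, tf.float64)
f = tf.constant(np.pi**2 * np.sin(np.pi * x_np), tf.float64)

def phi(x, n):
    # Dirichlet-satisfying spatial mode: T_{n+2}(x) - T_n(x) vanishes at
    # x = +-1, so homogeneous boundary conditions hold for every c.
    return tf.cos((n + 2.0) * tf.acos(x)) - tf.cos(n * tf.acos(x))

c = tf.Variable(tf.zeros([N], tf.float64))        # spectral coefficients

def u_N(x):
    modes = tf.stack([phi(x, n) for n in range(N)], axis=-1)  # (K, N)
    return tf.linalg.matvec(modes, c)

def energy():
    # Strong-form residual energy plus a Tikhonov term (weight is assumed).
    with tf.GradientTape() as t2:
        t2.watch(x)
        with tf.GradientTape() as t1:
            t1.watch(x)
            u = u_N(x)
        du = t1.gradient(u, x)
    d2u = t2.gradient(du, x)
    residual = -d2u - f
    return tf.reduce_mean(residual**2) + 1e-8 * tf.reduce_sum(c**2)

# Cosine-decay-with-restarts schedule, as in the paper's training setup;
# the constants here are illustrative.
schedule = tf.keras.optimizers.schedules.CosineDecayRestarts(
    initial_learning_rate=0.05, first_decay_steps=500)
opt = tf.keras.optimizers.Adam(learning_rate=schedule)

for step in range(3000):
    with tf.GradientTape() as tape:
        loss = energy()
    opt.apply_gradients(zip(tape.gradient(loss, [c]), [c]))

err = tf.reduce_max(tf.abs(u_N(x) - tf.sin(np.pi * x)))
print(f"final energy {float(loss):.3e}, max error vs sin(pi x): {float(err):.3e}")

Note how the boundary conditions never enter the loss: because the modes $T_{n+2}(x)-T_n(x)$ vanish at $x=\pm 1$ for every coefficient vector, only the PDE residual (and, in time-dependent problems, an initial-condition term) must be minimized.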
Related papers
- Rethinking Diffusion Models with Symmetries through Canonicalization with Applications to Molecular Graph Generation [56.361076943802594]
CanonFlow achieves state-of-the-art performance on the challenging GEOM-DRUG dataset, and the advantage remains large in few-step generation.
arXiv Detail & Related papers (2026-02-16T18:58:55Z) - Variational Entropic Optimal Transport [67.76725267984578]
We propose Variational Entropic Optimal Transport (VarEOT) for domain translation problems. VarEOT is based on an exact variational reformulation of the log-partition $\log \mathbb{E}[\exp(\cdot)]$ as a tractable optimization over an auxiliary positive normalizer. Experiments on synthetic data and unpaired image-to-image translation demonstrate competitive or improved translation quality.
arXiv Detail & Related papers (2026-02-02T15:48:44Z) - Physics-Informed Chebyshev Polynomial Neural Operator for Parametric Partial Differential Equations [17.758049557300826]
We introduce the Physics-Informed Chebyshev Polynomial Neural Operator (CPNO). CPNO replaces unstable monomial expansions with a numerically stable Chebyshev spectral basis (a small conditioning comparison appears in the sketch at the end of this list). Experiments on benchmark parameterized PDEs show that CPNO achieves superior accuracy, faster convergence, and enhanced robustness to hyperparameters.
arXiv Detail & Related papers (2026-02-02T07:19:56Z) - DInf-Grid: A Neural Differential Equation Solver with Differentiable Feature Grids [73.28614344779076]
We present a differentiable grid-based representation for efficiently solving differential equations (DEs). Our results demonstrate a 5-20x speed-up over coordinate-based methods, solving differential equations in seconds or minutes while maintaining comparable accuracy and compactness.
arXiv Detail & Related papers (2026-01-15T18:59:57Z) - Soft Partition-based KAPI-ELM for Multi-Scale PDEs [0.0]
This work introduces a soft partition-based Kernel-Adaptive Physics-Informed Extreme Learning Machine. A signed-distance-based weighting stabilizes least-squares learning on irregular frequencies. Although demonstrated on steady linear PDEs, the results show that soft-partition kernel adaptation provides a fast, architecture-free approach for multiscale PDEs.
arXiv Detail & Related papers (2026-01-13T16:43:38Z) - The Vekua Layer: Exact Physical Priors for Implicit Neural Representations via Generalized Analytic Functions [0.0]
Implicit Neural Representations (INRs) have emerged as a powerful paradigm for parameterizing physical fields. We introduce a differentiable spectral method grounded in the theory of generalized analytic functions. We show that our method can effectively act as a physics-informed spectral filter.
arXiv Detail & Related papers (2025-12-11T21:57:21Z) - Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We demonstrate the performance of the algorithm on two PDEs: a linear equation which governs slightly compressible fluid flow in porous media, and the wave equation. Our results show that in the considered scenarios we reduced the number of grid points up to 10 times while preserving the modeled variable dynamics at the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z) - Variational formulation based on duality to solve partial differential equations: Use of B-splines and machine learning approximants [0.0]
Many partial differential equations (PDEs) do not have an exact, primal variational structure. A variational principle based on the dual (Lagrange multiplier) field has been proposed. We derive the dual weak form for the linear, one-dimensional, transient convection-diffusion equation.
arXiv Detail & Related papers (2024-12-02T07:53:47Z) - Exact dynamics of quantum dissipative $XX$ models: Wannier-Stark localization in the fragmented operator space [49.1574468325115]
We find an exceptional point at a critical dissipation strength that separates oscillating and non-oscillating decay.
We also describe a different type of dissipation that leads to a single decay mode in the whole operator subspace.
arXiv Detail & Related papers (2024-05-27T16:11:39Z) - Sparse Cholesky Factorization for Solving Nonlinear PDEs via Gaussian Processes [3.750429354590631]
We present a sparse Cholesky factorization algorithm for dense kernel matrices.
We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs.
arXiv Detail & Related papers (2023-04-03T18:35:28Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art performance and yields a relative gain of 11.5% averaged over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizer [45.47667026025716]
We propose a novel, robust and accelerated iteration that relies on two key elements.
The convergence and stability of the obtained method, referred to as NAG-GS, are first studied extensively.
We show that NAG-GS is competitive with state-of-the-art methods such as momentum SGD with weight decay and AdamW for the training of machine learning models.
arXiv Detail & Related papers (2022-09-29T16:54:53Z) - Blending Neural Operators and Relaxation Methods in PDE Numerical Solvers [3.2712166248850685]
HINTS is a hybrid, iterative, numerical, and transferable solver for partial differential equations.
It balances the convergence behavior across the spectrum of eigenmodes by utilizing the spectral bias of DeepONet.
It is flexible with regard to discretizations, computational domains, and boundary conditions.
arXiv Detail & Related papers (2022-08-28T19:07:54Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Closed-Form Diffeomorphic Transformations for Time Series Alignment [0.0]
We present a closed-form expression for the ODE solution and its gradient under continuous piecewise-affine velocity functions.
Results show significant improvements both in terms of efficiency and accuracy.
arXiv Detail & Related papers (2022-06-16T12:02:12Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
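As a footnote to the CPNO entry above: the numerical advantage of Chebyshev bases over raw monomial expansions is easy to reproduce. The snippet below is an illustrative demonstration (not from any of the listed papers) comparing the condition numbers of the two design matrices built on the same nodes; the sizes K and N are arbitrary choices.

# Condition numbers of monomial vs Chebyshev design matrices on Chebyshev nodes.
import numpy as np

K, N = 64, 20
x = np.cos(np.pi * (np.arange(K) + 0.5) / K)            # Chebyshev nodes in (-1, 1)
V_mono = np.vander(x, N, increasing=True)               # columns 1, x, ..., x^(N-1)
V_cheb = np.cos(np.outer(np.arccos(x), np.arange(N)))   # columns T_0(x), ..., T_(N-1)(x)

print(f"monomial basis  cond = {np.linalg.cond(V_mono):.2e}")  # grows exponentially in N
print(f"Chebyshev basis cond = {np.linalg.cond(V_cheb):.2e}")  # stays O(1)

On these nodes the Chebyshev columns are nearly orthogonal, so the second condition number stays O(1), while the monomial Vandermonde matrix becomes increasingly ill-conditioned as N grows, which is the instability that Chebyshev-based methods such as CPNO and VSL avoid by construction.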