Physics-informed Spectral Learning: the Discrete Helmholtz--Hodge
Decomposition
- URL: http://arxiv.org/abs/2302.11061v1
- Date: Tue, 21 Feb 2023 23:33:29 GMT
- Title: Physics-informed Spectral Learning: the Discrete Helmholtz--Hodge
Decomposition
- Authors: Luis Espath, Pouria Behnoudfar, and Raul Tempone
- Abstract summary: We further develop the Physics-informed Spectral Learning (PiSL) by Espath et al.
Within this physics-informed statistical learning framework, we adaptively build a sparse set of Fourier basis functions with corresponding coefficients.
Our PiSL computational framework enjoys spectral (exponential) convergence.
We assess the capabilities of our method in various numerical examples including the `Storm of the Century' with satellite data from 1993.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we further develop the Physics-informed Spectral Learning
(PiSL) by Espath et al. \cite{Esp21} based on a discrete $L^2$ projection to
compute the discrete Hodge--Helmholtz decomposition from sparse data. Within this
physics-informed statistical learning framework, we adaptively build a sparse
set of Fourier basis functions with corresponding coefficients by solving a
sequence of minimization problems where the set of basis functions is augmented
greedily at each optimization problem. Moreover, our PiSL computational
framework enjoys spectral (exponential) convergence. We regularize the
minimization problems with the seminorm of the fractional Sobolev space in a
Tikhonov fashion. In the Fourier setting, the divergence- and curl-free
constraints become a finite set of linear algebraic equations. The proposed
computational framework combines supervised and unsupervised learning
techniques in that we use data concomitantly with the projection onto
divergence- and curl-free spaces. We assess the capabilities of our method in
various numerical examples including the `Storm of the Century' with satellite
data from 1993.
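To make the Fourier-space mechanics concrete: for a field $\sum_k \hat{v}_k e^{i k\cdot x}$, the divergence-free constraint reduces to $k\cdot\hat{v}_k = 0$ mode by mode, so the Hodge--Helmholtz split is a pointwise linear projection on the coefficients. The sketch below applies this projection on a dense periodic grid with a plain FFT; it is a minimal illustration under my own assumptions (grid, test field, function names), not the authors' adaptive solver with greedy basis augmentation and fractional Sobolev Tikhonov regularization.

```python
import numpy as np

def helmholtz_hodge_fft(u, v):
    """Split a periodic 2D field (u, v) into curl-free and divergence-free parts.

    In Fourier space the curl-free part is the projection of each coefficient
    vector onto its wavenumber k; the remainder is divergence-free.
    """
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx) * nx                  # integer wavenumbers
    ky = np.fft.fftfreq(ny) * ny
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                # guard the mean mode

    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    kdotv = KX * uh + KY * vh                     # k . vhat, per mode

    uh_cf = KX * kdotv / k2                       # curl-free (gradient) part
    vh_cf = KY * kdotv / k2
    uh_cf[0, 0] = vh_cf[0, 0] = 0.0               # constant mode stays in the remainder

    u_cf = np.real(np.fft.ifft2(uh_cf))
    v_cf = np.real(np.fft.ifft2(vh_cf))
    return (u_cf, v_cf), (u - u_cf, v - v_cf)

# Test field on [0, 2*pi)^2 with nonzero divergence and nonzero curl
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
(grad_u, grad_v), (sol_u, sol_v) = helmholtz_hodge_fft(
    np.cos(X) + np.sin(Y), np.sin(X) - np.cos(Y))
```

In the paper's setting the coefficients are instead fit to scattered data by regularized least squares, roughly $\min_c \sum_j |v(x_j) - \sum_k c_k e^{i k\cdot x_j}|^2 + \lambda \sum_k |k|^{2s}|c_k|^2$, with new modes added greedily; the per-mode projection step keeps the same form.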
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
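For context, a textbook two-point zeroth-order estimator, of the kind such gradient-free analyses build on, looks as follows; the objective, step size, and smoothing radius below are illustrative choices of mine, not the paper's accelerated scheme.

```python
import numpy as np

def zeroth_order_sgd(f, x0, steps=2000, lr=0.05, mu=1e-3, seed=0):
    """Gradient-free descent: estimate a directional derivative from two
    function evaluations along a random unit direction (a standard two-point
    zeroth-order estimator; not the accelerated method of the paper)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        e = rng.standard_normal(x.shape)
        e /= np.linalg.norm(e)
        g = (f(x + mu * e) - f(x - mu * e)) / (2 * mu) * e
        x -= lr * g
    return x

# Smooth convex test problem: minimize ||x - 1||^2 without gradients
x_hat = zeroth_order_sgd(lambda x: np.sum((x - 1.0) ** 2), np.zeros(5))
```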
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- ConDiff: A Challenging Dataset for Neural Solvers of Partial Differential Equations [42.69799418639716]
We present ConDiff, a novel dataset for scientific machine learning.
ConDiff focuses on the diffusion equation with varying coefficients, a fundamental problem in many applications of parametric partial differential equations (PDEs).
This class of problems is not only of great academic interest, but is also the basis for describing various environmental and industrial problems.
In this way, ConDiff narrows the gap to real-world problems while remaining fully synthetic and easy to use.
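As a minimal sketch of the underlying problem family (my own toy discretization, not ConDiff's actual generator): a diffusion equation $-\nabla\cdot(k(x)\nabla u) = f$ with a random positive coefficient $k$, here in 1D with finite differences and homogeneous Dirichlet boundaries.

```python
import numpy as np

def diffusion_1d(k, f, h):
    """Solve -(k u')' = f on a uniform grid with u = 0 at both ends.

    k holds the n+1 cell-interface values, f the n interior right-hand sides.
    """
    n = len(f)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (k[i] + k[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -k[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -k[i + 1] / h**2
    return np.linalg.solve(A, f)

# Log-normal coefficient field, a common choice in parametric-PDE benchmarks
rng = np.random.default_rng(0)
n, h = 99, 1.0 / 100
k = np.exp(0.5 * rng.standard_normal(n + 1))
u = diffusion_1d(k, np.ones(n), h)
```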
arXiv Detail & Related papers (2024-06-07T07:35:14Z)
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
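A bare-bones version of the idea, under assumptions of mine (tanh random features and a 1D Poisson problem; the paper's sampling distributions and PDEs are more general): sample the hidden weights and biases once, then fit only the linear output weights by least squares on PDE collocation residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_col = 200, 100                        # hidden width, collocation points

# Hidden layer sampled once, never trained (data-agnostic Gaussian/uniform here)
W = rng.normal(scale=3.0, size=m)
b = rng.uniform(-3.0, 3.0, size=m)

x = np.linspace(0.0, 1.0, n_col)
S = np.tanh(np.outer(x, W) + b)            # features tanh(W x + b)
S_xx = (W**2) * (-2.0 * S * (1.0 - S**2))  # closed-form second derivative

# Solve -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, for the output weights only
f = np.pi**2 * np.sin(np.pi * x)
A = np.vstack([-S_xx, S[[0]], S[[-1]]])    # PDE rows plus two boundary rows
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u = S @ c                                  # approximates sin(pi x)
```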
arXiv Detail & Related papers (2024-05-31T14:24:39Z)
- Spectral operator learning for parametric PDEs without data reliance [6.7083321695379885]
We introduce a novel operator learning-based approach for solving parametric partial differential equations (PDEs) without relying on data.
The proposed framework demonstrates superior performance compared to existing scientific machine learning techniques.
arXiv Detail & Related papers (2023-10-03T12:37:15Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Minimax Optimal Kernel Operator Learning via Multilevel Training [11.36492861074981]
We study the statistical limit of learning a Hilbert-Schmidt operator between two infinite-dimensional Sobolev reproducing kernel Hilbert spaces.
We develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces.
arXiv Detail & Related papers (2022-09-28T21:31:43Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the effect of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
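A finite-dimensional caricature of that statement (my toy spectrum, not the paper's Hilbert-space setting): gradient descent from zero on a quadratic shrinks the $i$-th eigencomponent of the minimizer by the filter $1-(1-\eta\lambda_i)^t$, so at a fixed stopping time $t$ the learning rate $\eta$ decides how far down the spectrum the solution has converged.

```python
import numpy as np

# Eigenvalue decay typical of kernel methods
lam = np.logspace(0, -4, 20)

def gd_filter(lr, t):
    """Per-eigenvalue shrinkage after t gradient steps from zero on
    f(x) = 0.5 x^T H x - b^T x; stable while lr < 2 / max(lam)."""
    return 1.0 - (1.0 - lr * lam) ** t

for lr in (0.1, 1.0, 1.9):
    # Larger learning rates fill in the small-eigenvalue directions far sooner
    print(lr, np.round(gd_filter(lr, t=50)[[0, 10, 19]], 3))
```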
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
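For orientation, the canonical finite-dimensional instance of such a projected fixed-point method is least-squares temporal difference learning with linear features, sketched below (a generic textbook estimator, not the specific procedures analyzed in these papers).

```python
import numpy as np

def lstd(phi, phi_next, r, gamma=0.9, reg=1e-6):
    """Solve the projected fixed point Phi^T (Phi - gamma Phi') w = Phi^T r
    for a linear value estimate V(s) ~ phi(s)^T w from observed transitions."""
    A = phi.T @ (phi - gamma * phi_next)
    b = phi.T @ r
    return np.linalg.solve(A + reg * np.eye(A.shape[1]), b)

# Toy two-state Markov reward process that alternates deterministically
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500)
r = np.where(s == 0, 1.0, 0.0)               # reward 1 when leaving state 0
w = lstd(np.eye(2)[s], np.eye(2)[1 - s], r)  # recovers V(0), V(1) exactly
```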
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- A Mean-Field Theory for Learning the Schönberg Measure of Radial Basis Functions [13.503048325896174]
We learn the distribution in the Schönberg integral representation of the radial basis functions from training samples.
We prove that in the scaling limits, the empirical measure of the Langevin particles converges to the law of a reflected Itô diffusion-drift process.
arXiv Detail & Related papers (2020-06-23T21:04:48Z)