Spectral methods for Neural Integral Equations
- URL: http://arxiv.org/abs/2312.05654v3
- Date: Mon, 25 Mar 2024 04:32:19 GMT
- Title: Spectral methods for Neural Integral Equations
- Authors: Emanuele Zappala
- Abstract summary: We introduce a framework for neural integral equations based on spectral methods.
We show various theoretical guarantees regarding the approximation capabilities of the model.
We provide numerical experiments to demonstrate the practical effectiveness of the resulting model.
- Score: 0.6993026261767287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural integral equations are deep learning models based on the theory of integral equations, where the model consists of an integral operator and the corresponding equation (of the second kind) which is learned through an optimization procedure. This approach allows one to leverage the nonlocal properties of integral operators in machine learning, but it is computationally expensive. In this article, we introduce a framework for neural integral equations based on spectral methods that allows us to learn an operator in the spectral domain, resulting in a lower computational cost as well as high interpolation accuracy. We study the properties of our methods and show various theoretical guarantees regarding the approximation capabilities of the model and the convergence of the numerical methods to solutions. We provide numerical experiments to demonstrate the practical effectiveness of the resulting model.
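A minimal sketch of the core idea, assuming (as an illustration, not the paper's implementation) that the learned operator acts as a dense matrix W on truncated Fourier coefficients; the equation of the second kind u = f + K[u] is then solved by fixed-point iteration directly in coefficient space:

```python
# Sketch of a spectral integral equation of the second kind, u = f + K[u].
# The operator K acts as a matrix W on the first `n_modes` Fourier modes of u.
# In the learned setting W would be a trainable parameter; here it is random,
# scaled to spectral norm < 1 so the fixed-point iteration converges.
import numpy as np

n_grid, n_modes = 256, 16
x = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
f = np.exp(np.sin(x))                                  # source term f(x)

rng = np.random.default_rng(0)
W = rng.standard_normal((n_modes, n_modes))
W *= 0.5 / np.linalg.norm(W, 2)                        # make K a contraction

def apply_K(u):
    """Spectral integral operator: FFT -> act on retained low modes -> inverse FFT."""
    c = np.fft.rfft(u)
    out = np.zeros_like(c)
    out[:n_modes] = W @ c[:n_modes]                    # mix the retained modes
    return np.fft.irfft(out, n=n_grid)

u = f.copy()                                           # Picard iteration for u = f + K[u]
for _ in range(60):
    u = f + apply_K(u)

print("residual:", np.max(np.abs(u - (f + apply_K(u)))))
```

In the actual model the spectral operator would be trained by backpropagating a loss through the solver; the sketch only illustrates the forward pass and why working in the spectral domain keeps the cost low (a small matrix acting on a few coefficients).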
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-Spheres algorithm.
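A hedged sketch of the control-variate construction summarized above: if a network G_theta is treated as an antiderivative, its derivative g_theta = G_theta' has a known integral, so for uniform Monte Carlo sampling on [a, b]:

```latex
% Control variate from an antiderivative network G_\theta (illustrative form):
\int_a^b g(x)\,dx \;\approx\;
\underbrace{G_\theta(b) - G_\theta(a)}_{\text{exact integral of } g_\theta = G_\theta'}
\;+\; \frac{b-a}{N}\sum_{i=1}^{N}\bigl(g(x_i) - G_\theta'(x_i)\bigr),
\qquad x_i \sim \mathcal{U}[a,b].
```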
arXiv Detail & Related papers (2024-09-23T06:04:28Z) - Towards Gaussian Process for operator learning: an uncertainty aware resolution independent operator learning algorithm for computational mechanics [8.528817025440746]
This paper introduces a novel Gaussian Process (GP) based neural operator for solving parametric differential equations.
We propose a "neural operator-embedded kernel", wherein the GP kernel is formulated in the latent space learned using a neural operator.
Our results highlight the efficacy of this framework in solving complex PDEs while maintaining robustness in uncertainty estimation.
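One plausible form of such a kernel (a sketch under that reading, not the paper's exact construction): a standard covariance applied in the latent space produced by a neural operator encoder Phi_theta, which keeps the GP resolution independent because Phi_theta accepts functions sampled at arbitrary discretizations:

```latex
% Illustrative "operator-embedded" kernel on input functions u, u':
k_\theta(u, u') \;=\; \sigma^2 \exp\!\left(-\frac{\lVert \Phi_\theta(u) - \Phi_\theta(u') \rVert^2}{2\ell^2}\right),
\qquad \Phi_\theta:\ \text{function space} \to \mathbb{R}^d \ \text{(neural operator encoder)}.
```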
arXiv Detail & Related papers (2024-09-17T08:12:38Z) - PINNIES: An Efficient Physics-Informed Neural Network Framework to Integral Operator Problems [0.0]
This paper introduces an efficient tensor-vector product technique for the approximation of integral operators within physics-informed deep learning frameworks.
We demonstrate the applicability of this method to both Fredholm and Volterra integral operators.
We also propose a fast matrix-vector product algorithm for efficiently computing the fractional Caputo derivative.
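A small sketch of the kind of matrix-vector approximation involved (illustrative, not the PINNIES implementation): a quadrature rule turns a Fredholm operator into a dense matrix product, and a lower-triangular mask of the same matrix gives the Volterra case:

```python
# Quadrature approximation of Fredholm and Volterra integral operators (sketch).
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)                # trapezoidal weights on [0, 1]

def kernel(s, t):
    return np.exp(-np.abs(s - t))           # example kernel k(x, t)

u = np.sin(np.pi * x)                       # example input function u(t)
K = kernel(x[:, None], x[None, :]) * w[None, :]  # kernel matrix scaled by weights

fredholm = K @ u                            # (Ku)(x_i) = sum_j w_j k(x_i, t_j) u(t_j)
volterra = np.tril(K) @ u                   # integrate only over t <= x_i (crude end-point handling)

print(fredholm[:3], volterra[:3])
```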
arXiv Detail & Related papers (2024-09-03T13:43:58Z) - Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
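The construction presumably rests on the standard linearized-Laplace identity (stated here only as a sketch): linearizing the operator network in its weights turns a Gaussian weight posterior into a Gaussian process over output functions:

```latex
% Linearization of a neural operator F_\theta around trained weights \theta^*:
F_\theta(u) \;\approx\; F_{\theta^*}(u) + J_{\theta^*}(u)\,(\theta - \theta^*),
\qquad \theta \sim \mathcal{N}(\theta^*, \Sigma)
\;\Rightarrow\;
F_\theta(u) \sim \mathcal{GP}\!\bigl(F_{\theta^*}(u),\; J_{\theta^*}(u)\,\Sigma\,J_{\theta^*}(u')^{\top}\bigr).
```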
arXiv Detail & Related papers (2024-06-07T16:43:54Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
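For context, the NCE objective referred to above can be written (with one noise sample per data sample and the log-normalizer absorbed into the model) as the logistic loss of a classifier between data and noise:

```latex
% Noise-contrastive estimation with noise distribution q (illustrative form):
h_\theta(x) = \sigma\bigl(\log \tilde p_\theta(x) - \log q(x)\bigr), \qquad
\mathcal{L}(\theta) = -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log h_\theta(x)\bigr]
                     -\,\mathbb{E}_{x \sim q}\bigl[\log\bigl(1 - h_\theta(x)\bigr)\bigr].
```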
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
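A sketch of the deep-equilibrium formulation described above (the exact update in the paper may differ): the restored image is defined as the fixed point of a gradient-style update combining the data-fidelity term with a learnable regularizer R_theta, and gradients flow through the fixed point via the implicit function theorem:

```latex
% Fixed-point (deep equilibrium) view of deconvolution with blur operator H:
x^{\star} = f_\theta(x^{\star}; y), \qquad
f_\theta(x; y) = x - \eta\,\bigl(H^{\top}(Hx - y) + \nabla R_\theta(x)\bigr),
\qquad
\frac{\partial x^{\star}}{\partial \theta}
  = \Bigl(I - \frac{\partial f_\theta}{\partial x}\Big|_{x^{\star}}\Bigr)^{-1}
    \frac{\partial f_\theta}{\partial \theta}\Big|_{x^{\star}}.
```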
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a function solves a differential equation uniquely.
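A one-line example of why identifiability is the crux here: a single solution generally does not pin down the equation, e.g.

```latex
% The same function solves two different ODEs:
u(x) = e^{x} \ \text{satisfies both}\ u' = u \ \text{and}\ u'' = u,
```

so additional conditions are needed before the governing equation can be recovered uniquely from a function.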
arXiv Detail & Related papers (2022-10-15T17:32:49Z) - Neural Integral Equations [3.087238735145305]
We introduce a method for learning unknown integral operators from data using an integral equation (IE) solver.
We also present Attentional Neural Integral Equations (ANIE), which replaces the integral with self-attention.
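That substitution can be read as follows (a sketch of the idea, not the exact ANIE parameterization): sampled function values play the role of quadrature points, and the attention weights act as a learned, data-dependent kernel:

```latex
% Integral operator vs. self-attention over samples x_1, ..., x_N of the domain:
(Ku)(x) = \int k(x, y)\,u(y)\,dy
\;\longrightarrow\;
(Ku)(x_i) \approx \sum_{j=1}^{N} \operatorname{softmax}_j\!\left(\frac{q_i \cdot k_j}{\sqrt{d}}\right) v_j,
\qquad q_i, k_j, v_j \ \text{learned projections of } u(x_i), u(x_j).
```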
arXiv Detail & Related papers (2022-09-30T02:32:17Z) - Personalized Algorithm Generation: A Case Study in Meta-Learning ODE Integrators [6.457555233038933]
We study the meta-learning of numerical algorithms for scientific computing.
We develop a machine learning approach that automatically learns solvers for initial value problems.
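As a concrete, purely illustrative instance of a learnable integrator, one can parameterize a two-stage explicit Runge-Kutta step and meta-learn its coefficients over a family of initial value problems:

```latex
% Two-stage explicit Runge--Kutta step with learnable coefficients (a_{21}, b_1, b_2):
k_1 = f(t_n, y_n), \qquad
k_2 = f\bigl(t_n + c_2 h,\; y_n + h\,a_{21} k_1\bigr), \qquad
y_{n+1} = y_n + h\,(b_1 k_1 + b_2 k_2),
\qquad b_1 + b_2 = 1 \ \text{(first-order consistency)}.
```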
arXiv Detail & Related papers (2021-05-04T05:42:33Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
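For context, the gradients in question come from the adjoint system of the neural ODE, which needs the state z(t) along the backward pass; the interpolation idea (sketched here) is to reconstruct z(t) from values stored during the forward solve rather than re-solving the ODE in reverse:

```latex
% Adjoint equations for dz/dt = f_\theta(z, t), with a(t) = \partial L / \partial z(t):
\frac{da}{dt} = -\,a(t)^{\top}\frac{\partial f_\theta}{\partial z}\bigl(z(t), t\bigr),
\qquad
\frac{dL}{d\theta} = -\int_{t_1}^{t_0} a(t)^{\top}\frac{\partial f_\theta}{\partial \theta}\bigl(z(t), t\bigr)\,dt,
```

with z(t) supplied by interpolating forward-pass checkpoints instead of integrating the state backward in time.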
arXiv Detail & Related papers (2020-03-11T13:15:57Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
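The mechanism behind such bounds can be sketched as follows: a stationary kernel is written as an expectation of Fourier features (Bochner's theorem), the expectation is replaced by a deterministic quadrature rule, and derivative observations reuse the same features differentiated in closed form:

```latex
% Quadrature Fourier feature approximation of a stationary kernel and its derivatives:
k(x, x') = \mathbb{E}_{\omega \sim p(\omega)}\bigl[\cos\bigl(\omega^{\top}(x - x')\bigr)\bigr]
\;\approx\; \sum_{m=1}^{M} \alpha_m \cos\bigl(\omega_m^{\top}(x - x')\bigr) = \phi(x)^{\top}\phi(x'),
\qquad
\frac{\partial k}{\partial x}(x, x') \approx \Bigl(\frac{\partial \phi(x)}{\partial x}\Bigr)^{\!\top}\phi(x'),
```

where the nodes omega_m and weights alpha_m are chosen deterministically by quadrature rather than sampled at random.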
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.