Neural Spectral Methods: Self-supervised learning in the spectral domain
- URL: http://arxiv.org/abs/2312.05225v2
- Date: Fri, 19 Jan 2024 03:34:11 GMT
- Title: Neural Spectral Methods: Self-supervised learning in the spectral domain
- Authors: Yiheng Du, Nithin Chalapathi, Aditi Krishnapriyan
- Abstract summary: We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs).
Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients.
Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in terms of speed and accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Neural Spectral Methods, a technique to solve parametric Partial
Differential Equations (PDEs), grounded in classical spectral methods. Our
method uses orthogonal bases to learn PDE solutions as mappings between
spectral coefficients. In contrast to current machine learning approaches which
enforce PDE constraints by minimizing the numerical quadrature of the residuals
in the spatiotemporal domain, we leverage Parseval's identity and introduce a
new training strategy through a \textit{spectral loss}. Our spectral loss
enables more efficient differentiation through the neural network, and
substantially reduces training complexity. At inference time, the computational
cost of our method remains constant, regardless of the spatiotemporal
resolution of the domain. Our experimental results demonstrate that our method
significantly outperforms previous machine learning approaches in terms of
speed and accuracy by one to two orders of magnitude on multiple different
problems. When compared to numerical solvers of the same accuracy, our method
demonstrates a $10\times$ increase in performance speed.
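To make the spectral loss concrete, the following is a minimal sketch of the idea as the abstract describes it (our illustration, not the authors' code). On a 1D periodic Poisson problem, differentiation is diagonal in the Fourier basis, so the PDE residual is available directly in coefficient space; by Parseval's identity, the squared norm of the residual coefficients equals the grid-based mean squared residual, so no spatiotemporal quadrature is needed. The problem choice and all names are our assumptions.
```python
# Sketch of a Parseval-based spectral loss for u'' = f on a periodic domain.
# A random coefficient vector stands in for the network's predicted solution.
import numpy as np

N = 64                                  # number of Fourier modes
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers on [0, 2*pi)

rng = np.random.default_rng(0)
u_hat = rng.normal(size=N) + 1j * rng.normal(size=N)  # stand-in network output
f_hat = rng.normal(size=N) + 1j * rng.normal(size=N)  # coefficients of f

# Differentiation is diagonal in the Fourier basis: (u'')_hat = -(k**2) * u_hat,
# so the residual of u'' - f = 0 never has to leave coefficient space.
r_hat = -(k ** 2) * u_hat - f_hat

# Spectral loss: squared l2 norm of the residual coefficients (normalized).
spectral_loss = np.sum(np.abs(r_hat) ** 2) / N ** 2

# The physical-space alternative: mean squared residual sampled on a grid.
r_grid = np.fft.ifft(r_hat)
grid_loss = np.mean(np.abs(r_grid) ** 2)

assert np.allclose(spectral_loss, grid_loss)  # Parseval's identity
```
Because the loss never touches a grid, its cost is governed by the number of retained modes rather than the domain resolution, consistent with the constant inference cost claimed above.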
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
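For background, below is a hedged sketch of the classical walk-on-spheres estimator that NWoS builds on, in the zero-source (Laplace) special case of the Poisson problem. The unit-disk domain, the boundary data, and all names are our choices, not the paper's.
```python
# Classical walk-on-spheres: estimate u(x0) for laplace(u) = 0 on the unit
# disk with u = g on the boundary. Boundary data g(x, y) = x has the harmonic
# extension u(x, y) = x, which makes the estimate easy to check.
import numpy as np

def walk_on_spheres(x0, g, n_walks=20000, eps=1e-3, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            d = 1.0 - np.linalg.norm(x)     # distance to the unit circle
            if d < eps:                     # close enough: stop this walk
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + d * np.array([np.cos(theta), np.sin(theta)])  # jump
        b = x / np.linalg.norm(x)           # project onto the boundary
        total += g(b[0], b[1])
    return total / n_walks

print(walk_on_spheres((0.3, 0.2), g=lambda x, y: x))  # approx 0.3
```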
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z)
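As a rough illustration of the setting (our sketch, not the paper's algorithm): clients transmit gradients simultaneously, the channel delivers their noisy sum ("over the air"), and the server applies an AdaGrad update. The toy losses, noise model, and names are our assumptions.
```python
# Server-side AdaGrad on over-the-air aggregated client gradients.
import numpy as np

def fed_adagrad_over_air(grad_fns, w0, rounds=500, lr=0.5,
                         noise_std=0.01, eps=1e-8, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.array(w0, dtype=float)
    accum = np.zeros_like(w)                 # AdaGrad's running sum of g**2
    for _ in range(rounds):
        # Over-the-air aggregation: the channel sums the clients' signals,
        # so the server observes the average gradient plus additive noise.
        g = sum(f(w) for f in grad_fns) / len(grad_fns)
        g = g + rng.normal(scale=noise_std, size=w.shape)
        accum += g ** 2
        w -= lr * g / (np.sqrt(accum) + eps)  # AdaGrad step
    return w

# Two clients with quadratic losses centered at 1 and 3; the optimum of the
# averaged objective is w = 2.
clients = [lambda w: 2.0 * (w - 1.0), lambda w: 2.0 * (w - 3.0)]
print(fed_adagrad_over_air(clients, w0=np.array([0.0])))  # approx [2.]
```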
- Spectral operator learning for parametric PDEs without data reliance [6.7083321695379885]
We introduce a novel operator learning-based approach for solving parametric partial differential equations (PDEs) without the need for training data.
The proposed framework demonstrates superior performance compared to existing scientific machine learning techniques.
arXiv Detail & Related papers (2023-10-03T12:37:15Z) - A Spectral Approach for Learning Spatiotemporal Neural Differential
Equations [0.0]
We propose a neural-ODE based method that uses spectral expansions in space to learn differential equations on unbounded domains.
By developing a spectral framework for learning both PDEs and integro-differential equations, we extend machine learning methods to a larger class of problems.
arXiv Detail & Related papers (2023-09-28T03:22:49Z)
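A minimal sketch of the mechanism as we read this entry: represent u(x, t) by spectral coefficients a_k(t) and evolve them with a learned vector field da/dt = f_theta(a), i.e. a neural ODE on coefficients. Below, a known linear field (the heat equation, diagonal in the Fourier basis) stands in for the trained network; everything else is our assumption.
```python
# Evolve spectral coefficients with an ODE integrator instead of grid values.
import numpy as np

N = 16
k = np.fft.fftfreq(N, d=1.0 / N)

def f_theta(a):                 # stand-in for a learned vector field on a_k
    return -(k ** 2) * a        # heat equation: da_k/dt = -k^2 a_k

def rk4_step(a, dt):            # classic 4th-order Runge-Kutta step
    k1 = f_theta(a)
    k2 = f_theta(a + 0.5 * dt * k1)
    k3 = f_theta(a + 0.5 * dt * k2)
    k4 = f_theta(a + dt * k3)
    return a + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
a = np.fft.fft(np.sin(x))       # initial condition u(x, 0) = sin(x)
for _ in range(100):
    a = rk4_step(a, dt=0.01)    # integrate the coefficients to t = 1

u_t1 = np.fft.ifft(a).real      # reconstruct u(x, 1) on the grid afterwards
assert np.allclose(u_t1, np.exp(-1.0) * np.sin(x), atol=1e-4)  # exact decay
```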
- Temporal Difference Learning for High-Dimensional PIDEs with Jumps [12.734467096363762]
We introduce a set of Lévy processes and construct a corresponding reinforcement learning model.
To simulate the entire process, we use deep neural networks to represent the solutions and non-local terms of the equations.
The relative error of the method reaches $O(10^{-3})$ in 100-dimensional experiments and $O(10^{-4})$ in one-dimensional pure jump problems.
arXiv Detail & Related papers (2023-07-06T04:27:16Z) - Locally Regularized Neural Differential Equations: Some Black Boxes Were
Meant to Remain Closed! [3.222802562733787]
Implicit layer deep learning techniques, like Neural Differential Equations, have become an important modeling framework.
We develop two sampling strategies to trade off between performance and training time.
Our method reduces the number of function evaluations to 0.556-0.733x and accelerates predictions by 1.3-2x.
arXiv Detail & Related papers (2023-03-03T23:31:15Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervised time points, and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
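For reference, a minimal version of the Fourier Neural Operator spectral layer named in this entry (our sketch; the paper's hyper-network wrapper is omitted): transform to Fourier space, apply learned complex weights to the lowest modes, and transform back.
```python
# A 1D FNO-style spectral convolution layer.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                 # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)          # real FFT along the grid axis
        out_ft = torch.zeros_like(x_ft)
        # Mix channels mode-by-mode on the retained low frequencies only.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

u = torch.randn(4, 8, 64)                 # batch of 1D fields, 8 channels
print(SpectralConv1d(channels=8, modes=12)(u).shape)  # torch.Size([4, 8, 64])
```
Because only the lowest modes carry weights, the same layer applies unchanged to inputs of any grid resolution.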
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
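To illustrate the containment claim, here is a toy check (ours, not the paper's) that one round of message passing with a hand-set linear message function reproduces the centered finite-difference Laplacian on a 1D grid.
```python
# Message passing on a path graph recovering (u[i-1] - 2u[i] + u[i+1]) / h**2.
import numpy as np

def message_passing_step(u, h, message):
    """Each interior node sums the messages from its two neighbors."""
    out = np.zeros_like(u)
    for i in range(1, len(u) - 1):
        out[i] = message(u[i], u[i - 1], h) + message(u[i], u[i + 1], h)
    return out

# A linear "message": the neighbor's deviation scaled by 1/h**2. Summed over
# both neighbors, this is exactly the second-difference stencil.
fd_message = lambda ui, uj, h: (uj - ui) / h ** 2

x = np.linspace(0.0, 2.0 * np.pi, 200)
h = x[1] - x[0]
lap = message_passing_step(np.sin(x), h, fd_message)
assert np.allclose(lap[1:-1], -np.sin(x)[1:-1], atol=1e-3)  # (sin)'' = -sin
```
A trained solver replaces the hand-set message with a neural network, which is what lets it contain such classical schemes as special cases.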
- DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - On Learning Rates and Schr\"odinger Operators [105.32118775014015]
We present a general theoretical analysis of the effect of the learning rate.
We find that the learning rate tends to zero for a broad class of non-neural functions.
arXiv Detail & Related papers (2020-04-15T09:52:37Z)