Efficient Learning of PDEs via Taylor Expansion and Sparse Decomposition into Value and Fourier Domains
- URL: http://arxiv.org/abs/2309.07344v1
- Date: Wed, 13 Sep 2023 22:48:30 GMT
- Title: Efficient Learning of PDEs via Taylor Expansion and Sparse Decomposition into Value and Fourier Domains
- Authors: Md Nasim, Yexiang Xue
- Abstract summary: Previous randomized methods apply only to a limited class of decomposable PDEs, which have sparse features in the value domain.
We propose Reel, which accelerates the learning of PDEs via random projection.
We provide empirical evidence that our proposed Reel can lead to faster learning of PDE models.
- Score: 12.963163500336066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accelerating the learning of Partial Differential Equations (PDEs) from
experimental data will speed up the pace of scientific discovery. Previous
randomized algorithms exploit sparsity in PDE updates for acceleration. However,
such methods are applicable to a limited class of decomposable PDEs, which have
sparse features in the value domain. We propose Reel, which accelerates the
learning of PDEs via random projection and has much broader applicability. Reel
exploits the sparsity by decomposing dense updates into sparse ones in both the
value and frequency domains. This decomposition enables efficient learning when
the source of the updates consists of gradually changing terms across large
areas (sparse in the frequency domain) in addition to a few rapid updates
concentrated in a small set of "interfacial" regions (sparse in the value
domain). Random projection is then applied to compress the sparse signals for
learning. To expand the model applicability, Taylor series expansion is used in
Reel to approximate the nonlinear PDE updates with polynomials in the
decomposable form. Theoretically, we derive a constant-factor approximation
between the projected loss function and the original one using only a
poly-logarithmic number of projected dimensions. Experimentally, we provide
empirical evidence that Reel leads to faster learning of PDE models (70-98%
reduction in training time when the data is compressed to 1% of its original
size) with quality comparable to that of the non-compressed models.
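The decomposition and compression scheme described above can be illustrated with a short sketch. The following NumPy snippet (a minimal illustration, not the authors' code; the 1-D grid, the synthetic field, and all names are assumptions made here) splits a dense update into a frequency-sparse part, recovered from a few dominant Fourier coefficients, and a value-sparse part concentrated at an "interfacial" front, then compresses each sparse signal with a Gaussian random projection that preserves squared norms, and hence quadratic losses, up to a constant factor:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096  # number of grid points in a 1-D discretization (assumed)

    # Synthetic dense update: a slowly varying field (sparse in the frequency
    # domain) plus a sharp front confined to a small "interfacial" region
    # (sparse in the value domain).
    x = np.linspace(0.0, 1.0, n)
    smooth = 0.5 * np.sin(2 * np.pi * x) + 0.2 * np.cos(6 * np.pi * x)
    front = np.where(np.abs(x - 0.5) < 0.005, 2.0, 0.0)
    update = smooth + front

    # Frequency-sparse component: keep only the k largest Fourier coefficients.
    k_freq = 16
    coeffs = np.fft.rfft(update)
    top = np.argsort(np.abs(coeffs))[-k_freq:]
    masked = np.zeros_like(coeffs)
    masked[top] = coeffs[top]
    freq_part = np.fft.irfft(masked, n)

    # Value-sparse component: threshold the residual so that only the few
    # rapidly changing grid points survive.
    residual = update - freq_part
    tau = 0.1  # illustrative threshold
    value_part = np.where(np.abs(residual) > tau, residual, 0.0)

    # Gaussian random projection: compress each sparse signal from n down to
    # m << n dimensions. Johnson-Lindenstrauss-style projections preserve
    # squared norms (and hence quadratic losses) up to a constant factor.
    m = 128
    P = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    z_freq = P @ freq_part
    z_value = P @ value_part

    # Check norm preservation: ||P s||^2 should be close to ||s||^2.
    for name, s, z in [("frequency", freq_part, z_freq),
                       ("value", value_part, z_value)]:
        ratio = np.linalg.norm(z) ** 2 / np.linalg.norm(s) ** 2
        print(f"{name}-sparse part: compressed/original squared norm = {ratio:.3f}")

The Taylor-expansion step that Reel uses to bring nonlinear PDE updates into this decomposable polynomial form is omitted; the sketch only illustrates the value/frequency split and the norm-preserving projection behind the constant-factor loss approximation.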
Related papers
- Active Learning for Neural PDE Solvers [18.665448858377694]
Active Learning could help surrogate models reach the same accuracy with smaller training sets.
We introduce AL4PDE, a modular active learning benchmark.
We show that AL reduces the average error by up to 71% compared to random sampling.
arXiv Detail & Related papers (2024-08-02T18:48:58Z)
- Physics-informed deep learning and compressive collocation for high-dimensional diffusion-reaction equations: practical existence theory and numerics [5.380276949049726]
We develop and analyze an efficient Deep Learning (DL) based solver for high-dimensional Partial Differential Equations.
We show, both theoretically and numerically, that it can compete with a novel stable and accurate compressive spectral collocation method.
arXiv Detail & Related papers (2024-06-03T17:16:11Z)
- Reduced-order modeling for parameterized PDEs via implicit neural representations [4.135710717238787]
We present a new data-driven reduced-order modeling approach to efficiently solve parametrized partial differential equations (PDEs).
The proposed framework encodes the PDE and utilizes a parametrized neural ODE (PNODE) to learn latent dynamics characterized by multiple PDE parameters.
We evaluate the proposed method at a large Reynolds number and obtain a speedup of up to O(10^3) with a 1% relative error against the ground-truth values.
arXiv Detail & Related papers (2023-11-28T01:35:06Z)
- PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers [40.097474800631]
Time-dependent partial differential equations (PDEs) are ubiquitous in science and engineering.
Deep neural network-based surrogates have attracted increasing interest.
arXiv Detail & Related papers (2023-08-10T17:53:05Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art performance and yields an average relative gain of 11.5% over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields better estimates of pattern latency than the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Learning to Accelerate Partial Differential Equations via Latent Global Evolution [64.72624347511498]
Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability.
We demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-15T17:31:24Z)
- CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations [5.551136447769071]
Excessive runtime of high-fidelity partial differential equation solvers makes them unsuitable for time-critical applications.
We propose to accelerate PDE solvers using reduced-order modeling (ROM).
Our approach builds a smooth, low-dimensional manifold of the continuous vector fields themselves, not their discretization.
arXiv Detail & Related papers (2022-06-06T13:27:21Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z)