Learning Green's functions associated with parabolic partial
differential equations
- URL: http://arxiv.org/abs/2204.12789v1
- Date: Wed, 27 Apr 2022 09:23:39 GMT
- Title: Learning Green's functions associated with parabolic partial
differential equations
- Authors: Nicolas Boullé, Seick Kim, Tianyi Shi, Alex Townsend
- Abstract summary: We derive the first theoretically rigorous scheme for learning the associated Green's function $G$.
We extend the low-rank theory of Bebendorf and Hackbusch from elliptic PDEs in dimension $1\leq n\leq 3$ to parabolic PDEs in any dimension.
- Score: 1.5293427903448025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given input-output pairs from a parabolic partial differential equation (PDE)
in any spatial dimension $n\geq 1$, we derive the first theoretically rigorous
scheme for learning the associated Green's function $G$. Until now, rigorously
learning Green's functions associated with parabolic operators has been a major
challenge in the field of scientific machine learning because $G$ may not be
square-integrable when $n>1$, and time-dependent PDEs have transient dynamics.
By combining the hierarchical low-rank structure of $G$ together with the
randomized singular value decomposition, we construct an approximant to $G$
that achieves a relative error of
$\smash{\mathcal{O}(\Gamma_\epsilon^{-1/2}\epsilon)}$ in the $L^1$-norm with
high probability by using at most
$\smash{\mathcal{O}(\epsilon^{-\frac{n+2}{2}}\log(1/\epsilon))}$ input-output
training pairs, where $\Gamma_\epsilon$ is a measure of the quality of the
training dataset for learning $G$, and $\epsilon>0$ is sufficiently small.
Along the way, we extend the low-rank theory of Bebendorf and Hackbusch from
elliptic PDEs in dimension $1\leq n\leq 3$ to parabolic PDEs in any dimension,
which shows that Green's functions associated with parabolic PDEs admit a
low-rank structure on well-separated domains.
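
The learning scheme described above combines the hierarchical low-rank structure of $G$ with the randomized singular value decomposition, where the input-output training pairs are PDE solutions obtained from random forcing terms. The snippet below is a minimal NumPy sketch of the randomized-SVD ingredient on a discretized toy problem; the function names, grid size, target rank, and the 1D Laplace Green's function used for the demo are illustrative assumptions, not taken from the paper, whose actual scheme works hierarchically on well-separated space-time subdomains.

```python
import numpy as np

def randomized_low_rank(apply_G, n, rank, oversample=10, seed=0):
    """Approximate a discretized Green's function G from input-output pairs.

    apply_G(F) must return G @ F, i.e. the PDE solutions obtained by using
    each column of F as a forcing term (the "input-output training pairs").
    Assumes G is symmetric, so that the adjoint query G^T Q reduces to G Q.
    """
    rng = np.random.default_rng(seed)
    k = rank + oversample
    Omega = rng.standard_normal((n, k))   # random forcing terms (inputs)
    Y = apply_G(Omega)                    # corresponding solutions (outputs)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the range of G
    B = apply_G(Q).T                      # B = (G Q)^T = Q^T G for symmetric G
    U_small, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], S[:rank], Vt[:rank, :]

# Toy usage: Green's function of -u'' with zero Dirichlet data on [0, 1],
# G(x, y) = min(x, y) * (1 - max(x, y)), sampled on a uniform grid.
n = 200
x = np.linspace(0, 1, n)
G = np.minimum.outer(x, x) * (1 - np.maximum.outer(x, x))
U, S, Vt = randomized_low_rank(lambda F: G @ F, n, rank=20)
rel_err = np.linalg.norm(G - (U * S) @ Vt) / np.linalg.norm(G)
print(f"relative Frobenius error: {rel_err:.2e}")
```

For a non-symmetric $G$, one would need adjoint solves (or a two-sided sketch) at the step marked above; the paper's hierarchical construction exploits the low-rank property only on well-separated domains, which is what yields the sample complexity quoted in the abstract.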
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Interplay between depth and width for interpolation in neural ODEs [0.0]
We examine the interplay between their width $p$ and number of layer transitions $L$.
In the high-dimensional setting, we demonstrate that $p=O(N)$ neurons are likely sufficient to achieve exact control.
arXiv Detail & Related papers (2024-01-18T11:32:50Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Efficient Sampling of Stochastic Differential Equations with Positive
Semi-Definite Models [91.22420505636006]
This paper deals with the problem of efficient sampling from a differential equation, given the drift function and the diffusion matrix.
It is possible to obtain independent and identically distributed (i.i.d.) samples at precision $\varepsilon$ with a cost that is $m^2 d \log(1/\varepsilon)$.
Our results suggest that as the true solution gets smoother, we can circumvent the curse of dimensionality without requiring any sort of convexity.
arXiv Detail & Related papers (2023-03-30T02:50:49Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Underdetermined Dyson-Schwinger equations [0.0]
The paper examines the effectiveness of the Dyson-Schwinger equations as a calculational tool in quantum field theory.
The truncated DS equations give a sequence of approximants that converge slowly to a limiting value.
More sophisticated truncation schemes based on mean-field-like approximations do not fix this formidable calculational problem.
arXiv Detail & Related papers (2022-11-23T15:28:34Z) - Asymptotic Theory of $\ell_1$-Regularized PDE Identification from a
Single Noisy Trajectory [2.0299248281970956]
We prove the support recovery for a general class of linear and nonlinear evolutionary partial differential equation (PDE) identification from a single noisy trajectory.
We provide a set of sufficient conditions which guarantee that, from a single trajectory of data denoised by a Local-Polynomial filter, the support of $\mathbf{c}(\lambda)$ asymptotically converges to the true signed-support associated with the underlying PDE.
arXiv Detail & Related papers (2021-03-12T02:23:04Z) - Learning elliptic partial differential equations with randomized linear
algebra [2.538209532048867]
We show that one can construct an approximant to $G$ that converges almost surely.
The quantity $0<\Gamma_\epsilon\leq 1$ characterizes the quality of the training dataset.
arXiv Detail & Related papers (2021-01-31T16:57:59Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.