From Monte Carlo to neural networks approximations of boundary value
problems
- URL: http://arxiv.org/abs/2209.01432v2
- Date: Mon, 4 Dec 2023 17:51:53 GMT
- Title: From Monte Carlo to neural networks approximations of boundary value
problems
- Authors: Lucian Beznea, Iulian Cimpean, Oana Lupascu-Stamate, Ionel Popescu,
Arghir Zarnescu
- Abstract summary: We show that the solution to the Poisson equation can be numerically approximated in the sup-norm by Monte Carlo methods.
We also show that the obtained Monte Carlo solver renders, in a constructive way, ReLU deep neural network approximations of the solution.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we study probabilistic and neural network approximations for
solutions to the Poisson equation subject to H\"older data in general bounded
domains of $\mathbb{R}^d$. We aim at two fundamental goals.
The first, and the most important, is to show that the solution to the Poisson
equation can be numerically approximated in the sup-norm by Monte Carlo
methods, and that this can be done highly efficiently if we use a modified
version of the walk on spheres algorithm as an acceleration method. This
provides estimates which are efficient with respect to the prescribed
approximation error and have polynomial complexity in the dimension and the
reciprocal of the error. A crucial feature is that the overall number of
samples does not depend on the point at which the approximation is performed.
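For concreteness, the classical (unaccelerated) walk on spheres estimator that underlies this result can be sketched in a few lines of Python. The unit-ball test case, the function names, and the locally-constant-source simplification below are illustrative choices of ours; the paper's modified, accelerated variant is not reproduced here.

```python
import numpy as np

def walk_on_spheres_poisson(x0, f, g, dist_to_boundary, closest_boundary_point,
                            n_samples=10_000, eps=1e-4, rng=None):
    """Classical walk-on-spheres estimate of u(x0) for -Laplace(u) = f, u = g on the boundary.

    Each walker repeatedly jumps to a uniform point on the largest sphere
    centred at its current position that fits inside the domain; the source
    term is accumulated with the ball's Green mass r^2 / (2d), which treats
    f as locally constant on each ball (a common simplification).
    """
    rng = np.random.default_rng(rng)
    d = len(x0)
    total = 0.0
    for _ in range(n_samples):
        x = np.asarray(x0, dtype=float).copy()
        acc = 0.0
        while True:
            r = dist_to_boundary(x)
            if r < eps:                          # near the boundary: stop and read off g
                acc += g(closest_boundary_point(x))
                break
            acc += f(x) * r * r / (2.0 * d)      # source contribution of the ball B(x, r)
            v = rng.standard_normal(d)
            x = x + r * v / np.linalg.norm(v)    # uniform point on the sphere of radius r
        total += acc
    return total / n_samples

# Sanity check on the unit ball: u(x) = 1 - |x|^2 solves -Laplace(u) = 2d with u = 0 on the boundary.
d = 5
print(walk_on_spheres_poisson(
    np.zeros(d),
    f=lambda x: 2.0 * d,
    g=lambda x: 0.0,
    dist_to_boundary=lambda x: 1.0 - np.linalg.norm(x),
    closest_boundary_point=lambda x: x / np.linalg.norm(x),
    n_samples=2_000, rng=0), "vs exact u(0) = 1.0")
```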
As a second goal, we show that the obtained Monte Carlo solver renders, in a
constructive way, ReLU deep neural network (DNN) solutions to the Poisson problem,
whose sizes depend at most polynomially on the dimension $d$ and on the desired
error. In fact we show that the random DNN provides, with high probability, a
small approximation error and low polynomial complexity in the dimension.
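The passage from Monte Carlo samples to a ReLU network is done constructively in the paper. As a rough illustration only, one can also distill the Monte Carlo oracle into a small ReLU network by plain regression; this is not the paper's constructive encoding, and all sizes below are arbitrary. The sketch reuses `walk_on_spheres_poisson` from above.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_train = 3, 128

# Training points inside the ball of radius 0.9, labelled by the Monte Carlo solver.
X = rng.standard_normal((n_train, d))
X = X / np.linalg.norm(X, axis=1, keepdims=True)
X = X * 0.9 * rng.uniform(size=(n_train, 1)) ** (1.0 / d)
# The same seed for every x: the sample budget (and randomness) does not
# depend on the evaluation point, echoing the feature stressed in the abstract.
y = np.array([
    walk_on_spheres_poisson(x, f=lambda z: 2.0 * d, g=lambda z: 0.0,
                            dist_to_boundary=lambda z: 1.0 - np.linalg.norm(z),
                            closest_boundary_point=lambda z: z / np.linalg.norm(z),
                            n_samples=100, rng=0)
    for x in X
])

# One-hidden-layer ReLU network fitted by full-batch gradient descent.
m = 64
W1 = rng.standard_normal((d, m)) / np.sqrt(d)
b1 = np.zeros(m)
w2 = rng.standard_normal(m) / np.sqrt(m)
b2 = 0.0
lr = 1e-2
for _ in range(3000):
    H = np.maximum(X @ W1 + b1, 0.0)          # hidden ReLU features
    err = H @ w2 + b2 - y                     # residual of the squared loss
    gH = np.outer(err, w2) * (H > 0)          # backprop through the ReLU
    W1 -= lr * X.T @ gH / n_train
    b1 -= lr * gH.mean(axis=0)
    w2 -= lr * H.T @ err / n_train
    b2 -= lr * err.mean()

print("DNN at 0:", float(np.maximum(b1, 0.0) @ w2 + b2), "exact u(0):", 1.0)
```

The point of the paper, by contrast, is that the network can be written down directly from the sampled randomness, with size polynomial in $d$ and in the reciprocal of the error.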
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
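The summary above does not spell out the training objective. One plausible WoS-based objective, in the notation of the walk-on-spheres sketch earlier (offered as a hedged reading of the general approach, not necessarily NWoS's exact loss), is

$$\mathcal{L}(\theta)=\mathbb{E}_{x\in\Omega}\Big[\Big(u_\theta(x)-\tfrac{r(x)^2}{2d}f(x)-u_{\bar\theta}\big(x+r(x)\,\omega\big)\Big)^2\Big]+\lambda\,\mathbb{E}_{y\in\partial\Omega}\big[(u_\theta(y)-g(y))^2\big],$$

where $r(x)$ is the distance from $x$ to $\partial\Omega$, $\omega$ is uniform on the unit sphere, $\bar\theta$ is a stop-gradient copy of the parameters, and $\lambda>0$ weights the boundary term.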
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
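For context, the bounded norms in question are typically path-norms of the two-layer ReLU parametrization (the standard object in this literature; the paper's exact variant may differ):

$$f_\theta(x)=\sum_{i=1}^{m}a_i\,\sigma(\langle w_i,x\rangle+b_i),\qquad \|f_\theta\|=\sum_{i=1}^{m}|a_i|\,\big(\|w_i\|_2+|b_i|\big),\qquad \sigma(t)=\max(t,0).$$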
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
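The gradient descent-ascent loop referenced above is, in its simplest finite-dimensional form, the following generic scheme (a sketch of the classical algorithm, not the paper's mean-field dynamics):

```python
def gda(grad_x, grad_y, x, y, lr=1e-2, steps=5000):
    """Simultaneous gradient descent-ascent for min_x max_y f(x, y)."""
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - lr * gx   # descent step for the min player
        y = y + lr * gy   # ascent step for the max player
    return x, y

# Toy saddle problem f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 with saddle point (0, 0).
x, y = gda(lambda x, y: x + y, lambda x, y: x - y, x=1.0, y=-1.0)
print(x, y)  # both tend to 0
```

Simultaneous updates are the simplest variant; the paper analyzes a mean-field limit of such dynamics over two-layer networks.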
- Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimizer of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z)
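The stated threshold is easy to verify numerically:

$$\sqrt{84/83}-1=\sqrt{1.012048\ldots}-1\approx 1.006006-1\approx 0.006.$$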
- A multiobjective continuation method to compute the regularization path of deep neural networks [1.3654846342364308]
Sparsity is a highly desirable feature in deep neural networks (DNNs) since it ensures numerical efficiency and improves the interpretability and robustness of models.
We present an algorithm that traces the entire sparse front for the above-mentioned objectives in a very efficient manner for high-dimensional models with millions of parameters.
We demonstrate that knowledge of the regularization path allows for a well-generalizing network parametrization.
arXiv Detail & Related papers (2023-08-23T10:08:52Z)
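A regularization path in the simplest convex setting can be traced by warm-started proximal gradient steps over a decreasing grid of $\ell_1$ penalties. The sketch below is this classical baseline, not the paper's multiobjective continuation method for DNNs:

```python
import numpy as np

def lasso_path(X, y, lambdas, lr=None, iters=500):
    """Trace the l1-regularization path by warm-started ISTA.

    Each solution is the starting point for the next (smaller) penalty,
    which is the core idea behind path-following/continuation.
    """
    n, p = X.shape
    lr = lr or 1.0 / np.linalg.norm(X, 2) ** 2   # safe step size for the smooth part
    w = np.zeros(p)
    path = []
    for lam in sorted(lambdas, reverse=True):    # from strong to weak regularization
        for _ in range(iters):
            grad = X.T @ (X @ w - y) / n
            z = w - lr * grad
            w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # soft threshold
        path.append((lam, w.copy()))
    return path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.standard_normal(100)
for lam, w in lasso_path(X, y, lambdas=[1.0, 0.3, 0.1, 0.03, 0.01]):
    print(lam, int((np.abs(w) > 1e-8).sum()), "nonzeros")
```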
- Advancing Algorithm to Scale and Accurately Solve Quantum Poisson Equation on Near-term Quantum Hardware [0.0]
We present an advanced quantum algorithm for solving the Poisson equation with high accuracy and dynamically tunable problem size.
Particularly, in this work we present an advanced circuit that ensures the accuracy of the solution by implementing non-truncated eigenvalues.
We show that our algorithm not only increases the accuracy of the solutions but also composes more practical and scalable circuits.
arXiv Detail & Related papers (2022-10-29T18:50:40Z)
- Last iterate convergence of SGD for Least-Squares in the Interpolation regime [19.05750582096579]
We study the noiseless model in the fundamental least-squares setup.
We assume that an optimum predictor fits perfectly inputs and outputs, $\langle \theta_*, \phi(X) \rangle = Y$, where $\phi(X)$ stands for a possibly infinite-dimensional non-linear feature map.
arXiv Detail & Related papers (2021-02-05T14:02:20Z)
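In this setup the recursion whose last iterate is analyzed is the standard least-squares SGD step (standard notation; the step-size schedule $\gamma_t$ is left abstract):

$$\theta_{t+1}=\theta_t-\gamma_t\big(\langle\theta_t,\phi(X_t)\rangle-Y_t\big)\,\phi(X_t),$$

with i.i.d. samples $(X_t, Y_t)$ and, in the noiseless model, $Y_t=\langle\theta_*,\phi(X_t)\rangle$.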
- Multiscale regression on unknown manifolds [13.752772802705978]
We construct low-dimensional coordinates on $\mathcal{M}$ at multiple scales and perform multiscale regression by local fitting.
We analyze the generalization error of our method by proving finite sample bounds in high probability on rich classes of priors.
Our algorithm has quasilinear complexity in the sample size, with constants linear in $D$ and exponential in $d$.
arXiv Detail & Related papers (2021-01-13T15:14:31Z)
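The flavour of multiscale local fitting can be illustrated with a toy nearest-neighbour local least-squares estimator evaluated at several neighbourhood sizes (an illustration of local fitting only, not the paper's construction on $\mathcal{M}$):

```python
import numpy as np

def local_linear_predict(X, y, x_query, k):
    """Fit an affine model on the k nearest neighbours of x_query."""
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])        # affine design matrix
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(500)
x0 = np.array([0.2, -0.3])
for k in (10, 40, 160):                             # three scales: fine to coarse
    print(k, local_linear_predict(X, y, x0, k))
print("truth:", np.sin(3 * x0[0]) + x0[1] ** 2)
```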
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suited to neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
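The linear-complexity, all-ranges claim is reminiscent of multigrid: exchange messages at the fine level, pool to a coarser graph, exchange there, and prolong back. A toy V-cycle-style pass (our simplified reading, not the paper's architecture):

```python
import numpy as np

def multilevel_aggregate(h, adj_fine, clusters, adj_coarse):
    """One fine -> coarse -> fine pass over node features h.

    clusters[i] gives the coarse node each fine node belongs to; all
    operations are sparse-friendly and linear in the number of edges.
    """
    n_coarse = clusters.max() + 1
    h = h + adj_fine @ h                      # short-range message passing
    hc = np.zeros((n_coarse, h.shape[1]))
    np.add.at(hc, clusters, h)                # restrict: pool features by cluster
    hc = hc + adj_coarse @ hc                 # long-range messages on the coarse graph
    return h + hc[clusters]                   # prolong coarse info back to fine nodes

# Toy instance: a path graph of 8 nodes coarsened into 4 pairs.
n = 8
adj_fine = np.zeros((n, n))
for i in range(n - 1):
    adj_fine[i, i + 1] = adj_fine[i + 1, i] = 0.5
clusters = np.arange(n) // 2
adj_coarse = np.zeros((4, 4))
for i in range(3):
    adj_coarse[i, i + 1] = adj_coarse[i + 1, i] = 0.5
h = np.eye(n)[:, :3]                          # dummy 3-dimensional node features
print(multilevel_aggregate(h, adj_fine, clusters, adj_coarse).shape)
```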
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.