Adaptive neural domain refinement for solving time-dependent
differential equations
- URL: http://arxiv.org/abs/2112.12517v2
- Date: Thu, 1 Sep 2022 05:58:50 GMT
- Title: Adaptive neural domain refinement for solving time-dependent
differential equations
- Authors: Toni Schneidereit and Michael Breuß
- Abstract summary: A classic approach for solving differential equations with neural networks builds upon neural forms, which employ the differential equation with a discretisation of the solution domain.
It would be desirable to transfer such important and successful strategies to the field of neural network based solutions.
We propose a novel adaptive neural approach to meet this aim for solving time-dependent problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A classic approach for solving differential equations with neural networks
builds upon neural forms, which employ the differential equation with a
discretisation of the solution domain. Making use of neural forms for
time-dependent differential equations, one can apply the recently developed
method of domain fragmentation. That is, the domain may be split into several
subdomains, on which the optimisation problem is solved. In classic adaptive
numerical methods, the mesh may be refined or the domain decomposed in order to
improve accuracy; the degree of approximation accuracy may also be adapted. It
would be desirable to transfer such important and
successful strategies to the field of neural network based solutions. In the
present work, we propose a novel adaptive neural approach to meet this aim for
solving time-dependent problems. To this end, each subdomain is reduced in size
until the optimisation is resolved up to a predefined training accuracy. In
addition, while the neural networks employed are small by default, we propose a
means to also adapt the number of neurons. We introduce conditions to
automatically confirm solution reliability and to optimise computational
parameters whenever necessary. Results are provided for several initial value
problems, illustrating important computational properties of the method. In
total, our approach not only allows a detailed analysis of the relation between
network error and numerical accuracy, but also enables reliable neural network
solutions over large computational domains.
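For concreteness, the following is a minimal sketch of the refinement loop the abstract describes, written in PyTorch: a small network is trained on each subdomain through a neural form that enforces the initial condition exactly, and the subdomain is halved whenever training misses a prescribed tolerance. All function names, network sizes, and tolerances here are illustrative assumptions, not the authors' implementation.

    # Hedged sketch of the adaptive subdomain loop described above; names,
    # network sizes, and tolerances are illustrative assumptions.
    import torch

    def neural_form(net, t, t0, u0):
        # Trial solution u(t) = u0 + (t - t0) * N(t); the initial condition
        # u(t0) = u0 holds by construction, so only the residual is trained.
        return u0 + (t - t0) * net(t)

    def train_subdomain(net, f, t0, t1, u0, n_col=20, tol=1e-6, epochs=2000):
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        t = torch.linspace(t0, t1, n_col).reshape(-1, 1).requires_grad_(True)
        for _ in range(epochs):
            u = neural_form(net, t, t0, u0)
            du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
            loss = ((du - f(t, u)) ** 2).mean()   # residual of u'(t) = f(t, u)
            opt.zero_grad(); loss.backward(); opt.step()
            if loss.item() < tol:
                return True                       # subdomain resolved
        return False                              # tolerance missed: refine

    def adaptive_solve(f, t0, t_end, u0, dt0=1.0, tol=1e-6):
        t, dt, u = t0, dt0, torch.tensor([[u0]])
        while t < t_end:
            dt = min(dt, t_end - t)
            net = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.Tanh(),
                                      torch.nn.Linear(10, 1))
            if not train_subdomain(net, f, t, t + dt, u, tol=tol):
                dt /= 2                           # shrink the subdomain, retry
                continue
            with torch.no_grad():                 # endpoint value becomes the
                u = neural_form(net, torch.tensor([[t + dt]]), t, u)
            t += dt                               # next initial condition
        return u

Under these assumptions, adaptive_solve(lambda t, u: -u, 0.0, 5.0, 1.0) would march the decaying exponential u' = -u, u(0) = 1 across [0, 5] piece by piece; the paper's adaptive neuron-count adjustment would additionally grow the hidden layer when refinement alone does not reach the tolerance.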
Related papers
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- A comparison of rational and neural network based approximations [0.0]
We compare the efficiency of function approximation using rational approximations, neural networks, and their combinations.
It was found that rational approximation is superior to neural network-based approaches with the same number of decision variables.
arXiv Detail & Related papers (2023-03-08T08:31:06Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
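As a rough illustration of such residual-driven allocation (a sketch of the generic pattern, not necessarily the paper's exact scheme; residual_fn is an assumed stand-in for the PINN residual evaluator), collocation points can be resampled with probability proportional to the current residual magnitude:

    # Hedged sketch: resample collocation points where the residual is large.
    import torch

    def resample_collocation(residual_fn, lo, hi, n_points, n_candidates=10000):
        cand = (lo + (hi - lo) * torch.rand(n_candidates, 1)).requires_grad_(True)
        r = residual_fn(cand).abs().squeeze(-1).detach()
        probs = (r + 1e-12) / (r + 1e-12).sum()   # weight by residual magnitude
        idx = torch.multinomial(probs, n_points, replacement=False)
        return cand[idx].detach()                 # denser where the model errs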
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
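A minimal message-passing update of the kind such solvers stack might look as follows; this is purely illustrative of the generic pattern, not the paper's architecture:

    # Illustrative message-passing layer on a mesh graph (sketch only).
    import torch

    class MessagePassingLayer(torch.nn.Module):
        def __init__(self, dim):
            super().__init__()
            def mlp():
                return torch.nn.Sequential(torch.nn.Linear(2 * dim, dim),
                                           torch.nn.ReLU(),
                                           torch.nn.Linear(dim, dim))
            self.msg, self.upd = mlp(), mlp()

        def forward(self, h, src, dst):
            # h: (N, dim) node features; src, dst: (E,) edge index tensors
            m = self.msg(torch.cat([h[src], h[dst]], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, m)   # sum messages
            return h + self.upd(torch.cat([h, agg], dim=-1))  # residual update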
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
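For reference, the textbook big-$M$ encoding of a single ReLU neuron $y = \max(0, w^\top x + b)$ with bounds $L \le w^\top x + b \le U$ (and $L < 0 < U$) reads as follows; this is a standard formulation, not necessarily the paper's exact one:

    $y \ge w^\top x + b$, $\quad y \ge 0$, $\quad y \le w^\top x + b - L(1 - z)$, $\quad y \le U z$, $\quad z \in \{0, 1\}$.

The binary variable $z$ selects whether the neuron is active; stacking these constraints layer by layer yields a mixed-integer program over the whole network.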
arXiv Detail & Related papers (2021-12-13T20:50:54Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Collocation Polynomial Neural Forms and Domain Fragmentation for solving Initial Value Problems [0.0]
Several neural network approaches for solving differential equations employ trial solutions with a feedforward neural network.
We consider time-dependent initial value problems, which require the neural form framework to be set up adequately.
We show that the combination of higher-order collocation neural forms and domain fragmentation allows initial value problems to be solved over large domains with high accuracy and reliability.
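As a hedged sketch of the underlying idea (the paper's precise collocation construction may differ), a higher-order neural form multiplies networks by increasing powers of $(t - t_0)$ so that the initial condition is satisfied exactly:

    # Illustrative higher-order polynomial neural form (sketch only).
    import torch

    def polynomial_neural_form(nets, t, t0, u0):
        u = u0
        for k, net in enumerate(nets, start=1):
            u = u + (t - t0) ** k * net(t)     # each term vanishes at t = t0
        return u

Domain fragmentation then trains one such form per subdomain and chains the endpoint value of each subdomain as the initial condition of the next.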
arXiv Detail & Related papers (2021-03-29T08:19:26Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Computational characteristics of feedforward neural networks for solving a stiff differential equation [0.0]
We study the solution of a simple but fundamental stiff ordinary differential equation modelling a damped system.
We show that it is possible to identify preferable choices for parameters and methods.
Overall, we extend the current literature by showing what can be done to obtain reliable and accurate results with the neural network approach.
arXiv Detail & Related papers (2020-12-03T12:22:24Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- ODEN: A Framework to Solve Ordinary Differential Equations using Artificial Neural Networks [0.0]
We employ a specific loss function, which does not require knowledge of the exact solution, to evaluate the networks' performance.
Neural networks are shown to be proficient at approximating continuous solutions within their training domains.
A user-friendly and adaptable open-source code (ODE$\mathcal{N}$) is provided on GitHub.
arXiv Detail & Related papers (2020-05-28T15:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.