Solving the Discretised Neutron Diffusion Equations using Neural Networks
- URL: http://arxiv.org/abs/2301.09939v1
- Date: Tue, 24 Jan 2023 11:46:09 GMT
- Title: Solving the Discretised Neutron Diffusion Equations using Neural Networks
- Authors: T. R. F. Phillips, C. E. Heaney, C. Boyang, A. G. Buchan, C. C. Pain
- Abstract summary: We describe how to represent numerical discretisations arising from the finite volume and finite element methods as convolutional layers within a neural network.
As the weights are defined by the discretisation scheme, no training of the network is required.
We show how to implement the Jacobi method and a multigrid solver using the functions available in AI libraries.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a new approach which uses the tools within Artificial
Intelligence (AI) software libraries as an alternative way of solving partial
differential equations (PDEs) that have been discretised using standard
numerical methods. In particular, we describe how to represent numerical
discretisations arising from the finite volume and finite element methods by
pre-determining the weights of convolutional layers within a neural network. As
the weights are defined by the discretisation scheme, no training of the
network is required and the solutions obtained are identical (accounting for
solver tolerances) to those obtained with standard codes often written in
Fortran or C++. We also explain how to implement the Jacobi method and a
multigrid solver using the functions available in AI libraries. For the latter,
we use a U-Net architecture which is able to represent a sawtooth multigrid
method. A benefit of using AI libraries in this way is that one can exploit
their power and their built-in technologies. For example, their executions are
already optimised for different computer architectures, whether it be CPUs,
GPUs or new-generation AI processors. In this article, we apply the proposed
approach to eigenvalue problems in reactor physics where neutron transport is
described by diffusion theory. For a fuel assembly benchmark, we demonstrate
that the solution obtained from our new approach is the same (accounting for
solver tolerances) as that obtained from the same discretisation coded in a
standard way using Fortran. We then proceed to solve a reactor core benchmark
using the new approach.
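The core idea of the abstract can be sketched in a few lines: for a five-point finite-difference discretisation of the diffusion operator on a structured grid, one Jacobi sweep is exactly a convolution whose kernel weights are fixed by the stencil, so no training is needed. The following is a minimal pure-Python illustration of that correspondence (our own toy set-up for the Laplace equation with Dirichlet boundaries, not the authors' code; in practice the kernel would be loaded into a convolutional layer of an AI library).

```python
# One Jacobi sweep for the 2D Laplace equation written as a convolution
# with FIXED weights taken from the 5-point stencil. Illustrative analogue
# of pre-determining conv-layer weights; names and set-up are ours.

KERNEL = [[0.0, 0.25, 0.0],
          [0.25, 0.0, 0.25],
          [0.0, 0.25, 0.0]]  # Jacobi update weights for Laplace(u) = 0

def jacobi_sweep(u):
    """Apply the stencil to interior points; boundary values stay fixed."""
    n, m = len(u), len(u[0])
    out = [row[:] for row in u]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            out[i][j] = sum(KERNEL[di][dj] * u[i - 1 + di][j - 1 + dj]
                            for di in range(3) for dj in range(3))
    return out

def solve(u, tol=1e-10, max_iters=10000):
    """Iterate Jacobi sweeps until successive iterates stop changing."""
    for _ in range(max_iters):
        v = jacobi_sweep(u)
        if max(abs(v[i][j] - u[i][j]) for i in range(len(u))
               for j in range(len(u[0]))) < tol:
            return v
        u = v
    return u

# Unit square, u = 1 on the top edge, 0 on the other three edges.
N = 9
grid = [[0.0] * N for _ in range(N)]
grid[0] = [1.0] * N
sol = solve(grid)
```

By symmetry the exact discrete solution at the centre of this problem is 0.25, which the converged iteration reproduces to solver tolerance, mirroring the paper's point that the convolution-based solver matches a standard implementation of the same discretisation.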
Related papers
- Using AI libraries for Incompressible Computational Fluid Dynamics [0.7734726150561089]
We present a novel methodology to bring the power of both AI software and hardware into the field of numerical modelling.
We use the proposed methodology to solve the advection-diffusion equation, the non-linear Burgers equation and incompressible flow past a bluff body.
arXiv Detail & Related papers (2024-02-27T22:00:50Z)
- Solving the Discretised Multiphase Flow Equations with Interface Capturing on Structured Grids Using Machine Learning Libraries [0.6299766708197884]
This paper solves the discretised multiphase flow equations using tools and methods from machine-learning libraries.
For the first time, finite element discretisations of multiphase flows can be solved using an approach based on (untrained) convolutional neural networks.
arXiv Detail & Related papers (2024-01-12T18:42:42Z)
- Solving the Discretised Boltzmann Transport Equations using Neural Networks: Applications in Neutron Transport [0.0]
We solve the Boltzmann transport equation using AI libraries.
This is attractive because it enables one to use the highly optimised software within AI libraries.
arXiv Detail & Related papers (2023-01-24T13:37:50Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Unsupervised Learning of Solutions to Differential Equations with Generative Adversarial Networks [1.1470070927586016]
We develop a novel method for solving differential equations with unsupervised neural networks.
We show that our method, which we call Differential Equation GAN (DEQGAN), can obtain multiple orders of magnitude lower mean squared errors.
arXiv Detail & Related papers (2020-07-21T23:36:36Z)
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, or hybrids of the two.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
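The fixed-point framing in this blurb can be made concrete with a toy sketch (our own illustration, not the paper's code): a depth-L chain x_i = f_i(x_{i-1}) is treated as a system of nonlinear equations, and all unknowns are updated in parallel from the previous iterate. After at most L Jacobi sweeps the iterates match sequential evaluation exactly, which is the guarantee the summary refers to.

```python
# Jacobi fixed-point view of feedforward computation (toy illustration).
# Each sweep updates every layer output in parallel from the previous
# iterate; information propagates one layer per sweep, so L sweeps
# reproduce sequential evaluation of an L-layer chain exactly.

def jacobi_feedforward(fs, x0, iters=None):
    L = len(fs)
    iters = L if iters is None else iters
    xs = [x0] * L  # initial guess for every layer's output
    for _ in range(iters):
        # Jacobi update: every layer reads only the PREVIOUS iterate
        xs = [fs[i](x0 if i == 0 else xs[i - 1]) for i in range(L)]
    return xs[-1]

def sequential(fs, x0):
    """Ordinary layer-by-layer evaluation, for comparison."""
    x = x0
    for f in fs:
        x = f(x)
    return x

layers = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
```

With enough parallel workers, each sweep costs one layer evaluation of wall-clock time, so convergence in fewer than L sweeps (common when early layers stabilise quickly) yields the speed-up the paper describes.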
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.