Solving the Discretised Boltzmann Transport Equations using Neural Networks: Applications in Neutron Transport
- URL: http://arxiv.org/abs/2301.09991v2
- Date: Wed, 25 Jan 2023 10:59:21 GMT
- Title: Solving the Discretised Boltzmann Transport Equations using Neural Networks: Applications in Neutron Transport
- Authors: T. R. F. Phillips, C. E. Heaney, C. Boyang, A. G. Buchan, C. C. Pain
- Abstract summary: We solve the Boltzmann transport equation using AI libraries.
This is attractive because it enables the use of the highly optimised software within AI libraries.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we solve the Boltzmann transport equation using AI libraries.
This is attractive because it enables the use of the highly optimised software within
AI libraries, allows the same code to run on different computer architectures, and
taps into the vast quantity of community-based software developed for AI and ML
applications, e.g. mixed arithmetic precision or model parallelism. Here we take the
first steps towards developing this approach for the Boltzmann transport equation and
develop the methods necessary to do so effectively. These include:
1) A space-angle multigrid solution method that can extract the level of
parallelism necessary to run efficiently on GPUs or new AI computers. 2) A new
Convolutional Finite Element Method (ConvFEM) that greatly simplifies the
implementation of high order finite elements (quadratic to quintic, say). 3) A
new non-linear Petrov-Galerkin method that introduces dissipation
anisotropically.
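As a hedged illustration of the core idea (not the paper's actual ConvFEM implementation), the sketch below shows a discretised operator applied as a fixed-weight, untrained convolution through an AI library, so the same code runs on CPUs, GPUs or AI accelerators; the 5-point stencil and grid size are assumptions made for illustration.

```python
# Illustrative sketch only: the matrix-vector product of a discretised
# operator expressed as a fixed-weight (untrained) 2D convolution, so it
# runs on CPUs/GPUs through an AI library. The 5-point stencil is an
# assumption for illustration, not the paper's ConvFEM discretisation.
import torch
import torch.nn as nn

op = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    # Weights come directly from the discretisation scheme; nothing is trained.
    op.weight[:] = torch.tensor([[[[0., -1., 0.],
                                   [-1., 4., -1.],
                                   [0., -1., 0.]]]])

u = torch.rand(1, 1, 64, 64)       # a field sampled on a structured grid
Au = op(u)                         # A @ u computed as a convolution
```

Because the kernel weights are set directly from the discretisation, AI-library features such as mixed arithmetic precision, model parallelism and different hardware back-ends become available to the solver without any training.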
Related papers
- Using AI libraries for Incompressible Computational Fluid Dynamics [0.7734726150561089]
We present a novel methodology to bring the power of both AI software and hardware into the field of numerical modelling.
We use the proposed methodology to solve the advection-diffusion equation, the non-linear Burgers equation and incompressible flow past a bluff body.
arXiv Detail & Related papers (2024-02-27T22:00:50Z)
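To make the entry above concrete, here is a minimal, hedged sketch of treating an advection-diffusion problem with AI-library primitives: one explicit time step built from fixed convolution stencils. The grid spacing, velocity, diffusivity and time step are illustrative assumptions, not the values used in that paper.

```python
# Illustrative sketch only: one explicit time step of a 2D advection-
# diffusion equation with the finite-difference stencils expressed as
# fixed (untrained) convolution kernels. Grid spacing, velocity,
# diffusivity and time step are assumed values, not the paper's set-up.
import torch
import torch.nn.functional as F

dx, dt, nu, cx = 1.0, 0.1, 0.5, 1.0     # spacing, time step, diffusivity, x-velocity

laplacian = torch.tensor([[[[0.,  1., 0.],
                            [1., -4., 1.],
                            [0.,  1., 0.]]]]) / dx**2
ddx_upwind = torch.tensor([[[[0., 0., 0.],
                             [-1., 1., 0.],
                             [0., 0., 0.]]]]) / dx   # backward difference (cx > 0)

def step(u):
    """One forward-Euler step of du/dt = nu * lap(u) - cx * du/dx."""
    lap = F.conv2d(u, laplacian, padding=1)
    dudx = F.conv2d(u, ddx_upwind, padding=1)
    return u + dt * (nu * lap - cx * dudx)

u = torch.zeros(1, 1, 64, 64)
u[0, 0, 32, 16] = 1.0                   # initial blob, advected in +x and diffused
for _ in range(100):
    u = step(u)
```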
- Neuromorphic quadratic programming for efficient and scalable model predictive control [0.31457219084519]
Event-based and memory-integrated neuromorphic architectures promise to solve large optimization problems.
We present a method to solve convex continuous optimization problems with quadratic cost functions and linear constraints on Intel's scalable neuromorphic research chip Loihi 2.
arXiv Detail & Related papers (2024-01-26T14:12:35Z)
- Solving the Discretised Multiphase Flow Equations with Interface Capturing on Structured Grids Using Machine Learning Libraries [0.6299766708197884]
This paper solves the discretised multiphase flow equations using tools and methods from machine-learning libraries.
For the first time, finite element discretisations of multiphase flows can be solved using an approach based on (untrained) convolutional neural networks.
arXiv Detail & Related papers (2024-01-12T18:42:42Z)
- Solving Systems of Linear Equations: HHL from a Tensor Networks Perspective [39.58317527488534]
We present an algorithm for solving systems of linear equations based on the HHL algorithm with a novel qudits methodology.
We present a quantum-inspired version based on tensor networks, taking advantage of their ability to perform non-unitary operations such as projection.
arXiv Detail & Related papers (2023-09-11T08:18:41Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is a core algorithmic primitive for many applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
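For context on the entry above, the sketch below solves the same entropy-regularised optimal transport problem with the classical Sinkhorn iteration rather than the paper's extragradient method; the regularisation strength, histograms and cost matrix are illustrative assumptions.

```python
# Illustrative sketch: the classical Sinkhorn iteration for
# entropy-regularised optimal transport between two discrete
# distributions. This is NOT the extragradient method of the paper;
# it only shows the regularised problem being solved.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Return an approximate OT plan between histograms mu and nu with cost C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # alternating scaling updates
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

n = 50
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost matrix
mu = np.ones(n) / n                      # uniform source histogram
nu = np.exp(-(x - 0.7) ** 2 / 0.01); nu /= nu.sum()  # target histogram
P = sinkhorn(mu, nu, C)
cost = (P * C).sum()                     # regularised OT cost estimate
```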
- Solving the Discretised Neutron Diffusion Equations using Neural Networks [0.0]
We describe how to represent numerical discretisations arising from the finite volume and finite element methods as (untrained) neural networks.
As the weights are defined by the discretisation scheme, no training of the network is required.
We show how to implement the Jacobi method and a multigrid solver using the functions available in AI libraries.
arXiv Detail & Related papers (2023-01-24T11:46:09Z)
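The entry above implements Jacobi and multigrid solvers from AI-library functions. Below is a minimal two-grid sketch in that spirit for a 2D Poisson-like problem, using conv2d for the operator, avg_pool2d for restriction and interpolate for prolongation; the stencil, smoothing counts and transfer operators are assumptions, not the paper's exact scheme.

```python
# Illustrative sketch: a two-grid V-cycle for a 2D Poisson-like problem
# built from AI-library primitives (conv2d, avg_pool2d, interpolate).
# The stencil, smoother settings and transfer operators are assumptions.
import torch
import torch.nn.functional as F

stencil = torch.tensor([[[[0., -1., 0.],
                          [-1., 4., -1.],
                          [0., -1., 0.]]]])   # A for the negative Laplacian, h = 1

def apply_A(u):
    return F.conv2d(u, stencil, padding=1)

def jacobi(u, b, sweeps=3, omega=0.8):
    for _ in range(sweeps):
        r = b - apply_A(u)
        u = u + omega * r / 4.0                # damped Jacobi smoothing
    return u

def two_grid(u, b):
    u = jacobi(u, b)                           # pre-smooth
    r = b - apply_A(u)
    r_c = F.avg_pool2d(r, 2) * 4.0             # restriction (rescaled for the coarser spacing)
    e_c = jacobi(torch.zeros_like(r_c), r_c, sweeps=50)  # approximate coarse solve
    e = F.interpolate(e_c, scale_factor=2, mode='bilinear', align_corners=False)
    u = u + e                                  # prolong and correct
    return jacobi(u, b)                        # post-smooth

b = torch.zeros(1, 1, 64, 64)
b[0, 0, 32, 32] = 1.0                          # point source
u = torch.zeros_like(b)
for _ in range(10):
    u = two_grid(u, b)
```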
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
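The entry above combines basis networks regressed onto a POD basis with a branch network that ingests the PDE parameters. The sketch below is one schematic reading of that architecture; the layer sizes, class name and combination rule are assumptions rather than the authors' exact design.

```python
# Schematic sketch of the architecture described above: a set of "basis"
# networks approximating reduced-order (POD-like) basis functions of the
# spatial coordinate, combined with a "branch" network that maps PDE
# parameters to coefficients. All sizes and names are assumptions.
import torch
import torch.nn as nn

class NeuralBasisROM(nn.Module):
    def __init__(self, n_basis=8, n_params=3, width=64):
        super().__init__()
        # One small network per basis function phi_k(x).
        self.basis = nn.ModuleList([
            nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
            for _ in range(n_basis)
        ])
        # Branch network: PDE parameters -> basis coefficients a_k(mu).
        self.branch = nn.Sequential(
            nn.Linear(n_params, width), nn.Tanh(), nn.Linear(width, n_basis)
        )

    def forward(self, x, mu):
        # x: (N, 1) spatial points, mu: (1, n_params) PDE parameters.
        phi = torch.cat([net(x) for net in self.basis], dim=1)  # (N, n_basis)
        a = self.branch(mu)                                     # (1, n_basis)
        return (phi * a).sum(dim=1, keepdim=True)               # u(x; mu) ~ sum_k a_k phi_k(x)

model = NeuralBasisROM()
x = torch.linspace(0, 1, 100).unsqueeze(1)
mu = torch.tensor([[1.5, 0.3, 2.0]])       # e.g. Mach number and other parameters
u_hat = model(x, mu)                        # reduced-order approximation
```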
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
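The entry above replaces second-order derivative computations with an Evolution Strategies (ES) estimate of the response Jacobian. Below is a minimal sketch of the underlying ES trick, estimating a Jacobian-vector product of a black-box map with antithetic Gaussian perturbations; the map, sample count and smoothing scale are assumed for illustration, and this is not the paper's full bilevel algorithm.

```python
# Minimal sketch of the ES idea underlying the entry above: estimate the
# Jacobian-vector product J(x) v of a black-box map f via antithetic
# Gaussian perturbations, avoiding explicit second-order derivatives.
# The map f, sample count and smoothing scale are assumed for illustration.
import numpy as np

def es_jvp(f, x, v, sigma=1e-2, n_samples=2000, rng=np.random.default_rng(0)):
    """Approximate d f(x)/dx @ v without differentiating f."""
    est = np.zeros_like(f(x))
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Directional finite difference along u, weighted by its overlap with v.
        fd = (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma)
        est += fd * (u @ v)
    return est / n_samples

# Toy check against a map with a known Jacobian.
A = np.array([[2.0, 0.5], [0.0, 1.0]])
f = lambda x: A @ x            # J(x) = A everywhere
x = np.array([1.0, -1.0])
v = np.array([0.3, 0.7])
print(es_jvp(f, x, v))         # should be close to A @ v = [0.95, 0.7]
```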
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
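The entry above frames feedforward evaluation as the nonlinear system h_i = f_i(h_{i-1}) and solves it with parallel fixed-point sweeps. The sketch below shows the Jacobi variant on a toy fully-connected network; the layer widths and weights are assumed values, and it recovers exactly the sequential result within L sweeps, as the entry claims.

```python
# Minimal sketch of the fixed-point view described above: the activations
# h_1..h_L of a feedforward net satisfy h_i = f_i(h_{i-1}); a Jacobi sweep
# updates every h_i in parallel from the previous iterate. Layer widths
# and weights below are assumed toy values.
import numpy as np

rng = np.random.default_rng(0)
L, width = 6, 8
Ws = [rng.standard_normal((width, width)) * 0.3 for _ in range(L)]
f = [lambda h, W=W: np.tanh(W @ h) for W in Ws]      # layer maps f_i

x = rng.standard_normal(width)                        # network input h_0

# Standard sequential (feedforward) evaluation.
h_seq = [x]
for i in range(L):
    h_seq.append(f[i](h_seq[-1]))

# Jacobi fixed-point iteration: all layers updated in parallel per sweep.
h = [np.zeros(width) for _ in range(L)]
for sweep in range(L):                                # converges in <= L sweeps
    h_prev = [x] + h[:-1]
    h = [f[i](h_prev[i]) for i in range(L)]           # parallelisable across i

print(np.allclose(h[-1], h_seq[-1]))                  # True: same values recovered
```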
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.