Differentiable Programming for Computational Plasma Physics
- URL: http://arxiv.org/abs/2410.11161v1
- Date: Tue, 15 Oct 2024 00:56:35 GMT
- Title: Differentiable Programming for Computational Plasma Physics
- Authors: Nick McGreivy
- Abstract summary: This thesis explores two applications of differentiable programming to computational plasma physics.
First, we consider how differentiable programming can be used to simplify and improve stellarator optimization.
Second, we explore how machine learning can be used to improve or replace the numerical methods used to solve partial differential equations.
- Score: 0.8702432681310401
- License:
- Abstract: Differentiable programming allows for derivatives of functions implemented via computer code to be calculated automatically. These derivatives are calculated using automatic differentiation (AD). This thesis explores two applications of differentiable programming to computational plasma physics. First, we consider how differentiable programming can be used to simplify and improve stellarator optimization. We introduce a stellarator coil design code (FOCUSADD) that uses gradient-based optimization to produce stellarator coils with finite build. Because we use reverse mode AD, which can compute gradients of scalar functions with the same computational complexity as the function, FOCUSADD is simple, flexible, and efficient. We then discuss two additional applications of AD in stellarator optimization. Second, we explore how machine learning (ML) can be used to improve or replace the numerical methods used to solve partial differential equations (PDEs), focusing on time-dependent PDEs in fluid mechanics relevant to plasma physics. Differentiable programming allows neural networks and other techniques from ML to be embedded within numerical methods. This is a promising, but relatively new, research area. We focus on two basic questions. First, can we design ML-based PDE solvers that have the same guarantees of conservation, stability, and positivity that standard numerical methods do? The answer is yes; we introduce error-correcting algorithms that preserve invariants of time-dependent PDEs. Second, which types of ML-based solvers work best at solving PDEs? We perform a systematic review of the scientific literature on solving PDEs with ML. Unfortunately, we discover two issues, weak baselines and reporting biases, that affect the interpretation and reproducibility of a significant majority of published research. We conclude that using ML to solve PDEs is not as promising as we initially believed.
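The abstract's claim that reverse-mode AD computes the gradient of a scalar function at roughly the cost of one function evaluation can be illustrated with a minimal tape-based sketch (this is purely illustrative and not the thesis's FOCUSADD code): each variable records its parents and local partial derivatives during the forward pass, and a single backward sweep propagates adjoints from the output to every input at once.

```python
import math

# Minimal reverse-mode AD sketch: each Var stores its parents together with
# the local derivative of the output with respect to that parent, and
# backward() accumulates adjoints from the final scalar down to the leaves.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # sequence of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def sin(self):
        return Var(math.sin(self.value), [(self, math.cos(self.value))])

    def backward(self, adjoint=1.0):
        self.grad += adjoint
        for parent, local in self.parents:
            parent.backward(adjoint * local)

# Gradient of f(x, y) = x*y + sin(x): df/dx = y + cos(x), df/dy = x.
# Both partials come out of ONE backward sweep, regardless of input count.
x, y = Var(2.0), Var(3.0)
f = x * y + x.sin()
f.backward()
print(x.grad, y.grad)   # 3 + cos(2), and 2
```

This is why reverse mode suits optimization objectives with many design parameters (such as coil shapes): the cost of the gradient does not grow with the number of inputs, only with the cost of the function itself.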
Related papers
- Partial-differential-algebraic equations of nonlinear dynamics by Physics-Informed Neural-Network: (I) Operator splitting and framework assessment [51.3422222472898]
Several forms for constructing novel physics-informed neural networks (PINNs) for the solution of partial-differential-algebraic equations are proposed.
Among these novel methods are the PDE forms, which evolve from a lower-level form with fewer unknown dependent variables to a higher-level form with more dependent variables.
arXiv Detail & Related papers (2024-07-13T22:48:17Z) - Learning to correct spectral methods for simulating turbulent flows [6.110864131646294]
We show that a hybrid of classical numerical techniques and machine learning can offer significant improvements over either approach alone.
Specifically, we develop ML-augmented spectral solvers for three common partial differential equations of fluid dynamics.
arXiv Detail & Related papers (2022-07-01T17:13:28Z) - Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
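The claim above, that message-passing solvers representationally contain classical schemes such as finite differences, can be made concrete: a central-difference stencil is exactly one message-passing step in which each node combines its neighbors' values with fixed, rather than learned, weights. A hypothetical minimal sketch:

```python
import math

# A 1D central-difference stencil viewed as a single message-passing step:
# node i receives messages u[i-1] and u[i+1] from its graph neighbors and
# aggregates them with fixed weights +-1/(2*dx). A neural message-passing
# solver replaces these fixed weights with learned functions.
def central_difference(u, dx):
    n = len(u)
    # periodic boundary: neighbor indices wrap around
    return [(u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx) for i in range(n)]

n = 64
dx = 2 * math.pi / n
u = [math.sin(i * dx) for i in range(n)]   # sample u(x) = sin(x)
du = central_difference(u, dx)             # approximates u'(x) = cos(x)
err = max(abs(du[i] - math.cos(i * dx)) for i in range(n))
print(f"max error vs cos(x): {err:.2e}")   # second-order accurate in dx
```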
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Meta-Auto-Decoder for Solving Parametric Partial Differential Equations [32.46080264991759]
Partial Differential Equations (PDEs) are ubiquitous in many disciplines of science and engineering and notoriously difficult to solve.
Our proposed approach, called Meta-Auto-Decoder (MAD), treats solving parametric PDEs as a meta-learning problem.
MAD exhibits faster convergence without losing accuracy compared with other deep-learning methods.
arXiv Detail & Related papers (2021-11-15T02:51:42Z) - A composable autoencoder-based iterative algorithm for accelerating numerical simulations [0.0]
CoAE-MLSim is an unsupervised, lower-dimensional, local method that is motivated by key ideas used in commercial PDE solvers.
It is tested for a variety of complex engineering cases to demonstrate its computational speed, accuracy, scalability, and generalization across different PDE conditions.
arXiv Detail & Related papers (2021-10-07T20:22:37Z) - Efficient and Modular Implicit Differentiation [68.74748174316989]
We propose a unified, efficient and modular approach for implicit differentiation of optimization problems.
We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and to create new ones easily.
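The core principle behind implicit differentiation of optimization problems can be shown on a hypothetical scalar example (this sketch is not the paper's actual library or API): if x*(theta) minimizes f(x, theta), then the optimality condition F(x, theta) = df/dx = 0 implicitly defines x*(theta), and the implicit function theorem gives its derivative without differentiating through the solver's iterations.

```python
# Example problem (assumed for illustration):
#   x*(theta) = argmin_x f(x, theta),  with  f(x, theta) = (x - theta**2)**2
# Optimality condition: F(x, theta) = df/dx = 2*(x - theta**2) = 0,
# so x*(theta) = theta**2. The implicit function theorem gives
#   dx*/dtheta = -(dF/dtheta) / (dF/dx)  =  -(-4*theta) / 2  =  2*theta.

def solve(theta, steps=200, lr=0.1):
    """Inner solver: plain gradient descent on f. Its iterations are
    never differentiated through; only its fixed point matters."""
    x = 0.0
    for _ in range(steps):
        x -= lr * 2 * (x - theta**2)
    return x

def dxstar_dtheta(theta):
    dF_dx = 2.0                 # d/dx of 2*(x - theta**2)
    dF_dtheta = -4.0 * theta    # d/dtheta of 2*(x - theta**2)
    return -dF_dtheta / dF_dx   # = 2*theta

theta = 1.5
x_star = solve(theta)           # converges to theta**2 = 2.25
grad = dxstar_dtheta(theta)     # exact derivative: 2*theta = 3.0
print(x_star, grad)
```

The modularity in the paper's title refers to exactly this separation: any solver can be used for the inner problem, and the derivative comes from the optimality condition alone.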
arXiv Detail & Related papers (2021-05-31T17:45:58Z) - Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivative with finite difference.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
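The idea of approximating a directional derivative with only function evaluations can be sketched with a central difference (a hedged illustration of the general idea; the paper's estimator handles arbitrary orders): D_v f(x) ≈ (f(x + h·v) − f(x − h·v)) / (2h).

```python
# Central-difference approximation of a directional derivative.
# The two function evaluations are independent of each other, so they can
# run in parallel, and no gradient computation is ever needed.
def directional_derivative(f, x, v, h=1e-5):
    xp = [xi + h * vi for xi, vi in zip(x, v)]
    xm = [xi - h * vi for xi, vi in zip(x, v)]
    return (f(xp) - f(xm)) / (2 * h)

f = lambda x: sum(xi**2 for xi in x)   # grad f = 2x, so D_v f = 2 * dot(x, v)
x = [1.0, 2.0, 3.0]
v = [0.0, 1.0, 0.0]
approx = directional_derivative(f, x, v)
print(approx)   # exact value is 2 * dot(x, v) = 4.0
```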
arXiv Detail & Related papers (2020-07-07T10:05:01Z) - DiscretizationNet: A Machine-Learning based solver for Navier-Stokes Equations using Finite Volume Discretization [0.7366405857677226]
The goal of this work is to develop an ML-based PDE solver that couples important characteristics of existing PDE solvers with machine-learning technologies.
Our ML-solver, DiscretizationNet, employs a generative CNN-based encoder-decoder model with PDE variables as both input and output features.
A novel iterative capability is implemented during the network training to improve the stability and convergence of the ML-solver.
arXiv Detail & Related papers (2020-05-17T19:54:19Z) - Physarum Powered Differentiable Linear Programming Layers and
Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.