Electron neural closure for turbulent magnetosheath simulations: energy channels
- URL: http://arxiv.org/abs/2510.00282v1
- Date: Tue, 30 Sep 2025 21:00:50 GMT
- Title: Electron neural closure for turbulent magnetosheath simulations: energy channels
- Authors: George Miloshevich, Luka Vranckx, Felipe Nathan de Oliveira Lopes, Pietro Dazzi, Giuseppe Arrò, Giovanni Lapenta
- Abstract summary: We introduce a non-local five-moment electron pressure tensor closure parametrized by a Fully Convolutional Neural Network (FCNN). This model is used in the development of a surrogate model for a fully kinetic energy-conserving semi-implicit Particle-in-Cell simulation of decaying magnetosheath turbulence.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce a non-local five-moment electron pressure tensor closure parametrized by a Fully Convolutional Neural Network (FCNN). Electron pressure plays an important role in generalized Ohm's law, competing with electron inertia. This model is used in the development of a surrogate model for a fully kinetic energy-conserving semi-implicit Particle-in-Cell simulation of decaying magnetosheath turbulence. We achieve this by training FCNN on a representative set of simulations with a smaller number of particles per cell and showing that our results generalise to a simulation with a large number of particles per cell. We evaluate the statistical properties of the learned equation of state, with a focus on pressure-strain interaction, which is crucial for understanding energy channels in turbulent plasmas. The resulting equation of state learned via FCNN significantly outperforms local closures, such as those learned by Multi-Layer Perceptron (MLP) or double adiabatic expressions. We report that the overall spatial distribution of pressure-strain and its conditional averages are reconstructed well. However, some small-scale features are missed, especially for the off-diagonal components of the pressure tensor. Nevertheless, the results are substantially improved with more training data, indicating favorable scaling and potential for improvement, which will be addressed in future work.
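As a rough illustrative sketch of the kind of closure the abstract describes (this is not the authors' code; the input channels, grid size, kernel widths, and output layout below are invented for illustration), a non-local convolutional closure maps field quantities defined on the whole grid to pressure-tensor components at every grid point, so each output value can depend on inputs far away, unlike a local (pointwise) MLP closure:

```python
import numpy as np

rng = np.random.default_rng(0)

def periodic_conv1d(x, w):
    """x: (C_in, N) fields on a periodic grid; w: (C_out, C_in, K) kernels."""
    c_out, c_in, k = w.shape
    n = x.shape[1]
    pad = k // 2
    # periodic (wrap-around) padding, matching periodic simulation boxes
    xp = np.concatenate([x[:, -pad:], x, x[:, :pad]], axis=1)
    out = np.zeros((c_out, n))
    for o in range(c_out):
        for i in range(c_in):
            for j in range(k):
                out[o] += w[o, i, j] * xp[i, j:j + n]
    return out

# hypothetical input channels (Bx, By, Bz, n_e) on a 64-point grid
x = rng.normal(size=(4, 64))
w1 = rng.normal(size=(16, 4, 5)) * 0.1  # hidden layer, kernel width 5
w2 = rng.normal(size=(6, 16, 5)) * 0.1  # outputs: the 6 independent P_ij components
h = np.maximum(periodic_conv1d(x, w1), 0.0)  # ReLU
p = periodic_conv1d(h, w2)
print(p.shape)  # (6, 64): pressure-tensor components on the same grid as the inputs
```

Because every layer is convolutional, the trained network can be applied to grids of other sizes, which is what makes training on cheaper runs and evaluating on a larger one plausible.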
Related papers
- Acceleration of Atomistic NEGF: Algorithms, Parallelization, and Machine Learning [61.12861060232382]
The Non-equilibrium Green's function (NEGF) formalism is a powerful method to simulate the quantum transport properties of nanoscale devices. This paper summarizes key (algorithmic) achievements that have allowed us to bring DFT+NEGF simulations closer to the dimensions and functionality of realistic systems.
arXiv Detail & Related papers (2026-02-03T12:01:39Z)
- Electrostatics from Laplacian Eigenbasis for Neural Network Interatomic Potentials [9.268742966352383]
We introduce Phi-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation. We then derive an electrostatic energy term, crucial for improved total energy predictions.
arXiv Detail & Related papers (2025-05-20T16:54:25Z)
- From expNN to sinNN: automatic generation of sum-of-products models for potential energy surfaces in internal coordinates using neural networks and sparse grid sampling [0.0]
This work aims to evaluate the practicality of a single-layer artificial neural network with sinusoidal activation functions for representing potential energy surfaces in sum-of-products form. The fitting approach, named sinNN, is applied to modeling the PES of HONO, covering both the trans and cis isomers. The sinNN PES model was able to reproduce available experimental fundamental vibrational transition energies with a root mean square error of about 17 cm^-1.
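A minimal sketch of the kind of model this summary describes (the dimensions, hidden-layer size, and coordinate ranges below are assumptions for illustration, not the paper's setup): a single hidden layer with sinusoidal activations represents the surface as V(q) = Σ_k c_k sin(w_k · q + b_k), and angle-addition identities expand each sine of a sum into products of one-coordinate factors, giving the sum-of-products form:

```python
import numpy as np

rng = np.random.default_rng(1)

def sin_nn(q, w, b, c):
    """Single hidden layer, sinusoidal activation:
       V(q) = sum_k c_k * sin(w_k . q + b_k)."""
    return np.sin(q @ w + b) @ c

# hypothetical 2-D surface (two internal coordinates), 32 hidden units
w = rng.normal(size=(2, 32))         # frequencies per coordinate
b = rng.normal(size=32)              # phases
c = rng.normal(size=32) * 0.1        # output weights
q = rng.uniform(-1.0, 1.0, size=(100, 2))  # batch of geometries
v = sin_nn(q, w, b, c)
print(v.shape)  # (100,): one energy per geometry
```

In a real fit, w, b, and c would be optimized against ab initio energies sampled on a sparse grid; here they are random, so only the functional form is demonstrated.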
arXiv Detail & Related papers (2025-04-30T07:31:32Z)
- Optical lattice quantum simulator of dynamics beyond Born-Oppenheimer [42.21510336472284]
We propose a platform based on ultra-cold fermionic molecules trapped in optical lattices to simulate nonadiabatic effects. We benchmark our proposal by studying the scattering of an electron or a proton against a hydrogen atom.
arXiv Detail & Related papers (2025-03-30T14:46:26Z)
- Electron-Electron Interactions in Device Simulation via Non-equilibrium Green's Functions and the GW Approximation [71.63026504030766]
Electron-electron (e-e) interactions must be explicitly incorporated in quantum transport simulation. This study is the first to report large-scale atomistic quantum transport simulations of nano-devices under non-equilibrium conditions.
arXiv Detail & Related papers (2024-12-17T15:05:33Z)
- Constructing accurate machine-learned potentials and performing highly efficient atomistic simulations to predict structural and thermal properties [6.875235178607604]
We introduce a neuroevolution potential (NEP) trained on a dataset generated from ab initio molecular dynamics (AIMD) simulations.
We calculate the phonon density of states (DOS) and radial distribution function (RDF) using both machine learning potentials.
While the MTP potential offers slightly higher accuracy, the NEP achieves a remarkable 41-fold increase in computational speed.
arXiv Detail & Related papers (2024-11-16T23:16:59Z)
- Neutron-nucleus dynamics simulations for quantum computers [49.369935809497214]
We develop a novel quantum algorithm for neutron-nucleus simulations with general potentials.
It provides acceptable bound-state energies even in the presence of noise, through the noise-resilient training method.
We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity scheme.
arXiv Detail & Related papers (2024-02-22T16:33:48Z)
- Machine learning of hidden variables in multiscale fluid simulation [77.34726150561087]
Solving fluid dynamics equations often requires the use of closure relations that account for missing microphysics.
In our study, a partial differential equation simulator that is end-to-end differentiable is used to train judiciously placed neural networks.
We show that this method enables an equation based approach to reproduce non-linear, large Knudsen number plasma physics.
arXiv Detail & Related papers (2023-06-19T06:02:53Z)
- Message-Passing Neural Quantum States for the Homogeneous Electron Gas [41.94295877935867]
We introduce a message-passing-neural-network-based wave function Ansatz to simulate extended, strongly interacting fermions in continuous space.
We demonstrate its accuracy by simulating the ground state of the homogeneous electron gas in three spatial dimensions.
arXiv Detail & Related papers (2023-05-12T04:12:04Z)
- KineticNet: Deep learning a transferable kinetic energy functional for orbital-free density functional theory [13.437597619451568]
KineticNet is an equivariant deep neural network architecture based on point convolutions adapted to the prediction of quantities on molecular quadrature grids.
For the first time, chemical accuracy of the learned functionals is achieved across input densities and geometries of tiny molecules.
arXiv Detail & Related papers (2023-05-08T17:43:31Z)
- Spin Current Density Functional Theory of the Quantum Spin-Hall Phase [59.50307752165016]
We apply the spin current density functional theory to the quantum spin-Hall phase.
We show that the explicit account of spin currents in the electron-electron potential of the SCDFT is key to the appearance of a Dirac cone.
arXiv Detail & Related papers (2022-08-29T20:46:26Z)
- A2I Transformer: Permutation-equivariant attention network for pairwise and many-body interactions with minimal featurization [0.1469945565246172]
In this work, we suggest an end-to-end model which directly predicts per-atom energy from the coordinates of particles.
We tested our model against several challenges in molecular simulation problems, including periodic boundary condition (PBC), $n$-body interaction, and binary composition.
arXiv Detail & Related papers (2021-10-27T12:18:25Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
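To illustrate the contrastive idea behind Equilibrium Propagation on the smallest possible example (a single linear unit; the quadratic energy and all constants below are invented for illustration, not taken from the paper): the weight update is proportional to the difference of an energy gradient between a free equilibrium and a weakly output-nudged one, and in the small-nudging limit this recovers gradient descent on the prediction loss:

```python
import numpy as np

# toy energy for one "analog" unit: E(s; w, x) = 0.5*s**2 - w*x*s
# the free phase relaxes to argmin_s E; the nudged phase adds beta*0.5*(s - y)**2

def free_state(w, x):
    return w * x                                  # minimizer of E

def nudged_state(w, x, y, beta):
    return (w * x + beta * y) / (1.0 + beta)      # minimizer of E + nudging term

def eqprop_update(w, x, y, beta, lr):
    s_free = free_state(w, x)
    s_nudge = nudged_state(w, x, y, beta)
    # dE/dw = -x*s; contrasting the two equilibria estimates the loss gradient
    grad = (-x * s_nudge - (-x * s_free)) / beta
    return w - lr * grad

w, x, y = 0.1, 1.0, 2.0
for _ in range(200):
    w = eqprop_update(w, x, y, beta=0.01, lr=0.1)
print(round(w * x, 3))  # -> 2.0, i.e. the prediction has converged to the target y
```

The same two-phase contrast is what lets a physical resistive network compute its own gradients: both phases are just relaxations of the circuit, with no separate backward pass.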
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.