Electrostatics from Laplacian Eigenbasis for Neural Network Interatomic Potentials
- URL: http://arxiv.org/abs/2505.14606v2
- Date: Wed, 15 Oct 2025 22:34:58 GMT
- Title: Electrostatics from Laplacian Eigenbasis for Neural Network Interatomic Potentials
- Authors: Maksim Zhdanov, Vladislav Kurenkov
- Abstract summary: We introduce Phi-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation. We then derive an electrostatic energy term, crucial for improved total energy predictions.
- Score: 9.268742966352383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce Phi-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework to learn electrostatic interactions in a self-supervised manner. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation, making it possible to acquire a potential $\phi$ and corresponding charges $\rho$ linked to the learnable Laplacian eigenbasis coefficients of a given molecular graph. We then derive an electrostatic energy term, crucial for improved total energy predictions. This approach integrates seamlessly into any existing neural potential with insignificant computational overhead. Our results underscore how embedding a first-principles constraint in neural interatomic potentials can significantly improve performance while remaining hyperparameter-friendly, memory-efficient, and lightweight in training. Code will be available at https://github.com/dunnolab/phi-module.
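The core idea — a potential expanded in a graph-Laplacian eigenbasis, with charges and an electrostatic energy derived from a discretized Poisson's equation — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name, the eigenbasis truncation `k`, and passing the learnable coefficients in directly are all assumptions.

```python
import numpy as np

def phi_module_sketch(adjacency, coeffs, k=2):
    """Illustrative sketch (assumed interface): potential and charges
    from a graph-Laplacian eigenbasis on a molecular graph.

    adjacency: (N, N) symmetric adjacency matrix of the molecular graph
    coeffs:    (k,) coefficients (learnable in a real model; passed in here)
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                # discrete Laplace operator
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigh: ascending eigenvalues
    basis = eigvecs[:, :k]                        # k lowest Laplacian modes
    phi = basis @ coeffs                          # per-atom potential
    rho = laplacian @ phi                         # discretized Poisson: L phi = rho
    e_elec = 0.5 * float(phi @ rho)               # quadratic electrostatic energy term
    return phi, rho, e_elec
```

Because the Laplacian's rows sum to zero, the derived charges automatically sum to zero, and the energy term is non-negative since the Laplacian is positive semidefinite.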
Related papers
- Electron neural closure for turbulent magnetosheath simulations: energy channels [0.0]
We introduce a non-local five-moment electron pressure tensor closure parametrized by a Fully Convolutional Neural Network (FCNN). This model is used in the development of a surrogate model for a fully kinetic energy-conserving semi-implicit Particle-in-Cell simulation of decaying magnetosheath turbulence.
arXiv Detail & Related papers (2025-09-30T21:00:50Z) - VAE-DNN: Energy-Efficient Trainable-by-Parts Surrogate Model For Parametric Partial Differential Equations [49.1574468325115]
We propose a trainable-by-parts surrogate model for solving forward and inverse parameterized nonlinear partial differential equations. The proposed approach employs an encoder to reduce the high-dimensional input $y(\bm{x})$ to a lower-dimensional latent space, $\bm{\mu}_{\bm{\phi}_y}$. A fully connected neural network is used to map $\bm{\mu}_{\bm{\phi}_y}$ to the latent space, $\bm{\mu}_{\bm{\phi}_h}$, of the P
arXiv Detail & Related papers (2025-08-05T18:37:32Z) - Efficient dataset construction using active learning and uncertainty-aware neural networks for plasma turbulent transport surrogate models [0.0]
This work demonstrates a proof-of-principle for using uncertainty-aware architectures to construct efficient datasets for surrogate model generation. This strategy was applied to the plasma turbulent transport problem within tokamak fusion plasmas, specifically the QuaLiKiz quasilinear electrostatic gyrokinetic turbulent transport code. With 45 active learning iterations, moving from a small initial training set of $10^2$ to a final set of $10^4$, the resulting models reached an $F_1$ classification performance of 0.8 and an $R^2$ regression performance of 0.75 on an independent test set.
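The uncertainty-driven acquisition described above can be illustrated with a generic ensemble-variance loop: repeatedly label the candidate point where a bootstrap ensemble disagrees most. Everything here is a stand-in — the toy target, the polynomial surrogate, and the loop sizes are assumptions, not the QuaLiKiz setup.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def target(x):
    # Stand-in for an expensive simulator evaluation
    return np.sin(3 * x)

def ensemble_variance(train_x, train_y, candidates, n_models=8, deg=2):
    """Predictive variance from a bootstrap ensemble of polynomial fits."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(train_x), len(train_x))  # bootstrap resample
        fit = Polynomial.fit(train_x[idx], train_y[idx], deg)
        preds.append(fit(candidates))
    return np.var(preds, axis=0)

# Active learning: repeatedly label the most uncertain candidate point.
pool = np.linspace(-1, 1, 200)
train_x = rng.uniform(-1, 1, 6)
train_y = target(train_x)
for _ in range(20):
    var = ensemble_variance(train_x, train_y, pool)
    pick = pool[np.argmax(var)]
    train_x = np.append(train_x, pick)
    train_y = np.append(train_y, target(pick))
```

The design choice is that ensemble disagreement serves as a cheap uncertainty proxy, so expensive simulator calls are spent only where the surrogate is least trustworthy.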
arXiv Detail & Related papers (2025-07-21T18:15:12Z) - FLARE: Robot Learning with Implicit World Modeling [87.81846091038676]
FLARE integrates predictive latent world modeling into robot policy learning. FLARE achieves state-of-the-art performance, outperforming prior policy learning baselines by up to 26%. Our results establish FLARE as a general and scalable approach for combining implicit world modeling with high-frequency robotic control.
arXiv Detail & Related papers (2025-05-21T15:33:27Z) - From expNN to sinNN: automatic generation of sum-of-products models for potential energy surfaces in internal coordinates using neural networks and sparse grid sampling [0.0]
This work aims to evaluate the practicality of a single-layer artificial neural network with sinusoidal activation functions for representing potential energy surfaces in sum-of-products form. The fitting approach, named sinNN, is applied to modeling the PES of HONO, covering both the trans and cis isomers. The sinNN PES model was able to reproduce available experimental fundamental vibrational transition energies with a root mean square error of about 17 cm$^{-1}$.
arXiv Detail & Related papers (2025-04-30T07:31:32Z) - Predicting ionic conductivity in solids from the machine-learned potential energy landscape [68.25662704255433]
We propose an approach for the quick and reliable screening of ionic conductors through the analysis of a universal interatomic potential. Eight out of the ten highest-ranked materials are confirmed to be superionic at room temperature in first-principles calculations. Our method achieves a speed-up factor of approximately 50 compared to molecular dynamics driven by a machine-learning potential, and is at least 3,000 times faster compared to first-principles molecular dynamics.
arXiv Detail & Related papers (2024-11-11T09:01:36Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [60.58067866537143]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers. Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
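The ProdLayer's exact interface is not given here; a plausible minimal form, sketched below under that assumption, appends elementwise products of selected channel pairs so the network can form dimensionally consistent composite quantities (e.g. a velocity channel times a density channel).

```python
import numpy as np

def prod_layer(x, pairs):
    """Illustrative product layer (assumed interface, not the paper's code).

    x:     (batch, channels) feature array
    pairs: list of (i, j) channel index pairs whose products are appended
    """
    prods = np.stack([x[:, i] * x[:, j] for i, j in pairs], axis=1)
    return np.concatenate([x, prods], axis=1)  # (batch, channels + len(pairs))
```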
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - End-to-End Reaction Field Energy Modeling via Deep Learning based Voxel-to-voxel Transform [0.8852892045299524]
We introduce PBNeF, a novel machine learning approach inspired by recent advancements in neural network-based partial differential equation solvers.
Our method formulates the input and boundary electrostatic conditions of the PB equation into a learnable voxel representation.
Experiments demonstrate that PBNeF achieves over a 100-fold speedup compared to traditional PB solvers.
arXiv Detail & Related papers (2024-10-04T21:11:17Z) - Neural Thermodynamic Integration: Free Energies from Energy-based Diffusion Models [19.871787625519513]
We propose to perform thermodynamic integration (TI) along an alchemical pathway represented by a trainable neural network. In this work, we parametrize a time-dependent Hamiltonian interpolating between the interacting and non-interacting systems, and optimize its gradient. The ability of the resulting energy-based diffusion model to sample all intermediate ensembles allows us to perform TI from a single reference calculation.
arXiv Detail & Related papers (2024-06-04T13:42:42Z) - Dynamic, Symmetry-Preserving, and Hardware-Adaptable Circuits for Quantum Computing Many-Body States and Correlators of the Anderson Impurity Model [0.0]
Hamiltonian expectation values are shown to require $\omega(N_q) \leq N_\text{meas.} \leq O(N_\text{imp}N_\text{bath})$ symmetry-preserving, parallel measurement circuits.
Our ansatz provides a useful tool to account for electronic correlations on early fault-tolerant processors.
arXiv Detail & Related papers (2024-05-23T21:41:28Z) - Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations [58.130170155147205]
Neural wave functions accomplished unprecedented accuracies in approximating the ground state of many-electron systems, though at a high computational cost.
Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently.
This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules.
arXiv Detail & Related papers (2024-05-23T16:30:51Z) - Train Faster, Perform Better: Modular Adaptive Training in Over-Parameterized Models [31.960749305728488]
We introduce a novel concept dubbed modular neural tangent kernel (mNTK).
We show that the quality of a module's learning is tightly associated with its mNTK's principal eigenvalue $\lambda_{\max}$.
We propose a novel training strategy termed Modular Adaptive Training (MAT) to update those modules with their $\lambda_{\max}$ exceeding a dynamic threshold.
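The MAT selection rule can be sketched as follows. This is an illustrative reading, not the paper's code: the empirical per-module kernel $K_m = J_m J_m^\top$, the quantile-based "dynamic threshold", and the function name are all assumptions.

```python
import numpy as np

def mat_update_mask(module_jacobians, threshold_quantile=0.5):
    """Sketch of a MAT-style selection rule (assumed details).

    module_jacobians: list of (n_samples, n_params_m) per-module Jacobians J_m.
    The empirical module NTK is K_m = J_m J_m^T; its principal eigenvalue
    gauges how strongly that module is currently learning. Only modules whose
    principal eigenvalue exceeds a dynamic threshold are updated this step.
    """
    lam = []
    for J in module_jacobians:
        K = J @ J.T                              # empirical mNTK
        lam.append(np.linalg.eigvalsh(K)[-1])    # principal eigenvalue (largest)
    lam = np.array(lam)
    thresh = np.quantile(lam, threshold_quantile)  # dynamic threshold (assumed form)
    return lam >= thresh                           # boolean update mask per module
```

A module with large gradients (large $\lambda_{\max}$) keeps receiving updates, while near-converged modules are frozen for that step, saving compute.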
arXiv Detail & Related papers (2024-05-13T07:46:48Z) - Q-Newton: Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent [37.59299233291882]
We propose Q-Newton, a hybrid quantum-classical scheduler for accelerating neural network training with Newton's GD. Q-Newton utilizes a streamlined scheduling module that coordinates between quantum and classical linear solvers. Our evaluation showcases the potential for Q-Newton to significantly reduce the total training time compared to commonly used quantum machines.
arXiv Detail & Related papers (2024-04-30T23:55:03Z) - Neutron-nucleus dynamics simulations for quantum computers [49.369935809497214]
We develop a novel quantum algorithm for neutron-nucleus simulations with general potentials.
It provides acceptable bound-state energies even in the presence of noise, through the noise-resilient training method.
We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity scheme.
arXiv Detail & Related papers (2024-02-22T16:33:48Z) - Accelerating superconductor discovery through tempered deep learning of the electron-phonon spectral function [0.0]
We train a deep learning model to predict the electron-phonon spectral function, $\alpha^2F(\omega)$.
We then incorporate domain knowledge of the site-projected phonon density states to impose inductive bias into the model's node attributes and enhance predictions.
This methodological innovation decreases the MAE to 0.18, 29 K, and 28 K, respectively, yielding an MAE of 2.1 K for $T_c$.
arXiv Detail & Related papers (2024-01-29T22:44:28Z) - Machine learning of hidden variables in multiscale fluid simulation [77.34726150561087]
Solving fluid dynamics equations often requires the use of closure relations that account for missing microphysics.
In our study, a partial differential equation simulator that is end-to-end differentiable is used to train judiciously placed neural networks.
We show that this method enables an equation based approach to reproduce non-linear, large Knudsen number plasma physics.
arXiv Detail & Related papers (2023-06-19T06:02:53Z) - Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction [63.4049850776926]
A crystal structure consists of a minimal unit cell that is repeated infinitely in 3D space.
Current methods construct graphs by establishing edges only between nearby nodes.
We propose to model physics-principled interatomic potentials directly instead of only using distances.
arXiv Detail & Related papers (2023-06-12T07:19:01Z) - Unlocking the potential of two-point cells for energy-efficient training of deep nets [4.544752600181175]
We show how a transformative L5PC-driven deep neural network (DNN) can effectively process large amounts of heterogeneous real-world audio-visual (AV) data.
A novel highly-distributed parallel implementation on a Xilinx UltraScale+ MPSoC device estimates energy savings up to $245759 \times 50000$ $\mu$J.
In a supervised learning setup, the energy saving can potentially reach up to 1250x less (per feedforward transmission) than the baseline model.
arXiv Detail & Related papers (2022-10-24T13:33:15Z) - Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials [0.0]
We propose a machine learning method for constructing high-dimensional potential energy surfaces based on feed-forward neural networks.
The accuracy of the developed approach in representing both chemical and configurational spaces is comparable to the one of several established machine learning models.
arXiv Detail & Related papers (2021-09-15T16:46:46Z) - Preparation of excited states for nuclear dynamics on a quantum computer [117.44028458220427]
We study two different methods to prepare excited states on a quantum computer.
We benchmark these techniques on emulated and real quantum devices.
These findings show that quantum techniques designed to achieve good scaling on fault tolerant devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
arXiv Detail & Related papers (2020-09-28T17:21:25Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.