Electrostatics from Laplacian Eigenbasis for Neural Network Interatomic Potentials
- URL: http://arxiv.org/abs/2505.14606v1
- Date: Tue, 20 May 2025 16:54:25 GMT
- Title: Electrostatics from Laplacian Eigenbasis for Neural Network Interatomic Potentials
- Authors: Maksim Zhdanov, Vladislav Kurenkov
- Abstract summary: We introduce $\Phi$-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation. We then derive an electrostatic energy term, crucial for improved total energy predictions.
- Score: 3.069335774032178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in neural network interatomic potentials have emerged as a promising research direction. However, popular deep learning models often lack auxiliary constraints grounded in physical laws, which could accelerate training and improve fidelity through physics-based regularization. In this work, we introduce $\Phi$-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework to learn electrostatic interactions in a self-supervised manner. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation, making it possible to acquire a potential $\boldsymbol{\phi}$ and a corresponding charge density $\boldsymbol{\rho}$ linked to the learnable Laplacian eigenbasis coefficients of a given molecular graph. We then derive an electrostatic energy term, crucial for improved total energy predictions. This approach integrates seamlessly into any existing neural potential with negligible computational overhead. Experiments on the OE62 and MD22 benchmarks confirm that models combined with $\Phi$-Module achieve robust improvements over baseline counterparts. For OE62, the error reduction ranges from 4.5\% to 17.8\%; for MD22, the baseline equipped with $\Phi$-Module achieves the best results in 5 out of 14 cases. Our results underscore how embedding a first-principles constraint in neural interatomic potentials can significantly improve performance while remaining hyperparameter-friendly, memory-efficient, and lightweight in training. Code will be available at \href{https://github.com/dunnolab/phi-module}{dunnolab/phi-module}.
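To make the mechanism concrete, here is a minimal sketch of how such a module could look, assuming a dense adjacency matrix and a single pooled coefficient vector; names such as `PhiModuleSketch`, `coef_head`, and `rho_head` are illustrative and not taken from the paper:

```python
import torch
import torch.nn as nn

class PhiModuleSketch(nn.Module):
    """Toy plugin: tie a learned potential phi to a learned charge density
    rho through the discretized Poisson equation L @ phi = rho."""

    def __init__(self, hidden_dim: int, k: int = 8):
        super().__init__()
        self.k = k                                  # Laplacian eigenvectors kept
        self.coef_head = nn.Linear(hidden_dim, k)   # eigenbasis coefficients
        self.rho_head = nn.Linear(hidden_dim, 1)    # per-atom charge density

    def forward(self, h, adj):
        # h:   (n_atoms, hidden_dim) atom-wise representations from any GNN
        # adj: (n_atoms, n_atoms) adjacency matrix of the molecular graph
        lap = torch.diag(adj.sum(dim=-1)) - adj     # graph Laplacian L
        _, evecs = torch.linalg.eigh(lap)           # Laplacian eigenbasis
        basis = evecs[:, : self.k]                  # (n_atoms, k)

        coef = self.coef_head(h).mean(dim=0)        # (k,) pooled coefficients
        phi = basis @ coef                          # potential phi, (n_atoms,)
        rho = self.rho_head(h).squeeze(-1)          # charge density rho

        poisson_residual = ((lap @ phi - rho) ** 2).mean()  # self-supervised
        e_elec = 0.5 * (rho * phi).sum()            # electrostatic energy term
        return e_elec, poisson_residual

h = torch.randn(5, 64)                              # 5 atoms, 64-dim features
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
e_elec, residual = PhiModuleSketch(64)(h, adj)
```

In this reading, `e_elec` would be added to the backbone's energy prediction while `poisson_residual` joins the training loss as the self-supervised physics term.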
Related papers
- VAE-DNN: Energy-Efficient Trainable-by-Parts Surrogate Model For Parametric Partial Differential Equations [49.1574468325115]
We propose a trainable-by-parts surrogate model for solving forward and inverse parameterized nonlinear partial differential equations. The proposed approach employs an encoder to reduce the high-dimensional input $y(\boldsymbol{x})$ to a lower-dimensional latent space, $\boldsymbol{\mu}_{\boldsymbol{\phi}_y}$. A fully connected neural network is used to map $\boldsymbol{\mu}_{\boldsymbol{\phi}_y}$ to the latent space, $\boldsymbol{\mu}_{\boldsymbol{\phi}_h}$, of the P…
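A rough sketch of the trainable-by-parts pipeline the summary describes, with all dimensions and module names (`encoder_y`, `latent_map`, `decoder_h`) invented for illustration:

```python
import torch
import torch.nn as nn

latent_dim = 16

# Each part can be trained separately: the two autoencoders first, then the
# small fully connected map between their latent spaces.
encoder_y = nn.Sequential(nn.Linear(1024, 128), nn.Tanh(), nn.Linear(128, latent_dim))
decoder_h = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(), nn.Linear(128, 1024))
latent_map = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))

y = torch.randn(8, 1024)    # discretized parameter field y(x), batch of 8
mu_y = encoder_y(y)         # lower-dimensional latent code mu_phi_y
mu_h = latent_map(mu_y)     # predicted solution latent mu_phi_h
h = decoder_h(mu_h)         # reconstructed solution field
```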
arXiv Detail & Related papers (2025-08-05T18:37:32Z) - Efficient dataset construction using active learning and uncertainty-aware neural networks for plasma turbulent transport surrogate models [0.0]
This work demonstrates a proof-of-principle for using uncertainty-aware architectures to construct efficient datasets for surrogate model generation. This strategy was applied to the plasma turbulent transport problem within tokamak fusion plasmas, specifically the QuaLiKiz quasilinear electrostatic gyrokinetic turbulent transport code. With 45 active learning iterations, moving from a small initial training set of $10^2$ to a final set of $10^4$, the resulting models reached an $F_1$ classification performance of 0.8 and an $R^2$ regression performance of 0.75 on an independent test set.
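The loop below illustrates the generic uncertainty-driven acquisition strategy the summary describes; `run_simulator` and the fixed toy ensemble are stand-ins for QuaLiKiz and the paper's uncertainty-aware networks:

```python
import numpy as np

def run_simulator(x):
    # Stand-in for the expensive transport code (QuaLiKiz in the paper).
    return np.sin(x).sum(axis=-1)

def predictive_uncertainty(ensemble, x):
    # Ensemble spread as the uncertainty proxy.
    return np.stack([m(x) for m in ensemble]).std(axis=0)

rng = np.random.default_rng(0)
train_x = rng.uniform(-1, 1, size=(100, 4))         # initial set, ~10^2 points
train_y = run_simulator(train_x)

for it in range(45):                                # 45 acquisition rounds
    # In practice an uncertainty-aware ensemble is retrained on the data here;
    # these fixed toy members only keep the sketch self-contained.
    ensemble = [lambda x, s=s: np.sin(x + 0.01 * s).sum(axis=-1) for s in range(5)]
    pool = rng.uniform(-1, 1, size=(5000, 4))       # unlabeled candidates
    unc = predictive_uncertainty(ensemble, pool)
    picked = pool[np.argsort(unc)[-220:]]           # label most uncertain points
    train_x = np.vstack([train_x, picked])
    train_y = np.concatenate([train_y, run_simulator(picked)])
# ~10^2 initial + 45 * 220 labeled points ~ 10^4 final set, as in the summary
```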
arXiv Detail & Related papers (2025-07-21T18:15:12Z) - FLARE: Robot Learning with Implicit World Modeling [87.81846091038676]
$\textbf{FLARE}$ integrates predictive latent world modeling into robot policy learning. $\textbf{FLARE}$ achieves state-of-the-art performance, outperforming prior policy learning baselines by up to 26\%. Our results establish $\textbf{FLARE}$ as a general and scalable approach for combining implicit world modeling with high-frequency robotic control.
arXiv Detail & Related papers (2025-05-21T15:33:27Z) - From expNN to sinNN: automatic generation of sum-of-products models for potential energy surfaces in internal coordinates using neural networks and sparse grid sampling [0.0]
This work aims to evaluate the practicality of a single-layer artificial neural network with sinusoidal activation functions for representing potential energy surfaces in sum-of-products form. The fitting approach, named sinNN, is applied to modeling the PES of HONO, covering both the trans and cis isomers. The sinNN PES model was able to reproduce available experimental fundamental vibrational transition energies with a root mean square error of about 17 cm$^{-1}$.
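The architecture implied by the summary is compact enough to sketch directly (layer sizes here are illustrative): a single hidden layer of sines, where the angle-addition identity expands each $\sin(\mathbf{a}_k \cdot \mathbf{q} + b_k)$ into a sum of products of one-dimensional functions, giving the sum-of-products form.

```python
import torch
import torch.nn as nn

class SinNN(nn.Module):
    def __init__(self, n_coords: int, n_terms: int):
        super().__init__()
        self.hidden = nn.Linear(n_coords, n_terms)  # a_k . q + b_k
        self.out = nn.Linear(n_terms, 1)            # expansion coefficients c_k

    def forward(self, q):
        # V(q) = sum_k c_k sin(a_k . q + b_k); expanding each sine with the
        # angle-addition identity yields a sum of products of 1D functions.
        return self.out(torch.sin(self.hidden(q)))

model = SinNN(n_coords=6, n_terms=64)   # HONO has 3N - 6 = 6 internal coordinates
energy = model(torch.rand(32, 6))       # (32, 1) potential energies
```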
arXiv Detail & Related papers (2025-04-30T07:31:32Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [60.58067866537143]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers. Empirically, DimOL models achieve up to a 48\% performance gain on the PDE datasets.
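The paper defines the ProdLayer precisely; the sketch below is only a hedged guess at the flavor, augmenting a linear channel mix with an elementwise product term so outputs can scale as products of physical quantities, as dimensional analysis suggests:

```python
import torch
import torch.nn as nn

class ProdLayerSketch(nn.Module):
    """Hypothetical product layer, NOT the paper's exact definition."""

    def __init__(self, channels: int):
        super().__init__()
        self.lin = nn.Linear(channels, channels)
        self.a = nn.Linear(channels, channels)
        self.b = nn.Linear(channels, channels)

    def forward(self, x):
        # The a(x) * b(x) term lets an output behave like q1 * q2, i.e. a
        # product of two physical quantities with combined units.
        return self.lin(x) + self.a(x) * self.b(x)

layer = ProdLayerSketch(channels=32)
y = layer(torch.randn(10, 32))
```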
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Dynamic, Symmetry-Preserving, and Hardware-Adaptable Circuits for Quantum Computing Many-Body States and Correlators of the Anderson Impurity Model [0.0]
Hamiltonian expectation values are shown to require $\omega(N_q) \leq N_\text{meas.} \leq O(N_\text{imp}N_\text{bath})$ symmetry-preserving, parallel measurement circuits.
Our ansatz provides a useful tool to account for electronic correlations on early fault-tolerant processors.
arXiv Detail & Related papers (2024-05-23T21:41:28Z) - Train Faster, Perform Better: Modular Adaptive Training in Over-Parameterized Models [31.960749305728488]
We introduce a novel concept dubbed the modular neural tangent kernel (mNTK).
We show that the quality of a module's learning is tightly associated with its mNTK's principal eigenvalue $\lambda_{\max}$.
We propose a novel training strategy termed Modular Adaptive Training (MAT) that updates only those modules whose $\lambda_{\max}$ exceeds a dynamic threshold.
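A hedged sketch of the resulting training rule, with a crude per-sample-Jacobian estimator for $\lambda_{\max}$ and a toy threshold, neither of which is claimed to match the paper's procedure:

```python
import torch
import torch.nn as nn

def mntk_lambda_max(module, model, x_batch):
    # Per-sample output gradients w.r.t. this module's parameters form a
    # Jacobian J; the modular NTK is J @ J.T and its top eigenvalue gauges
    # how fast the module is currently learning.
    params = list(module.parameters())
    rows = []
    for x in x_batch:
        out = model(x.unsqueeze(0)).sum()          # scalar output per sample
        g = torch.autograd.grad(out, params)
        rows.append(torch.cat([gi.reshape(-1) for gi in g]))
    jac = torch.stack(rows)
    return torch.linalg.eigvalsh(jac @ jac.T)[-1].item()

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(16, 8)
modules = [m for m in model if sum(p.numel() for p in m.parameters()) > 0]
lams = [mntk_lambda_max(m, model, x) for m in modules]
threshold = 0.5 * max(lams)                        # dynamic threshold (toy choice)
for m, lam in zip(modules, lams):
    for p in m.parameters():
        p.requires_grad_(lam >= threshold)         # update only fast modules
```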
arXiv Detail & Related papers (2024-05-13T07:46:48Z) - Q-Newton: Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent [37.59299233291882]
We propose Q-Newton, a hybrid quantum-classical scheduler for accelerating neural network training with Newton's GD. Q-Newton utilizes a streamlined scheduling module that coordinates between quantum and classical linear solvers. Our evaluation showcases the potential for Q-Newton to significantly reduce the total training time compared to commonly used quantum machines.
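The scheduling idea can be illustrated with a toy cost model that routes each Newton linear solve $Hp = g$ to whichever solver is estimated cheaper; the quantum path below is a mock and the cost expressions are pure assumptions:

```python
import numpy as np

def classical_solve(hessian, grad):
    return np.linalg.solve(hessian, grad)

def quantum_solve_mock(hessian, grad):
    # Stand-in for a quantum linear solver whose cost is assumed to scale
    # with the condition number kappa rather than with matrix dimension.
    return np.linalg.solve(hessian, grad)

def schedule_solver(hessian):
    kappa = np.linalg.cond(hessian)
    n = hessian.shape[0]
    quantum_cost = kappa * np.log2(n)      # toy cost models, assumptions only
    classical_cost = float(n) ** 3
    return quantum_solve_mock if quantum_cost < classical_cost else classical_solve

def newton_step(theta, grad, hessian, lr=1.0):
    # Newton's GD update: theta <- theta - lr * H^{-1} g
    return theta - lr * schedule_solver(hessian)(hessian, grad)

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 6))
hessian = a @ a.T + np.eye(6)              # SPD Hessian stand-in
theta = newton_step(rng.normal(size=6), rng.normal(size=6), hessian)
```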
arXiv Detail & Related papers (2024-04-30T23:55:03Z) - Accelerating superconductor discovery through tempered deep learning of
the electron-phonon spectral function [0.0]
We train a deep learning model to predict the electron-phonon spectral function, $\alpha^2F(\omega)$.
We then incorporate domain knowledge of the site-projected phonon density states to impose inductive bias into the model's node attributes and enhance predictions.
This methodological innovation decreases the MAE to 0.18, 29 K, and 28 K, respectively, yielding an MAE of 2.1 K for $T_c$.
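For context, a predicted $\alpha^2F(\omega)$ determines $T_c$ through the standard McMillan/Allen-Dynes route; the snippet below works through those textbook formulas on a toy spectral function (this is not the paper's code):

```python
import numpy as np

omega = np.linspace(1e-3, 0.1, 2000)                 # phonon energies (eV)
d_w = omega[1] - omega[0]
a2f = 0.5 * np.exp(-((omega - 0.04) / 0.01) ** 2)    # toy spectral function

lam = 2.0 * np.sum(a2f / omega) * d_w                # coupling lambda = 2 int a2F/w dw
w_log = np.exp((2.0 / lam) * np.sum(a2f * np.log(omega) / omega) * d_w)

mu_star = 0.10                                        # Coulomb pseudopotential
k_b = 8.617e-5                                        # Boltzmann constant, eV/K
tc = (w_log / (1.2 * k_b)) * np.exp(                  # Allen-Dynes / McMillan Tc
    -1.04 * (1 + lam) / (lam - mu_star * (1 + 0.62 * lam))
)
print(f"lambda={lam:.2f}, omega_log={w_log * 1e3:.1f} meV, Tc~{tc:.1f} K")
```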
arXiv Detail & Related papers (2024-01-29T22:44:28Z) - Efficient Approximations of Complete Interatomic Potentials for Crystal
Property Prediction [63.4049850776926]
A crystal structure consists of a minimal unit cell that is repeated infinitely in 3D space.
Current methods construct graphs by establishing edges only between nearby nodes.
We propose to model physics-principled interatomic potentials directly instead of only using distances.
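The computational core of the problem is an infinite lattice sum; the sketch below simply truncates a Coulomb-like $1/r$ sum over periodic images to show what must be approximated (the bare series converges only conditionally, and practical schemes, including the paper's, use more principled approximations than raw truncation):

```python
import itertools
import numpy as np

def periodic_pair_potential(r_i, r_j, lattice, n_max=3):
    # Truncated sum of a 1/r term over (2*n_max + 1)^3 periodic images of
    # atom j; real codes use screening or Ewald-type summation instead.
    total = 0.0
    for n in itertools.product(range(-n_max, n_max + 1), repeat=3):
        d = np.linalg.norm(r_j + np.asarray(n) @ lattice - r_i)
        if d > 1e-9:                        # skip the coincident self-image
            total += 1.0 / d
    return total

lattice = 4.0 * np.eye(3)                   # cubic cell, lattice constant 4 A
print(periodic_pair_potential(np.zeros(3), np.array([2.0, 0.0, 0.0]), lattice))
```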
arXiv Detail & Related papers (2023-06-12T07:19:01Z) - Unlocking the potential of two-point cells for energy-efficient training
of deep nets [4.544752600181175]
We show how a transformative L5PC-driven deep neural network (DNN) can effectively process large amounts of heterogeneous real-world audio-visual (AV) data.
A novel highly-distributed parallel implementation on a Xilinx UltraScale+ MPSoC device estimates energy savings up to $245759 \times 50000$ $\mu$J.
In a supervised learning setup, the energy saving can potentially reach up to 1250$\times$ less energy per feedforward transmission than the baseline model.
arXiv Detail & Related papers (2022-10-24T13:33:15Z) - Preparation of excited states for nuclear dynamics on a quantum computer [117.44028458220427]
We study two different methods to prepare excited states on a quantum computer.
We benchmark these techniques on emulated and real quantum devices.
These findings show that quantum techniques designed to achieve good scaling on fault tolerant devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
arXiv Detail & Related papers (2020-09-28T17:21:25Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A\leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
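To illustrate the ingredients of such a calculation, here is a minimal variational Monte Carlo loop with a neural log-wavefunction on a 1D harmonic oscillator; the toy problem is ours, not the paper's nuclear Hamiltonian:

```python
import torch
import torch.nn as nn

log_psi = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def local_energy(x):
    # E_L = -0.5 * psi''/psi + 0.5 * x^2, computed via autograd on log psi:
    # psi''/psi = (log psi)'' + ((log psi)')^2
    x = x.requires_grad_(True)
    lp = log_psi(x).sum()
    g, = torch.autograd.grad(lp, x, create_graph=True)
    g2, = torch.autograd.grad(g.sum(), x, create_graph=True)
    return -0.5 * (g2 + g ** 2) + 0.5 * x.detach() ** 2

# Metropolis sampling from |psi|^2
x = torch.zeros(512, 1)
for _ in range(200):
    prop = x + 0.5 * torch.randn_like(x)
    log_ratio = 2.0 * (log_psi(prop) - log_psi(x)).squeeze(-1)
    accept = torch.rand(512) < log_ratio.exp()
    x[accept] = prop[accept]

print("E ~", local_energy(x).mean().item())   # variational energy estimate
```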
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.