Efficient Parallelization of Message Passing Neural Network Potentials for Large-scale Molecular Dynamics
- URL: http://arxiv.org/abs/2505.06711v3
- Date: Sat, 07 Jun 2025 14:29:50 GMT
- Title: Efficient Parallelization of Message Passing Neural Network Potentials for Large-scale Molecular Dynamics
- Authors: Junfan Xia, Bin Jiang
- Abstract summary: We propose an efficient parallel algorithm for MPNN models in which the additional data communication is limited to local atoms in each MP layer and involves no redundant computation. This approach enables massive molecular dynamics simulations with MPNN models, running as fast as strictly local models, for over 100 million atoms.
- Score: 4.1977795073358815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning potentials have achieved great success in accelerating atomistic simulations. Many of them, relying on atom-centered local descriptors, are naturally suited to parallelization. More recent message passing neural network (MPNN) models have demonstrated superior accuracy and become increasingly popular. However, efficiently parallelizing MPNN models across multiple nodes remains challenging, limiting their practical application in large-scale simulations. Here, we propose an efficient parallel algorithm for MPNN models, in which the additional data communication is limited to local atoms in each MP layer and involves no redundant computation, so that the communication cost scales linearly with the number of layers. Integrated with our recursively embedded atom neural network model, this algorithm demonstrates excellent strong- and weak-scaling behavior in several benchmark systems. This approach enables massive molecular dynamics simulations with MPNN models, running as fast as strictly local models for over 100 million atoms, and vastly extends the applicability of MPNN potentials to an unprecedented scale. This general parallelization framework can empower various MPNN models to efficiently simulate very large and complex systems.
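As a rough illustration of this communication pattern, consider the sketch below (assuming an mpi4py setup with a spatial domain decomposition into owned atoms plus ghost copies of neighbors' boundary atoms; all names here are hypothetical, not the authors' implementation). Each MP layer triggers exactly one halo exchange of boundary-atom features, so communication grows linearly with depth:

```python
# Sketch: one halo exchange of boundary-atom features per MP layer.
# Hypothetical setup: each MPI rank owns its "local" atoms plus ghost
# copies of neighboring ranks' boundary atoms within one cutoff.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def exchange_ghosts(feats, neighbor_ranks, send_idx, recv_counts):
    """Send updated features of our boundary atoms to each neighbor
    rank and receive refreshed features for our ghost atoms."""
    reqs, recv_bufs = [], []
    for rank, idx, n in zip(neighbor_ranks, send_idx, recv_counts):
        buf = np.empty((n, feats.shape[1]), dtype=feats.dtype)
        recv_bufs.append(buf)
        reqs.append(comm.Irecv(buf, source=rank))
        reqs.append(comm.Isend(np.ascontiguousarray(feats[idx]), dest=rank))
    MPI.Request.Waitall(reqs)
    return np.concatenate(recv_bufs) if recv_bufs else feats[:0].copy()

def mpnn_forward(local_feats, ghost_feats, layers, comm_plan):
    for layer in layers:
        # Update owned atoms from local + ghost neighbors; ghost atoms
        # are never recomputed locally, avoiding redundant work.
        local_feats = layer(local_feats, ghost_feats)
        # Exactly one communication step per MP layer (linear in depth).
        ghost_feats = exchange_ghosts(local_feats, *comm_plan)
    return local_feats
```

The design point is that only boundary-atom features cross node boundaries, once per layer, rather than replicating each atom's full multi-hop environment on every rank.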
Related papers
- Scalable Mechanistic Neural Networks for Differential Equations and Machine Learning [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences. We reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear. Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks [4.532517021515834]
Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials.
We introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
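For flavor, the event-driven pattern such simulators build on reduces to a priority queue of spike times (a generic sketch only, not SparseProp's algorithm; `propagate` is a hypothetical callback returning induced future spikes):

```python
# Generic discrete-event loop: network state advances only at spike
# events, so cost scales with the event count rather than time steps.
import heapq

def run_events(initial_events, propagate, t_end):
    queue = list(initial_events)        # (time, neuron_id) tuples
    heapq.heapify(queue)
    while queue:
        t, neuron = heapq.heappop(queue)
        if t > t_end:
            break
        for event in propagate(t, neuron):   # schedule induced spikes
            heapq.heappush(queue, event)
```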
arXiv Detail & Related papers (2023-12-28T18:48:10Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with attention mechanism, we can effectively boost performance without huge computational overhead.
We demonstrate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
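The memory-token idea can be sketched roughly as follows (a minimal sketch of the general pattern using PyTorch's stock attention; an assumption, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class MemoryAugmentedAttention(nn.Module):
    """Prepend learnable memory tokens to the key/value sequence so
    every query can also attend to a shared, trainable memory."""
    def __init__(self, dim, n_heads, n_mem):
        super().__init__()
        self.memory = nn.Parameter(0.02 * torch.randn(n_mem, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x):                    # x: (batch, seq, dim)
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        kv = torch.cat([mem, x], dim=1)      # keys/values gain memory
        out, _ = self.attn(x, kv, kv)
        return out
```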
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix Operations for Efficient Inference [20.404864470321897]
We introduce NeuralMatrix, which elastically transforms the computations of entire deep neural network (DNN) models into linear matrix operations.
Experiments with both CNN and transformer-based models demonstrate the potential of NeuralMatrix to accurately and efficiently execute a wide range of DNN models.
This level of efficiency is usually attainable only with an accelerator designed for a specific neural network.
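As background for what "computing a network as matrix operations" means, the textbook im2col lowering turns a convolution into a single matrix product (a generic illustration of the idea, not NeuralMatrix's method):

```python
# im2col lowering: a 2-D ML-style convolution (cross-correlation,
# valid padding, stride 1) computed as one matrix-vector product.
import numpy as np

def conv2d_as_gemm(x, w):
    H, W = x.shape
    k = w.shape[0]
    oh, ow = H - k + 1, W - k + 1
    # Gather every k*k patch into one row of a matrix.
    cols = np.empty((oh * ow, k * k), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + k, j:j + k].ravel()
    return (cols @ w.ravel()).reshape(oh, ow)   # one GEMM-like product
```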
arXiv Detail & Related papers (2023-05-23T12:03:51Z) - Distributed Compressed Sparse Row Format for Spiking Neural Network Simulation, Serialization, and Interoperability [0.48733623015338234]
We discuss a parallel extension of the compressed sparse row (CSR) format, a widely used representation for sparse matrices, which we term distributed CSR (dCSR).
We contend that organizing additional network information, such as neuron and synapse state, in alignment with the network's adjacency under dCSR provides a straightforward partition-based distribution of network state.
We provide a potential implementation and put it forward for adoption within the neural computing community.
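A minimal sketch of the row-wise partitioning this implies (assuming NumPy CSR arrays; the helper is illustrative, not the proposed dCSR specification):

```python
import numpy as np

def partition_csr_rows(indptr, indices, data, n_ranks):
    """Split a CSR matrix into contiguous row blocks, one per rank,
    rebasing offsets so per-neuron state (rows) and adjacency stay
    aligned on the owning process."""
    n_rows = len(indptr) - 1
    parts = []
    for r in range(n_ranks):
        lo = r * n_rows // n_ranks
        hi = (r + 1) * n_rows // n_ranks
        row_ptr = indptr[lo:hi + 1] - indptr[lo]   # rebase offsets
        parts.append((row_ptr,
                      indices[indptr[lo]:indptr[hi]],
                      data[indptr[lo]:indptr[hi]],
                      (lo, hi)))                   # global rows owned
    return parts
```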
arXiv Detail & Related papers (2023-04-12T03:19:06Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
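Vector quantization itself amounts to a nearest-codebook lookup (a generic sketch of the technique, independent of VQ-NQS's specifics):

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codebook
    entry; repeated inputs collapse to the same code, exposing
    redundancy that downstream computation can reuse."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```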
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Parallel Simulation of Quantum Networks with Distributed Quantum State Management [56.24769206561207]
We identify requirements for parallel simulation of quantum networks and develop the first parallel discrete event quantum network simulator.
Our contributions include the design and development of a quantum state manager that maintains shared quantum information distributed across multiple processes.
We release the parallel SeQUeNCe simulator as an open-source tool alongside the existing sequential version.
arXiv Detail & Related papers (2021-11-06T16:51:17Z) - Fast and Sample-Efficient Interatomic Neural Network Potentials for Molecules and Materials Based on Gaussian Moments [3.1829446824051195]
We present an improved NN architecture based on the previous GM-NN model.
The improved methodology is a pre-requisite for training-heavy approaches such as active learning or learning-on-the-fly.
arXiv Detail & Related papers (2021-09-20T14:23:34Z)