MACE: Higher Order Equivariant Message Passing Neural Networks for Fast
and Accurate Force Fields
- URL: http://arxiv.org/abs/2206.07697v1
- Date: Wed, 15 Jun 2022 17:46:05 GMT
- Title: MACE: Higher Order Equivariant Message Passing Neural Networks for Fast
and Accurate Force Fields
- Authors: Ilyes Batatia, Dávid Péter Kovács, Gregor N. C. Simm, Christoph
Ortner, Gábor Csányi
- Abstract summary: We introduce MACE, a new equivariant MPNN model that uses higher body order messages.
We show that using four-body messages reduces the required number of message passing iterations to just two, resulting in a fast and highly parallelizable model.
- Score: 4.812321790984494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating fast and accurate force fields is a long-standing challenge in
computational chemistry and materials science. Recently, several equivariant
message passing neural networks (MPNNs) have been shown to outperform models
built using other approaches in terms of accuracy. However, most MPNNs suffer
from high computational cost and poor scalability. We propose that these
limitations arise because MPNNs only pass two-body messages leading to a direct
relationship between the number of layers and the expressivity of the network.
In this work, we introduce MACE, a new equivariant MPNN model that uses higher
body order messages. In particular, we show that using four-body messages
reduces the required number of message passing iterations to just \emph{two},
resulting in a fast and highly parallelizable model, reaching or exceeding
state-of-the-art accuracy on the rMD17, 3BPA, and AcAc benchmark tasks. We also
demonstrate that using higher order messages leads to an improved steepness of
the learning curves.
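The core idea, that products of a single pooled neighbor sum already carry many-body information, can be illustrated with a toy numpy sketch. The shapes and features below are hypothetical and this is not the actual MACE architecture, only a minimal illustration of why higher body order removes the need for many message passing layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_body_pool(neighbor_feats):
    """Standard two-body message: sum of per-edge features phi(r_ij)."""
    return neighbor_feats.sum(axis=0)  # pooled feature A_i, shape (k,)

def higher_order_features(A, nu):
    """Take a nu-fold tensor product of the pooled features A.

    Products of nu copies of A_i contain up to (nu+1)-body information
    (the central atom plus nu neighbors) from a single pooling step,
    so expressivity does not require stacking many layers.
    """
    out = A
    for _ in range(nu - 1):
        out = np.multiply.outer(out, A)
    return out

# Hypothetical example: 5 neighbors with 4-dimensional edge features.
edge_feats = rng.normal(size=(5, 4))
A = two_body_pool(edge_feats)      # two-body, shape (4,)
B3 = higher_order_features(A, 3)   # contains up to 4-body terms, shape (4, 4, 4)
print(A.shape, B3.shape)
```

A real implementation would additionally symmetrize these tensor products with learnable weights and keep only the equivariant components; the sketch only shows where the higher body order comes from.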
Related papers
- Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagation [10.587369382226251]
In this paper, we propose a new inductive KG reasoning model, MStar, by leveraging conditional message passing neural networks (C-MPNNs).
Our key insight is to select multiple query-specific starting entities to expand the scope of progressive propagation.
Experimental results validate that MStar achieves superior performance compared with state-of-the-art models.
arXiv Detail & Related papers (2024-07-15T04:16:20Z)
- Link Prediction with Untrained Message Passing Layers [0.716879432974126]
We study the use of various untrained message passing layers in graph neural networks.
We find that untrained message passing layers can lead to competitive and even superior performance compared to fully trained MPNNs.
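An untrained message passing layer can be sketched in a few lines of numpy. This assumes GCN-style symmetric normalization (the paper studies several layer variants); the graph and features below are made up for illustration:

```python
import numpy as np

def untrained_mp_layer(X, A):
    """One propagation step X' = D^{-1/2} (A + I) D^{-1/2} X.

    No learned weights: the layer only mixes node features along edges,
    yet such embeddings can already be competitive for link prediction.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

# Tiny example graph: a triangle with one pendant node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                 # one-hot node features
H = untrained_mp_layer(X, A)  # smoothed features, with no training at all
print(H.round(3))
```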
arXiv Detail & Related papers (2024-06-24T14:46:34Z)
- Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs [77.42221150848535]
We propose a novel message passing function called Multiset-to-Multiset GNN (M2M-GNN).
Our theoretical analyses and extensive experiments demonstrate that M2M-GNN effectively alleviates the aforementioned limitations of SMP, yielding superior performance in comparison.
arXiv Detail & Related papers (2024-05-31T07:39:22Z)
- CIN++: Enhancing Topological Message Passing [3.584867245855462]
Graph Neural Networks (GNNs) have demonstrated remarkable success in learning from graph-structured data.
However, they face significant limitations in expressive power, struggling with long-range interactions and lacking a principled approach to modeling higher-order structures and group interactions.
We propose CIN++, an enhancement of the topological message passing scheme introduced in CINs.
arXiv Detail & Related papers (2023-06-06T10:25:10Z)
- Provably Convergent Subgraph-wise Sampling for Fast GNN Training [63.530816506578674]
We propose a novel subgraph-wise sampling method with a convergence guarantee, namely Local Message Compensation (LMC).
Based on a message passing formulation of backward passes, LMC retrieves the messages discarded during sampling.
Experiments on large-scale benchmarks demonstrate that LMC is significantly faster than state-of-the-art subgraph-wise sampling methods.
arXiv Detail & Related papers (2023-03-17T05:16:49Z)
- QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks [14.766917269393865]
Quantization has emerged as a promising technique to reduce the size of neural networks with accuracy comparable to that of their floating-point counterparts.
We propose a novel and efficient formal verification approach for QNNs.
In particular, we are the first to propose an encoding that reduces the verification problem of QNNs into the solving of integer linear constraints.
arXiv Detail & Related papers (2022-12-10T03:00:29Z)
- MFA: TDNN with Multi-scale Frequency-channel Attention for Text-independent Speaker Verification with Short Utterances [94.70787497137854]
We propose a multi-scale frequency-channel attention (MFA) to characterize speakers at different scales through a novel dual-path design which consists of a convolutional neural network and TDNN.
We evaluate the proposed MFA on the VoxCeleb database and observe that the proposed framework with MFA can achieve state-of-the-art performance while reducing parameters and complexity.
arXiv Detail & Related papers (2022-02-03T14:57:05Z)
- Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context.
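The contrast between masking and sentence-like alternatives can be shown with a toy example. Both functions below are simplified, hypothetical stand-ins for the objectives the paper compares, not the paper's actual preprocessing:

```python
import random

random.seed(0)

def mask_input(tokens, p=0.35):
    """MLM-style corruption: destroy tokens, leaving visible [MASK] gaps."""
    return [t if random.random() > p else "[MASK]" for t in tokens]

def shuffle_input(tokens, window=3):
    """Locally reorder tokens within small windows.

    The corrupted input still contains only real words and looks like a
    full sentence, unlike the masked version.
    """
    out = list(tokens)
    for start in range(0, len(out), window):
        chunk = out[start:start + window]
        random.shuffle(chunk)
        out[start:start + window] = chunk
    return out

sent = "the cat sat on the mat".split()
print(mask_input(sent))     # contains [MASK] placeholders
print(shuffle_input(sent))  # same words, permuted order
```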
arXiv Detail & Related papers (2021-06-10T10:18:23Z)
- FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention because they are potentially orders of magnitude faster at inference, as well as more power-efficient, than their full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We elaborately design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
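Why ternary inner products are cheap at all can be illustrated with bitmask arithmetic. This is a generic, hedged illustration of ternary evaluation against binary activations, not the paper's specific factor-of-2 algorithm; the example vectors are made up:

```python
def pack(bits):
    """Pack a list of 0/1 flags into an int bitmask (index i -> bit i)."""
    m = 0
    for i, b in enumerate(bits):
        m |= b << i
    return m

def ternary_dot(x_bits, w_pos, w_neg):
    """<x, w> for w in {-1, 0, +1}^n encoded as two bitmasks.

    Two bitwise ANDs and two popcounts replace n multiply-accumulates:
    w_pos marks positions where w == +1, w_neg where w == -1.
    """
    return bin(x_bits & w_pos).count("1") - bin(x_bits & w_neg).count("1")

# Hypothetical 8-element example.
w = [1, -1, 0, 1, 0, -1, 1, 0]   # ternary weights
x = [1, 1, 0, 1, 1, 1, 0, 0]     # binary activations
x_bits = pack(x)
w_pos = pack([int(v == 1) for v in w])
w_neg = pack([int(v == -1) for v in w])

reference = sum(xi * wi for xi, wi in zip(x, w))  # naive dot product
print(ternary_dot(x_bits, w_pos, w_neg), reference)
```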
arXiv Detail & Related papers (2020-08-12T04:26:18Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS) encoded neuromorphic systems.
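The addition-only property of spiking inference can be sketched in a few lines. This is a generic toy integrator, not the paper's TTFS scheme, and the spike times and weights are made up:

```python
def integrate(spike_times, weights, t_max):
    """Accumulate a neuron's membrane potential over discrete time steps.

    With binary spikes, each incoming spike simply adds its synaptic
    weight - additions only, no multiply-accumulate operations.
    """
    v = 0.0
    trace = []
    for t in range(t_max):
        for j, ts in enumerate(spike_times):
            if ts == t:          # presynaptic neuron j spikes at time t
                v += weights[j]  # addition only
        trace.append(v)
    return trace

# Three presynaptic neurons spiking at times 0, 2, and 2.
trace = integrate(spike_times=[0, 2, 2], weights=[0.5, -0.2, 0.8], t_max=4)
print(trace)
```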
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.