Quantized SO(3)-Equivariant Graph Neural Networks for Efficient Molecular Property Prediction
- URL: http://arxiv.org/abs/2601.02213v1
- Date: Mon, 05 Jan 2026 15:36:04 GMT
- Title: Quantized SO(3)-Equivariant Graph Neural Networks for Efficient Molecular Property Prediction
- Authors: Haoyu Zhou, Ping Xue, Tianfan Fu, Hao Zhang
- Abstract summary: This paper addresses the problem by compressing and accelerating an SO(3)-equivariant GNN using low-bit quantization techniques. Experiments on the QM9 and rMD17 molecular benchmarks demonstrate that our 8-bit models achieve accuracy on energy and force predictions comparable to full-precision baselines. The proposed techniques enable the deployment of symmetry-aware GNNs in practical chemistry applications with 2.37--2.73x faster inference and 4x smaller model size.
- Score: 12.753341915660073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deploying 3D graph neural networks (GNNs) that are equivariant to 3D rotations (the group SO(3)) on edge devices is challenging due to their high computational cost. This paper addresses the problem by compressing and accelerating an SO(3)-equivariant GNN using low-bit quantization techniques. Specifically, we introduce three innovations for quantized equivariant transformers: (1) a magnitude-direction decoupled quantization scheme that separately quantizes the norm and orientation of equivariant (vector) features, (2) a branch-separated quantization-aware training strategy that treats invariant and equivariant feature channels differently in an attention-based SO(3)-GNN, and (3) a robustness-enhancing attention normalization mechanism that stabilizes low-precision attention computations. Experiments on the QM9 and rMD17 molecular benchmarks demonstrate that our 8-bit models achieve accuracy on energy and force predictions comparable to full-precision baselines with markedly improved efficiency. We also conduct ablation studies to quantify each component's contribution to maintaining accuracy and equivariance under quantization, measured with the Local Error of Equivariance (LEE) metric. The proposed techniques enable the deployment of symmetry-aware GNNs in practical chemistry applications with 2.37--2.73x faster inference and 4x smaller model size, without sacrificing accuracy or physical symmetry.
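Of the three components, the magnitude-direction decoupled scheme in (1) is the most self-contained, so here is a minimal NumPy sketch of that idea. This is an illustrative reading of the abstract, not the authors' implementation: the per-tensor symmetric quantizer, the re-normalization of quantized directions, and all function names are assumptions.

```python
import numpy as np

def fake_quant_symmetric(x, num_bits=8):
    """Uniform symmetric fake-quantization: round onto a signed integer
    grid, then map back to floats (simulates low-bit storage)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax + 1e-12
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def quantize_vector_features(v, num_bits=8):
    """Magnitude-direction decoupled quantization of equivariant features.

    v: (n_channels, 3) array of 3D vector features. The norm (a rotation-
    invariant scalar) and the unit direction (the rotation-equivariant
    part) are quantized with separate scales, so a few large norms cannot
    swamp the resolution available to directions.
    """
    norms = np.linalg.norm(v, axis=-1, keepdims=True)   # invariant part
    dirs = v / (norms + 1e-12)                          # equivariant part
    q_norms = fake_quant_symmetric(norms, num_bits)
    q_dirs = fake_quant_symmetric(dirs, num_bits)
    # re-project onto the unit sphere so the direction stays a direction
    q_dirs = q_dirs / (np.linalg.norm(q_dirs, axis=-1, keepdims=True) + 1e-12)
    return q_norms * q_dirs
```

Because the norm and the unit direction get independent scales, a handful of large-magnitude vectors no longer exhausts the integer grid available to the remaining channels' orientations, which is one way to read why decoupling helps preserve equivariance under low-bit storage.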
Related papers
- Preserving Continuous Symmetry in Discrete Spaces: Geometric-Aware Quantization for SO(3)-Equivariant GNNs [12.753341915660073]
We propose a Geometric-Aware Quantization (GAQ) framework that compresses and accelerates equivariant models. On consumer hardware, GAQ achieves a 2.39x inference speedup and 4x memory reduction, enabling stable, energy-conserving molecular dynamics simulations.
arXiv Detail & Related papers (2026-03-05T16:20:21Z)
- Tail-Aware Post-Training Quantization for 3D Geometry Models [58.79500829118265]
Post-Training Quantization (PTQ) enables efficient inference without retraining. However, PTQ fails to transfer effectively to 3D models due to intricate feature distributions and prohibitive calibration overhead. We propose TAPTQ, a Tail-Aware Post-Training Quantization pipeline for 3D geometric learning.
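The abstract does not spell out the TAPTQ pipeline, but the "tail-aware" motivation suggests calibration that is robust to heavy-tailed feature distributions. The sketch below shows only that generic idea, percentile clipping instead of max-based scaling; the percentile heuristic, function names, and defaults are assumptions, not the paper's method.

```python
import numpy as np

def tail_aware_scale(calib_acts, num_bits=8, tail_pct=0.1):
    """Pick a quantization scale that clips the heavy tails.

    Instead of scaling to the absolute max, which a few outliers in 3D
    geometric features can dominate, clip the range at the
    (100 - tail_pct)-th percentile of |x| over a small calibration set.
    """
    qmax = 2 ** (num_bits - 1) - 1
    clip_val = np.percentile(np.abs(calib_acts), 100.0 - tail_pct)
    return clip_val / qmax

def ptq_fake_quant(x, scale, num_bits=8):
    """Apply post-training fake-quantization with a fixed calibrated scale."""
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax, qmax) * scale
```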
arXiv Detail & Related papers (2026-02-02T07:21:15Z)
- Rotational Sampling: A Plug-and-Play Encoder for Rotation-Invariant 3D Molecular GNNs [5.558678875187018]
Graph neural networks (GNNs) have achieved remarkable success in molecular property prediction. However, traditional graph representations struggle to effectively encode the inherent 3D spatial structures of molecules. This paper proposes a novel plug-and-play 3D encoding module leveraging rotational sampling.
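As a rough illustration of what a plug-and-play rotational-sampling encoder can buy, the sketch below averages an arbitrary model's predictions over randomly sampled rotations of the input coordinates, which yields approximate rotation invariance without modifying the model. This is a generic orbit-averaging sketch, not the paper's module; the QR-based rotation sampler and all names are assumptions.

```python
import numpy as np

def random_rotation(rng):
    """Draw an approximately uniform random rotation in SO(3) via QR
    decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))   # fix column signs to make QR unique
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]        # force det = +1 (a proper rotation)
    return q

def rotation_averaged_predict(model, coords, n_samples=8, seed=0):
    """Approximate rotation invariance for any model by averaging its
    predictions over randomly rotated copies of the input coordinates."""
    rng = np.random.default_rng(seed)
    preds = [model(coords @ random_rotation(rng).T) for _ in range(n_samples)]
    return np.mean(preds, axis=0)
```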
arXiv Detail & Related papers (2025-07-01T08:58:12Z)
- Efficient Prediction of SO(3)-Equivariant Hamiltonian Matrices via SO(2) Local Frames [49.1851978742043]
We consider the task of predicting Hamiltonian matrices to accelerate electronic structure calculations. Motivated by the inherent relationship between the off-diagonal blocks of the Hamiltonian matrix and the SO(2) local frame, we propose QHNetV2.
arXiv Detail & Related papers (2025-06-11T05:04:29Z)
- Rao-Blackwell Gradient Estimators for Equivariant Denoising Diffusion [55.95767828747407]
In domains such as molecular and protein generation, physical systems exhibit inherent symmetries that are critical to model. We present a framework that reduces training variance and provides a provably lower-variance gradient estimator. We also present a practical implementation of this estimator that incorporates the loss and sampling procedure through a method we call Orbit Diffusion.
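A minimal sketch of the orbit-averaging intuition behind such Rao-Blackwellized estimators, under the assumption that the symmetry group acts by rotations: averaging the loss over transformed copies of a sample conditions the estimator on the sample's group orbit, which keeps it unbiased while reducing (never increasing) its variance. The helper below is illustrative only and is not the paper's Orbit Diffusion procedure.

```python
import numpy as np

def orbit_averaged_loss(loss_fn, x, rotations):
    """Average a per-sample loss over rotated copies of x (its group orbit).

    Rao-Blackwellization: conditioning on the orbit average preserves the
    expectation of the loss while lowering gradient-estimate variance.
    """
    return np.mean([loss_fn(x @ R.T) for R in rotations], axis=0)
```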
arXiv Detail & Related papers (2025-02-14T03:26:57Z)
- Efficient and Scalable Density Functional Theory Hamiltonian Prediction through Adaptive Sparsity [11.415146682472127]
Hamiltonian matrix prediction is pivotal in computational chemistry. SPHNet is an efficient and scalable equivariant network that incorporates adaptive SParsity into Hamiltonian prediction. SPHNet achieves state-of-the-art accuracy while providing up to a 7x speedup over existing models.
arXiv Detail & Related papers (2025-02-03T09:04:47Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
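The abstract gives no algorithmic detail, so the snippet below sketches one standard way to realize mixed precision: measure each layer's sensitivity to quantization once, then greedily spend a global bit budget where it reduces loss the most per extra bit. The greedy rule, candidate widths, and data layout are assumptions for illustration, not the paper's automatically learned precision scheme.

```python
def assign_bitwidths(sensitivity, total_budget, candidates=(4, 8, 16)):
    """Greedy mixed-precision assignment.

    sensitivity: {layer: {bits: estimated loss increase at that width}},
    measured once per layer on held-out data. Every layer starts at the
    narrowest width; the remaining budget (sum of per-layer bit-widths)
    is spent where it buys the largest loss reduction per extra bit.
    """
    assign = {layer: candidates[0] for layer in sensitivity}
    spent = sum(assign.values())
    while True:
        best = None
        for layer, bits in assign.items():
            idx = candidates.index(bits)
            if idx + 1 == len(candidates):
                continue                    # already at the widest candidate
            wider = candidates[idx + 1]
            cost = wider - bits
            if spent + cost > total_budget:
                continue                    # upgrade does not fit the budget
            gain = sensitivity[layer][bits] - sensitivity[layer][wider]
            if best is None or gain / cost > best[0]:
                best = (gain / cost, layer, wider, cost)
        if best is None or best[0] <= 0:
            return assign
        _, layer, wider, cost = best
        assign[layer] = wider
        spent += cost
```

The sensitivities themselves can be estimated by quantizing one layer at a time and recording, e.g., the perplexity increase on a held-out set.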
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Mixed Precision Quantization of Transformer Language Models for Speech Recognition [67.95996816744251]
State-of-the-art neural language models, represented by Transformers, are becoming increasingly complex and expensive for practical applications.
Current low-bit quantization methods are based on uniform precision and fail to account for the varying performance sensitivity of different parts of the system to quantization errors.
The optimal local precision settings are automatically learned using two techniques.
Experiments were conducted on the Penn Treebank (PTB) corpus and a Switchboard-trained LF-MMI TDNN system.
arXiv Detail & Related papers (2021-11-29T09:57:00Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
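Scalarization and vectorization are standard equivariance primitives, so a small sketch may help fix the terms; the code is a generic illustration, not EVFN's actual layers, and the shapes and names are assumptions. Scalarization maps vector channels to rotation-invariant scalars; vectorization maps learned coefficients back onto equivariant basis vectors.

```python
import numpy as np

def scalarize(vectors):
    """Map (C, 3) equivariant vector channels to rotation-invariant
    scalars: the Gram matrix of pairwise dot products is unchanged when
    every row is rotated by the same rotation."""
    return vectors @ vectors.T                # (C, 3) -> (C, C) invariants

def vectorize(coeffs, basis_vectors):
    """Rebuild equivariant vectors as linear combinations of equivariant
    basis vectors: rotating the basis rotates the output identically."""
    return coeffs @ basis_vectors             # (C_out, C) @ (C, 3) -> (C_out, 3)
```

If the coefficients are produced by an ordinary MLP applied to the scalarized features, the composite map stays exactly equivariant, since only invariants pass through the nonlinearity.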
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Group Convolutional Neural Networks Improve Quantum State Accuracy [1.52292571922932]
We show how to create maximally expressive models for quantum states with specific symmetry properties.
We implement group equivariant convolutional networks (G-CNNs) (Cohen and Welling, 2016) and demonstrate that performance improvements can be achieved without increasing memory use.
arXiv Detail & Related papers (2021-04-11T19:45:10Z)
- AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks [0.7378164273177589]
Existing uniform and non-uniform quantization methods exhibit an inherent conflict between the representing range and representing resolution.
We propose a novel quantization method for both weights and activations.
The key idea is to Approximate the Uniform quantization by Adaptively Superposing multiple Non-uniform quantized values, namely AUSN.
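One plausible reading of the AUSN idea is a residual superposition of non-uniform (power-of-two) quantizers, where each pass quantizes what the previous passes missed, so the sum approximates a uniform quantizer. The sketch below shows only that reading; the adaptive bit allocation of the actual method is omitted and all names and defaults are illustrative.

```python
import numpy as np

def pow2_quantize(x, num_bits=4):
    """Non-uniform quantization: snap magnitudes to signed powers of two."""
    levels = 2 ** (num_bits - 1)
    e = np.clip(np.round(np.log2(np.maximum(np.abs(x), 1e-12))),
                -levels, levels - 1)
    q = np.sign(x) * 2.0 ** e
    # deadzone: values far below the smallest representable level become 0
    return np.where(np.abs(x) < 2.0 ** (-levels - 1), 0.0, q)

def superimposed_quantize(x, num_terms=2, num_bits=4):
    """Approximate a uniform quantizer by superimposing non-uniform ones:
    quantize, then repeatedly quantize the residual and accumulate."""
    out = np.zeros_like(x, dtype=float)
    residual = np.asarray(x, dtype=float).copy()
    for _ in range(num_terms):
        q = pow2_quantize(residual, num_bits)
        out += q
        residual -= q
    return out
```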
arXiv Detail & Related papers (2020-07-08T05:10:53Z)