Efficient Prediction of SO(3)-Equivariant Hamiltonian Matrices via SO(2) Local Frames
- URL: http://arxiv.org/abs/2506.09398v1
- Date: Wed, 11 Jun 2025 05:04:29 GMT
- Title: Efficient Prediction of SO(3)-Equivariant Hamiltonian Matrices via SO(2) Local Frames
- Authors: Haiyang Yu, Yuchao Lin, Xuan Zhang, Xiaofeng Qian, Shuiwang Ji
- Abstract summary: We consider the task of predicting Hamiltonian matrices to accelerate electronic structure calculations. Motivated by the inherent relationship between the off-diagonal blocks of the Hamiltonian matrix and the SO(2) local frame, we propose QHNetV2.
- Score: 59.87385171177885
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the task of predicting Hamiltonian matrices to accelerate electronic structure calculations, which plays an important role in physics, chemistry, and materials science. Motivated by the inherent relationship between the off-diagonal blocks of the Hamiltonian matrix and the SO(2) local frame, we propose a novel and efficient network, called QHNetV2, that achieves global SO(3) equivariance without the costly SO(3) Clebsch-Gordan tensor products. This is achieved by introducing a set of new efficient and powerful SO(2)-equivariant operations and performing all off-diagonal feature updates and message passing within SO(2) local frames, thereby eliminating the need of SO(3) tensor products. Moreover, a continuous SO(2) tensor product is performed within the SO(2) local frame at each node to fuse node features, mimicking the symmetric contraction operation. Extensive experiments on the large QH9 and MD17 datasets demonstrate that our model achieves superior performance across a wide range of molecular structures and trajectories, highlighting its strong generalization capability. The proposed SO(2) operations on SO(2) local frames offer a promising direction for scalable and symmetry-aware learning of electronic structures. Our code will be released as part of the AIRS library https://github.com/divelab/AIRS.
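The SO(2) local-frame idea behind the abstract can be illustrated with a small numerical sketch (helper names here are assumptions for illustration, not the QHNetV2 code): rotating each edge vector onto the z-axis fixes that axis, so a global SO(3) rotation acts on the local-frame features only through a residual rotation about z, i.e. an SO(2) element. This is why expensive SO(3) tensor products can be replaced by cheaper SO(2) operations inside the frame.

```python
import numpy as np

def edge_frame(r_ij):
    """Rotation matrix whose rows form a local frame with r_ij along z."""
    z = r_ij / np.linalg.norm(r_ij)
    # Pick any reference axis not parallel to z, then Gram-Schmidt it.
    a = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = a - z * (a @ z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])  # rows are the local frame axes

rng = np.random.default_rng(0)
r = rng.normal(size=3)                      # an edge vector r_ij
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                           # make Q a proper rotation in SO(3)

# Residual transformation between the frame of Q@r and the frame of r:
# it must fix the z-axis, i.e. it is a rotation about z (an SO(2) element).
M = edge_frame(Q @ r) @ Q @ edge_frame(r).T
```

The check that `M` fixes the z-axis is exactly the statement that only the azimuthal degree of freedom survives inside the local frame, which is the symmetry the SO(2)-equivariant operations exploit.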
Related papers
- Optimal Symbolic Construction of Matrix Product Operators and Tree Tensor Network Operators [0.0]
This research introduces an improved framework for constructing matrix product operators (MPOs) and tree tensor network operators (TTNOs). A given (Hamiltonian) operator typically has a known symbolic "sum of operator strings" form that can be translated into a tensor network structure.
arXiv Detail & Related papers (2025-02-25T20:33:30Z) - Efficient and Scalable Density Functional Theory Hamiltonian Prediction through Adaptive Sparsity [11.415146682472127]
Hamiltonian matrix prediction is pivotal in computational chemistry. SPHNet is an efficient and scalable equivariant network that incorporates adaptive SParsity into Hamiltonian prediction. SPHNet achieves state-of-the-art accuracy while providing up to a 7x speedup over existing models.
arXiv Detail & Related papers (2025-02-03T09:04:47Z) - Incorporating Arbitrary Matrix Group Equivariance into KANs [69.30866522377694]
We propose Equivariant Kolmogorov-Arnold Networks (EKAN), a method for incorporating arbitrary matrix group equivariance into KANs. EKAN achieves higher accuracy with smaller datasets or fewer parameters on symmetry-related tasks, such as particle scattering and the three-body problem.
arXiv Detail & Related papers (2024-10-01T06:34:58Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - TraceGrad: a Framework Learning Expressive SO(3)-equivariant Non-linear Representations for Electronic-Structure Hamiltonian Prediction [1.8982950873008362]
We propose a framework to combine strong non-linear expressiveness with strict SO(3) equivariance in the prediction of the electronic-structure Hamiltonian. Our method achieves state-of-the-art prediction accuracy across eight challenging Hamiltonian-prediction benchmark databases.
arXiv Detail & Related papers (2024-05-09T12:34:45Z) - FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame-averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
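The frame-averaging mechanism admits a compact sketch (illustrative only; the names are assumptions, not the FAENet API): express the positions in frames derived from the principal axes of the point cloud, evaluate an arbitrary function in each frame, and average over the sign ambiguities of the eigenvectors. Because a rotated input yields the same set of frame-projected coordinates, the averaged output is rotation-invariant even when the underlying function is not.

```python
import numpy as np
from itertools import product

def pca_frames(pos):
    """Yield the centered positions expressed in each of the 8 PCA sign frames."""
    centered = pos - pos.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)  # principal axes (columns)
    for signs in product([1.0, -1.0], repeat=3):
        yield centered @ (vecs * np.array(signs))

def frame_average(f, pos):
    """Average a non-equivariant function f over all frames -> invariant output."""
    outs = [f(p) for p in pca_frames(pos)]
    return sum(outs) / len(outs)

# An arbitrary, deliberately non-invariant scalar function of the positions:
f = lambda p: float(np.sum(np.sin(p)))
```

This exhaustive version enumerates all eight frames to make the invariance mechanism explicit; the stochastic variant (SFA) instead samples frames to keep the cost low.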
arXiv Detail & Related papers (2023-04-28T21:48:31Z) - Rethinking SO(3)-equivariance with Bilinear Tensor Networks [0.0]
We show that by judicious symmetry breaking, we can efficiently increase the expressiveness of a network operating only on vector and order-2 tensor representations of SO(3).
We demonstrate the method on an important problem from High Energy Physics known as b-tagging, where particle jets originating from b-meson decays must be discriminated from an overwhelming QCD background.
arXiv Detail & Related papers (2023-03-20T17:23:15Z) - Tensor Factorized Hamiltonian Downfolding To Optimize The Scaling Complexity Of The Electronic Correlations Problem on Classical and Quantum Computers [0.3613661942047476]
We introduce tensor-factorized Hamiltonian downfolding (TFHD) and its quantum analogue, qubitized downfolding (QD). TFHD collapses every high-rank object to rank-2 networks executed in depth-optimal, block-encoded circuits. We demonstrate super-quadratic speedups of expensive quantum chemistry algorithms on both classical and quantum computers.
arXiv Detail & Related papers (2023-03-13T12:15:54Z) - SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a great trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.