Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials
- URL: http://arxiv.org/abs/2509.19877v2
- Date: Thu, 25 Sep 2025 08:01:42 GMT
- Title: Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials
- Authors: Shi Yin, Zujian Dai, Xinyang Pan, Lixin He
- Abstract summary: We contribute on both the methodology and dataset sides to advance the universal deep learning paradigm for Hamiltonian prediction. NextHAM is a neural E(3)-symmetric and expressive correction method for efficient and generalizable electronic-structure Hamiltonian prediction of materials. Experimental results on Materials-HAM-SOC demonstrate that NextHAM achieves excellent accuracy and efficiency in predicting Hamiltonians and band structures.
- Score: 2.821973780014264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods for electronic-structure Hamiltonian prediction have offered significant computational efficiency advantages over traditional DFT methods, yet the diversity of atomic types, structural patterns, and the high-dimensional complexity of Hamiltonians pose substantial challenges to generalization performance. In this work, we contribute on both the methodology and dataset sides to advance the universal deep learning paradigm for Hamiltonian prediction. On the method side, we propose NextHAM, a neural E(3)-symmetric and expressive correction method for efficient and generalizable materials electronic-structure Hamiltonian prediction. First, we introduce zeroth-step Hamiltonians, which can be constructed efficiently from the initial charge density of DFT, as informative descriptors for the neural regression model at the input level and as initial estimates of the target Hamiltonian at the output level, so that the regression model directly predicts the correction terms to the target ground truths, thereby significantly simplifying the input-output mapping to be learned. Second, we present a neural Transformer architecture with strict E(3)-symmetry and high non-linear expressiveness for Hamiltonian prediction. Third, we propose a novel training objective that ensures the accuracy of Hamiltonians in both real space and reciprocal space, preventing error amplification and the occurrence of "ghost states" caused by the large condition number of the overlap matrix. On the dataset side, we curate a high-quality, broad-coverage benchmark, namely Materials-HAM-SOC, comprising 17,000 material structures spanning 68 elements from six rows of the periodic table and explicitly incorporating SOC effects. Experimental results on Materials-HAM-SOC demonstrate that NextHAM achieves excellent accuracy and efficiency in predicting Hamiltonians and band structures.
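The correction-term idea described in the abstract — regress the residual dH = H - H0 rather than the full Hamiltonian — can be sketched in a few lines. This is an illustrative toy only, not NextHAM's implementation: the function names, matrix sizes, and the placeholder H0 are all hypothetical.

```python
import numpy as np

def zeroth_step_hamiltonian(structure):
    """Placeholder for H0 built from the initial DFT charge density.
    Here it is simply a fixed Hermitian toy matrix."""
    rng = np.random.default_rng(0)
    a = rng.normal(size=(4, 4))
    return 0.5 * (a + a.T)

def make_residual_targets(h_true, h0):
    """Training targets for the correction network: dH = H - H0."""
    return h_true - h0

def predict(structure, correction_model):
    """NextHAM-style output: the network predicts only the residual,
    and the final Hamiltonian is H0 + dH."""
    h0 = zeroth_step_hamiltonian(structure)
    return h0 + correction_model(h0)
```

Because H0 already carries most of the physics, the network only has to learn a small, smoother residual — a simpler input-output mapping than regressing H from scratch.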
Related papers
- High-order Equivariant Flow Matching for Density Functional Theory Hamiltonian Prediction [14.957565545353942]
Deep learning methods are gaining attention as a way to bypass this step by directly predicting the Hamiltonian. We propose QHFlow, a high-order equivariant flow matching framework that generates Hamiltonian matrices conditioned on molecular geometry. Flow matching models continuous-time trajectories between simple priors and complex targets, learning structured distributions over Hamiltonians instead of performing direct regression.
arXiv Detail & Related papers (2025-05-24T18:23:28Z)
- Enhancing the Scalability and Applicability of Kohn-Sham Hamiltonians for Molecular Systems [11.085215676429858]
We create a scalable model for Density Functional Theory calculations with physical accuracy. We show that it achieves a reduction in total energy prediction error by a factor of 1347 and an 18% speed-up of the SCF calculation.
arXiv Detail & Related papers (2025-02-26T15:36:25Z)
- Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models [30.746062388701187]
We introduce a unified neural network architecture, the Deep Equilibrium Density Functional Theory Hamiltonian (DEQH) model.
The DEQH model inherently captures the self-consistent nature of the Hamiltonian.
We propose a versatile framework that combines DEQ with off-the-shelf machine learning models for predicting Hamiltonians.
arXiv Detail & Related papers (2024-06-06T07:05:58Z)
- TokenUnify: Scaling Up Autoregressive Pretraining for Neuron Segmentation [65.65530016765615]
We propose a hierarchical predictive coding framework that captures multi-scale dependencies through three complementary learning objectives. TokenUnify integrates random token prediction, next-token prediction, and next-all token prediction to create a comprehensive representational space. We also introduce a large-scale EM dataset with 1.2 billion annotated voxels, offering ideal long-sequence visual data with spatial continuity.
arXiv Detail & Related papers (2024-05-27T05:45:51Z)
- Self-Consistency Training for Density-Functional-Theory Hamiltonian Prediction [74.84850523400873]
We show that Hamiltonian prediction possesses a self-consistency principle, based on which we propose self-consistency training.
It enables the model to be trained on large amounts of unlabeled data, and hence addresses the data scarcity challenge.
It is more efficient than running DFT to generate labels for supervised training, since it amortizes DFT calculation over a set of queries.
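The self-consistency principle this blurb relies on — a correct Hamiltonian is a fixed point of the DFT map H -> H[rho(H)] — yields a label-free residual that can supervise a model on unlabeled structures. A minimal sketch, with a toy contraction standing in for the real DFT map (all names here are illustrative, not the paper's code):

```python
import numpy as np

def self_consistency_loss(h_pred, dft_map):
    """Label-free objective: the residual of the fixed-point condition
    dft_map(H) == H. Zero exactly when H is self-consistent."""
    return float(np.linalg.norm(dft_map(h_pred) - h_pred))
```

One evaluation of the map per query replaces a full labeled SCF calculation, which is the amortization the blurb refers to.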
arXiv Detail & Related papers (2024-03-14T16:52:57Z)
- Harmonizing SO(3)-Equivariance with Neural Expressiveness: a Hybrid Deep Learning Framework Oriented to the Prediction of Electronic Structure Hamiltonian [36.13416266854978]
HarmoSE is a two-stage cascaded regression framework for deep learning.
The first stage predicts Hamiltonians with abundant SO(3)-equivariant features extracted.
The second stage refines the first stage's output into a fine-grained prediction of Hamiltonians.
arXiv Detail & Related papers (2024-01-01T12:57:15Z)
- Active learning of effective Hamiltonian for super-large-scale atomic structures [7.990872447057747]
The first-principles-based effective Hamiltonian scheme provides one of the most accurate modeling techniques for large-scale structures.
We propose a general form of the effective Hamiltonian and develop an active machine learning approach to parameterize it.
This machine learning approach provides a universal and automatic way to compute effective Hamiltonian parameters for any complex system considered.
arXiv Detail & Related papers (2023-07-18T02:28:48Z)
- QH9: A Quantum Hamiltonian Prediction Benchmark for QM9 Molecules [69.25826391912368]
We generate a new Quantum Hamiltonian dataset, named QH9, to provide precise Hamiltonian matrices for 999 or 2,998 molecular dynamics trajectories.
We show that current machine learning models have the capacity to predict Hamiltonian matrices for arbitrary molecules.
arXiv Detail & Related papers (2023-06-15T23:39:07Z)
- Learning Neural Hamiltonian Dynamics: A Methodological Overview [109.40968389896639]
Hamiltonian dynamics endows neural networks with accurate long-term prediction, interpretability, and data-efficient learning.
We systematically survey recently proposed Hamiltonian neural network models, with a special emphasis on methodologies.
arXiv Detail & Related papers (2022-02-28T22:54:39Z)
- SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision [73.26414295633846]
A recently proposed class of models attempts to learn latent dynamics from high-dimensional observations.
Existing methods rely on image reconstruction quality, which does not always reflect the quality of the learnt latent dynamics.
We develop a set of new measures, including a binary indicator of whether the underlying Hamiltonian dynamics have been faithfully captured.
arXiv Detail & Related papers (2021-11-10T23:26:58Z)
- Graph Neural Network for Hamiltonian-Based Material Property Prediction [56.94118357003096]
We present and compare several different graph convolution networks that are able to predict the band gap for inorganic materials.
The models are developed to incorporate two different features: the information of each orbital itself and the interactions between orbitals.
The results show that our model achieves promising prediction accuracy under cross-validation.
arXiv Detail & Related papers (2020-05-27T13:32:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.