TraceGrad: a Framework Learning Expressive SO(3)-equivariant Non-linear Representations for Electronic-Structure Hamiltonian Prediction
- URL: http://arxiv.org/abs/2405.05722v5
- Date: Fri, 31 Jan 2025 09:18:55 GMT
- Title: TraceGrad: a Framework Learning Expressive SO(3)-equivariant Non-linear Representations for Electronic-Structure Hamiltonian Prediction
- Authors: Shi Yin, Xinyang Pan, Fengyan Wang, Lixin He
- Abstract summary: We propose a framework to combine strong non-linear expressiveness with strict SO(3)-equivariance in the prediction of the electronic-structure Hamiltonian.
Our method achieves state-of-the-art performance in prediction accuracy across eight challenging benchmark databases on Hamiltonian prediction.
- Score: 1.8982950873008362
- Abstract: We propose a framework to combine strong non-linear expressiveness with strict SO(3)-equivariance in the prediction of the electronic-structure Hamiltonian, by exploring the mathematical relationships between SO(3)-invariant and SO(3)-equivariant quantities and their representations. The proposed framework, called TraceGrad, first constructs theoretical SO(3)-invariant trace quantities derived from the Hamiltonian targets, and uses these invariant quantities as supervisory labels to guide the learning of high-quality SO(3)-invariant features. Given that SO(3)-invariance is preserved under non-linear operations, the learning of invariant features can extensively utilize non-linear mappings, thereby fully capturing the non-linear patterns inherent in physical systems. Building on this, we propose a gradient-based mechanism to induce SO(3)-equivariant encodings of various degrees from the learned SO(3)-invariant features. This mechanism incorporates powerful non-linear expressive capabilities into SO(3)-equivariant features, with physical dimensions consistent with the regression targets, while theoretically preserving equivariant properties, establishing a strong foundation for predicting the Hamiltonian. Our method achieves state-of-the-art prediction accuracy across eight challenging benchmark databases for Hamiltonian prediction. Experimental results demonstrate that this approach not only improves the accuracy of Hamiltonian prediction but also significantly enhances the prediction of downstream physical quantities, and markedly accelerates traditional Density Functional Theory algorithms.
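The gradient mechanism described in the abstract can be illustrated with a minimal PyTorch sketch, simplified to degree-1 (vector) features; `trace_label`, `invariant_net`, and all shapes here are illustrative assumptions, not the paper's architecture:

```python
import torch

def trace_label(H_block: torch.Tensor) -> torch.Tensor:
    """SO(3)-invariant supervision target for one orbital-pair block:
    tr(H @ H.T) is unchanged under rotations acting on the block."""
    return torch.einsum('ij,ij->', H_block, H_block)

def equivariant_from_invariant(x: torch.Tensor, invariant_net: torch.nn.Module) -> torch.Tensor:
    """Induce an SO(3)-equivariant vector from an invariant scalar head.

    x: (N, 3) degree-1 (vector) features.  Because s = f(|x|) is invariant,
    its gradient ds/dx = f'(|x|) * x/|x| rotates exactly as x does, so the
    output is equivariant while f itself may be arbitrarily non-linear.
    """
    x = x.detach().requires_grad_(True)
    s = invariant_net(torch.linalg.norm(x, dim=-1, keepdim=True))  # invariant scalars
    (grad,) = torch.autograd.grad(s.sum(), x, create_graph=True)
    return grad

# Sanity check: rotating the input rotates the output by the same R.
if __name__ == "__main__":
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.SiLU(), torch.nn.Linear(32, 1))
    x = torch.randn(5, 3)
    R = torch.linalg.qr(torch.randn(3, 3)).Q   # random orthogonal matrix
    if torch.det(R) < 0:
        R = -R                                 # ensure a proper rotation in SO(3)
    assert torch.allclose(equivariant_from_invariant(x @ R.T, net),
                          equivariant_from_invariant(x, net) @ R.T, atol=1e-5)
```

The key point is that any non-linear `invariant_net` may be used: invariance of s guarantees that its gradient transforms exactly like x, which is what allows non-linear expressiveness without breaking equivariance.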
Related papers
- Efficient and Scalable Density Functional Theory Hamiltonian Prediction through Adaptive Sparsity [11.415146682472127]
We introduce an efficient and scalable equivariant network that incorporates adaptive sparsity into Hamiltonian prediction.
We develop a Three-phase Sparsity Scheduler, ensuring stable convergence and achieving high performance at sparsity rates of up to 70 percent.
Beyond Hamiltonian prediction, the proposed sparsification techniques also hold significant potential for improving the efficiency and scalability of other SE(3)-equivariant networks.
arXiv Detail & Related papers (2025-02-03T09:04:47Z)
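The Three-phase Sparsity Scheduler is not spelled out in the summary above, so the following is only a generic sketch of such a schedule (dense warm-up, gradual ramp, plateau at 70 percent); the function name, the phase lengths, and the cubic ramp are assumptions:

```python
def sparsity_rate(step: int, warmup: int = 1000, ramp: int = 9000, target: float = 0.70) -> float:
    """Generic three-phase schedule: train dense, ramp sparsity up, then hold.

    Phase 1 (step < warmup):          0% sparsity, stabilise the dense network.
    Phase 2 (warmup .. warmup+ramp):  smooth cubic ramp from 0 to `target`.
    Phase 3 (afterwards):             hold at `target` (e.g. 70%).
    """
    if step < warmup:
        return 0.0
    if step < warmup + ramp:
        t = (step - warmup) / ramp
        return target * (1.0 - (1.0 - t) ** 3)  # cubic ramp, fast early, gentle late
    return target
```

The cubic ramp mirrors common magnitude-pruning schedules, which prune aggressively early and gently near the target rate.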
- Harmonizing SO(3)-Equivariance with Neural Expressiveness: a Hybrid Deep Learning Framework Oriented to the Prediction of Electronic Structure Hamiltonian [36.13416266854978]
HarmoSE is a two-stage cascaded regression framework for deep learning.
The first stage predicts Hamiltonians from abundant extracted SO(3)-equivariant features.
The second stage refines the first stage's output into a fine-grained prediction of the Hamiltonians.
arXiv Detail & Related papers (2024-01-01T12:57:15Z)
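A two-stage cascade of the kind the HarmoSE summary describes might be sketched as follows (schematic placeholders only; the stage modules and the residual-refinement choice are assumptions, not HarmoSE's actual design):

```python
import torch

class TwoStageCascade(torch.nn.Module):
    """Stage 1 makes a coarse prediction; stage 2 predicts a residual
    correction conditioned on both the input features and the coarse output."""
    def __init__(self, stage1: torch.nn.Module, stage2: torch.nn.Module):
        super().__init__()
        self.stage1, self.stage2 = stage1, stage2

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        coarse = self.stage1(features)                                     # coarse Hamiltonian estimate
        refined = coarse + self.stage2(torch.cat([features, coarse], -1))  # fine-grained refinement
        return refined

# e.g. TwoStageCascade(torch.nn.Linear(16, 8), torch.nn.Linear(24, 8))
```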
- Stress representations for tensor basis neural networks: alternative formulations to Finger-Rivlin-Ericksen [0.0]
We survey a variety of tensor basis neural network models for modeling hyperelastic materials in a finite-deformation context.
We compare potential-based and coefficient-based approaches, as well as different calibration techniques.
Nine variants are tested against both noisy and noiseless datasets for three different materials.
arXiv Detail & Related papers (2023-08-21T23:28:26Z)
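A coefficient-based tensor basis model of the Finger-Rivlin-Ericksen form expresses stress as invariant-dependent coefficients multiplying basis tensors; a minimal isotropic sketch (the layer sizes and the three-term basis I, B, B² are assumptions):

```python
import torch

class TensorBasisStress(torch.nn.Module):
    """Coefficient-based model: sigma = c0*I + c1*B + c2*(B @ B), with the
    coefficients c_i given by a small MLP of the invariants (I1, I2, I3)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.coeffs = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.SiLU(), torch.nn.Linear(hidden, 3))

    def forward(self, B: torch.Tensor) -> torch.Tensor:
        # B: (..., 3, 3) left Cauchy-Green deformation tensor
        B2 = B @ B
        I1 = B.diagonal(dim1=-2, dim2=-1).sum(-1)
        I2 = 0.5 * (I1 ** 2 - B2.diagonal(dim1=-2, dim2=-1).sum(-1))
        I3 = torch.linalg.det(B)
        c = self.coeffs(torch.stack([I1, I2, I3], dim=-1))          # (..., 3)
        eye = torch.eye(3, dtype=B.dtype, device=B.device).expand_as(B)
        basis = torch.stack([eye, B, B2], dim=-1)                   # (..., 3, 3, 3)
        return (basis * c[..., None, None, :]).sum(-1)              # stress (..., 3, 3)
```

Because the coefficients depend only on invariants, the predicted stress is frame-indifferent by construction, which is the point of the tensor-basis formulation.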
- Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder [5.521324490427243]
A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed.
LSH-VAE is capable of performing nonlinear MOR for parametric nonlinear dynamical systems with a significant number of degrees of freedom.
arXiv Detail & Related papers (2023-07-10T02:44:53Z)
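The general idea of nonlinear MOR via a learned latent space can be sketched as follows (a plain autoencoder with linear latent interpolation, standing in for the paper's hierarchical variational architecture; all names and sizes are assumptions):

```python
import torch

class LatentMOR(torch.nn.Module):
    """Schematic nonlinear MOR: compress full-order snapshots to a small
    latent space, then predict unseen parameters by interpolating latents."""
    def __init__(self, n_dof: int, n_latent: int = 16):
        super().__init__()
        self.enc = torch.nn.Sequential(torch.nn.Linear(n_dof, 256), torch.nn.SiLU(),
                                       torch.nn.Linear(256, n_latent))
        self.dec = torch.nn.Sequential(torch.nn.Linear(n_latent, 256), torch.nn.SiLU(),
                                       torch.nn.Linear(256, n_dof))

    def rom_predict(self, snap_a: torch.Tensor, snap_b: torch.Tensor, alpha: float) -> torch.Tensor:
        # Interpolate between two sampled parameter points in latent space,
        # then decode back to the full-order space.
        z = (1 - alpha) * self.enc(snap_a) + alpha * self.enc(snap_b)
        return self.dec(z)
```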
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newtonian mechanics systems with both fully and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
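The scalarization/vectorization pattern the EVFN summary mentions can be illustrated generically: invariants via pairwise inner products, equivariant outputs via invariant-weighted combinations (a sketch of the pattern, not EVFN's exact basis):

```python
import torch

def scalarize(v: torch.Tensor) -> torch.Tensor:
    """Invariant features from vector features v of shape (..., K, 3): the
    Gram matrix of pairwise inner products is unchanged by global rotations."""
    return torch.einsum('...ic,...jc->...ij', v, v)   # (..., K, K)

def vectorize(v: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Equivariant vectors from invariant weights (..., O, K): a linear
    combination of the input vectors with rotation-invariant coefficients
    transforms exactly as the inputs do."""
    return torch.einsum('...ok,...kc->...oc', weights, v)   # (..., O, 3)
```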
- A deep learning driven pseudospectral PCE based FFT homogenization algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being orders of magnitude faster to evaluate than traditional approaches.
arXiv Detail & Related papers (2021-10-26T07:02:14Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure underlying its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
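Dimension-based generalization bounds of this kind typically take the following schematic shape (constants, technical conditions, and the precise notion of dimension are as defined in the paper and not reproduced here):

```latex
% Schematic shape only: with probability at least 1 - \delta over a
% sample of size n,
\[
  \bigl|\operatorname{gen}(W)\bigr|
  \;\lesssim\;
  \sqrt{\frac{\bar{d} + \log(1/\delta)}{n}},
\]
% where \bar{d} is a (fractal) dimension of the structure underlying the
% algorithm's trajectories, as made precise in the paper.
```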
- Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics [49.41640137945938]
We propose a neural dynamic mode decomposition for estimating a lifting function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
arXiv Detail & Related papers (2020-12-11T08:34:26Z)
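The lift-then-linear-fit backbone such a method builds on can be sketched as follows (the lift network, the least-squares fit, and staying in lifted space are simplifying assumptions; the paper additionally decodes back and backpropagates through the spectral decomposition):

```python
import torch

def dmd_forecast(lift: torch.nn.Module, X: torch.Tensor, horizon: int) -> torch.Tensor:
    """Lift states X (T, n) with a neural network, fit a linear operator A
    by least squares on consecutive lifted states, and roll the linear
    dynamics forward in the lifted space."""
    G = lift(X)                                  # (T, d) lifted snapshots
    G0, G1 = G[:-1], G[1:]
    A = torch.linalg.lstsq(G0, G1).solution.T    # fits g_{t+1} ~ A @ g_t
    g, preds = G[-1], []
    for _ in range(horizon):
        g = A @ g                                # linear evolution in lifted space
        preds.append(g)
    return torch.stack(preds)
```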
- Learning Partially Known Stochastic Dynamics with Empirical PAC Bayes [12.44342023476206]
This paper presents a three-step recipe for improving the prediction accuracy of such models.
We observe in our experiments that this recipe effectively translates partial and noisy prior knowledge into an improved model fit.
arXiv Detail & Related papers (2020-06-17T14:47:06Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the noise it introduces in its success is still unclear.
We show that heavy tails commonly arise in the parameter dynamics as discrete multiplicative noise due to gradient variance.
A detailed analysis of key factors, including step size and data, is conducted, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
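The mechanism can be illustrated with a toy Kesten-type recursion, a classic way multiplicative noise produces heavy-tailed stationary behavior (a generic illustration, not the paper's model; all constants are arbitrary):

```python
import numpy as np

# Toy Kesten-type recursion x_{t+1} = a_t * x_t + b_t: even with Gaussian
# a_t and b_t, the stationary law develops power-law (heavy) tails when
# the multiplicative factor a_t occasionally exceeds 1 in magnitude.
rng = np.random.default_rng(0)
x = np.zeros(2_000)                                 # 2000 independent chains
for _ in range(5_000):
    a = 0.9 + 0.4 * rng.standard_normal(x.shape)    # multiplicative noise
    b = 0.1 * rng.standard_normal(x.shape)          # additive noise
    x = a * x + b
# A sample kurtosis far above the Gaussian value of 3 signals heavy tails.
print("kurtosis:", np.mean(x**4) / np.mean(x**2) ** 2)
```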
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
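A dissipative (conformal-symplectic-style) integrator of the kind discussed above can be sketched as follows; the exponential momentum contraction recovers momentum-style gradient descent (a generic sketch under assumed step size and damping, not the paper's exact scheme):

```python
import numpy as np

def conformal_symplectic_euler(grad_f, q0, p0, h=0.1, gamma=1.0, steps=100):
    """Integrator for the dissipative system
        dq/dt = p,   dp/dt = -gamma * p - grad_f(q).
    The momentum is contracted by exp(-gamma * h) each step, so the
    symplectic structure is preserved up to a known conformal factor."""
    q, p = np.asarray(q0, float), np.asarray(p0, float)
    damp = np.exp(-gamma * h)
    traj = [q.copy()]
    for _ in range(steps):
        p = damp * p - h * grad_f(q)   # dissipate momentum, then kick
        q = q + h * p                  # drift with the updated momentum
        traj.append(q.copy())
    return np.array(traj)

# Example: minimizing f(q) = 0.5 * ||q||^2, whose gradient is q.
path = conformal_symplectic_euler(lambda q: q, q0=[2.0, -1.0], p0=[0.0, 0.0])
print(path[-1])   # approaches the minimizer at the origin
```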
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.