Active learning of effective Hamiltonian for super-large-scale atomic structures
- URL: http://arxiv.org/abs/2307.08929v3
- Date: Wed, 15 May 2024 03:46:41 GMT
- Title: Active learning of effective Hamiltonian for super-large-scale atomic structures
- Authors: Xingyue Ma, Hongying Chen, Ri He, Zhanbo Yu, Sergei Prokhorenko, Zheng Wen, Zhicheng Zhong, Jorge Íñiguez, L. Bellaiche, Di Wu, Yurong Yang
- Abstract summary: The first-principles-based effective Hamiltonian scheme provides one of the most accurate modeling techniques for large-scale structures.
We propose a general form of effective Hamiltonian and develop an active machine learning approach to parameterize the effective Hamiltonian.
This machine learning approach provides a universal and automatic way to compute the effective Hamiltonian parameters for any considered complex systems.
- Score: 7.990872447057747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The first-principles-based effective Hamiltonian scheme provides one of the most accurate modeling techniques for large-scale structures, especially for ferroelectrics. However, parameterization of the effective Hamiltonian is complicated and can be difficult for some complex systems such as high-entropy perovskites. Here, we propose a general form of effective Hamiltonian and develop an active machine learning approach to parameterize the effective Hamiltonian based on Bayesian linear regression. The parameterization is employed in molecular dynamics simulations with the prediction of energy, forces, stress and their uncertainties at each step, which decides whether first-principles calculations are executed to retrain the parameters. Structures of the BaTiO$_3$, Pb(Zr$_{0.75}$Ti$_{0.25}$)O$_3$ and (Pb,Sr)TiO$_3$ systems are taken as examples to show the accuracy of this approach, as compared with the conventional parameterization method and with experiments. This machine learning approach provides a universal and automatic way to compute the effective Hamiltonian parameters for any considered complex system with a super-large-scale (more than $10^7$ atoms) atomic structure.
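As a rough illustration of the uncertainty-gated active learning loop described in the abstract, the sketch below uses Bayesian linear regression whose predictive uncertainty decides, at each simulated step, whether to call an expensive "first-principles" oracle and retrain. This is a minimal toy, not the authors' code: the 1-D feature map, noise precision, and uncertainty threshold are all invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "configuration" and feature map; in the paper the features
# would be the symmetry-allowed terms of the effective Hamiltonian.
def features(x):
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

def dft_oracle(x):
    # Stand-in for an expensive first-principles calculation.
    return np.sin(2.0 * x)

class BayesianLinearModel:
    """Bayesian linear regression: Gaussian prior N(0, I/alpha) on the
    weights and Gaussian observation noise of precision beta."""
    def __init__(self, n_feat, alpha=1e-2, beta=100.0):
        self.beta = beta
        self.S = np.eye(n_feat) / alpha   # posterior covariance
        self.m = np.zeros(n_feat)         # posterior mean

    def update(self, Phi, y):
        # Standard closed-form Gaussian posterior update.
        S_old_inv = np.linalg.inv(self.S)
        self.S = np.linalg.inv(S_old_inv + self.beta * Phi.T @ Phi)
        self.m = self.S @ (S_old_inv @ self.m + self.beta * Phi.T @ y)

    def predict(self, Phi):
        # Predictive mean and standard deviation (noise + parameter terms).
        mean = Phi @ self.m
        var = 1.0 / self.beta + np.einsum("ij,jk,ik->i", Phi, self.S, Phi)
        return mean, np.sqrt(var)

model = BayesianLinearModel(n_feat=4)
threshold = 0.2                            # uncertainty that triggers retraining
n_dft_calls = 0
for step in range(200):                    # stand-in for MD steps
    x = rng.uniform(-1.0, 1.0, size=1)     # configuration visited at this step
    Phi = features(x)
    _, sigma = model.predict(Phi)
    if sigma[0] > threshold:               # too uncertain: run "DFT", retrain
        model.update(Phi, dft_oracle(x))
        n_dft_calls += 1
print(f"first-principles calls triggered: {n_dft_calls} / 200 steps")
```

Because the posterior uncertainty shrinks as triggered points accumulate, expensive oracle calls concentrate in the early steps, which is the mechanism that makes such a scheme cheap at the $10^7$-atom scale.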
Related papers
- Optimal and Robust In-situ Quantum Hamiltonian Learning through Parallelization [5.2946736439833595]
Hamiltonian learning is a cornerstone for advancing accurate many-body simulations, improving quantum device performance, and enabling quantum-enhanced sensing. We present the first Hamiltonian learning algorithm that achieves both optimal precision, saturating the Cramér-Rao lower bound, and robustness to realistic noise.
arXiv Detail & Related papers (2025-10-09T05:58:37Z) - Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials [2.821973780014264]
We contribute on both the methodology and dataset fronts to advance the universal deep learning paradigm for Hamiltonian prediction. NextHAM is a neural E(3)-symmetric and expressive correction method for efficient and generalizable electronic-structure Hamiltonian prediction of materials. Experimental results on Materials-HAM-SOC demonstrate that NextHAM achieves excellent accuracy and efficiency in predicting Hamiltonians and band structures.
arXiv Detail & Related papers (2025-09-24T08:30:58Z) - Exponential Quantum Speedup for Simulating Classical Lattice Dynamics [0.0]
We introduce a rigorous quantum framework for simulating general harmonic lattice dynamics.
We exploit well established quantum Hamiltonian simulation techniques.
We demonstrate the applicability of the method across a broad class of lattice models.
arXiv Detail & Related papers (2025-04-07T19:41:22Z) - Machine learning Hubbard parameters with equivariant neural networks [0.0]
We present a machine learning model based on equivariant neural networks.
We target here the prediction of Hubbard parameters computed self-consistently with iterative linear-response calculations.
Our model achieves mean absolute relative errors of 3% and 5% for Hubbard $U$ and $V$ parameters, respectively.
arXiv Detail & Related papers (2024-06-04T16:21:24Z) - Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve pruning of 80% of the model parameters while retaining 93.43% of the original performance, without any calibration data.
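The core idea of a rank-$k$ approximation of a weight matrix can be sketched with a plain truncated SVD. This is a simplified stand-in, not the paper's data-free joint scheme: the "weight matrix" below is synthetic near-low-rank data.

```python
import numpy as np

def rank_k_approx(W, k):
    """Truncated SVD: keep the k largest singular values of W, storing
    the factors (U_k, s_k, V_k^T) instead of the full matrix."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
# Synthetic near-rank-10 matrix (a stand-in for an LLM weight matrix).
W = rng.standard_normal((256, 10)) @ rng.standard_normal((10, 256))
W += 0.01 * rng.standard_normal((256, 256))   # small "noise" component

U, s, Vt = rank_k_approx(W, k=10)
W_hat = (U * s) @ Vt                          # reconstruct from the factors
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
params_kept = (U.size + s.size + Vt.size) / W.size
print(f"relative error {rel_err:.3f} with {params_kept:.1%} of parameters")
```

The stored factors need only about 8% of the original entries here; the truncation also discards the small singular values, which is where the "denoise" effect of low-rank compression comes from.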
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - A Size-Consistent Wave-function Ansatz Built from Statistical Analysis
of Orbital Occupations [0.0]
We present a fresh approach to wavefunction parametrization that is size-consistent, rapidly convergent, and numerically robust.
The general utility of this approach is verified by applying it to uncorrelated, weakly-correlated, and strongly-correlated systems.
arXiv Detail & Related papers (2023-04-20T17:30:06Z) - Sample-efficient Model-based Reinforcement Learning for Quantum Control [0.2999888908665658]
We propose a model-based reinforcement learning (RL) approach for noisy time-dependent gate optimization.
We show an order of magnitude advantage in the sample complexity of our method over standard model-free RL.
Our algorithm is well suited for controlling partially characterised one- and two-qubit systems.
arXiv Detail & Related papers (2023-04-19T15:05:19Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Effective Hamiltonian approach to the exact dynamics of open system by complex discretization approximation for environment [0.0]
This paper proposes a novel generalization of the discretization approximation method to the complex plane using complex Gauss quadratures.
The effective Hamiltonian constructed in this way is non-Hermitian and exhibits complex energy modes with negative imaginary parts.
arXiv Detail & Related papers (2023-03-12T05:34:29Z) - Hybridized Methods for Quantum Simulation in the Interaction Picture [69.02115180674885]
We provide a framework that allows different simulation methods to be hybridized and thereby improve performance for interaction picture simulations.
Physical applications of these hybridized methods yield a gate complexity scaling as $\log^2 \Lambda$ in the electric cutoff.
For the general problem of Hamiltonian simulation subject to dynamical constraints, these methods yield a query complexity independent of the penalty parameter $\lambda$ used to impose an energy cost.
arXiv Detail & Related papers (2021-09-07T20:01:22Z) - Optimal radial basis for density-based atomic representations [58.720142291102135]
We discuss how to build an adaptive, optimal numerical basis that is chosen to represent most efficiently the structural diversity of the dataset at hand.
For each training dataset, this optimal basis is unique, and can be computed at no additional cost with respect to the primitive basis.
We demonstrate that this construction yields representations that are accurate and computationally efficient.
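Constructing a data-adapted basis from a primitive one can be sketched as a principal-component rotation of the expansion coefficients, computed once from the training set. This is a simplified stand-in for the paper's procedure; the coefficient data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Expansion coefficients of N structures in a primitive basis of size P.
# Synthetic stand-in: the true variability lives in a 5-dimensional subspace.
N, P = 500, 20
latent = rng.standard_normal((N, 5))
mixing = rng.standard_normal((5, P))
C = latent @ mixing + 0.05 * rng.standard_normal((N, P))

# The data-adapted basis is the eigenbasis of the coefficient covariance:
# a rotation of the primitive basis, unique to this training set.
cov = C.T @ C / N
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]          # sort by decreasing variance
eigval, eigvec = eigval[order], eigvec[:, order]

k = 5                                     # keep the k most informative functions
C_opt = C @ eigvec[:, :k]                 # coefficients in the contracted basis
recon = C_opt @ eigvec[:, :k].T           # map back to the primitive basis
rel_err = np.linalg.norm(C - recon) / np.linalg.norm(C)
print(f"rank-{k} adapted basis: relative reconstruction error {rel_err:.3f}")
```

Since the rotation is computed from quantities already available in the primitive basis, the adapted basis indeed comes at essentially no additional cost over the primitive one.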
arXiv Detail & Related papers (2021-05-18T17:57:08Z) - Circuit quantum electrodynamics (cQED) with modular quasi-lumped models [0.23624125155742057]
The method partitions a quantum device into compact lumped or quasi-distributed cells.
We experimentally validate the method on large-scale, state-of-the-art superconducting quantum processors.
arXiv Detail & Related papers (2021-03-18T16:03:37Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success remains unclear.
We show that multiplicative noise commonly arises in the parameter dynamics and gives rise to heavy-tailed behavior.
A detailed analysis examines key factors, including step size and data, with consistent results across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.