Quantum perturbation theory using Tensor cores and a deep neural network
- URL: http://arxiv.org/abs/2203.09621v2
- Date: Tue, 10 May 2022 14:21:22 GMT
- Title: Quantum perturbation theory using Tensor cores and a deep neural network
- Authors: Joshua Finkelstein, Emanuel H. Rubensson, Susan M. Mniszewski,
Christian F. A. Negre, Anders M. N. Niklasson
- Abstract summary: Time-independent quantum response calculations are performed using Tensor cores.
We demonstrate a peak performance of almost 200 Tflops using the Tensor cores of two Nvidia A100 GPUs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time-independent quantum response calculations are performed using Tensor
cores. This is achieved by mapping density matrix perturbation theory onto the
computational structure of a deep neural network. The main computational cost
of each deep layer is dominated by tensor contractions, i.e., dense
matrix-matrix multiplications, performed in mixed-precision arithmetic, which
achieves close to peak performance. Quantum response calculations are demonstrated and
analyzed using self-consistent charge density-functional tight-binding theory
as well as coupled-perturbed Hartree-Fock theory. For linear response
calculations, a novel parameter-free convergence criterion is presented that is
well suited to numerically noisy, low-precision floating-point operations. We
demonstrate a peak performance of almost 200 Tflops using the Tensor cores of
two Nvidia A100 GPUs.
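The layered structure described in the abstract can be made concrete. Below is a minimal, illustrative sketch (not the authors' released code) of an SP2-style recursion in which each "layer" is dominated by dense matrix-matrix products and the first-order response is propagated through the same layers by the product rule; the stopping rule halts when the idempotency error stops decreasing, a parameter-free criterion in the spirit of the one described above. The inputs H, H1 and n_occ (Hamiltonian, perturbation, occupation number) are illustrative.

    import numpy as np

    def sp2_linear_response(H, H1, n_occ, max_layers=100):
        """Density matrix X and first-order response P via an SP2-style recursion."""
        N = H.shape[0]
        # Spectral bounds; a production code would use cheap Gershgorin estimates.
        e_min, e_max = np.linalg.eigvalsh(H)[[0, -1]]
        X = (e_max * np.eye(N) - H) / (e_max - e_min)  # spectrum mapped into [0, 1]
        P = -H1 / (e_max - e_min)                      # response of the initial layer
        prev_err = np.inf
        for _ in range(max_layers):
            # On Tensor cores these products would run in mixed (FP16/FP32) precision.
            X2 = X @ X
            XP = X @ P + P @ X                         # product rule: d(X^2) = XP + PX
            err = np.linalg.norm(X2 - X)               # idempotency error
            if err >= prev_err:                        # parameter-free stop: numerical
                break                                  # noise now dominates
            prev_err = err
            # SP2 branch that steers trace(X) toward the occupation n_occ.
            if abs(np.trace(X2) - n_occ) <= abs(2 * np.trace(X) - np.trace(X2) - n_occ):
                X, P = X2, XP
            else:
                X, P = 2 * X - X2, 2 * P - XP
        return X, P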
Related papers
- Simulating NMR Spectra with a Quantum Computer [49.1574468325115]
This paper formalizes the complete procedure for simulating a spin system's NMR spectrum.
We also explain how to diagonalize the Hamiltonian matrix with a quantum computer, improving the performance of the overall process.
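For illustration only, the diagonalization step can be mimicked classically; the paper's point is to perform it on a quantum computer instead. A hedged numpy sketch for a two-spin liquid-state NMR Hamiltonian in the secular approximation (the parameters w1, w2 and J are made up):

    import numpy as np

    Iz = np.diag([0.5, -0.5])
    Ix = np.array([[0.0, 0.5], [0.5, 0.0]])
    I2 = np.eye(2)

    w1, w2, J = 2 * np.pi * 500.0, 2 * np.pi * 480.0, 10.0  # rad/s, rad/s, Hz
    H = (w1 * np.kron(Iz, I2) + w2 * np.kron(I2, Iz)
         + 2 * np.pi * J * np.kron(Iz, Iz))

    E, V = np.linalg.eigh(H)                 # the step done on the QC in the paper
    Fx = np.kron(Ix, I2) + np.kron(I2, Ix)   # detection operator
    M = V.T @ Fx @ V                         # transition moments in the eigenbasis
    for i in range(4):
        for j in range(i + 1, 4):
            if abs(M[i, j]) > 1e-8:          # allowed single-quantum lines
                print(f"line at {(E[j] - E[i]) / (2 * np.pi):.2f} Hz, "
                      f"intensity {M[i, j] ** 2:.3f}")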
arXiv Detail & Related papers (2024-10-28T08:43:40Z)
- Susceptibility Formulation of Density Matrix Perturbation Theory [0.0]
Density matrix perturbation theory provides a computationally efficient framework for time-independent response calculations.
We present an alternative, dual formulation, where we instead calculate the static susceptibility of an observable.
arXiv Detail & Related papers (2024-09-25T15:34:21Z)
- Implementation of the Density-functional Theory on Quantum Computers with Linear Scaling with respect to the Number of Atoms [1.4502611532302039]
Density-functional theory (DFT) has revolutionized computer simulations in chemistry and material science.
A faithful implementation of the theory requires self-consistent calculations.
This article presents a quantum algorithm that has a linear scaling with respect to the number of atoms.
arXiv Detail & Related papers (2023-07-13T21:17:58Z)
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
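A hedged sketch of the Gram-iteration idea for a dense matrix follows; the paper's actual contribution also covers convolutional layers via circulant matrix theory, which this toy version omits:

    import numpy as np

    def gram_spectral_bound(W, n_iter=6):
        """Upper bound on the spectral norm ||W||_2, tightening as n_iter grows."""
        G, log_scale = np.asarray(W, dtype=np.float64), 0.0
        for _ in range(n_iter):
            f = np.linalg.norm(G)              # Frobenius rescaling avoids overflow
            G = G / f
            log_scale = 2.0 * (log_scale + np.log(f))
            G = G.T @ G                        # Gram step squares the singular values
        # After k steps the singular values are s_i**(2**k); the Frobenius norm of G
        # then upper-bounds s_max**(2**k), so take the 2**k-th root.
        return np.exp((log_scale + np.log(np.linalg.norm(G))) / 2.0 ** n_iter)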
arXiv Detail & Related papers (2023-05-25T15:32:21Z)
- Numerical Simulations of Noisy Quantum Circuits for Computational Chemistry [51.827942608832025]
Near-term quantum computers can calculate the ground-state properties of small molecules.
We show how the structure of the computational ansatz as well as the errors induced by device noise affect the calculation.
arXiv Detail & Related papers (2021-12-31T16:33:10Z)
- Variational Adiabatic Gauge Transformation on real quantum hardware for effective low-energy Hamiltonians and accurate diagonalization [68.8204255655161]
We introduce the Variational Adiabatic Gauge Transformation (VAGT).
VAGT is a non-perturbative hybrid quantum algorithm that can use present-day quantum computers to learn the variational parameters of the unitary circuit.
The accuracy of VAGT is tested through numerical simulations, as well as simulations on Rigetti and IonQ quantum computers.
arXiv Detail & Related papers (2021-11-16T20:50:08Z)
- Quantum-based Molecular Dynamics Simulations Using Tensor Cores [2.3551989288556774]
We show how tensor cores can be applied with high efficiency to the Born-Oppenheimer molecular dynamics problem.
The interatomic forces are calculated on-the-fly from an electronic structure that is obtained from a generalized deep neural network.
A canonical ensemble simulation scheme is also presented, where the additional numerical noise in the calculated forces is absorbed into a Langevin-like dynamics.
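A minimal sketch of the noise-absorption idea, under the assumption that the low-precision force error can be treated as part of the stochastic force of a Langevin thermostat (the names and interface below are illustrative, not the authors'):

    import numpy as np

    def langevin_step(x, v, force, mass, gamma, kT, dt, rng):
        """One Euler-Maruyama step of Langevin dynamics with a (noisy) force."""
        sigma = np.sqrt(2.0 * gamma * kT * dt / mass)  # fluctuation-dissipation balance
        v = v + dt * (force(x) / mass - gamma * v) + sigma * rng.standard_normal(x.shape)
        x = x + dt * v
        return x, v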
arXiv Detail & Related papers (2021-07-06T17:11:45Z)
- Mixed Precision Fermi-Operator Expansion on Tensor Cores From a Machine Learning Perspective [0.20011494166747584]
A performance of over 100 teraFLOPs is achieved for half-precision floating point operations on Nvidia's A100 tensor core units.
A differentiable deep neural network structure is formulated to solve the quantum mechanical electronic structure problem.
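One way such expansions recover near-single-precision products from half-precision hardware is a dual-matrix split; whether this exact scheme matches the paper's is an assumption, so the numpy emulation below is only a sketch (on an A100, fp16 products with fp32 accumulation map onto the Tensor cores):

    import numpy as np

    def split_fp16(X):
        """Split X into a coarse fp16 part plus an fp16 remainder."""
        hi = X.astype(np.float16)
        lo = (X.astype(np.float32) - hi.astype(np.float32)).astype(np.float16)
        return hi, lo

    def matmul_mixed(A, B):
        """Approximate the fp32 product A @ B from four fp16-input products."""
        a_hi, a_lo = split_fp16(A)
        b_hi, b_lo = split_fp16(B)
        f32 = lambda M: M.astype(np.float32)  # fp32 accumulation, as on Tensor cores
        return (f32(a_hi) @ f32(b_hi) + f32(a_hi) @ f32(b_lo)
                + f32(a_lo) @ f32(b_hi) + f32(a_lo) @ f32(b_lo))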
arXiv Detail & Related papers (2021-01-16T06:55:20Z)
- Efficient construction of tensor-network representations of many-body Gaussian states [59.94347858883343]
We present a procedure to construct tensor-network representations of many-body Gaussian states efficiently and with a controllable error.
These states include the ground and thermal states of bosonic and fermionic quadratic Hamiltonians, which are essential in the study of quantum many-body systems.
arXiv Detail & Related papers (2020-08-12T11:30:23Z)
- Simple heuristics for efficient parallel tensor contraction and quantum circuit simulation [1.4416132811087747]
We propose a parallel algorithm for the contraction of tensor networks using probabilistic models.
We apply the resulting algorithm to the simulation of random quantum circuits.
arXiv Detail & Related papers (2020-04-22T23:00:42Z)
- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relationship between a network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.