Operator Learning Renormalization Group
- URL: http://arxiv.org/abs/2403.03199v2
- Date: Tue, 28 May 2024 15:09:46 GMT
- Title: Operator Learning Renormalization Group
- Authors: Xiu-Zhe Luo, Di Luo, Roger G. Melko
- Abstract summary: We present a general framework for quantum many-body simulations called the operator learning renormalization group (OLRG).
Inspired by machine learning perspectives, OLRG is a generalization of Wilson's numerical renormalization group and White's density matrix renormalization group.
We implement two versions of the operator maps for classical and quantum simulations.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we present a general framework for quantum many-body simulations called the operator learning renormalization group (OLRG). Inspired by machine learning perspectives, OLRG is a generalization of Wilson's numerical renormalization group and White's density matrix renormalization group, which recursively builds a simulatable system to approximate a target system of the same number of sites via operator maps. OLRG uses a loss function to minimize the error of a target property directly by learning the operator map in lieu of a state ansatz. This loss function is designed by a scaling consistency condition that also provides a provable bound for real-time evolution. We implement two versions of the operator maps for classical and quantum simulations. The former, which we call the Operator Matrix Map, can be implemented via neural networks on classical computers. The latter, which we call the Hamiltonian Expression Map, generates device pulse sequences to leverage the capabilities of quantum computing hardware. We illustrate the performance of both maps for calculating time-dependent quantities in the quantum Ising model Hamiltonian.
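As a point of reference for the setting above, the following minimal sketch (not the OLRG algorithm itself, whose operator maps are learned) computes a time-dependent observable of the quantum Ising model by exact evolution on a small chain; this is the kind of target quantity the learned operator maps aim to reproduce. Sizes and couplings are illustrative.

```python
# Minimal reference calculation (not the OLRG algorithm): exact time
# evolution of a small transverse-field Ising chain, producing the kind of
# time-dependent observable an OLRG operator map is trained to reproduce.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_site(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    ops = [I2] * n
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def ising_hamiltonian(n, J=1.0, h=1.0):
    """H = -J sum Z_i Z_{i+1} - h sum X_i (open boundary)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H -= J * kron_site(sz, i, n) @ kron_site(sz, i + 1, n)
    for i in range(n):
        H -= h * kron_site(sx, i, n)
    return H

n = 6
H = ising_hamiltonian(n)
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0                     # all spins up in the Z basis
Z0 = kron_site(sz, 0, n)

for t in np.linspace(0.0, 2.0, 5):
    psi_t = expm(-1j * H * t) @ psi0
    print(f"t={t:.2f}  <Z_0(t)> = {np.real(psi_t.conj() @ Z0 @ psi_t):+.4f}")
```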
Related papers
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
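A minimal illustration of the scaled-kernel idea (a standard finite-difference stencil, not the paper's CNN construction): rescaling a fixed convolution kernel by the grid spacing recovers a differential operator in the refinement limit.

```python
# Sketch of "differential operator from a scaled convolution kernel":
# the stencil [1, -2, 1] / h^2, applied as a 1-D convolution, converges to
# the second derivative as the grid is refined. Grid and test function are
# illustrative.
import numpy as np

def second_derivative_conv(f, h):
    """Apply the stencil [1, -2, 1] / h^2 as a 1-D convolution (interior points)."""
    kernel = np.array([1.0, -2.0, 1.0]) / h**2
    return np.convolve(f, kernel, mode="valid")

for n in (64, 256, 1024):
    x = np.linspace(0.0, 2 * np.pi, n)
    h = x[1] - x[0]
    approx = second_derivative_conv(np.sin(x), h)
    exact = -np.sin(x[1:-1])
    print(f"n={n:5d}  max error = {np.max(np.abs(approx - exact)):.2e}")
```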
arXiv Detail & Related papers (2024-02-26T18:59:31Z) - Tensor network renormalization: application to dynamic correlation functions and non-hermitian systems [0.0]
We present the implementation of the Loop-TNR algorithm, which allows for the computation of dynamical correlation functions.
We highlight that the Loop-TNR algorithm can also be applied to investigate critical properties of non-Hermitian systems.
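For context, the sketch below computes a dynamical correlation function by exact diagonalization of a tiny Heisenberg chain; it is a brute-force baseline for the kind of quantity Loop-TNR targets, not the tensor-network algorithm itself. The model and distances are illustrative.

```python
# Baseline (not Loop-TNR): the dynamical correlation function
# C_r(t) = <GS| Z_r(t) Z_0(0) |GS> from exact diagonalization of a tiny
# Heisenberg chain.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

n = 6
H = sum(site_op(a, i, n) @ site_op(a, i + 1, n)
        for i in range(n - 1) for a in (sx, sy, sz))
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]                               # ground state

Z0, Zr = site_op(sz, 0, n), site_op(sz, 3, n)
for t in np.linspace(0.0, 4.0, 5):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    C = gs.conj() @ (U.conj().T @ Zr @ U) @ Z0 @ gs
    print(f"t={t:.1f}  C_3(t) = {C.real:+.4f}{C.imag:+.4f}j")
```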
arXiv Detail & Related papers (2023-11-30T18:34:32Z) - Quantum simulation of dissipation for Maxwell equations in dispersive media [0.0]
Dissipation appears in the Schrödinger representation of the classical Maxwell equations as a sparse diagonal operator occupying an $r$-dimensional subspace.
The unitary operators can be implemented through a qubit lattice algorithm (QLA) on $n$ qubits.
The non-unitary-dissipative part poses a challenge on how it should be implemented on a quantum computer.
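A toy numerical illustration of why the dissipative piece is awkward for unitary hardware: a sparse diagonal damping operator fails the unitarity check and shrinks the state norm. The decay rates below are illustrative, not taken from the paper.

```python
# Toy illustration of the dissipative piece: a diagonal damping operator D
# acting on a small state vector is manifestly non-unitary (D†D != I),
# which is what makes it awkward for a unitary quantum circuit.
import numpy as np

dim, r = 8, 3                        # full space, r-dimensional dissipative subspace
rates = np.zeros(dim)
rates[:r] = [0.5, 0.3, 0.1]          # nonzero decay only on the subspace
D = np.diag(np.exp(-rates * 1.0))    # one time step of damping

print("unitary?", np.allclose(D.conj().T @ D, np.eye(dim)))   # False
psi = np.ones(dim) / np.sqrt(dim)
print("norm before/after:", np.linalg.norm(psi), np.linalg.norm(D @ psi))
```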
arXiv Detail & Related papers (2023-07-31T18:22:40Z) - The Parametric Complexity of Operator Learning [6.800286371280922]
This paper aims to prove that, for general classes of operators characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity".
The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation.
A novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system.
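For orientation, the sketch below integrates the characteristic ODEs of a Hamilton-Jacobi equation, $\dot{x} = \partial H/\partial p$, $\dot{p} = -\partial H/\partial x$, with RK4 for a simple Hamiltonian; this is the characteristic information that HJ-Net is described as exploiting, not the HJ-Net architecture itself. The potential is illustrative.

```python
# Characteristic ODEs of the Hamilton-Jacobi equation for
# H(x, p) = p^2/2 + V(x), integrated with RK4. Not HJ-Net itself.
import numpy as np

def V_grad(x):
    return x                              # V(x) = x^2 / 2 (harmonic, illustrative)

def characteristics_rhs(state):
    x, p = state
    return np.array([p, -V_grad(x)])      # (dH/dp, -dH/dx)

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([1.0, 0.0]), 0.05
for step in range(41):
    if step % 10 == 0:
        print(f"t={step * dt:.2f}  x={state[0]:+.4f}  p={state[1]:+.4f}")
    state = rk4_step(characteristics_rhs, state, dt)
```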
arXiv Detail & Related papers (2023-06-28T05:02:03Z) - A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
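A minimal sketch of the superstructure idea: the Runge-Kutta stages form a short recurrence whose coefficient tables are free parameters; plugging in the classic RK4 tableau reproduces RK4, and in the paper such coefficients are what get trained. The code below illustrates that parameterization, not the R2N2 implementation.

```python
# An explicit Runge-Kutta-like recurrence with coefficient tables (a, b)
# treated as free parameters; the classic RK4 tableau is one setting.
import numpy as np

def rk_superstructure_step(f, y, dt, a, b):
    """One step of an explicit RK-like recurrence (autonomous f)."""
    stages = []
    for i in range(len(b)):
        y_stage = y + dt * sum(a[i][j] * stages[j] for j in range(i))
        stages.append(f(y_stage))
    return y + dt * sum(bi * ki for bi, ki in zip(b, stages))

# Classic RK4 tableau as one particular setting of the parameters.
a_rk4 = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
b_rk4 = [1/6, 1/3, 1/3, 1/6]

f = lambda y: -y                       # dy/dt = -y, exact solution e^{-t}
y, dt = 1.0, 0.1
for _ in range(10):
    y = rk_superstructure_step(f, y, dt, a_rk4, b_rk4)
print("numeric:", y, " exact:", np.exp(-1.0))
```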
arXiv Detail & Related papers (2022-11-22T16:30:33Z) - Entangled Residual Mappings [59.02488598557491]
We introduce entangled residual mappings to generalize the structure of the residual connections.
An entangled residual mapping replaces the identity skip connections with specialized entangled mappings.
We show that while entangled mappings can preserve the iterative refinement of features across various deep models, they influence the representation learning process in convolutional networks.
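A hedged sketch of the construction: a residual block whose skip path applies a fixed, non-identity linear map instead of the usual identity shortcut. The random orthogonal matrix used here is an illustrative stand-in, not one of the specific entangled mappings studied in the paper.

```python
# Residual block with a non-identity skip path (illustrative stand-in).
import torch
import torch.nn as nn

class EntangledResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Fixed random orthogonal map on the skip path (frozen, not trained).
        q, _ = torch.linalg.qr(torch.randn(dim, dim))
        self.register_buffer("skip", q)

    def forward(self, x):
        return x @ self.skip.T + self.body(x)   # entangled skip + learned residual

block = EntangledResidualBlock(16)
x = torch.randn(4, 16)
print(block(x).shape)                            # torch.Size([4, 16])
```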
arXiv Detail & Related papers (2022-06-02T19:36:03Z) - Growth of a renormalized operator as a probe of chaos [0.2741266294612776]
We propose that the size of an operator evolved under holographic renormalization group flow should grow linearly with the scale.
To test this conjecture, we study the operator growth in two different toy models.
arXiv Detail & Related papers (2021-10-28T17:20:17Z) - Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
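A minimal sketch of one kernel-integral layer of a neural operator, $(\mathcal{K}u)(x) = \int k(x, y)\,u(y)\,dy$, discretized on a uniform grid with the kernel given by a small MLP; layer sizes and the kernel parameterization are illustrative, not the paper's architecture.

```python
# One kernel-integral layer on a uniform grid (toy parameterization).
import torch
import torch.nn as nn

class KernelIntegralLayer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.kernel = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, u, grid):
        # u: (n,) function values, grid: (n,) sample points on [0, 1].
        n = grid.shape[0]
        xy = torch.stack(torch.meshgrid(grid, grid, indexing="ij"), dim=-1)  # (n, n, 2)
        k = self.kernel(xy.reshape(-1, 2)).reshape(n, n)                     # k(x_i, y_j)
        return (k @ u) / n                       # quadrature with uniform weights

grid = torch.linspace(0.0, 1.0, 64)
u = torch.sin(2 * torch.pi * grid)
layer = KernelIntegralLayer()
print(layer(u, grid).shape)                      # torch.Size([64])
```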
arXiv Detail & Related papers (2021-08-19T03:56:49Z) - X-volution: On the unification of convolution and self-attention [52.80459687846842]
We propose a multi-branch elementary module composed of both convolution and self-attention operations.
The proposed X-volution achieves highly competitive improvements on visual understanding tasks.
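A hedged sketch of a multi-branch block that sums a convolution branch and a self-attention branch; the specific layers are illustrative stand-ins rather than the exact X-volution operator.

```python
# Multi-branch block: convolution branch + self-attention branch, summed.
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        conv_out = self.conv(x)
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = attn_out.transpose(1, 2).reshape(b, c, h, w)
        return conv_out + attn_out               # combine the two branches

x = torch.randn(2, 16, 8, 8)
print(ConvAttentionBlock(16)(x).shape)           # torch.Size([2, 16, 8, 8])
```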
arXiv Detail & Related papers (2021-06-04T04:32:02Z) - Deep learning with transfer functions: new applications in system identification [0.0]
This paper presents a linear dynamical operator endowed with a well-defined and efficient back-propagation behavior for automatic derivative computation.
The operator enables end-to-end training of structured networks containing linear transfer functions and other differentiable units.
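A hedged sketch of a trainable linear transfer-function block: an IIR recursion $y[k] = \sum_i b_i u[k-i] - \sum_j a_j y[k-1-j]$ whose coefficients are parameters, so gradients flow through the recursion via autograd. This illustrates the idea of a differentiable linear dynamical operator; the paper's back-propagation scheme is more efficient than this naive loop.

```python
# Differentiable IIR transfer function with trainable coefficients.
import torch
import torch.nn as nn

class TransferFunction(nn.Module):
    def __init__(self, nb=2, na=2):
        super().__init__()
        self.b = nn.Parameter(0.1 * torch.randn(nb))   # numerator coefficients
        self.a = nn.Parameter(0.1 * torch.randn(na))   # feedback coefficients

    def forward(self, u):
        y = []
        for k in range(len(u)):
            yk = sum(self.b[i] * u[k - i] for i in range(len(self.b)) if k - i >= 0)
            yk = yk - sum(self.a[j] * y[k - 1 - j] for j in range(len(self.a)) if k - 1 - j >= 0)
            y.append(yk)
        return torch.stack(y)

G = TransferFunction()
u = torch.randn(50)
loss = G(u).pow(2).mean()
loss.backward()                      # gradients reach b and a through the recursion
print(G.b.grad, G.a.grad)
```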
arXiv Detail & Related papers (2021-04-20T08:58:55Z) - Relevant OTOC operators: footprints of the classical dynamics [68.8204255655161]
The OTOC-RE theorem relates the OTOCs summed over a complete basis of operators to the second Rényi entropy.
We show that summing over a small set of relevant operators is enough to obtain a very good approximation of the entropy.
In turn, this provides an alternative natural indicator of complexity: the scaling of the number of relevant operators with time.
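For context, the sketch below computes a plain infinite-temperature OTOC, $F(t) = \mathrm{Tr}[W(t) V W(t) V]/2^n$, for two local Pauli operators on a non-integrable Ising chain; it illustrates the objects being summed, not the OTOC-RE relation itself. Parameters are illustrative.

```python
# Infinite-temperature OTOC for two local Pauli operators on a small
# non-integrable Ising chain (mixed transverse and longitudinal fields).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

n = 5
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n - 1):
    H += site_op(sz, i, n) @ site_op(sz, i + 1, n)
for i in range(n):
    H += 0.9 * site_op(sx, i, n) + 0.5 * site_op(sz, i, n)   # mixed fields

W, V = site_op(sz, 0, n), site_op(sz, n - 1, n)
for t in np.linspace(0.0, 3.0, 4):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W @ U                     # Heisenberg-evolved operator
    F = np.trace(Wt @ V @ Wt @ V) / 2**n
    print(f"t={t:.1f}  F(t) = {F.real:+.4f}")
```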
arXiv Detail & Related papers (2020-07-31T19:23:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.