Optimal Symbolic Construction of Matrix Product Operators and Tree Tensor Network Operators
- URL: http://arxiv.org/abs/2502.18630v3
- Date: Wed, 19 Mar 2025 16:29:36 GMT
- Title: Optimal Symbolic Construction of Matrix Product Operators and Tree Tensor Network Operators
- Authors: Hazar Çakır, Richard M. Milbradt, Christian B. Mendl
- Abstract summary: This research introduces an improved framework for constructing matrix product operators (MPOs) and tree tensor network operators (TTNOs). A given (Hamiltonian) operator typically has a known symbolic "sum of operator strings" form that can be translated into a tensor network structure.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This research introduces an improved framework for constructing matrix product operators (MPOs) and tree tensor network operators (TTNOs), crucial tools in quantum simulations. A given (Hamiltonian) operator typically has a known symbolic "sum of operator strings" form that can be translated into a tensor network structure. Combining the existing bipartite-graph-based approach and a newly introduced symbolic Gaussian elimination preprocessing step, our proposed method improves upon earlier algorithms in cases when Hamiltonian terms share the same prefactors. We test the performance of our method against established ones for benchmarking purposes. Finally, we apply our methodology to the model of a cavity filled with molecules in a solvent. This open quantum system is cast into the hierarchical equation of motion (HEOM) setting to obtain an effective Hamiltonian. Construction of the corresponding TTNO demonstrates a sub-linear increase of the maximum bond dimension.
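To make the construction concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): it translates a symbolic "sum of operator strings" Hamiltonian into a naive MPO in which every term occupies its own bond channel, so the bond dimension equals the number of terms. The spin-1/2 operators, the transverse-field-Ising-style terms with shared prefactors J and h, and the naive_mpo helper are illustrative assumptions; the bipartite-graph construction with symbolic Gaussian elimination described in the abstract is what compresses such a construction when terms share prefactors.

```python
# Illustrative sketch (assumptions, not the paper's method): build a naive MPO
# from a symbolic "sum of operator strings" Hamiltonian.  Each term gets its
# own bond channel, so the bond dimension equals the number of terms.
import numpy as np

# Single-site operators on a spin-1/2 chain (illustrative choice).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# H = sum_k c_k * op_{k,0} x op_{k,1} x ... on L sites, e.g. a transverse-field
# Ising chain where all ZZ terms share the prefactor -J and all X terms share -h.
L, J, h = 4, 1.0, 0.5
terms = ([(-J, {i: Z, i + 1: Z}) for i in range(L - 1)]
         + [(-h, {i: X}) for i in range(L)])

def naive_mpo(terms, L, d=2):
    """Stack every operator string into its own bond channel (no compression)."""
    m = len(terms)
    mpo = []
    for site in range(L):
        left = 1 if site == 0 else m
        right = 1 if site == L - 1 else m
        # Index order: (left bond, right bond, physical out, physical in).
        W = np.zeros((left, right, d, d))
        for k, (coeff, ops) in enumerate(terms):
            op = ops.get(site, I2)
            if site == 0:
                op = coeff * op  # absorb the prefactor on the first site
            W[0 if site == 0 else k, 0 if site == L - 1 else k] = op
        mpo.append(W)
    return mpo

mpo = naive_mpo(terms, L)
print([W.shape for W in mpo])  # bulk bond dimension = number of terms (7 here)
```

Contracting these tensors over the bond indices reproduces the operator sum exactly; the point of the optimized construction is that the bulk bond dimension can be much smaller than the term count.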
Related papers
- A Scalable Quantum Neural Network for Approximate SRBB-Based Unitary Synthesis [1.3108652488669736]
This work introduces scalable quantum neural networks to approximate unitary evolutions through the Standard Recursive Block Basis (SRBB).
An algorithm to reduce the number of CNOTs is proposed, yielding a new, implementable scaling scheme that requires a single approximation layer.
The effectiveness of the approximation is measured with different metrics in relation to two gradient-based methods.
arXiv Detail & Related papers (2024-12-04T07:21:23Z)
- QuOp: A Quantum Operator Representation for Nodes [0.0]
We derive an intuitive and novel method to represent nodes in a graph with quantum operators.
This method does not require parameter training and is competitive with classical methods on scoring similarity between nodes.
arXiv Detail & Related papers (2024-07-19T13:10:04Z)
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- Pymablock: an algorithm and a package for quasi-degenerate perturbation theory [0.0]
We introduce an equivalent effective Hamiltonian as well as a Python package, Pymablock, that implements it.
Our algorithm combines an optimal scaling and the ability to handle any number of subspaces with a range of other improvements.
We demonstrate how the package handles constructing a k.p model, analyses a superconducting qubit, and computes the low-energy spectrum of a large tight-binding model.
arXiv Detail & Related papers (2024-04-04T18:00:08Z)
- The Parametric Complexity of Operator Learning [6.800286371280922]
This paper aims to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity".
The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation.
A novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system.
arXiv Detail & Related papers (2023-06-28T05:02:03Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
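As a rough illustration of the connection (an interpretation, not code from the paper), the sketch below writes one explicit Runge-Kutta step as a short recurrence over stages; the R2N2 superstructure generalizes this pattern by making the stage combinations learnable. The rk_step helper, the RK4 tableau, and the test ODE are illustrative assumptions.

```python
# Hedged sketch: one explicit Runge-Kutta step as a recurrence over stages.
import numpy as np

def rk_step(f, y, h, A, b):
    """One explicit Runge-Kutta step for an autonomous ODE y' = f(y)."""
    k = []
    for i in range(len(b)):
        # Each stage is built from the previously computed stages -- a recurrence.
        y_i = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(y_i))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical RK4 Butcher tableau.
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]

f = lambda y: -y                  # dy/dt = -y, exact solution exp(-t)
y, h = np.array([1.0]), 0.1
for _ in range(10):               # integrate to t = 1
    y = rk_step(f, y, h, A, b)
print(float(y[0]), np.exp(-1.0))  # the two values should nearly agree
```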
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Isotropic Gaussian Processes on Finite Spaces of Graphs [71.26737403006778]
We propose a principled way to define Gaussian process priors on various sets of unweighted graphs.
We go further to consider sets of equivalence classes of unweighted graphs and define the appropriate versions of priors thereon.
Inspired by applications in chemistry, we illustrate the proposed techniques on a real molecular property prediction task in the small data regime.
arXiv Detail & Related papers (2022-11-03T10:18:17Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata(WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Controllable Orthogonalization in Training DNNs [96.1365404059924]
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1.
This paper proposes a computationally efficient and numerically stable orthogonalization method using Newton's iteration (ONI).
We show that our method improves the performance of image classification networks by effectively controlling the orthogonality to provide an optimal tradeoff between optimization benefits and representational capacity reduction.
We also show that ONI stabilizes the training of generative adversarial networks (GANs) by maintaining the Lipschitz continuity of a network, similar to spectral normalization.
arXiv Detail & Related papers (2020-04-02T10:14:27Z)
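The entry above describes orthogonalization via Newton's iteration; below is a minimal, hedged sketch of the generic Newton-Schulz scheme such methods are built on. The Frobenius-norm pre-scaling, iteration count, and the newton_orthogonalize name are assumptions, not ONI's exact formulation.

```python
# Hedged sketch of orthogonalization by Newton's (Newton-Schulz) iteration.
import numpy as np

def newton_orthogonalize(W, n_iter=30):
    """Iteratively drive W toward the nearest orthogonal matrix."""
    # Pre-scale so the largest singular value is at most 1, which is sufficient
    # for convergence of the Newton-Schulz iteration when W has full rank.
    V = W / np.linalg.norm(W, ord='fro')
    for _ in range(n_iter):
        V = 1.5 * V - 0.5 * V @ V.T @ V   # Newton-Schulz update
    return V

W = np.random.randn(8, 8)
Q = newton_orthogonalize(W)
print(np.max(np.abs(Q @ Q.T - np.eye(8))))  # deviation from orthogonality
```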