Global optimization of MPS in quantum-inspired numerical analysis
- URL: http://arxiv.org/abs/2303.09430v2
- Date: Thu, 16 May 2024 16:44:24 GMT
- Title: Global optimization of MPS in quantum-inspired numerical analysis
- Authors: Paula García-Molina, Luca Tagliacozzo, Juan José García-Ripoll
- Abstract summary: The study focuses on the search for the lowest eigenstates of a Hamiltonian equation.
Five algorithms are introduced: imaginary-time evolution, steepest gradient descent, an improved gradient descent, an implicitly restarted Arnoldi method, and density matrix renormalization group (DMRG) optimization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work discusses the solution of partial differential equations (PDEs) using matrix product states (MPS). The study focuses on the search for the lowest eigenstates of a Hamiltonian equation, for which five algorithms are introduced: imaginary-time evolution, steepest gradient descent, an improved gradient descent, an implicitly restarted Arnoldi method, and density matrix renormalization group (DMRG) optimization. The first four methods are engineered using a framework of limited-precision linear algebra, where operations between MPS and matrix product operators (MPOs) are implemented with finite resources. All methods are benchmarked using the PDE for a quantum harmonic oscillator in up to two dimensions, over a regular grid with up to $2^{28}$ points. Our study reveals that all MPS-based techniques outperform exact diagonalization techniques based on vectors, with respect to memory usage. Imaginary-time algorithms are shown to underperform any type of gradient descent, both in terms of calibration needs and costs. Finally, Arnoldi-like methods and DMRG asymptotically outperform all other methods, including exact diagonalization, as problem size increases, with an exponential advantage in memory and time usage.
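As a concrete reference point for two of the five algorithms named above, here is a minimal dense-vector sketch (plain NumPy, no MPS/MPO arithmetic) of steepest gradient descent on the Rayleigh quotient and of first-order imaginary-time evolution, applied to the paper's benchmark problem: the 1D quantum harmonic oscillator on a regular grid. The grid size, step sizes, and iteration counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Dense-vector toy version of the paper's benchmark problem: find the ground
# state of the 1D harmonic oscillator H = -(1/2) d^2/dx^2 + (1/2) x^2
# discretized on a regular grid.  The paper does this with MPS/MPO arithmetic
# on grids of up to 2^28 points; N, eta, and tau below are illustrative.
N = 256
x = np.linspace(-6.0, 6.0, N)
dx = x[1] - x[0]

# Second-order finite differences for the kinetic term, plus the potential.
main = 1.0 / dx**2 + 0.5 * x**2
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

def energy(psi):
    return psi @ H @ psi        # Rayleigh quotient for normalized real psi

rng = np.random.default_rng(0)

# Steepest gradient descent on the Rayleigh quotient: the gradient direction
# at a normalized state is (H - <H>) psi; renormalize after every step.
psi = rng.standard_normal(N); psi /= np.linalg.norm(psi)
eta = 1e-3
for _ in range(20000):
    psi -= eta * (H @ psi - energy(psi) * psi)
    psi /= np.linalg.norm(psi)
print("gradient descent energy:", energy(psi))   # exact value is 0.5

# First-order imaginary-time evolution: psi <- normalize((1 - tau*H) psi).
phi = rng.standard_normal(N); phi /= np.linalg.norm(phi)
tau = 1e-3
for _ in range(20000):
    phi -= tau * (H @ phi)
    phi /= np.linalg.norm(phi)
print("imaginary time energy:", energy(phi))
```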
Related papers
- Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees [6.422262171968397]
Two algorithms are proposed to solve the Euclidean Distance Geometry problem.
The first algorithm converges linearly to the true solution.
The second algorithm demonstrates strong numerical performance on both synthetic and real data.
arXiv Detail & Related papers (2024-10-08T21:19:22Z)
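For context on the entry above: the Euclidean Distance Geometry problem asks for point positions (up to rigid motion) given pairwise distances. The sketch below is not the paper's Riemannian algorithm; it is classical multidimensional scaling on a complete, noiseless distance matrix, just to illustrate the problem being solved. Sizes are illustrative.

```python
import numpy as np

# Classical MDS: recover a point configuration from a complete Euclidean
# distance matrix.  (The paper handles the harder non-convex/incomplete
# setting with Riemannian optimization, which this toy does not attempt.)
rng = np.random.default_rng(1)
P = rng.standard_normal((20, 3))                            # ground truth in R^3
D2 = np.sum((P[:, None, :] - P[None, :, :])**2, axis=-1)    # squared distances

n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
G = -0.5 * J @ D2 @ J                    # Gram matrix of the centered points

w, V = np.linalg.eigh(G)                 # eigenvalues in ascending order
X = V[:, -3:] * np.sqrt(np.maximum(w[-3:], 0))   # rank-3 embedding

# Check: the recovered configuration reproduces the input distances.
D2_hat = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
print("max distance error:", np.abs(np.sqrt(D2_hat) - np.sqrt(D2)).max())
```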
- GRAPE optimization for open quantum systems with time-dependent decoherence rates driven by coherent and incoherent controls [77.34726150561087]
The GRadient Ascent Pulse Engineering (GRAPE) method is widely used for optimization in quantum control.
We adopt the GRAPE method to optimize objective functionals for open quantum systems driven by both coherent and incoherent controls.
The efficiency of the algorithm is demonstrated through numerical simulations for the state-to-state transition problem.
arXiv Detail & Related papers (2023-07-17T13:37:18Z)
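A minimal GRAPE-style sketch for the entry above, restricted to a closed two-level system with a single coherent control (the paper's extension to open systems with incoherent controls is not modeled). It steers |0> to |1> under H(t) = sigma_z + u(t) sigma_x with a piecewise-constant pulse; the finite-difference gradient and all constants are illustrative assumptions (standard GRAPE derives analytic gradients).

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)
n_slices, dt = 20, 0.1

def fidelity(u):
    psi = psi0
    for uk in u:                       # propagate slice by slice
        psi = expm(-1j * dt * (sz + uk * sx)) @ psi
    return abs(np.vdot(target, psi)) ** 2

rng = np.random.default_rng(7)
u = 0.5 * rng.standard_normal(n_slices)   # random start avoids the u=0 saddle
lr, eps = 2.0, 1e-6
for _ in range(150):
    grad = np.zeros(n_slices)
    for k in range(n_slices):          # central finite differences per slice
        d = np.zeros(n_slices); d[k] = eps
        grad[k] = (fidelity(u + d) - fidelity(u - d)) / (2 * eps)
    u += lr * grad                     # gradient ASCENT on state fidelity

print("final state-transfer fidelity:", fidelity(u))
```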
- Machine Learning Discovery of Optimal Quadrature Rules for Isogeometric Analysis [0.5161531917413708]
We propose the use of machine learning techniques to find optimal quadrature rules in isogeometric analysis.
We find optimal quadrature rules for spline spaces when using IGA discretizations with up to 50 uniform elements and degrees up to 8.
arXiv Detail & Related papers (2023-04-04T13:59:07Z)
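As background for the entry above: the classical notion of an "optimal quadrature rule" is that an n-point Gauss-Legendre rule integrates polynomials up to degree 2n-1 exactly. The check below demonstrates that fact; the paper learns analogous optimal rules for spline spaces in isogeometric analysis, which this toy does not attempt.

```python
import numpy as np

# An n-point Gauss-Legendre rule on [-1, 1] is exact through degree 2n-1.
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

for degree in range(2 * n + 1):                        # exact up to 9, fails at 10
    approx = np.sum(weights * nodes**degree)
    exact = 0.0 if degree % 2 else 2.0 / (degree + 1)  # integral of x^d on [-1,1]
    flag = "exact" if abs(approx - exact) < 1e-12 else "inexact"
    print(f"degree {degree:2d}: error {abs(approx - exact):.2e} ({flag})")
```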
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
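For the entry above, here is plain stochastic gradient Langevin dynamics (SGLD) on a toy non-convex double-well objective. The paper's variance-reduced and recursive-gradient variants replace the noisy gradient estimator with lower-variance ones; the objective, noise model, and constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad(theta):
    # Gradient of the double-well potential f(theta) = (theta^2 - 1)^2.
    return 4 * theta * (theta**2 - 1)

def noisy_grad(theta, sigma=1.0):
    # Stand-in for a minibatch gradient: true gradient plus Gaussian noise.
    return grad(theta) + sigma * rng.standard_normal()

theta, eta, beta = 2.5, 1e-3, 10.0   # start far from both wells
samples = []
for t in range(20000):
    # SGLD update: gradient step plus noise of std sqrt(2*eta/beta).
    theta += -eta * noisy_grad(theta) + np.sqrt(2 * eta / beta) * rng.standard_normal()
    if t > 5000:                     # discard burn-in
        samples.append(theta)

# The chain should concentrate near the minima at theta = +/-1.
print("mean |theta| over samples:", np.mean(np.abs(samples)))
```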
- SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models [15.541264326378366]
In recent years, implicit deep learning has emerged as a method to increase the depth of deep neural networks.
The training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.
We propose a novel strategy to tackle this computational bottleneck from which many bi-level problems suffer.
arXiv Detail & Related papers (2021-06-01T15:07:34Z)
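To make the bottleneck in the SHINE entry above concrete: the hypergradient of a bi-level problem requires applying the inverse Hessian of the inner objective to a vector, which is the iterative inversion the paper targets. The sketch below computes it the standard way, with conjugate gradients and the implicit function theorem, and checks against finite differences; SHINE instead reuses a quasi-Newton inverse estimate from the forward pass. The quadratic inner problem and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)

def inner_solution(theta):
    # Inner problem: z*(theta) = argmin_z ||A z - b||^2 + theta ||z||^2.
    return np.linalg.solve(A.T @ A + theta * np.eye(10), A.T @ b)

def outer_loss(theta):
    z = inner_solution(theta)
    return 0.5 * z @ z

def cg(Hmv, v, iters=50):
    # Plain conjugate gradients for H x = v using matrix-vector products only.
    x = np.zeros_like(v); r = v.copy(); p = r.copy()
    for _ in range(iters):
        Hp = Hmv(p)
        alpha = (r @ r) / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < 1e-12:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

theta = 0.7
z = inner_solution(theta)
H = 2 * (A.T @ A + theta * np.eye(10))   # Hessian of the inner objective in z
q = cg(lambda v: H @ v, z)               # H^{-1} grad_z f, with grad_z f = z
# Implicit function theorem: dz/dtheta = -H^{-1} d(grad_z g)/dtheta, and here
# d(grad_z g)/dtheta = 2 z, so df/dtheta = z . dz/dtheta = -q . (2 z).
hypergrad = -q @ (2 * z)

eps = 1e-5
fd = (outer_loss(theta + eps) - outer_loss(theta - eps)) / (2 * eps)
print("implicit hypergradient:", hypergrad, " finite difference:", fd)
```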
- Leveraging Non-uniformity in First-order Non-convex Optimization [93.6817946818977]
Non-uniform refinement of objective functions leads to Non-uniform Smoothness (NS) and the Non-uniform Łojasiewicz inequality (NL).
These new definitions inspire new geometry-aware first-order methods that converge to global optimality faster than the classical $\Omega(1/t^2)$ lower bounds.
arXiv Detail & Related papers (2021-05-13T04:23:07Z)
- On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization [26.858608065417663]
Convex optimization over the spectrahedron has important applications in machine learning, signal processing and statistics.
We propose efficient implementations of MEG, which are tailored for optimization with low-rank matrices, and only use a single low-rank SVD on each iteration.
We also provide efficiently-computable certificates for the correct convergence of our methods.
arXiv Detail & Related papers (2020-12-18T19:14:51Z)
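A toy matrix exponentiated gradient (MEG) iteration for the entry above, over the spectrahedron (positive semidefinite matrices with unit trace), minimizing a linear objective f(X) = <C, X>. This dense version takes a full eigendecomposition per step, which is exactly the cost the paper's single low-rank-SVD implementation avoids; the objective, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
C = rng.standard_normal((n, n)); C = (C + C.T) / 2   # symmetric cost matrix

def mexp(S):                      # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def mlog(S):                      # matrix logarithm of a PD symmetric matrix
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

X = np.eye(n) / n                 # maximally mixed starting point
eta = 0.1
for _ in range(30):
    # MEG update: X <- exp(log X - eta * grad f(X)) / trace(same).
    M = mexp(mlog(X) - eta * C)   # grad f(X) = C for the linear objective
    X = M / np.trace(M)

# The minimum of <C, X> over the spectrahedron is C's smallest eigenvalue,
# attained at the projector onto the corresponding eigenvector.
print("objective:", np.trace(C @ X), " optimum:", np.linalg.eigvalsh(C)[0])
```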
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
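A sketch-and-solve toy for the entry above: compress a tall L2-regularized least-squares problem with a Gaussian embedding, then solve the small problem. The paper analyzes Gaussian and SRHT embeddings and adapts the sketch size to the effective dimension; the fixed sketch size m below is an illustrative choice, not the paper's adaptive rule.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, lam = 10000, 50, 1.0
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def ridge(M, y, lam):
    # Solve min_x ||M x - y||^2 + lam ||x||^2 via the normal equations.
    return np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ y)

x_exact = ridge(A, b, lam)

m = 500                                        # sketch size, m << n
S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian embedding
x_sketch = ridge(S @ A, S @ b, lam)            # solve the m x d sketched problem

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print("relative error of sketched solution:", rel_err)
```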
- Quantum Gradient Algorithm for General Polynomials [5.008814514502094]
Gradient-based algorithms, popular strategies for optimization problems, are essential for many modern machine-learning techniques.
We propose a quantum gradient algorithm for optimizing general polynomials with the dressed amplitude.
This quantum algorithm is expected to facilitate gradient-based optimization in high-dimensional problems.
arXiv Detail & Related papers (2020-04-23T11:28:45Z)
- Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation in terms of square-root, low-rank, and sparsity.
arXiv Detail & Related papers (2020-03-30T08:40:35Z)
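For the MOMN entry above: the basic operation being generalized is matrix square-root (power) normalization of a bilinear second-order feature. The sketch below takes the straightforward dense eigendecomposition route; the feature shapes are illustrative assumptions, and the multi-objective low-rank/sparsity terms the paper adds are not modeled.

```python
import numpy as np

rng = np.random.default_rng(6)
F = rng.standard_normal((196, 64))   # e.g. 14x14 spatial positions, 64 channels

B = F.T @ F / F.shape[0]             # bilinear pooling: 64x64 second-order feature

# Matrix square root via eigendecomposition of the symmetric PSD matrix B.
w, V = np.linalg.eigh(B)
B_sqrt = (V * np.sqrt(np.maximum(w, 0))) @ V.T

# Check that the result really is a square root of B.
print("||B_sqrt @ B_sqrt - B|| =", np.linalg.norm(B_sqrt @ B_sqrt - B))
```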
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.