A Non-Convex Optimization Strategy for Computing Convex-Roof Entanglement
- URL: http://arxiv.org/abs/2412.10166v1
- Date: Fri, 13 Dec 2024 14:29:02 GMT
- Title: A Non-Convex Optimization Strategy for Computing Convex-Roof Entanglement
- Authors: Jimmie Adriazola, Katarzyna Roszak
- Abstract summary: We develop a numerical methodology for the computation of entanglement measures for mixed states.
We find that the method works well enough to reliably reproduce entanglement curves, even for comparatively large systems.
- Score: 0.0
- Abstract: We develop a numerical methodology for the computation of entanglement measures for mixed quantum states. Using the well-known Schrödinger-HJW theorem, the computation of convex-roof entanglement measures is reframed as a search over unitary matrices, a non-convex optimization problem. To address this non-convexity, we modify a genetic algorithm, known in the literature as differential evolution, constraining the search space to unitary matrices by using a QR factorization. We then refine results using a quasi-Newton method. We benchmark our method on simple test problems and, as an application, compute entanglement between a system and its environment over time for pure dephasing evolutions. We also study the temperature dependence of Gibbs state entanglement for a class of block-diagonal Hamiltonians to provide a complementary test scenario with a set of entangled states that are qualitatively different. We find that the method works well enough to reliably reproduce entanglement curves, even for comparatively large systems. To our knowledge, the modified genetic algorithm represents the first derivative-free and non-convex computational method that broadly applies to the computation of convex roof entanglement measures.
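For context, the convex roof of a pure-state entanglement measure E is the minimum of the ensemble-averaged entanglement over all pure-state decompositions of the mixed state, and the Schrödinger-HJW theorem parameterizes exactly those decompositions by isometries acting on the eigenensemble of the density matrix:

```latex
% Convex roof of a pure-state entanglement measure E:
E(\rho) = \min_{\{p_i,\,\psi_i\}} \sum_i p_i\, E(\psi_i),
\qquad \rho = \sum_i p_i\,|\psi_i\rangle\langle\psi_i| .
% Schrodinger-HJW: given the eigendecomposition
% \rho = \sum_k \lambda_k |\phi_k\rangle\langle\phi_k|, every ensemble
% arises from an isometry U (U^\dagger U = \mathbb{1}) via
|\tilde\psi_i\rangle = \sum_k U_{ik}\sqrt{\lambda_k}\,|\phi_k\rangle,
\qquad p_i = \langle\tilde\psi_i|\tilde\psi_i\rangle .
```

The sketch below is a minimal reconstruction of the strategy the abstract describes, not the authors' code: it substitutes SciPy's stock differential_evolution for the paper's modified genetic algorithm, uses L-BFGS-B as the quasi-Newton refinement, and picks the entropy of entanglement as the pure-state measure. All helper names (convex_roof, unitary_from_params, average_entanglement, entanglement_entropy) and the Werner-state example are illustrative assumptions.

```python
# A minimal sketch, assuming SciPy's stock differential evolution stands in
# for the paper's modified genetic algorithm; NOT the authors' implementation.
import numpy as np
from scipy.linalg import qr
from scipy.optimize import differential_evolution, minimize

def entanglement_entropy(psi, d_a, d_b):
    """Entropy of entanglement of a pure state psi on C^d_a (x) C^d_b."""
    schmidt = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
    p = schmidt[schmidt > 1e-12] ** 2
    return float(-np.sum(p * np.log2(p)))

def unitary_from_params(x, m):
    """Map 2*m*m real parameters to an m x m unitary via QR factorization."""
    z = x[:m * m].reshape(m, m) + 1j * x[m * m:].reshape(m, m)
    q, r = qr(z)  # generic (nonsingular) z assumed
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def average_entanglement(x, scaled_eigvecs, m, d_a, d_b):
    """Schrodinger-HJW: a unitary mixes sqrt(lambda_k)-scaled eigenvectors of
    rho into a pure-state ensemble; return its ensemble-averaged entanglement."""
    u = unitary_from_params(x, m)
    ensemble = u[:, :scaled_eigvecs.shape[0]] @ scaled_eigvecs  # subnormalized
    total = 0.0
    for v in ensemble:
        p = np.real(np.vdot(v, v))  # ensemble weight p_i
        if p > 1e-12:
            total += p * entanglement_entropy(v / np.sqrt(p), d_a, d_b)
    return total

def convex_roof(rho, d_a, d_b, m=None, seed=0):
    """Estimate (an upper bound on) the convex-roof entanglement of rho."""
    vals, vecs = np.linalg.eigh(rho)
    vecs, vals = vecs[:, vals > 1e-12], vals[vals > 1e-12]
    m = m or len(vals)  # ensemble size, at least rank(rho)
    scaled_eigvecs = (np.sqrt(vals) * vecs).T  # row k = sqrt(lambda_k) phi_k
    cost = lambda x: average_entanglement(x, scaled_eigvecs, m, d_a, d_b)
    bounds = [(-1.0, 1.0)] * (2 * m * m)
    rough = differential_evolution(cost, bounds, seed=seed, maxiter=300, tol=1e-8)
    refined = minimize(cost, rough.x, method="L-BFGS-B")  # quasi-Newton polish
    return min(rough.fun, refined.fun)

if __name__ == "__main__":
    # Two-qubit Werner state: w |Phi+><Phi+| + (1 - w) I/4 (illustrative test).
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho = 0.9 * np.outer(phi, phi) + 0.1 * np.eye(4) / 4
    print(convex_roof(rho, 2, 2))  # compare against Wootters' analytic formula
```

Restricting each parameter to [-1, 1] costs nothing: any unitary is the Q factor of its own QR decomposition, and rescaling the input matrix by a positive constant leaves Q unchanged, so the box still reaches the full unitary group.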
Related papers
- Symmetric Rank-One Quasi-Newton Methods for Deep Learning Using Cubic Regularization [0.5120567378386615]
Stochastic gradient descent and other first-order variants, such as Adam and AdaGrad, are commonly used in the field of deep learning.
However, these methods do not exploit curvature information.
Quasi-Newton methods re-use previously computed low-rank Hessian approximations.
arXiv Detail & Related papers (2025-02-17T20:20:11Z)
- Explicit near-optimal quantum algorithm for solving the advection-diffusion equation [0.0]
An explicit quantum algorithm is proposed for modeling dissipative initial-value problems.
We propose a quantum circuit based on a simple coordinate transformation that turns the dependence on the summation index into a trigonometric function.
The proposed algorithm can be used for modeling a wide class of nonunitary initial-value problems.
arXiv Detail & Related papers (2025-01-19T19:03:29Z)
- Demonstration of Scalability and Accuracy of Variational Quantum Linear Solver for Computational Fluid Dynamics [0.0]
This paper presents an exploration of quantum methodologies aimed at achieving high accuracy in solving such a large system of equations.
We consider the 2D, transient, incompressible, viscous, non-linear coupled Burgers equation as a test problem.
Our findings demonstrate that our quantum methods yield results comparable in accuracy to traditional approaches.
arXiv Detail & Related papers (2024-09-05T04:42:24Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Neural incomplete factorization: learning preconditioners for the conjugate gradient method [2.899792823251184]
We develop a data-driven approach to accelerate the generation of effective preconditioners.
We replace the typically hand-engineered preconditioners by the output of graph neural networks.
Our method generates an incomplete factorization of the matrix and is, therefore, referred to as neural incomplete factorization (NeuralIF).
arXiv Detail & Related papers (2023-05-25T11:45:46Z)
- Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
arXiv Detail & Related papers (2022-05-27T00:34:03Z)
- Local optimization on pure Gaussian state manifolds [63.76263875368856]
We exploit insights into the geometry of bosonic and fermionic Gaussian states to develop an efficient local optimization algorithm.
The method is based on notions of gradient descent attuned to the local geometry.
We use the presented methods to collect numerical and analytical evidence for the conjecture that Gaussian purifications are sufficient to compute the entanglement of purification of arbitrary mixed Gaussian states.
arXiv Detail & Related papers (2020-09-24T18:00:36Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)