The Lie Algebra of XY-mixer Topologies and Warm Starting QAOA for Constrained Optimization
- URL: http://arxiv.org/abs/2505.18396v2
- Date: Mon, 09 Jun 2025 20:49:16 GMT
- Title: The Lie Algebra of XY-mixer Topologies and Warm Starting QAOA for Constrained Optimization
- Authors: Steven Kordonowy, Hannes Leipold
- Abstract summary: The XY-mixer has widespread utilization in modern quantum computing, including in variational quantum algorithms. We give explicit decompositions of the dynamical Lie algebras associated with a variety of $XY$-mixer topologies. We provide numerical simulations showcasing these concepts on Portfolio Optimization, Sparsest $k$-Subgraph, and Graph Partitioning problems.
- Score: 0.9208007322096533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The XY-mixer has widespread utilization in modern quantum computing, including in variational quantum algorithms such as the Quantum Alternating Operator Ansatz (QAOA). The XY ansatz is particularly useful for solving Cardinality Constrained Optimization tasks, a large class of important NP-hard problems. First, we give explicit decompositions of the dynamical Lie algebras (DLAs) associated with a variety of $XY$-mixer topologies. When these DLAs admit simple Lie algebra decompositions, they are efficiently trainable; an example of this scenario is a ring $XY$-mixer with arbitrary $R_Z$ gates. Conversely, when we allow for all-to-all $XY$-mixers or include $R_{ZZ}$ gates, the DLAs grow exponentially and are no longer efficiently trainable. We provide numerical simulations showcasing these concepts on Portfolio Optimization, Sparsest $k$-Subgraph, and Graph Partitioning problems. These problems correspond to exponentially large DLAs, and we are able to warm-start these optimizations by pre-training on polynomial-sized DLAs obtained by restricting the gate generators. This results in improved convergence to high-quality optima of the original task, providing dramatic performance benefits in terms of solution sampling and approximation ratio on optimization tasks for both shared-angle and multi-angle QAOA.
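The restricted-generator idea above can be illustrated with a brute-force numerical sketch (our own, not code from the paper): close a set of ring $XY$-mixer and $R_Z$ generators under commutators and count the dimension of the resulting Lie algebra. Because these generators are block-diagonal across Hamming-weight sectors, the dimension stays well below that of the full $\mathfrak{su}(2^n)$.

```python
import numpy as np

# single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def ring_xy_generators(n):
    """Anti-Hermitian generators: ring XY terms i(X_i X_{i+1} + Y_i Y_{i+1})
    plus single-qubit iZ_i terms (the R_Z generators)."""
    gens = []
    for i in range(n):
        j = (i + 1) % n
        xx = [I2] * n; xx[i] = X; xx[j] = X
        yy = [I2] * n; yy[i] = Y; yy[j] = Y
        gens.append(1j * (kron_all(xx) + kron_all(yy)))
    for i in range(n):
        zi = [I2] * n; zi[i] = Z
        gens.append(1j * kron_all(zi))
    return gens

def dla_dimension(gens, tol=1e-8):
    """Dimension of the real Lie algebra generated by `gens` under commutators."""
    basis_vecs, basis_mats = [], []

    def try_add(m):
        # vectorize over the reals and Gram-Schmidt against the current basis
        v = np.concatenate([m.real.ravel(), m.imag.ravel()])
        nv = np.linalg.norm(v)
        if nv < tol:
            return False
        v = v / nv
        for b in basis_vecs:
            v = v - np.dot(b, v) * b
        if np.linalg.norm(v) > tol:
            basis_vecs.append(v / np.linalg.norm(v))
            basis_mats.append(m / nv)
            return True
        return False

    for g in gens:
        try_add(g)
    frontier = list(basis_mats)
    while frontier:  # repeatedly add new, independent commutators
        new = []
        for a in frontier:
            for b in list(basis_mats):
                if try_add(a @ b - b @ a):
                    new.append(basis_mats[-1])
        frontier = new
    return len(basis_vecs)

# ring XY-mixer with R_Z generators on 3 qubits: the generators preserve
# Hamming weight, so the DLA sits inside u(1)+u(3)+u(3)+u(1) (dimension 20),
# far below dim su(2^3) = 63
dim = dla_dimension(ring_xy_generators(3))
```

This exponential-cost closure is only feasible for a handful of qubits; the point of the paper's explicit decompositions is precisely to avoid such brute force.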
Related papers
- Minor Embedding for Quantum Annealing with Reinforcement Learning [10.787328610467801]
Reinforcement Learning (RL) offers a promising alternative by treating minor embedding as a sequential decision-making problem. We propose an RL-based approach to minor embedding using a Proximal Policy Optimization agent, testing its ability to embed both fully connected and randomly generated problems. Our proposed approach is also able to scale to moderate problem sizes and adapts well to different graph structures, highlighting RL's potential as a flexible and general-purpose framework.
arXiv Detail & Related papers (2025-07-21T18:54:15Z) - Near-Optimal Online Learning for Multi-Agent Submodular Coordination: Tight Approximation and Communication Efficiency [52.60557300927007]
We present a $\textbf{MA-OSMA}$ algorithm to transform the discrete submodular problem into a continuous optimization. We also introduce a projection-free $\textbf{MA-OSEA}$ algorithm, which effectively utilizes the KL divergence by mixing in a uniform distribution. Our algorithms significantly improve the $(\frac{1}{1+c})$-approximation provided by the state-of-the-art OSG algorithm.
arXiv Detail & Related papers (2025-02-07T15:57:56Z) - On the dynamical Lie algebras of quantum approximate optimization algorithms [4.987686869768721]
Dynamical Lie algebras (DLAs) have emerged as a valuable tool in the study of parameterized quantum circuits.
In this work, we investigate DLAs for the quantum approximate optimization algorithm (QAOA)
We show that the dimension of the DLA is $O(n^3)$ and give an explicit basis for the DLA.
arXiv Detail & Related papers (2024-07-17T14:12:30Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form $Ax=b$ directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Optimization-based Block Coordinate Gradient Coding for Mitigating Partial Stragglers in Distributed Learning [58.91954425047425]
This paper aims to design a new gradient coding scheme for mitigating partial stragglers in distributed learning.
We propose a gradient coordinate coding scheme with L coding parameters representing L possibly different diversities for the L coordinates, which generalizes most existing gradient coding schemes.
arXiv Detail & Related papers (2022-06-06T09:25:40Z) - How Much Entanglement Do Quantum Optimization Algorithms Require? [0.0]
We study the entanglement generated during the execution of ADAPT-QAOA.
By incrementally restricting this flexibility, we find that a larger amount of entanglement entropy at earlier stages coincides with faster convergence at later stages.
arXiv Detail & Related papers (2022-05-24T18:00:02Z) - Constrained mixers for the quantum approximate optimization algorithm [55.41644538483948]
We present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space.
We generalize the "XY"-mixer designed to preserve the subspace of "one-hot" states to the general case of subspaces given by a number of computational basis states.
Our analysis also leads to valid Trotterizations for the "XY"-mixer with fewer CX gates than previously known.
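The subspace-preservation property described above is easy to check numerically. The following sketch (our own, assuming a ring topology and an arbitrary mixing angle) verifies that evolving a one-hot state under a ring XY Hamiltonian leaves the Hamming-weight-1 sector invariant:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def ring_xy_hamiltonian(n):
    """Sum of X_i X_{i+1} + Y_i Y_{i+1} around a ring of n qubits."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n
        for P in (X, Y):
            ops = [I2] * n
            ops[i] = P
            ops[j] = P
            H += kron_all(ops)
    return H

n = 4
H = ring_xy_hamiltonian(n)

# exponentiate via eigendecomposition (H is Hermitian)
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * 0.7 * w)) @ V.conj().T  # mixer at angle 0.7

# apply the mixer to the one-hot state |0100>
psi = np.zeros(2**n, dtype=complex)
psi[int("0100", 2)] = 1.0
out = U @ psi

# all amplitude stays in the Hamming-weight-1 ("one-hot") sector,
# because the XY terms commute with the total-Z operator
leak = sum(abs(out[b]) ** 2 for b in range(2**n) if bin(b).count("1") != 1)
```

The same check extends to the generalized mixers of the paper by replacing the weight-1 sector with the relevant set of computational basis states.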
arXiv Detail & Related papers (2022-03-11T17:19:26Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Predicting parameters for the Quantum Approximate Optimization Algorithm for MAX-CUT from the infinite-size limit [0.05076419064097732]
We present an explicit algorithm to evaluate the performance of QAOA on MAX-CUT applied to random Erdős-Rényi graphs of expected degree $d$.
This analysis yields an explicit mapping between QAOA parameters for MAX-CUT on Erdős-Rényi graphs, and the Sherrington-Kirkpatrick model.
arXiv Detail & Related papers (2021-10-20T17:58:53Z) - Grover Mixers for QAOA: Shifting Complexity from Mixer Design to State Preparation [0.0]
We propose a variation of the Quantum Alternating Operator Ansatz (QAOA) that uses Grover-like selective phase shift mixing operators.
GM-QAOA works on any NP optimization problem for which it is possible to efficiently prepare an equal superposition of all feasible solutions.
arXiv Detail & Related papers (2020-05-30T20:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.