Reductions of QAOA Induced by Classical Symmetries: Theoretical Insights and Practical Implications
- URL: http://arxiv.org/abs/2602.16141v1
- Date: Wed, 18 Feb 2026 02:20:42 GMT
- Title: Reductions of QAOA Induced by Classical Symmetries: Theoretical Insights and Practical Implications
- Authors: Boris Tsvelikhovskiy, Bao Bach, Jose Falla, Ilya Safro
- Abstract summary: We show that classical symmetries can be systematically exploited as a design principle for QAOA. The structure of the dynamical Lie algebra can change dramatically depending on which variable is held fixed. These results establish symmetry-aware reduction as a principled tool for designing expressive and potentially trainable QAOA circuits.
- Score: 0.35398689122254773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of the Quantum Approximate Optimization Algorithm (QAOA) is closely tied to the structure of the dynamical Lie algebra (DLA) generated by its Hamiltonians, which determines both its expressivity and trainability. In this work, we show that classical symmetries can be systematically exploited as a design principle for QAOA. Focusing on the MaxCut problem with global bit-flip symmetry, we analyze reduced QAOA instances obtained by fixing a single variable and study how this choice affects the associated DLAs. We show that the structure of the DLAs can change dramatically depending on which variable is held fixed. In particular, we construct explicit examples where the dimension collapses from exponential to quadratic, uncovering phenomena that do not appear in the original formulation. Numerical experiments on asymmetric graphs indicate that such reductions often produce DLAs of much smaller dimension, suggesting improved trainability. We also prove that any graph can be embedded into a slightly larger one (requiring only quadratic overhead) such that the standard reduced DLA coincides with the free reduced DLA, in most cases implying exponential dimension and irreducibility on the Hilbert space for reduced QAOA instances. These results establish symmetry-aware reduction as a principled tool for designing expressive and potentially trainable QAOA circuits.
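The abstract's central object, the dynamical Lie algebra (DLA) generated by the QAOA cost and mixer Hamiltonians, can be computed numerically for small graphs by closing the generators under commutators. The sketch below is our own illustration, not code from the paper; the function names and the choice of example graphs are ours. It builds the MaxCut generators and counts the real dimension of their Lie closure.

```python
# Sketch: estimate the DLA dimension of MaxCut QAOA on a small graph by
# iterated commutators (Lie closure). Illustrative only; all names are ours.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def maxcut_generators(edges, n):
    """Anti-Hermitian generators i*H_C and i*H_B of the QAOA unitaries."""
    H_C = sum(kron_chain([Z if q in (i, j) else I2 for q in range(n)])
              for i, j in edges)
    H_B = sum(kron_chain([X if q == k else I2 for q in range(n)])
              for k in range(n))
    return [1j * H_C, 1j * H_B]

def lie_closure_dim(gens, tol=1e-9):
    """Real dimension of the Lie algebra generated by `gens`."""
    basis = []  # orthonormal (real span, Frobenius inner product), flattened

    def add(A):
        v = A.ravel().copy()
        for b in basis:  # Gram-Schmidt over real coefficients
            v = v - np.real(np.vdot(b, v)) * b
        nrm = np.linalg.norm(v)
        if nrm > tol:
            basis.append(v / nrm)
            return True
        return False

    queue = [g for g in gens if add(g)]
    while queue:  # close the span under the commutator
        A = queue.pop()
        for bvec in list(basis):
            B = bvec.reshape(A.shape)
            C = A @ B - B @ A
            if add(C):
                queue.append(C)
    return len(basis)
```

For a single edge on two qubits the closure terminates at dimension 4 (spanned by $ZZ$, $X_1{+}X_2$, $YZ{+}ZY$, $YY$); for larger graphs the dimension can be compared against the $O(n^3)$ bound cited below.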
Related papers
- Stability and Generalization of Push-Sum Based Decentralized Optimization over Directed Graphs [55.77845440440496]
Push-based decentralized communication enables optimization over communication networks where information exchange may be asymmetric. We develop a unified uniform-stability framework for the Stochastic Gradient Push (SGP) algorithm. A key technical ingredient is an imbalance-aware generalization bound expressed through two quantities.
arXiv Detail & Related papers (2026-02-24T05:32:03Z) - Rethinking Diffusion Models with Symmetries through Canonicalization with Applications to Molecular Graph Generation [56.361076943802594]
CanonFlow achieves state-of-the-art performance on the challenging GEOM-DRUG dataset, and the advantage remains large in few-step generation.
arXiv Detail & Related papers (2026-02-16T18:58:55Z) - Preconditioning Benefits of Spectral Orthogonalization in Muon [50.62925024212989]
We study the effectiveness of a simplified variant of Muon in two case studies: matrix factorization and in-context learning of linear transformers. Our analysis reveals that the Muon dynamics decouple into a collection of independent scalar sequences in the spectral domain, each exhibiting similar convergence behavior.
arXiv Detail & Related papers (2026-01-20T00:08:31Z) - SIGMA: Scalable Spectral Insights for LLM Collapse [51.863164847253366]
We introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework for model collapse. By deriving deterministic bounds on the Gram matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. We demonstrate that SIGMA effectively captures the transition towards collapsed states, offering theoretical insights into the mechanics of collapse.
arXiv Detail & Related papers (2026-01-06T19:47:11Z) - On the dynamical Lie algebras of quantum approximate optimization algorithms [4.987686869768721]
Dynamical Lie algebras (DLAs) have emerged as a valuable tool in the study of parameterized quantum circuits.
In this work, we investigate DLAs for the quantum approximate optimization algorithm (QAOA)
We show that the dimension of the DLA is $O(n^3)$ and give an explicit basis for the DLA.
arXiv Detail & Related papers (2024-07-17T14:12:30Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression, using the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend, and offer a unifying view of, earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Symmetries and Dimension Reduction in Quantum Approximate Optimization
Algorithm [1.3469999282609788]
We focus on the generalized formulation of optimization problems defined on the sets of $n$-element $d$-ary strings.
Our main contribution encompasses dimension reduction for the originally proposed QAOA.
Restricting the algorithm to spaces of smaller dimension may lead to significant acceleration of both quantum and classical simulation of circuits.
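The dimension reduction described above can be made concrete for the MaxCut bit-flip symmetry discussed in the main abstract: the parity operator $P = X^{\otimes n}$ commutes with both QAOA Hamiltonians, so the dynamics preserve the two eigensectors of $P$, each of dimension $2^{n-1}$. The following sketch (our illustration, not code from either paper) verifies this on a 3-vertex path graph.

```python
# Sketch: the global bit-flip symmetry of MaxCut commutes with both QAOA
# Hamiltonians, so the state space splits into two half-sized sectors.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3
edges = [(0, 1), (1, 2)]  # path graph on 3 vertices

# MaxCut cost Hamiltonian (sum of Z_i Z_j over edges) and transverse mixer
H_C = sum(kron_chain([Z if q in (i, j) else I2 for q in range(n)])
          for i, j in edges)
H_B = sum(kron_chain([X if q == k else I2 for q in range(n)])
          for k in range(n))

# Global bit-flip: flipping every bit maps each cut to an equal-weight cut,
# so P commutes with H_C (two sign flips per ZZ term) and trivially with H_B.
P = kron_chain([X] * n)

print(np.allclose(H_C @ P, P @ H_C))  # True
print(np.allclose(H_B @ P, P @ H_B))  # True

# Projector onto the +1 sector; its rank is 2^(n-1), half the Hilbert space.
Pi_plus = (np.eye(2 ** n) + P) / 2
print(int(round(np.trace(Pi_plus).real)))  # 4
```

Fixing a single variable, as in the main abstract, is one way of selecting such a half-sized subspace, which is where the quoted simulation speedups come from.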
arXiv Detail & Related papers (2023-09-25T00:35:40Z) - The Adjoint Is All You Need: Characterizing Barren Plateaus in Quantum
Ans\"atze [3.2773906224402802]
We formulate a theory of barren plateaus for parameterized quantum circuits whose observables lie in their dynamical Lie algebra (DLA).
Our theory provides, for the first time, the ability to compute the variance of the gradient of the cost function of the quantum compound ansatz.
arXiv Detail & Related papers (2023-09-14T17:50:04Z) - Understanding Augmentation-based Self-Supervised Representation Learning
via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z) - Asymmetric Polynomial Loss For Multi-Label Classification [24.67744795531103]
We propose an effective Asymmetric Polynomial Loss (APL) to mitigate the above issues.
We employ the asymmetric focusing mechanism to recalibrate the gradient contribution from the negative and positive samples.
Experiments show that our APL loss can consistently improve performance without extra training burden.
arXiv Detail & Related papers (2023-04-10T14:35:47Z) - On the Implicit Geometry of Cross-Entropy Parameterizations for
Label-Imbalanced Data [26.310275682709776]
Various logit-adjusted parameterizations of the cross-entropy (CE) loss have been proposed as alternatives to weighted CE for training large models on label-imbalanced data.
We show that logit-adjusted parameterizations can be appropriately tuned so that learning succeeds irrespective of the minority imbalance ratio.
arXiv Detail & Related papers (2023-03-14T03:04:37Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.