Parameter optimization comparison in QAOA using Stochastic Hill Climbing with Random Re-starts and Local Search with entangled and non-entangled mixing operators
- URL: http://arxiv.org/abs/2405.08941v2
- Date: Wed, 16 Oct 2024 18:57:16 GMT
- Title: Parameter optimization comparison in QAOA using Stochastic Hill Climbing with Random Re-starts and Local Search with entangled and non-entangled mixing operators
- Authors: Brian García Sarmina, Guo-Hua Sun, Shi-Hai Dong
- Abstract summary: This study investigates the efficacy of Stochastic Hill Climbing with Random Re-starts (SHC-RR) compared to Local Search (LS) strategies.
Our results consistently show that SHC-RR outperforms LS approaches despite its ostensibly simpler optimization mechanism.
- Abstract: This study investigates the efficacy of Stochastic Hill Climbing with Random Restarts (SHC-RR) compared to Local Search (LS) strategies within the Quantum Approximate Optimization Algorithm (QAOA) framework across various problem models. Employing uniform parameter settings, including the number of restarts and SHC steps, we analyze LS with two distinct perturbation operations: multiplication and summation. Our comparative analysis encompasses multiple versions of max-cut and random Ising model (RI) problems, utilizing QAOA models with depths ranging from $1L$ to $3L$. These models incorporate diverse mixing operator configurations, which integrate $RX$ and $RY$ gates, and explore the effects of an entanglement stage within the mixing operator. Our results consistently show that SHC-RR outperforms LS approaches, showcasing superior efficacy despite its ostensibly simpler optimization mechanism. Furthermore, we observe that the inclusion of entanglement stages within mixing operators significantly impacts model performance, either enhancing or diminishing results depending on the specific problem context.
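The paper does not include an implementation, so the following is a minimal sketch of the two optimizers under comparison, in plain Python/NumPy. The `energy` function is a toy surrogate standing in for the QAOA expectation value of the cost Hamiltonian, and the function names, step sizes, and restart counts are illustrative assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(7)

def energy(params):
    # Stand-in for the QAOA expectation value <psi(params)|H_C|psi(params)>.
    # The paper evaluates real circuits with RX/RY mixing operators and an
    # optional entanglement stage; this non-convex toy keeps the sketch
    # self-contained and runnable.
    return float(np.sum(np.sin(3.0 * params) + 0.1 * params ** 2))

def shc_rr(n_params, n_restarts=10, n_steps=200, sigma=0.1):
    # Stochastic Hill Climbing with Random Re-starts: perturb the whole
    # parameter vector, accept only improving moves, restart from scratch.
    best_p, best_e = None, float("inf")
    for _ in range(n_restarts):
        p = rng.uniform(-np.pi, np.pi, n_params)
        e = energy(p)
        for _ in range(n_steps):
            cand = p + rng.normal(0.0, sigma, n_params)
            if (ce := energy(cand)) < e:
                p, e = cand, ce
        if e < best_e:
            best_p, best_e = p, e
    return best_p, best_e

def local_search(n_params, n_restarts=10, n_steps=200, mode="sum"):
    # Local Search with the two perturbation operations compared in the
    # paper: "sum" adds a small offset to one coordinate, "mult" rescales it.
    best_p, best_e = None, float("inf")
    for _ in range(n_restarts):
        p = rng.uniform(-np.pi, np.pi, n_params)
        e = energy(p)
        for _ in range(n_steps):
            cand = p.copy()
            i = rng.integers(n_params)
            if mode == "sum":
                cand[i] += rng.normal(0.0, 0.1)
            else:
                cand[i] *= rng.normal(1.0, 0.1)
            if (ce := energy(cand)) < e:
                p, e = cand, ce
        if e < best_e:
            best_p, best_e = p, e
    return best_p, best_e

# A depth-p QAOA ansatz carries 2p parameters (one gamma and one beta per
# layer), so depths 1L-3L correspond to 2, 4, and 6 parameters.
for n in (2, 4, 6):
    print(n, shc_rr(n)[1], local_search(n, mode="sum")[1],
          local_search(n, mode="mult")[1])
```

In the paper's setting, `energy` would instead evaluate the depth-$p$ circuit (so $2p$ parameters for depths $1L$ to $3L$) built with the chosen $RX$/$RY$ mixing operator, with or without the entanglement stage.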
Related papers
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel convex model for semisupervised/library-based unmixing.
We demonstrate the efficacy of alternating optimization methods for sparse unmixing.
arXiv Detail & Related papers (2024-01-23T10:07:41Z) - Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for the general-sum Markov Games (MGs) under the general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise [96.80184504268593]
Gradient clipping is one of the key algorithmic ingredients for deriving good high-probability guarantees.
However, clipping can spoil the convergence of popular methods for composite and distributed optimization.
arXiv Detail & Related papers (2023-10-03T07:49:17Z) - Adaptive SGD with Polyak stepsize and Line-search: Robust Convergence
and Variance Reduction [26.9632099249085]
We propose two new variants of SPS and SLS, called AdaSPS and AdaSLS, which guarantee convergence in non-interpolation settings.
We equip AdaSPS and AdaSLS with a novel variance reduction technique and obtain algorithms that require $\widetilde{\mathcal{O}}(n + 1/\epsilon)$ gradient evaluations.
arXiv Detail & Related papers (2023-08-11T10:17:29Z) - PCA and t-SNE analysis in the study of QAOA entangled and non-entangled
mixing operators [0.0]
We employ PCA and t-SNE analysis to gain deeper insights into the behavior of entangled and non-entangled mixing operators.
Specifically, we examine the $RZ$, $RX$, and $RY$ parameters within QAOA models at depths of $1L$, $2L$, and $3L$.
The results reveal distinct behaviors when we process the final parameters of each set of experiments with PCA and t-SNE; in particular, entangled QAOA models with $2L$ and $3L$ show an increase in the amount of information that can be preserved in the mapping.
arXiv Detail & Related papers (2023-06-19T16:50:32Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Study of Robust Sparsity-Aware RLS algorithms with Jointly-Optimized
Parameters for Impulsive Noise Environments [0.0]
The proposed algorithm generalizes multiple algorithms simply by replacing the robustness criterion and the sparsity-aware penalty.
By jointly optimizing the forgetting factor and the sparsity penalty parameter, we develop the jointly-optimized S-RRLS (JO-S-RRLS) algorithm.
Simulations in impulsive noise scenarios demonstrate that the proposed S-RRLS and JO-S-RRLS algorithms outperform existing techniques.
arXiv Detail & Related papers (2022-04-09T01:13:26Z) - Learning to Schedule Heuristics for the Simultaneous Stochastic
Optimization of Mining Complexes [2.538209532048867]
The proposed learn-to-perturb (L2P) hyper-heuristic is a multi-neighborhood simulated annealing algorithm.
L2P is tested on several real-world mining complexes, with an emphasis on efficiency, robustness, and generalization capacity.
Results show a reduction in the number of iterations by 30-50% and in the computational time by 30-45%.
arXiv Detail & Related papers (2022-02-25T18:20:14Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Fast Distributionally Robust Learning with Variance Reduced Min-Max
Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z) - Modeling and mitigation of cross-talk effects in readout noise with
applications to the Quantum Approximate Optimization Algorithm [0.0]
Noise mitigation can be performed up to some error for which we derive upper bounds.
We perform experiments on 15 (23) qubits using IBM's devices to test both the noise model and the error-mitigation scheme.
We show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
arXiv Detail & Related papers (2021-01-07T02:19:58Z)