Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model
- URL: http://arxiv.org/abs/2407.12242v3
- Date: Fri, 19 Jul 2024 01:10:45 GMT
- Title: Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model
- Authors: Fanxu Meng, Xiangzhen Zhou
- Abstract summary: Quantum computing presents a prospect for revolutionizing the field of combinatorial optimization.
We consider the Quantum Approximate Optimization Algorithm (QAOA), a variational hybrid quantum-classical algorithm.
We show that the diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones.
- Score: 3.6959187484738902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum computing presents a compelling prospect for revolutionizing the field of combinatorial optimization, by virtue of unique attributes of quantum mechanics such as superposition and entanglement. The Quantum Approximate Optimization Algorithm (QAOA), a variational hybrid quantum-classical algorithm, stands out as a leading proposal to efficiently solve the Max-Cut problem, a representative example of combinatorial optimization. However, its promised advantages strongly depend on the parameter initialization strategy, a critical aspect given the non-convex and complex optimization landscapes plagued by low-quality local minima. Therefore, in this work, we formulate the problem of finding good initial parameters as a generative task in which a generative machine learning model, specifically the denoising diffusion probabilistic model (DDPM), is trained to generate high-performing initial parameters for QAOA. The diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones. Experiments with Max-Cut problem instances of various sizes demonstrate that our diffusion process consistently enhances QAOA effectiveness compared to random parameter initialization. Moreover, our framework demonstrates the capacity to train on small, classically simulatable problem instances and extrapolate to larger instances, reducing quantum computational resource overhead.
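The generative pipeline the abstract describes can be sketched in miniature. The snippet below is a toy, hypothetical illustration, not the paper's implementation: instead of training a denoising network, it substitutes the analytically optimal noise predictor for a Gaussian cluster of "high-performing" depth-1 QAOA parameters (gamma, beta), then runs standard DDPM ancestral sampling to synthesize new parameters near that cluster. All names and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a dataset of high-performing depth-1 QAOA parameters
# (gamma, beta), clustered near a hypothetical optimum -- illustrative only.
good_params = rng.normal(loc=[0.6, 0.4], scale=0.05, size=(512, 2))
mu = good_params.mean(axis=0)
s2 = good_params.var(axis=0).mean()

T = 200                                  # diffusion steps
betas = np.linspace(1e-4, 0.05, T)       # linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def predict_eps(x, t):
    """Stand-in for the trained denoiser: for Gaussian data, the optimal
    noise prediction E[eps | x_t] has this closed form."""
    ab = alpha_bar[t]
    return np.sqrt(1 - ab) / (ab * s2 + 1 - ab) * (x - np.sqrt(ab) * mu)

def ddpm_sample(n):
    """Ancestral DDPM sampling: start from pure noise, denoise step by step."""
    x = rng.normal(size=(n, 2))
    for t in range(T - 1, -1, -1):
        eps_hat = predict_eps(x, t)
        x = (x - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.normal(size=x.shape)
    return x

new_params = ddpm_sample(256)            # synthesized QAOA initializations
print(new_params.mean(axis=0))           # lands near the good-parameter cluster
```

In the paper's setting the closed-form predictor would be replaced by a trained network, and the generated (gamma, beta) pairs would seed the classical QAOA optimizer instead of a random initialization.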
Related papers
- Efficient and Robust Parameter Optimization of the Unitary Coupled-Cluster Ansatz [4.607081302947026]
We propose sequential optimization with approximate parabola (SOAP) for parameter optimization of unitary coupled-cluster ansatz on quantum computers.
Numerical benchmark studies on molecular systems demonstrate that SOAP achieves significantly faster convergence and greater robustness to noise.
SOAP is further validated through experiments on a superconducting quantum computer using a 2-qubit model system.
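The parabola-based sequential optimization idea can be sketched on a toy objective. This is a hypothetical illustration of the principle only, not the paper's ansatz or implementation: along each single parameter the landscape is assumed near-quadratic, so three evaluations suffice to fit a parabola and jump to its vertex, cycling through the parameters one at a time.

```python
import numpy as np

# Toy energy standing in for a variational-circuit expectation value; along
# each single parameter it is exactly quadratic here, which is what the
# parabola fit exploits (hypothetical objective for illustration).
def energy(theta):
    return (theta[0] - 0.4) ** 2 + 2 * (theta[1] + 0.2) ** 2 + 0.5

def parabola_min(f1d, x, h=0.3):
    """Fit a parabola through f(x-h), f(x), f(x+h) and return its vertex."""
    fm, f0, fp = f1d(x - h), f1d(x), f1d(x + h)
    denom = fm - 2 * f0 + fp
    return x if denom == 0 else x + h * (fm - fp) / (2 * denom)

theta = np.array([1.0, 1.0])
for _ in range(3):                       # a few sweeps of coordinate updates
    for i in range(len(theta)):
        f1d = lambda v, i=i: energy(np.concatenate([theta[:i], [v], theta[i + 1:]]))
        theta[i] = parabola_min(f1d, theta[i])
print(theta)                             # each vertex jump lands on the 1-D minimum
```

Because each one-parameter slice here is exactly quadratic, a single sweep already reaches the per-coordinate minima; on real circuit landscapes the fit is approximate and several sweeps are needed.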
arXiv Detail & Related papers (2024-01-10T03:30:39Z) - Probabilistic tensor optimization of quantum circuits for the max-$k$-cut problem [0.0]
We propose a technique for optimizing parameterized circuits in variational quantum algorithms.
We illustrate our approach on the example of the quantum approximate optimization algorithm (QAOA) applied to the max-$k$-cut problem.
arXiv Detail & Related papers (2023-10-16T12:56:22Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Benchmarking Metaheuristic-Integrated QAOA against Quantum Annealing [0.0]
The study provides insights into the strengths and limitations of both Quantum Annealing and metaheuristic-integrated QAOA across different problem domains.
The findings suggest that the hybrid approach can leverage classical optimization strategies to enhance the solution quality and convergence speed of QAOA.
arXiv Detail & Related papers (2023-09-28T18:55:22Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions so that fewer sampling steps are needed.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Meta-Learning Digitized-Counterdiabatic Quantum Optimization [3.0638256603183054]
We tackle the problem of finding suitable initial parameters for variational optimization by employing a meta-learning technique using recurrent neural networks.
We investigate this technique with the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (DC-QAOA)
The combination of meta-learning and DC-QAOA enables us to find optimal initial parameters for different models, such as the Max-Cut problem and the Sherrington-Kirkpatrick model.
arXiv Detail & Related papers (2022-06-20T18:57:50Z) - Parameters Fixing Strategy for Quantum Approximate Optimization Algorithm [0.0]
We propose a strategy that achieves a high approximation ratio on average, even at large circuit depths, by initializing QAOA with the optimal parameters obtained at shallower depths.
We test our strategy on the Max-Cut problem for certain classes of graphs, such as 3-regular graphs and Erdős–Rényi graphs.
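The depth-progressive warm start described above can be sketched schematically. The snippet below is hypothetical: a toy quadratic cost stands in for the QAOA energy at depth p (which would really be evaluated on a quantum simulator or device), and each new depth is seeded with the previous optimum plus zero-initialized angles for the added layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the depth-p QAOA energy over 2p angles; the minimizer is
# consistent across depths here by construction (hypothetical cost only).
def toy_energy(params):
    target = 0.5 * np.arange(1, len(params) + 1) / len(params)
    return np.sum((params - target) ** 2)

def optimize(params, steps=200, lr=0.1):
    """Crude finite-difference gradient descent, standing in for the
    classical optimizer in the QAOA loop."""
    for _ in range(steps):
        grad = np.array([
            (toy_energy(params + h) - toy_energy(params - h)) / 2e-4
            for h in 1e-4 * np.eye(len(params))
        ])
        params = params - lr * grad
    return params

# Depth-progressive warm start: reuse the depth-p optimum to seed depth p+1.
params = optimize(rng.uniform(0, np.pi, size=2))             # depth p = 1
for p in range(2, 4):
    params = optimize(np.concatenate([params, [0.0, 0.0]]))  # add one layer's (gamma, beta)

print(f"depth-3 energy after warm start: {toy_energy(params):.2e}")
```

The point of the strategy is that each optimization starts near a good point inherited from the shallower circuit, avoiding the random restarts that low-quality local minima punish at large depth.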
arXiv Detail & Related papers (2021-08-11T15:44:16Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call parameter-efficient circuit training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that yield higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study, we apply the Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence improves performance.
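The Cross-Entropy method itself is simple to sketch. Below is a generic, hypothetical illustration on a toy two-parameter objective standing in for the QAOA energy; the paper's actual constrained-Hamiltonian shaping is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy objective standing in for the QAOA energy over (gamma, beta) --
# a quadratic bowl with a hypothetical optimum at (0.7, 0.3).
def energy(x):
    return np.sum((x - np.array([0.7, 0.3])) ** 2, axis=-1)

mean, std = np.zeros(2), np.ones(2)
for _ in range(40):
    pop = mean + std * rng.normal(size=(64, 2))        # sample candidate parameters
    elites = pop[np.argsort(energy(pop))[:8]]          # keep the lowest-energy 12.5%
    mean = elites.mean(axis=0)                         # refit the sampling distribution
    std = elites.std(axis=0) + 1e-2                    # small floor avoids collapse
print(mean)                                            # concentrates near the optimum
```

Each iteration tightens the sampling distribution around the best candidates, which is the "landscape shaping" intuition: the classical search concentrates its effort where low-energy parameters were found.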
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.