Conditional Diffusion-based Parameter Generation for Quantum Approximate Optimization Algorithm
- URL: http://arxiv.org/abs/2407.12242v4
- Date: Sat, 18 Jan 2025 04:50:50 GMT
- Title: Conditional Diffusion-based Parameter Generation for Quantum Approximate Optimization Algorithm
- Authors: Fanxu Meng, Xiangzhen Zhou, Pengcheng Zhu, Yu Luo
- Abstract summary: The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm that shows promise in efficiently solving the MaxCut problem.
A generative learning model, specifically the denoising diffusion probabilistic model (DDPM), is trained to learn the distribution of pre-trained parameters conditioned on the graph dataset.
- Score: 7.48670688063184
- Abstract: The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm that shows promise in efficiently solving the MaxCut problem, a representative example of combinatorial optimization. However, its effectiveness heavily depends on the parameter optimization pipeline, where the parameter initialization strategy is nontrivial due to the non-convex and complex optimization landscapes characterized by low-quality local minima. Recent inspiration comes from diffusion-based generation of classical neural network parameters, which has demonstrated that neural network training can benefit from initial parameters produced by diffusion models. Therefore, in this work, we formulate the problem of finding good initial parameters as a generative task and propose an initial parameter generation scheme based on dataset-conditioned sampling of pre-trained parameters. Concretely, the generative machine learning model, specifically the denoising diffusion probabilistic model (DDPM), is trained to learn the distribution of pre-trained parameters conditioned on the graph dataset. Intuitively, our proposed framework aims at effectively distilling knowledge from pre-trained parameters to generate well-performing initial parameters for QAOA. Compared to random parameter initialization, experiments on MaxCut problem instances of various sizes consistently show that our conditional DDPM improves the approximation ratio by as much as 14.4%, 11.0%, and 11.4% (and by 7.49%, 8.31%, and 6.08% on average) for random, regular, and Watts-Strogatz graphs, respectively. Additionally, the experimental results also indicate that the conditional DDPM trained on small problem instances can be extrapolated to larger ones, improving the approximation ratio by up to 28.4% and by 12.1% on average.
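As a concrete illustration of the scheme described in the abstract, the sketch below (not the authors' released code) samples candidate QAOA initial parameters (gamma_1..gamma_p, beta_1..beta_p) from a graph-conditioned DDPM via standard ancestral sampling. The denoiser architecture, the graph-embedding condition, the depth p, and all hyperparameters are illustrative assumptions; the sampled vectors would serve as warm starts for a classical QAOA optimizer.

```python
# Minimal sketch (not the authors' code): sample QAOA initial parameters
# (gamma_1..gamma_p, beta_1..beta_p) from a graph-conditioned DDPM.
# The denoiser architecture, the graph embedding, and all hyperparameters
# below are illustrative placeholders.
import torch
import torch.nn as nn

P = 4                 # QAOA depth -> 2*P parameters per sample
COND_DIM = 16         # assumed size of the graph-dataset embedding
T = 200               # number of diffusion steps

class Denoiser(nn.Module):
    """Predicts the noise added to the parameter vector, conditioned on
    a timestep feature and a graph embedding (both assumptions)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * P + 1 + COND_DIM, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
            nn.Linear(128, 2 * P),
        )

    def forward(self, x_t, t, cond):
        t_feat = t.float().unsqueeze(-1) / T          # crude timestep feature
        return self.net(torch.cat([x_t, t_feat, cond], dim=-1))

@torch.no_grad()
def sample_initial_parameters(model, cond, n_samples=8):
    """Standard DDPM ancestral sampling; returns candidate (gamma, beta) vectors."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(n_samples, 2 * P)                 # start from pure noise
    cond = cond.expand(n_samples, -1)
    for t in reversed(range(T)):
        t_batch = torch.full((n_samples,), t)
        eps = model(x, t_batch, cond)
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                          # warm starts for a classical QAOA optimizer

# Usage sketch: a trained model and a graph embedding would come from the training stage.
model = Denoiser()                                    # untrained here, for shape-checking only
graph_embedding = torch.zeros(1, COND_DIM)            # placeholder condition
init_params = sample_initial_parameters(model, graph_embedding)
print(init_params.shape)                              # (8, 2*P)
```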
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
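The entry above concerns gradient-free (zero-order) optimization; as background, here is a textbook two-point zero-order gradient estimator plugged into a plain descent loop. It is not the accelerated algorithm from that paper; the smoothing radius, step size, and test objective are illustrative.

```python
# Generic two-point zero-order gradient estimator with a gradient-descent loop.
# This is a textbook sketch, not the algorithm from the paper above; the
# smoothing radius tau, step size, and objective are illustrative.
import numpy as np

def zo_gradient(f, x, tau=1e-3, rng=np.random.default_rng(0)):
    """Estimate grad f(x) from two function values along a random direction."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return x.size * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

def zo_sgd(f, x0, lr=0.05, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * zo_gradient(f, x)
    return x

# Example: minimize a simple convex quadratic using only function evaluations.
f = lambda x: np.sum((x - 1.0) ** 2)
print(zo_sgd(f, np.zeros(5)))      # should approach the all-ones vector
```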
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers and four parameterizations, across a range of learning rates and model sizes.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Hybrid GRU-CNN Bilinear Parameters Initialization for Quantum Approximate Optimization Algorithm [7.502733639318316]
We propose a hybrid optimization approach that integrates Gated Recurrent Units (GRU), Convolutional Neural Networks (CNN), and a bilinear strategy as an innovative alternative to conventional approaches for predicting optimal parameters of QAOA circuits.
We employ the bilinear strategy to initialize QAOA circuit parameters at greater depths, with reference parameters obtained from GRU-CNN optimization.
arXiv Detail & Related papers (2023-11-14T03:00:39Z)
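The bilinear initialization mentioned above extrapolates QAOA parameters at greater depths from parameters optimized at shallower depths. The sketch below is one plausible reading of such a scheme, combining a difference across depths for the shared layers with a difference across layer indices for the new layer; the exact prescription in the paper may differ.

```python
# Hedged sketch of a bilinear-style extrapolation for initializing a depth-(p+1)
# QAOA parameter schedule from schedules optimized at depths p-1 and p. The rule
# below (across-depth differences for shared layers, an across-layer difference
# for the new layer) is an assumption, not the paper's exact prescription.
import numpy as np

def bilinear_extrapolate(prev, curr):
    """prev: optimized angles at depth p-1, shape (p-1,); curr: depth p, shape (p,).
    Returns an initial guess for depth p+1, shape (p+1,)."""
    p = curr.size
    nxt = np.empty(p + 1)
    nxt[: p - 1] = 2.0 * curr[: p - 1] - prev        # extrapolate across depths
    nxt[p - 1] = curr[p - 1]                         # carry over the last shared layer
    nxt[p] = 2.0 * curr[p - 1] - curr[p - 2]         # extrapolate across layer index
    return nxt

# Example: smooth gamma schedules at depths 3 and 4 suggest a depth-5 schedule.
gamma_p3 = np.array([0.20, 0.40, 0.60])
gamma_p4 = np.array([0.25, 0.45, 0.65, 0.85])
print(bilinear_extrapolate(gamma_p3, gamma_p4))      # [0.30, 0.50, 0.70, 0.85, 1.05]
```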
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
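The deep-equilibrium formulation above expresses the learned iterative solver as a fixed-point problem. Below is a generic damped fixed-point iteration of the kind such frameworks run at inference time, with a toy linear contraction standing in for the learned regularized deconvolution update.

```python
# Generic damped fixed-point solver of the kind Deep Equilibrium models use at
# inference: iterate x_{k+1} = (1-a) x_k + a f(x_k) until convergence. The
# operator f below is a toy contraction, not the learned deconvolution update.
import numpy as np

def fixed_point_solve(f, x0, damping=0.5, tol=1e-8, max_iter=500):
    x = x0
    for k in range(max_iter):
        x_new = (1.0 - damping) * x + damping * f(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Toy update: a contraction whose fixed point solves (I - A) x = b.
A = np.array([[0.3, 0.1], [0.0, 0.2]])
b = np.array([1.0, -1.0])
f = lambda x: A @ x + b
x_star, iters = fixed_point_solve(f, np.zeros(2))
print(x_star, iters)   # x_star satisfies x = A x + b
```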
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
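As background for the AIS entry above, the following is a textbook AIS estimator of a normalizing constant on a 1-D toy problem, annealing from a standard-normal prior to an unnormalized Gaussian target. The geometric bridging schedule and random-walk Metropolis transitions are the standard choices, not the optimized ones proposed in that paper.

```python
# Textbook Annealed Importance Sampling on a 1-D toy problem: estimate the
# normalizing constant of an unnormalized Gaussian target, annealing from a
# standard-normal prior. The geometric schedule and random-walk Metropolis
# transitions are standard choices, not the optimized ones from the paper above.
import numpy as np

rng = np.random.default_rng(0)

log_f0 = lambda x: -0.5 * x**2                 # prior, normalizer sqrt(2*pi)
log_fK = lambda x: -0.5 * (x - 2.0) ** 2       # unnormalized target
log_fb = lambda x, b: (1 - b) * log_f0(x) + b * log_fK(x)

def ais_log_weights(n_chains=2000, n_steps=100, mh_per_step=2, step=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.standard_normal(n_chains)           # exact samples from the prior
    logw = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += log_fb(x, b) - log_fb(x, b_prev)
        for _ in range(mh_per_step):             # Metropolis moves targeting f_b
            prop = x + step * rng.standard_normal(n_chains)
            accept = np.log(rng.random(n_chains)) < log_fb(prop, b) - log_fb(x, b)
            x = np.where(accept, prop, x)
    return logw

logw = ais_log_weights()
# E[w] = Z_target / Z_prior, so Z_target ~ mean(w) * sqrt(2*pi)
z_est = np.exp(logw).mean() * np.sqrt(2 * np.pi)
print(z_est)   # should be close to sqrt(2*pi) ~ 2.507
```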
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are placed on the parameters through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Regularized Nonlinear Regression for Simultaneously Selecting and Estimating Key Model Parameters [1.6122433144430324]
In system identification, estimating parameters of a model using limited observations results in poor identifiability.
We propose a new method to simultaneously select and estimate sensitive parameters as key model parameters and fix the remaining parameters to a set of typical values.
arXiv Detail & Related papers (2021-04-23T06:17:57Z)
- A Population-based Hybrid Approach to Hyperparameter Optimization for Neural Networks [0.0]
HBRKGA is a hybrid approach that combines the Biased Random Key Genetic Algorithm with a Random Walk technique to search the hyperparameter space efficiently.
Results showed that HBRKGA could find hyperparameter configurations that outperformed the baseline methods in six out of eight datasets.
arXiv Detail & Related papers (2020-11-22T17:12:31Z)
- Bayesian Sparse learning with preconditioned stochastic gradient MCMC and its applications [5.660384137948734]
We show that the proposed algorithm can asymptotically converge to the correct distribution with a controllable bias under mild conditions.
arXiv Detail & Related papers (2020-06-29T20:57:20Z)
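For the preconditioned stochastic-gradient MCMC entry above, here is a minimal RMSprop-style preconditioned Langevin (pSGLD-like) sampler on a toy 2-D Gaussian target. The sparse-learning prior, minibatching, and the exact preconditioner analyzed in that paper are not reproduced; this only illustrates the preconditioned update.

```python
# Minimal RMSprop-preconditioned Langevin sampler (pSGLD-like), for illustration:
#   theta <- theta - (lr/2) * G * grad_U(theta) + sqrt(lr * G) * noise,
# with G an adaptive diagonal preconditioner. The toy target is a 2-D Gaussian,
# so the empirical covariance of the samples can be checked against the truth.
import numpy as np

rng = np.random.default_rng(0)
precision = np.array([[2.0, 0.3], [0.3, 0.5]])      # target: N(0, precision^{-1})
grad_U = lambda th: precision @ th                   # gradient of -log target density

def psgld(n_samples=50000, lr=1e-2, alpha=0.99, eps=1e-5):
    theta = np.zeros(2)
    v = np.ones(2)                                   # 2nd-moment estimate (init > 0 avoids a huge first step)
    out = np.empty((n_samples, 2))
    for i in range(n_samples):
        g = grad_U(theta)
        v = alpha * v + (1 - alpha) * g * g
        G = 1.0 / (np.sqrt(v) + eps)                 # diagonal preconditioner
        theta = theta - 0.5 * lr * G * g + np.sqrt(lr * G) * rng.standard_normal(2)
        out[i] = theta
    return out

samples = psgld()[5000:]                             # drop burn-in
print(np.cov(samples.T))                             # roughly precision^{-1}
print(np.linalg.inv(precision))
```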