Efficient State Preparation for the Schwinger Model with a Theta Term
- URL: http://arxiv.org/abs/2411.00243v1
- Date: Thu, 31 Oct 2024 22:49:09 GMT
- Title: Efficient State Preparation for the Schwinger Model with a Theta Term
- Authors: Alexei Bazavov, Brandon Henke, Leon Hostetler, Dean Lee, Huey-Wen Lin, Giovanni Pederiva, Andrea Shindler,
- Abstract summary: We present a comparison of different quantum state preparation algorithms for the Schwinger model.
We obtain the best results when combining the blocked QAOA ansatz and the RA.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a comparison of different quantum state preparation algorithms and their overall efficiency for the Schwinger model with a theta term. While adiabatic state preparation (ASP) is provably effective, in practice it leads to large CNOT gate counts to prepare the ground state. The quantum approximate optimization algorithm (QAOA) provides excellent results while keeping the CNOT counts small by design, at the cost of an expensive classical minimization process. We introduce a "blocked" modification of the Schwinger Hamiltonian to be used in the QAOA that further decreases the length of the algorithm as the size of the problem increases. The rodeo algorithm (RA) provides a powerful tool to efficiently prepare any eigenstate of the Hamiltonian, as long as its overlap with the initial guess is large enough. We obtain the best results when combining the blocked QAOA ansatz and the RA, as this provides an excellent initial state with a relatively short algorithm, without the need to perform any classical steps for large problem sizes.
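The rodeo algorithm's filtering effect described in the abstract can be sketched numerically. The toy simulation below is a hedged illustration, not the paper's implementation: it assumes a small random Hermitian matrix in place of the Schwinger Hamiltonian and ideal post-selection on the ancilla. Each RA cycle with random evolution time t_k multiplies the amplitude of eigenstate |n> by cos((E_n - E_target) t_k / 2), so components away from the target energy are suppressed, provided the initial guess overlaps the target state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the Hamiltonian: a small random Hermitian matrix.
dim = 8
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2
energies, vectors = np.linalg.eigh(H)  # columns of `vectors` are eigenstates

target = 0                  # filter toward the ground state
E_target = energies[target]

# Initial guess with nonzero ground-state overlap (the RA requirement).
psi = np.ones(dim) / np.sqrt(dim)
amps = vectors.T @ psi      # amplitudes in the eigenbasis

fid_before = abs(amps[target]) ** 2

# Each RA cycle (ancilla post-selected on |0>) applies the cosine filter.
cycles = 10
for _ in range(cycles):
    t = rng.normal(scale=5.0)                            # random time
    amps = amps * np.cos((energies - E_target) * t / 2)  # filter step
    amps = amps / np.linalg.norm(amps)                   # renormalize

fid_after = abs(amps[target]) ** 2
print(fid_before, fid_after)
```

With a handful of cycles the ground-state fidelity approaches one, which is why a good (e.g. blocked-QAOA) initial state matters: the post-selection success probability scales with the initial overlap.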
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step while remaining robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z) - Quantum State Preparation Using an Exact CNOT Synthesis Formulation [8.078625517374967]
Minimizing the use of CNOT gates in quantum state preparation is a crucial step in quantum compilation.
We propose an effective state preparation algorithm using an exact CNOT synthesis formulation.
arXiv Detail & Related papers (2024-01-02T03:37:00Z) - ADAPT-QAOA with a classically inspired initial state [0.0]
We propose to start ADAPT-QAOA with an initial state inspired by a classical approximation algorithm.
We show that this new algorithm can reach the same accuracy with fewer layers than the standard QAOA and the original ADAPT-QAOA.
arXiv Detail & Related papers (2023-10-15T01:12:12Z) - Entropic Neural Optimal Transport via Diffusion Processes [105.34822201378763]
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions.
Our algorithm is based on the saddle point reformulation of the dynamic version of EOT which is known as the Schrödinger Bridge problem.
In contrast to the prior methods for large-scale EOT, our algorithm is end-to-end and consists of a single learning step.
arXiv Detail & Related papers (2022-11-02T14:35:13Z) - Iterative-Free Quantum Approximate Optimization Algorithm Using Neural Networks [20.051757447006043]
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also achieving the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z) - Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize sums of ranked-range losses, which likewise lack efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Adapting Quantum Approximation Optimization Algorithm (QAOA) for Unit Commitment [2.8060379263058794]
We formulate and apply a hybrid quantum-classical algorithm to a power system optimization problem called Unit Commitment.
Our algorithm extends the Quantum Approximation Optimization Algorithm (QAOA) with a classical minimizer in order to support mixed binary optimization.
Our results indicate that classical solvers are effective for our simulated Unit Commitment instances with fewer than 400 power generation units.
arXiv Detail & Related papers (2021-10-25T03:37:34Z) - Digitized-counterdiabatic quantum approximate optimization algorithm [3.0638256603183054]
We propose a digitized version of QAOA enhanced via the use of shortcuts to adiabaticity.
We apply our digitized-counterdiabatic QAOA to Ising models, classical optimization problems, and the P-spin model, demonstrating that it outperforms standard QAOA in all cases.
arXiv Detail & Related papers (2021-07-06T17:57:32Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, up to a constant error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.