Adaptive Quantum Generative Training using an Unbounded Loss Function
- URL: http://arxiv.org/abs/2408.00218v1
- Date: Thu, 1 Aug 2024 01:04:53 GMT
- Title: Adaptive Quantum Generative Training using an Unbounded Loss Function
- Authors: Kyle Sherbert, Jim Furches, Karunya Shirali, Sophia E. Economou, Carlos Ortiz Marrero
- Abstract summary: We propose a generative quantum learning algorithm, Rényi-ADAPT, using the Adaptive Derivative-Assembled Problem Tailored (ADAPT) ansatz framework.
We benchmark this method against other state-of-the-art adaptive algorithms by learning random two-local thermal states.
We show that Rényi-ADAPT is capable of constructing shallow quantum circuits competitive with existing methods, while the gradients remain favorable owing to the maximal Rényi divergence loss function.
- Score: 1.0485739694839669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a generative quantum learning algorithm, Rényi-ADAPT, using the Adaptive Derivative-Assembled Problem Tailored ansatz (ADAPT) framework in which the loss function to be minimized is the maximal quantum Rényi divergence of order two, an unbounded function that mitigates barren plateaus which inhibit training variational circuits. We benchmark this method against other state-of-the-art adaptive algorithms by learning random two-local thermal states. We perform numerical experiments on systems of up to 12 qubits, comparing our method to learning algorithms that use linear objective functions, and show that Rényi-ADAPT is capable of constructing shallow quantum circuits competitive with existing methods, while the gradients remain favorable owing to the maximal Rényi divergence loss function.
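As a concrete illustration of the loss named in the abstract: for order two, the maximal (geometric) quantum Rényi divergence reduces to D̂₂(ρ‖σ) = log Tr[ρ σ⁻¹ ρ], which is unbounded as σ loses rank. A minimal numpy sketch (the function name is hypothetical; this is an illustration of the formula, not the paper's implementation):

```python
import numpy as np

def max_renyi_divergence_2(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Order-2 maximal (geometric) Renyi divergence: log Tr[rho sigma^{-1} rho].

    Assumes sigma is a full-rank density matrix; the quantity blows up as
    sigma approaches singularity, which is what makes the loss unbounded.
    """
    return float(np.log(np.real(np.trace(rho @ np.linalg.inv(sigma) @ rho))))

# Identical states give zero divergence.
rho = np.eye(2) / 2
print(max_renyi_divergence_2(rho, rho))  # → 0.0
```

In the paper's setting ρ would be the variational circuit's state and σ the target thermal state (or vice versa, depending on the direction of the divergence used).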
Related papers
- Solving nonlinear PDEs with Quantum Neural Networks: A variational approach to the Bratu Equation [0.0]
We present a variational quantum algorithm (VQA) to solve the nonlinear one-dimensional Bratu equation. The trial solution incorporates both classical approximations and boundary-enforcing terms.
arXiv Detail & Related papers (2026-01-07T20:29:51Z) - Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex.
Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs.
We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing an approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
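The gradient step described above can be sketched in its simplest form: minimizing a linear model of the objective over a trust-region ball has a closed-form solution, the steepest-descent direction scaled to the boundary. This is a simplified stand-in (names assumed) for the paper's stochastic SQP step, not its full method:

```python
import numpy as np

def trust_region_gradient_step(grad: np.ndarray, radius: float) -> np.ndarray:
    """Minimize the linear model g.d subject to ||d|| <= radius.

    The minimizer is the steepest-descent direction scaled to the
    trust-region boundary (the Cauchy-point idea behind TR methods).
    """
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return -radius * grad / norm
```

A full TR-SQP iteration would additionally account for linearized constraints and adjust the radius based on actual-versus-predicted reduction.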
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
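For reference, the centralized AdaGrad update that the federated, over-the-air variant builds on looks as follows. This is a minimal sketch of plain AdaGrad only; it omits the paper's aggregation and channel model:

```python
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: per-coordinate steps shrink as squared
    gradients accumulate in `accum`."""
    accum = accum + grad ** 2
    x = x - lr * grad / (np.sqrt(accum) + eps)
    return x, accum

# Minimize f(x) = x^2 starting from x = 1.
x, accum = np.array([1.0]), np.array([0.0])
for _ in range(200):
    x, accum = adagrad_step(x, 2 * x, accum)
```

The accumulated squared gradients make the effective step size decay over time, which is where rates of the $\mathcal{O}(\ln(T)/T^{1-1/\alpha})$ form arise in the analysis.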
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - A Globally Convergent Algorithm for Neural Network Parameter Optimization Based on Difference-of-Convex Functions [29.58728073957055]
We propose an algorithm for optimizing parameters of hidden layer networks.
Specifically, we derive a blockwise difference-of-convex (DC) representation of the objective function.
arXiv Detail & Related papers (2024-01-15T19:53:35Z) - Reinforcement Learning Based Quantum Circuit Optimization via ZX-Calculus [0.0]
We propose a novel Reinforcement Learning (RL) method for optimizing quantum circuits using graph-theoretic simplification rules of ZX-diagrams.
We demonstrate the capacity of our approach by comparing it against the best performing ZX-Calculus-based algorithm for the problem in hand.
Our approach is ready to be used as a valuable tool for the implementation of quantum algorithms in the noisy intermediate-scale quantum (NISQ) era.
arXiv Detail & Related papers (2023-12-18T17:59:43Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order descent methods.
Our approach modifies the objective function by adding a regularization term, and casts the joint process of updating the parameters and the learning rate as a single optimization problem.
arXiv Detail & Related papers (2021-04-12T13:13:34Z) - Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement learning based ZO algorithm (ZO-RL) with learning the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that our ZO-RL algorithm can effectively reduce the variance of the ZO gradient estimates by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
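A baseline ZO gradient estimator, which ZO-RL's learned sampling policy is designed to improve on, is the two-point finite-difference scheme. A deterministic coordinate-wise sketch (function name assumed for illustration):

```python
import numpy as np

def zo_gradient(f, x: np.ndarray, mu: float = 1e-5) -> np.ndarray:
    """Coordinate-wise two-point zeroth-order gradient estimate:
    g_i ~ (f(x + mu e_i) - f(x - mu e_i)) / (2 mu)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)
    return g
```

Randomized variants replace the coordinate directions e_i with sampled perturbations; ZO-RL's contribution is to learn that sampling distribution rather than draw it at random.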
arXiv Detail & Related papers (2021-04-09T14:50:59Z) - Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.