A Scalable Quantum Neural Network for Approximate SRBB-Based Unitary Synthesis
- URL: http://arxiv.org/abs/2412.03083v1
- Date: Wed, 04 Dec 2024 07:21:23 GMT
- Title: A Scalable Quantum Neural Network for Approximate SRBB-Based Unitary Synthesis
- Authors: Giacomo Belli, Marco Mordacci, Michele Amoretti
- Abstract summary: This work introduces scalable quantum neural networks to approximate unitary evolutions through the Standard Recursive Block Basis (SRBB)
An algorithm to reduce the number of CNOTs is proposed, thus deriving a new implementable scaling scheme that requires one single layer of approximation.
The effectiveness of the approximation is measured with different metrics in relation to two optimizers: a gradient-based method and the gradient-free Nelder-Mead method.
- Score: 1.3108652488669736
- Abstract: In this work, scalable quantum neural networks are introduced to approximate unitary evolutions through the Standard Recursive Block Basis (SRBB) and, subsequently, redesigned with a reduced number of CNOTs. This algebraic approach to the problem of unitary synthesis exploits Lie algebras and their topological features to obtain scalable parameterizations of unitary operators. First, the recursive algorithm that builds the SRBB is presented, framed in the original scalability scheme, previously known in the literature only from a theoretical point of view. Unexpectedly, 2-qubit systems emerge as a special case outside this scheme. Furthermore, an algorithm to reduce the number of CNOTs is proposed, yielding a new implementable scaling scheme that requires a single approximation layer. From the mathematical algorithm, the scalable CNOT-reduced quantum neural network is implemented and its performance is assessed on a variety of unitary matrices, both sparse and dense, up to 6 qubits, via the PennyLane library. The effectiveness of the approximation is measured with different metrics in relation to two optimizers: a gradient-based method and the Nelder-Mead method. The approximate SRBB-based synthesis algorithm with CNOT reduction is also tested on real hardware and compared with other established approximation and decomposition methods from the literature.
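As a hedged illustration of the workflow in the abstract (not the authors' SRBB construction), the sketch below approximates a random 2-qubit unitary with a generic layered ansatz in PennyLane and minimizes a gate-infidelity metric with the Nelder-Mead optimizer; the ansatz, metric, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch, assuming a generic rotation+CNOT ansatz rather than the
# paper's SRBB parameterization: approximate a random 2-qubit target unitary.
import numpy as np
import pennylane as qml
from scipy.optimize import minimize
from scipy.stats import unitary_group

n_qubits, n_layers = 2, 3
dim = 2 ** n_qubits
target = unitary_group.rvs(dim, random_state=0)  # dense random target unitary

def ansatz(params):
    # Layers of arbitrary single-qubit rotations followed by an entangling CNOT.
    p = params.reshape(n_layers + 1, n_qubits, 3)
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.Rot(*p[layer, w], wires=w)
        qml.CNOT(wires=[0, 1])
    for w in range(n_qubits):                      # final rotation layer
        qml.Rot(*p[n_layers, w], wires=w)

def infidelity(params):
    # Gate infidelity 1 - |Tr(V^dag U(theta))| / d, one possible metric.
    U = qml.matrix(ansatz, wire_order=list(range(n_qubits)))(params)
    return 1.0 - np.abs(np.trace(target.conj().T @ U)) / dim

x0 = np.random.default_rng(1).uniform(0, 2 * np.pi, (n_layers + 1) * n_qubits * 3)
res = minimize(infidelity, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 40000})
print("final infidelity:", res.fun)
```

A gradient-based optimizer, the other option mentioned in the abstract, could be swapped in by differentiating the same cost with PennyLane's autograd-compatible interface.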
Related papers
- Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks [15.074950361970194]
We provide a unified analysis for a family of algorithms that encompasses IRLS, the recently proposed lin-RFM algorithm, and alternating minimization on linear diagonal neural networks.
We show that, with an appropriately chosen reweighting policy, these algorithms can achieve favorable performance on a handful of sparse structures.
We also show that leveraging this in the reweighting scheme provably improves test error compared to coordinate-wise reweighting.
arXiv Detail & Related papers (2024-06-04T20:37:17Z)
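A hedged toy version of the reweighted least-squares idea in the entry above: generic IRLS for sparse recovery with an l1-type reweighting policy. Dimensions, annealing schedule, and policy are illustrative assumptions, not the paper's setting.

```python
# Sketch of iteratively reweighted least squares (IRLS) for sparse recovery;
# the reweighting policy below is the classical l1-type choice, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 200, 5                        # measurements, dimension, sparsity
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:k] = rng.standard_normal(k)
y = A @ x_true

x, eps = np.zeros(d), 1.0
for _ in range(60):
    w_inv = np.sqrt(x ** 2 + eps)           # inverse weights, ~|x_i| as eps -> 0
    G = A * w_inv                           # A @ diag(w_inv), scales columns
    # Minimum-weighted-norm interpolant: x = D^{-1} A^T (A D^{-1} A^T)^{-1} y
    x = w_inv * (A.T @ np.linalg.solve(G @ A.T + 1e-12 * np.eye(n), y))
    eps = max(0.7 * eps, 1e-12)             # anneal the smoothing parameter
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```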
- Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach [12.679411410749521]
We show that it is possible to parametrize the algorithm space for the lattice reduction problem with neural networks and find an algorithm without supervised data.
We design a deep neural model outputting factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases.
We show that this approach yields an algorithm with comparable complexity and performance to the Lenstra-Lenstra-Lovász algorithm on a set of benchmarks.
arXiv Detail & Related papers (2023-11-14T13:54:35Z)
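For the self-supervised objective described above, a hedged sketch of one natural penalty for non-orthogonal bases: the log orthogonality defect, which is zero exactly for orthogonal bases. The paper's actual model and loss may differ in detail.

```python
# Log orthogonality defect of a lattice basis: sum_i log||b_i|| - log|det B|.
# It is >= 0, with equality iff the basis vectors are mutually orthogonal.
import numpy as np

def log_orthogonality_defect(B):
    col_norms = np.linalg.norm(B, axis=0)   # columns are the basis vectors
    _, logabsdet = np.linalg.slogdet(B)
    return np.sum(np.log(col_norms)) - logabsdet

print(log_orthogonality_defect(np.array([[1.0, 0.9], [0.0, 0.1]])))  # skewed: > 0
print(log_orthogonality_defect(np.eye(2)))                           # orthonormal: 0.0
```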
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
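The fixed-point calculation at the heart of the Deep Equilibrium framework can be illustrated with a hedged sketch: the layer output solves z = f(z, x), found here by damped iteration with random placeholder weights (the paper pairs a learnable regularizer with a solver carrying convergence guarantees).

```python
# Deep-equilibrium-style forward pass: iterate z <- f(z, x) to a fixed point.
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = 0.2 * rng.standard_normal((d, d)) / np.sqrt(d)  # small norm => contraction
U = rng.standard_normal((d, d))
x = rng.standard_normal(d)

def f(z, x):
    return np.tanh(W @ z + U @ x)

z = np.zeros(d)
for i in range(500):
    z_next = 0.5 * z + 0.5 * f(z, x)        # damped fixed-point iteration
    if np.linalg.norm(z_next - z) < 1e-10:
        break
    z = z_next
print("iterations:", i, "residual:", np.linalg.norm(z - f(z, x)))
```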
- Stochastic Natural Thresholding Algorithms [18.131412357510158]
Natural Thresholding (NT) has been proposed with improved computational efficiency.
This paper proposes stochastic natural thresholding algorithms and establishes convergence guarantees by extending the deterministic version with linear measurements.
arXiv Detail & Related papers (2023-06-07T18:49:19Z)
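For context on thresholding-based sparse recovery, a hedged sketch of the classical iterative hard thresholding (IHT) baseline; natural thresholding replaces the hard-threshold step with a computationally cheaper rule, so this is the reference scheme, not the NT algorithm itself.

```python
# Iterative hard thresholding: x <- H_k(x + A^T (y - A x)), keeping the k
# largest-magnitude entries. Shown as a baseline, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 80, 200, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)    # roughly unit-norm columns
x_true = np.zeros(d)
x_true[:k] = rng.standard_normal(k)
y = A @ x_true

def hard_threshold(v, k):
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]   # indices of k largest magnitudes
    out[idx] = v[idx]
    return out

x = np.zeros(d)
for _ in range(200):
    x = hard_threshold(x + A.T @ (y - A @ x), k)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```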
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
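As a point of reference for the entry above, a hedged sketch of the textbook Sinkhorn iteration for entropy-regularized optimal transport; the paper itself develops an extragradient method with a different accuracy/complexity profile, so this is the baseline, not the proposed algorithm.

```python
# Sinkhorn iteration: alternately rescale rows and columns of K = exp(-C/eta)
# until the transport plan P matches the prescribed marginals mu and nu.
import numpy as np

rng = np.random.default_rng(0)
m = 6
mu = rng.random(m); mu /= mu.sum()          # source marginal
nu = rng.random(m); nu /= nu.sum()          # target marginal
C = rng.random((m, m))                      # ground cost matrix
eta = 0.05                                  # entropic regularization strength

K = np.exp(-C / eta)
u = np.ones(m)
for _ in range(2000):
    v = nu / (K.T @ u)
    u = mu / (K @ v)
P = u[:, None] * K * v[None, :]             # entropic transport plan
print("cost:", np.sum(P * C), "marginal error:", np.abs(P.sum(1) - mu).max())
```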
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
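One of the classical solvers the R2N2 superstructure is reported to recover is the Runge-Kutta method; a hedged sketch of a plain RK4 step, as the kind of fixed iteration such a trained network would imitate:

```python
# Classical fourth-order Runge-Kutta step for y' = f(t, y).
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: integrate y' = -y from y(0) = 1 to t = 1.
y, t, h = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y[0], np.exp(-1.0))                   # RK4 result vs. exact e^{-1}
```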
- AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks [12.229154524476405]
We develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights.
Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set.
Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods on classical benchmarks.
arXiv Detail & Related papers (2022-11-07T18:13:44Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
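The box over-approximation in the entry above can be illustrated with a hedged sketch of generic interval bound propagation through one affine-plus-ReLU layer; the paper's embedded-network construction for implicit networks is more involved, so treat this as the underlying idea only.

```python
# Interval (box) propagation: given x in [lo, hi], bound ReLU(W x + b)
# by splitting W into positive and negative parts.
import numpy as np

def affine_relu_box(W, b, lo, hi):
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    pre_lo = Wp @ lo + Wn @ hi + b          # smallest possible pre-activation
    pre_hi = Wp @ hi + Wn @ lo + b          # largest possible pre-activation
    return np.maximum(pre_lo, 0), np.maximum(pre_hi, 0)  # ReLU is monotone

rng = np.random.default_rng(0)
W, b = rng.standard_normal((4, 3)), rng.standard_normal(4)
lo, hi = -0.1 * np.ones(3), 0.1 * np.ones(3)
print(affine_relu_box(W, b, lo, hi))
```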
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.