Analytic gradients in variational quantum algorithms: Algebraic
extensions of the parameter-shift rule to general unitary transformations
- URL: http://arxiv.org/abs/2107.08131v4
- Date: Sun, 19 Dec 2021 19:37:14 GMT
- Title: Analytic gradients in variational quantum algorithms: Algebraic
extensions of the parameter-shift rule to general unitary transformations
- Authors: Artur F. Izmaylov, Robert A. Lang, and Tzu-Ching Yen
- Abstract summary: We propose several extensions of the parameter-shift rule that formulate gradients as linear combinations of expectation values for generators with a general eigen-spectrum.
Our approaches are exact and do not use any auxiliary qubits; instead, they rely on an analysis of the generator eigen-spectrum.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization of unitary transformations in Variational Quantum Algorithms
benefits greatly from efficient evaluation of cost-function gradients with
respect to the amplitudes of unitary generators. We propose several extensions
of the parameter-shift rule that formulate these gradients as linear
combinations of expectation values for generators with a general eigen-spectrum
(i.e. with more than two eigenvalues). Our approaches are exact and do not use
any auxiliary qubits; instead, they rely on an analysis of the generator
eigen-spectrum. The two main directions of the parameter-shift-rule extensions
are 1) polynomial expansion of the exponential unitary operator based on the
limited number of distinct eigenvalues of the generator and 2) decomposition of
the generator into a linear combination of low-eigenvalue operators (e.g.
operators with only 2 or 3 distinct eigenvalues). The number of expectation
values these techniques require scales with the number of generator eigenvalues
in a range from quadratic (for the polynomial expansion) to linear and even
$\log_2$ (for the generator decompositions). This allows us to propose
efficient differentiation schemes superior to previous approaches for commonly
used two-qubit transformations (e.g. match gates, transmon and fSim gates) and
$\hat S^2$-conserving fermionic operators for the variational quantum eigensolver.
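The two ingredients above can be sketched numerically. The snippet below is an illustrative reconstruction, not the paper's exact construction: it shows the standard two-term parameter-shift rule for a generator with eigenvalues {+1, -1}, and the eigen-spectrum-based polynomial expansion of the exponential (Cayley-Hamilton / Lagrange interpolation over the spectrum) that direction 1 builds on. The generator, observable, state, and three-eigenvalue spectrum are arbitrary choices made for the demonstration.

```python
import numpy as np

# Two-term parameter-shift rule for a generator with eigenvalues {+1, -1}.
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # generator G, eigenvalues +/-1
X = np.array([[0, 1], [1, 0]], dtype=complex)   # observable H

def cost(theta, psi):
    # U(theta) = exp(-i*theta*G/2); closed form because G^2 = I
    U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Z
    phi = U @ psi
    return float(np.real(phi.conj() @ X @ phi))

def parameter_shift_grad(theta, psi):
    # Exact gradient from just two shifted expectation values
    return 0.5 * (cost(theta + np.pi / 2, psi) - cost(theta - np.pi / 2, psi))

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
theta = 0.7
g_ps = parameter_shift_grad(theta, psi)
g_fd = (cost(theta + 1e-6, psi) - cost(theta - 1e-6, psi)) / 2e-6  # sanity check

# Direction 1 in miniature: for a generator with k distinct eigenvalues,
# exp(-i*t*G) is a degree-(k-1) polynomial in G (Lagrange interpolation
# over the spectrum), so a limited number of terms suffices.
lam = np.array([0.0, 1.0, 3.0])                    # assumed 3-value spectrum
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
G3 = Q @ np.diag(lam) @ Q.T                        # generator with spectrum lam

def expm_via_spectrum(t, G, lam):
    dim = len(lam)
    out = np.zeros((dim, dim), dtype=complex)
    for m, lm in enumerate(lam):
        proj = np.eye(dim, dtype=complex)          # spectral projector onto lm
        for n, ln in enumerate(lam):
            if n != m:
                proj = proj @ (G - ln * np.eye(dim)) / (lm - ln)
        out += np.exp(-1j * t * lm) * proj
    return out

exact = Q @ np.diag(np.exp(-1j * 0.37 * lam)) @ Q.T
```

Here the shift-rule gradient agrees with the finite-difference check to numerical precision, and the three-term spectral expansion reproduces the matrix exponential exactly; the paper's contribution is turning such expansions into shift rules whose expectation-value count scales from quadratic down to $\log_2$ in the number of eigenvalues.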
Related papers
- Efficient conversion from fermionic Gaussian states to matrix product states [48.225436651971805]
We propose a highly efficient algorithm that converts fermionic Gaussian states to matrix product states.
It can be formulated for finite-size systems without translation invariance, but becomes particularly appealing when applied to infinite systems.
The potential of our method is demonstrated by numerical calculations in two chiral spin liquids.
arXiv Detail & Related papers (2024-08-02T10:15:26Z) - Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction [55.57072563835959]
Spectral graph neural networks are characterized by their filters.
We propose an eigenvalue correction strategy that can free filters from the constraints of repeated eigenvalue inputs.
arXiv Detail & Related papers (2024-01-28T08:12:00Z) - Quantum eigenvalue processing [0.0]
Problems in linear algebra can be solved on a quantum computer by processing eigenvalues of the non-normal input matrices.
We present a Quantum EigenValue Transformation (QEVT) framework for applying arbitrary transformations on eigenvalues of block-encoded non-normal operators.
We also present a Quantum EigenValue Estimation (QEVE) algorithm for operators with real spectra.
arXiv Detail & Related papers (2024-01-11T19:49:31Z) - Accelerated Discovery of Machine-Learned Symmetries: Deriving the
Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z) - Noisy Tensor Ring approximation for computing gradients of Variational
Quantum Eigensolver for Combinatorial Optimization [33.12181620473604]
Variational Quantum algorithms have established their potential to provide computational advantage in the realm of optimization.
These algorithms suffer from classically intractable gradients, which limits their scalability.
This work proposes a classical gradient method which utilizes the parameter shift rule but computes the expected values from the circuits using a tensor ring approximation.
arXiv Detail & Related papers (2023-07-08T03:14:28Z) - Graph Positional Encoding via Random Feature Propagation [39.84324765957645]
Two main families of node feature augmentation schemes have been explored for enhancing GNNs.
We propose a novel family of positional encoding schemes which draws a link between the above two approaches.
We empirically demonstrate that the resulting random feature propagation (RFP) scheme significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks.
arXiv Detail & Related papers (2023-03-06T06:28:20Z) - Fourier-based quantum signal processing [0.0]
Implementing general functions of operators is a powerful tool in quantum computation.
Quantum signal processing is the state-of-the-art approach for this task.
We present an algorithm for Hermitian-operator function design from an oracle given by the unitary evolution.
arXiv Detail & Related papers (2022-06-06T18:02:30Z) - Generalized Inversion of Nonlinear Operators [6.191418251390628]
Inversion of operators is a fundamental concept in data processing.
Most notable is the Moore-Penrose inverse, widely used in physics, statistics, and various fields of engineering.
arXiv Detail & Related papers (2021-11-21T07:15:37Z) - Generalized quantum circuit differentiation rules [23.87373187143897]
Variational quantum algorithms that are used for quantum machine learning rely on the ability to automatically differentiate parametrized quantum circuits.
Here, we propose the rules for differentiating quantum circuits (unitaries) with arbitrary generators.
arXiv Detail & Related papers (2021-08-03T00:29:45Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can normalize a bilinear representation with respect to multiple objectives simultaneously.
arXiv Detail & Related papers (2020-03-30T08:40:35Z) - Supervised Quantile Normalization for Low-rank Matrix Approximation [50.445371939523305]
We learn the parameters of quantile normalization operators that can operate row-wise on the values of $X$ and/or of its factorization $UV$ to improve the quality of the low-rank representation of $X$ itself.
We demonstrate the applicability of these techniques on synthetic and genomics datasets.
arXiv Detail & Related papers (2020-02-08T21:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.