Toward Trainability of Deep Quantum Neural Networks
- URL: http://arxiv.org/abs/2112.15002v2
- Date: Mon, 26 Sep 2022 10:34:24 GMT
- Title: Toward Trainability of Deep Quantum Neural Networks
- Authors: Kaining Zhang and Min-Hsiu Hsieh and Liu Liu and Dacheng Tao
- Abstract summary: Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Neural Networks (QNNs) with random structures have poor trainability
due to the exponentially vanishing gradient as the circuit depth and the qubit
number increase. This result leads to a general belief that a deep QNN will not
be feasible. In this work, we provide the first viable solution to the
vanishing gradient problem for deep QNNs with theoretical guarantees.
Specifically, we prove that for circuits with controlled-layer architectures,
the expectation of the gradient norm can be lower bounded by a value that is
independent of the qubit number and the circuit depth. Our results follow from
a careful analysis of the gradient behaviour on the parameter space of
rotation angles, as employed in almost all QNNs, instead of relying on
impractical 2-design assumptions. We explicitly construct examples where only
our QNNs are trainable and converge, while others in comparison cannot.
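The vanishing-gradient (barren plateau) phenomenon the abstract describes can be observed numerically. The sketch below is a hypothetical illustration, not the paper's controlled-layer architecture: it simulates a generic hardware-efficient ansatz of RY rotations and CZ entanglers in plain NumPy, computes the gradient of ⟨Z on qubit 0⟩ with the parameter-shift rule, and samples the gradient's variance over random parameters. For random circuits of this kind, that variance typically shrinks as the qubit number grows, which is the trainability obstacle the paper addresses.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate RY(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2 (sign flip on |11>)."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = 1
    idx[q2] = 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def circuit_expectation(params, n, depth):
    """<Z_0> after `depth` layers of per-qubit RY rotations + CZ chain."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    k = 0
    for _ in range(depth):
        for q in range(n):
            state = apply_single(state, ry(params[k]), q, n)
            k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state.reshape([2] * n)) ** 2
    return probs[0].sum() - probs[1].sum()  # P(q0=0) - P(q0=1)

def param_shift_grad(params, n, depth, j):
    """Exact gradient w.r.t. parameter j via the parameter-shift rule."""
    p_plus, p_minus = params.copy(), params.copy()
    p_plus[j] += np.pi / 2
    p_minus[j] -= np.pi / 2
    return 0.5 * (circuit_expectation(p_plus, n, depth)
                  - circuit_expectation(p_minus, n, depth))

# Sample the gradient variance over random parameter draws.
grad_vars = {}
rng = np.random.default_rng(0)
depth = 8
for n in (2, 4, 6):
    grads = [param_shift_grad(rng.uniform(0, 2 * np.pi, size=n * depth),
                              n, depth, j=0)
             for _ in range(50)]
    grad_vars[n] = np.var(grads)
    print(f"{n} qubits: Var[dE/dtheta_0] = {grad_vars[n]:.5f}")
```

The parameter-shift rule is exact here because RY's generator has eigenvalues ±1/2; no finite-difference approximation is involved. The ansatz, depth, and sample count are illustrative choices, not taken from the paper.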