Efficient classical computation of the neural tangent kernel of quantum neural networks
- URL: http://arxiv.org/abs/2508.04498v1
- Date: Wed, 06 Aug 2025 14:48:01 GMT
- Title: Efficient classical computation of the neural tangent kernel of quantum neural networks
- Authors: Anderson Melchor Hernandez, Davide Pastorello, Giacomo De Palma
- Abstract summary: We propose an efficient algorithm to estimate the Neural Tangent Kernel (NTK) associated with a broad class of quantum neural networks. These networks consist of arbitrary unitary operators interleaved with parametric gates given by the time evolution generated by an arbitrary Hamiltonian.
- Score: 3.7498611358320733
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an efficient classical algorithm to estimate the Neural Tangent Kernel (NTK) associated with a broad class of quantum neural networks. These networks consist of arbitrary unitary operators belonging to the Clifford group interleaved with parametric gates given by the time evolution generated by an arbitrary Hamiltonian belonging to the Pauli group. The proposed algorithm leverages a key insight: the average over the distribution of initialization parameters in the NTK definition can be exactly replaced by an average over just four discrete values, chosen such that the corresponding parametric gates are Clifford operations. This reduction enables an efficient classical simulation of the circuit. Combined with recent results establishing the equivalence between wide quantum neural networks and Gaussian processes [Girardi \emph{et al.}, Comm. Math. Phys. 406, 92 (2025); Melchor Hernandez \emph{et al.}, arXiv:2412.03182], our method enables efficient computation of the expected output of wide, trained quantum neural networks, and therefore shows that such networks cannot achieve quantum advantage.
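The key insight of the abstract, replacing the continuous average over initialization angles with an average over just four discrete angles at which the parametric gates become Clifford operations, can be illustrated on a toy single-qubit example. The sketch below is not the paper's algorithm (which runs a classical Clifford simulation of multi-qubit circuits); it only checks numerically that, for an expectation value that is a degree-one trigonometric polynomial in the rotation angle, the squared parameter gradient entering the NTK has the same average over a uniform angle distribution as over the four Clifford angles $\{0, \pi/2, \pi, 3\pi/2\}$.

```python
# Toy check (illustrative, not the paper's algorithm): for a single Pauli
# rotation exp(-i*theta*Z/2) applied to |+>, the expectation of X is
# f(theta) = cos(theta), a degree-1 trigonometric polynomial. Averaging any
# trig polynomial of degree <= 3 over four equispaced angles reproduces the
# continuous uniform average exactly, so the NTK-style squared gradient can
# be averaged over Clifford angles only.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def f(theta):
    """Expectation <psi(theta)| X |psi(theta)>, psi = exp(-i*theta*Z/2)|+>."""
    U = np.diag(np.exp(-1j * theta / 2 * np.array([1, -1])))  # = exp(-i*theta*Z/2)
    psi = U @ plus
    return (psi.conj() @ X @ psi).real

def grad_sq(theta, eps=1e-6):
    """Squared parameter gradient, the quantity averaged in the NTK."""
    return ((f(theta + eps) - f(theta - eps)) / (2 * eps)) ** 2

# Continuous average over uniform theta in [0, 2*pi) vs. four Clifford angles.
thetas = np.linspace(0, 2 * np.pi, 20001)[:-1]
continuous = np.mean([grad_sq(t) for t in thetas])
clifford = np.mean([grad_sq(t) for t in (0, np.pi / 2, np.pi, 3 * np.pi / 2)])
print(continuous, clifford)  # both are 0.5 up to numerical error
```

Both averages agree, and this exactness is what makes the discrete average efficiently computable: every circuit appearing in it is a Clifford circuit and hence classically simulable.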
Related papers
- Quantum-Enhanced Weight Optimization for Neural Networks Using Grover's Algorithm [0.0]
We propose to use quantum computing in order to optimize the weights of a classical NN. We design an instance of Grover's quantum search algorithm to accelerate the search for the optimal parameters of an NN. Our method requires a much smaller number of qubits compared to other QNN approaches.
arXiv Detail & Related papers (2025-04-20T10:59:04Z) - Quantum Graph Convolutional Networks Based on Spectral Methods [10.250921033123152]
Graph Convolutional Networks (GCNs) are specialized neural networks for feature extraction from graph-structured data. This paper introduces an enhancement to GCNs based on spectral methods by integrating quantum computing techniques.
arXiv Detail & Related papers (2025-03-09T05:08:15Z) - Challenges and opportunities in the supervised learning of quantum circuit outputs [0.0]
Deep neural networks have proven capable of predicting some output properties of relevant random quantum circuits.
We investigate if and to what extent neural networks can learn to predict the output expectation values of circuits often employed in variational quantum algorithms.
arXiv Detail & Related papers (2024-02-07T16:10:13Z) - Universal Approximation Theorem and error bounds for quantum neural networks and quantum reservoirs [2.07180164747172]
We provide here precise error bounds for specific classes of functions and extend these results to the interesting new setup of randomised quantum circuits. Our results show in particular that a quantum neural network with $\mathcal{O}(\varepsilon^{-2})$ weights and $\mathcal{O}(\lceil \log_2(\varepsilon^{-1}) \rceil)$ qubits suffices to achieve accuracy $\varepsilon>0$ when approximating functions with integrable Fourier transform.
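A back-of-the-envelope reading of these bounds: the qubit count grows only logarithmically in the inverse target accuracy, while the weight count grows quadratically in it. The helper below is purely illustrative: it sets every constant hidden by the $\mathcal{O}$ notation to 1 (so the numbers are not values from the paper) and uses a power-of-two accuracy so that the floating-point arithmetic is exact.

```python
# Illustrative scaling of the stated bounds: O(eps^-2) weights and
# O(ceil(log2(1/eps))) qubits for target accuracy eps. All O(.) constants
# are set to 1 here; the absolute numbers are not from the paper.
import math

def resource_estimate(eps):
    """Return (qubits, weights) implied by the stated asymptotic bounds."""
    qubits = math.ceil(math.log2(1.0 / eps))
    weights = math.ceil(eps ** -2)
    return qubits, weights

# eps = 2**-7 keeps the arithmetic exact in floating point.
print(resource_estimate(2 ** -7))  # (7, 16384)
```

Halving the accuracy target adds one qubit but quadruples the weight count, which is the asymmetry the bound expresses.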
arXiv Detail & Related papers (2023-07-24T15:52:33Z) - Riemannian quantum circuit optimization for Hamiltonian simulation [2.1227079314039057]
Hamiltonian simulation is a natural application of quantum computing.
For translation invariant systems, the gates in such circuit topologies can be further optimized on classical computers.
For the Ising and Heisenberg models on a one-dimensional lattice, we achieve orders of magnitude accuracy improvements.
arXiv Detail & Related papers (2022-12-15T00:00:17Z) - Iterative Qubit Coupled Cluster using only Clifford circuits [36.136619420474766]
An ideal state preparation protocol can be characterized by being easily generated classically.
We propose a method that meets these requirements by introducing a variant of the iterative qubit coupled cluster (iQCC) approach.
We demonstrate the algorithm's correctness in ground-state simulations and extend our study to complex systems like the titanium-based compound Ti(C5H5)(CH3)3 with a (20, 20) active space.
arXiv Detail & Related papers (2022-11-18T20:31:10Z) - Automatic and effective discovery of quantum kernels [41.61572387137452]
Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data. We present an approach to this problem, which employs optimization techniques similar to those used in neural architecture search and AutoML. The results obtained by testing our approach on a high-energy physics problem demonstrate that, in the best-case scenario, we can either match or improve testing accuracy with respect to the manual design approach.
arXiv Detail & Related papers (2022-09-22T16:42:14Z) - Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
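The idea of an $\ell_\infty$-norm box over-approximation can be sketched with plain interval arithmetic. The function below is a generic interval-bound propagation step for one affine-plus-ReLU layer, not the paper's embedded-network construction (which is a more refined, contraction-based analysis); the weights, bias, and input box are made-up illustration values.

```python
# Generic interval-arithmetic sketch of an l_inf box over-approximation for
# one affine + ReLU layer. This illustrates box propagation only; it is not
# the embedded-network construction of the paper.
import numpy as np

def relu_layer_box(W, b, lo, hi):
    """Given an input box [lo, hi], return a box containing relu(W @ x + b)
    for every x in the input box."""
    # Split W by sign so each output bound uses the worst-case input bound.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = W_pos @ lo + W_neg @ hi + b
    pre_hi = W_pos @ hi + W_neg @ lo + b
    # ReLU is monotone, so it maps the box endpoints to the output bounds.
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)

# Made-up layer and an l_inf ball of radius 0.1 around the origin.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.0, -1.0])
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
out_lo, out_hi = relu_layer_box(W, b, lo, hi)
print(out_lo, out_hi)
```

Every actual output of the layer on the input box is guaranteed to lie inside `[out_lo, out_hi]`; the looseness of such boxes is exactly what sharper constructions like the paper's aim to reduce.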
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d=\omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z) - Quantum-enhanced neural networks in the neural tangent kernel framework [0.4394730767364254]
We study a class of quantum-classical neural networks (qcNNs) composed of a quantum data-encoder followed by a classical neural network (cNN).
In the NTK regime, where the number of nodes of the cNN becomes infinitely large, the output of the entire qcNN becomes a nonlinear function of the so-called projected quantum kernel.
arXiv Detail & Related papers (2021-09-08T17:16:23Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear second-order RNNs (2-RNNs) defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z) - Searching for Low-Bit Weights in Quantized Neural Networks [129.8319019563356]
Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators.
We propose to regard the discrete weights in an arbitrary quantized neural network as searchable variables, and utilize a differential method to search for them accurately.
arXiv Detail & Related papers (2020-09-18T09:13:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.