A Unified Frequency Principle for Quantum and Classical Machine Learning
- URL: http://arxiv.org/abs/2601.03169v1
- Date: Tue, 06 Jan 2026 16:44:22 GMT
- Title: A Unified Frequency Principle for Quantum and Classical Machine Learning
- Authors: Rundi Lu, Ruiqi Zhang, Weikang Li, Zhaohui Wei, Dong-Ling Deng, Zhengwei Liu
- Abstract summary: We present a unified theoretical framework for the frequency principle (F-principle) that characterizes the training dynamics of quantum neural networks. Within this framework, we prove that quantum neural networks exhibit a spectral bias toward learning low-frequency components of target functions. Our results provide a frequency-domain lens that unifies classical and quantum learning dynamics, clarifies the role of noise in shaping trainability, and guides the design of noise-resilient quantum neural networks.
- Score: 9.529771617722703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum neural networks constitute a key class of near-term quantum learning models, yet their training dynamics remain not fully understood. Here, we present a unified theoretical framework for the frequency principle (F-principle) that characterizes the training dynamics of both classical and quantum neural networks. Within this framework, we prove that quantum neural networks exhibit a spectral bias toward learning low-frequency components of target functions, mirroring the behavior observed in classical deep networks. We further analyze the impact of noise and show that, when single-qubit noise is applied after encoding-layer rotations and modeled as a Pauli channel aligned with the rotation axis, the Fourier component labeled by $\boldsymbol{\omega}$ is suppressed by a factor $(1-2\gamma)^{\|\boldsymbol{\omega}\|_1}$. This leads to exponential attenuation of high-frequency terms while preserving the learnability of low-frequency structure. In the same setting, we establish that the resulting noisy circuits admit efficient classical simulation up to average-case error. Numerical experiments corroborate our theoretical predictions: Quantum neural networks primarily learn low-frequency features during early optimization and maintain robustness against dephasing and depolarizing noise acting on the encoding layer. Our results provide a frequency-domain lens that unifies classical and quantum learning dynamics, clarifies the role of noise in shaping trainability, and guides the design of noise-resilient quantum neural networks.
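To make the suppression factor concrete, here is a minimal numerical sketch of the attenuation law quoted in the abstract, assuming a simple one-dimensional frequency lattice; the quantum circuits themselves are not modeled, and the flat clean spectrum is an arbitrary choice.

```python
# Minimal sketch of the noise-attenuation result quoted above: each Fourier
# coefficient c_w of the quantum model is damped by (1 - 2*gamma)**||w||_1
# under axis-aligned Pauli noise of strength gamma. The model below is a
# toy 1D Fourier series, not the paper's circuits (assumption).
import numpy as np

gamma = 0.05                        # single-qubit Pauli error rate (assumed)
freqs = np.arange(0, 9)             # ||w||_1 values on a 1D frequency lattice
damping = (1 - 2 * gamma) ** freqs  # suppression factor per coefficient

for w, d in zip(freqs, damping):
    print(f"||w||_1 = {w}:  coefficient retained = {d:.4f}")

# Effect on a toy target: high-frequency content decays exponentially,
# while low-frequency structure survives almost untouched.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
coeffs = np.ones_like(freqs, dtype=float)  # flat clean spectrum (assumed)
clean = sum(c * np.cos(w * x) for w, c in zip(freqs, coeffs))
noisy = sum(c * d * np.cos(w * x) for w, c, d in zip(freqs, coeffs, damping))
print("relative loss of the w=8 mode:", 1 - damping[-1])
```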
Related papers
- Quantum Noise Tomography with Physics-Informed Neural Networks [0.15229257192293197]
We introduce a novel framework for performing Lindblad tomography using Physics-Informed Neural Networks. Our method produces a fully-differentiable digital twin of a noisy quantum system by learning its governing master equation.
arXiv Detail & Related papers (2025-09-15T13:30:50Z)
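For the Lindblad-tomography entry above, a hedged sketch of the master-equation structure such a physics-informed network would learn: this is only the Lindblad right-hand side in NumPy, not the paper's PINN, and the single-qubit Hamiltonian and dephasing rate are illustrative assumptions.

```python
# The Lindblad right-hand side that a physics-informed network could treat
# as its training residual. The PINN itself is not reproduced here.
import numpy as np

def lindblad_rhs(rho, H, jumps, rates):
    """d(rho)/dt = -i[H, rho] + sum_k g_k (L rho L+ - 1/2 {L+L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for L, g in zip(jumps, rates):
        LdL = L.conj().T @ L
        drho += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho

# Single qubit with dephasing (illustrative parameters, not fitted ones).
sz = np.diag([1.0, -1.0]).astype(complex)
H = 0.5 * sz                                   # assumed Hamiltonian
rho0 = 0.5 * np.ones((2, 2), dtype=complex)    # |+><+| initial state
print(lindblad_rhs(rho0, H, jumps=[sz], rates=[0.1]))
```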
- Benchmarking a Tunable Quantum Neural Network on Trapped-Ion and Superconducting Hardware [0.0]
We implement a quantum generalization of a neural network on trapped-ion and IBM superconducting quantum computers. The network feedforward involves qubit rotations whose angles depend on the results of measurements in the previous layer. We benchmark physical noise by inserting additional single-qubit and two-qubit gate pairs into the neural network circuits.
arXiv Detail & Related papers (2025-07-28T18:00:03Z)
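The measurement-conditioned feedforward described in the entry above can be illustrated with a toy classical simulation; the gate set, the feedforward rule, and all angles below are assumptions for illustration, not the benchmarked network.

```python
# Toy simulation of measurement feedforward: the rotation angle applied in
# layer l+1 depends on the measurement outcome in layer l.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def measure_z(state):
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

state = np.array([1.0, 0.0])          # layer-1 qubit in |0>
theta = np.pi / 3                     # initial rotation angle (assumed)
for layer in range(3):
    state = ry(theta) @ state
    outcome = measure_z(state)
    # Feedforward rule (illustrative): halve or flip the next angle.
    theta = theta / 2 if outcome == 0 else np.pi - theta
    state = np.array([1.0, 0.0])      # fresh qubit for the next layer
    print(f"layer {layer}: outcome={outcome}, next theta={theta:.3f}")
```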
- VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning [50.95799256262098]
Variational quantum circuits (VQCs) hold promise for quantum machine learning but face challenges in expressivity, trainability, and noise resilience. We propose VQC-MLPNet, a hybrid architecture where a VQC generates the first-layer weights of a classical multilayer perceptron during training, while inference is performed entirely classically.
arXiv Detail & Related papers (2025-06-12T01:38:15Z)
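A minimal sketch of the hybrid pattern the VQC-MLPNet entry describes, with the variational quantum circuit replaced by a stand-in function (`fake_vqc_expectations` is an invention for illustration, not the paper's circuit): circuit expectation values are reshaped into the MLP's first-layer weights, after which the forward pass is entirely classical.

```python
# Hybrid pattern: a (here: faked) VQC produces expectation values that are
# reshaped into the first-layer weight matrix of an otherwise classical MLP.
import numpy as np

def fake_vqc_expectations(params):
    # Stand-in for <psi(params)| Z_i |psi(params)>; bounded in [-1, 1].
    return np.tanh(np.cos(params) + 0.1 * params)

in_dim, hidden, out_dim = 4, 3, 2
params = np.linspace(0.0, 2.0, in_dim * hidden)   # trainable circuit angles
W1 = fake_vqc_expectations(params).reshape(hidden, in_dim)
W2 = np.ones((out_dim, hidden)) * 0.1             # ordinary classical layer

def forward(x):
    h = np.maximum(0.0, W1 @ x)                   # ReLU hidden layer
    return W2 @ h                                 # purely classical inference

print(forward(np.array([1.0, -0.5, 0.2, 0.7])))
```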
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
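For the FNO entry above, a sketch of the operator's core step in 1D, assuming observable traces as inputs; the number of retained modes and all shapes are arbitrary choices, not the paper's configuration.

```python
# Core FNO operation in 1D: transform a signal of observable values to
# Fourier space, act with learned weights on the lowest modes, transform back.
import numpy as np

def spectral_conv1d(u, weights, modes):
    """u: (n,) real signal; weights: (modes,) complex; keep low modes only."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights * u_hat[:modes]   # learned mixing of low modes
    return np.fft.irfft(out_hat, n=u.size)

n, modes = 64, 8
rng = np.random.default_rng(1)
weights = rng.normal(size=modes) + 1j * rng.normal(size=modes)
u = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))  # toy observable trace
print(spectral_conv1d(u, weights, modes)[:4])
```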
- Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks [1.5124439914522694]
We introduce a theoretical framework that explains the capacity property of sinusoidal networks. We show how their layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies. Our method, referred to as TUNER, greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions.
arXiv Detail & Related papers (2024-07-30T18:24:46Z)
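The integer-combination claim in the TUNER entry above can be checked numerically: composing two sine layers produces harmonics at integer multiples of the input frequency (odd multiples, by the Jacobi-Anger expansion). The frequency and amplitude below are arbitrary choices.

```python
# Numerical check: composing sinusoidal layers creates harmonics at integer
# combinations of the input frequencies. Here, sin(a*sin(w*x)) puts energy
# at odd multiples of w.
import numpy as np

n = 1024
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
w, a = 3, 2.0                              # input frequency and amplitude
y = np.sin(a * np.sin(w * x))              # two composed sine "layers"

spectrum = np.abs(np.fft.rfft(y)) / n
peaks = np.nonzero(spectrum > 1e-3)[0]
print("active frequencies:", peaks)        # odd multiples of w: 3, 9, 15
```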
- Synergy between noisy quantum computers and scalable classical deep learning [0.4999814847776097]
We investigate the potential of combining the computational power of noisy quantum computers and classical scalable convolutional neural networks (CNNs).
The goal is to accurately predict exact expectation values of parameterized quantum circuits representing the Trotter-decomposed dynamics of quantum Ising models.
Thanks to the quantum information, our CNNs succeed even when supervised learning based only on classical descriptors fails.
arXiv Detail & Related papers (2024-04-11T14:47:18Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN (large-kernel convolutional neural network) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning. We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating these aspects of deep machine learning.
arXiv Detail & Related papers (2022-12-22T16:06:24Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
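The spectral-bias statement in the PNN entry above has a quick empirical analogue for an ordinary ReLU network: the leading eigenvectors of the empirical NTK oscillate slowly, which is why gradient descent fits low frequencies first. Width, scales, and grid below are arbitrary choices, not taken from the paper.

```python
# Empirical NTK of a tiny 1D ReLU network: its top eigenvectors have few
# sign changes (low frequency), illustrating spectral bias.
import numpy as np

rng = np.random.default_rng(0)
width, n = 512, 128
x = np.linspace(-1, 1, n)
w = rng.normal(size=width)
b = rng.normal(size=width)
v = rng.normal(size=width) / np.sqrt(width)

pre = np.outer(x, w) + b                  # (n, width) pre-activations
act = np.maximum(pre, 0.0)                # ReLU features
dact = (pre > 0).astype(float)            # ReLU derivative

# Jacobian of f(x) = v . relu(w*x + b) w.r.t. (v, w, b), stacked per input.
J = np.concatenate([act, dact * v * x[:, None], dact * v], axis=1)
K = J @ J.T                               # empirical NTK Gram matrix

eigvals, eigvecs = np.linalg.eigh(K)      # ascending eigenvalues
for i in range(1, 4):                     # top three eigenvectors
    vec = eigvecs[:, -i]
    crossings = np.sum(np.diff(np.sign(vec)) != 0)
    print(f"eigenvalue {eigvals[-i]:.2f}: {crossings} sign changes")
```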
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
Beyond opening a new perspective and new techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z)
- Learning the ground state of a non-stoquastic quantum Hamiltonian in a rugged neural network landscape [0.0]
We investigate a class of universal variational wave-functions based on artificial neural networks.
In particular, we show that in the present setup the neural network expressivity and Monte Carlo sampling are not primary limiting factors.
arXiv Detail & Related papers (2020-11-23T05:25:47Z)
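As a pointer to what "universal variational wave-functions based on artificial neural networks" look like in practice, here is a minimal restricted-Boltzmann-machine ansatz; the parameters are random rather than optimized, and this common architecture is an assumption, not necessarily the one used in the paper.

```python
# A restricted Boltzmann machine assigning an (unnormalized) amplitude to
# each spin configuration, as used in neural-network variational Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_hidden = 6, 4
a = rng.normal(scale=0.1, size=n_spins)            # visible biases
b = rng.normal(scale=0.1, size=n_hidden)           # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_spins))

def log_psi(s):
    """log of the RBM amplitude for spins s in {-1, +1}^n."""
    return a @ s + np.sum(np.log(2 * np.cosh(b + W @ s)))

s = rng.choice([-1, 1], size=n_spins)              # one Monte Carlo sample
print("configuration:", s, " log|psi| =", log_psi(s))
```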