QuIRK: Quantum-Inspired Re-uploading KAN
- URL: http://arxiv.org/abs/2510.08650v2
- Date: Fri, 17 Oct 2025 20:55:12 GMT
- Title: QuIRK: Quantum-Inspired Re-uploading KAN
- Authors: Vinayak Sharma, Ashish Padhy, Lord Sen, Vijay Jagdish Karanjkar, Sourav Behera, Shyamapada Mukherjee, Aviral Shrivastava
- Abstract summary: Kolmogorov-Arnold Networks, or KANs, have shown the ability to outperform classical Deep Neural Networks. This paper introduces a quantum-inspired variant of the KAN based on Quantum Data Re-uploading (DR) models.
- Score: 0.8242907479435281
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Kolmogorov-Arnold Networks, or KANs, have shown the ability to outperform classical Deep Neural Networks while using far fewer trainable parameters on regression problems in scientific domains. Even more powerful has been their interpretability: because their structure is composed of univariate B-spline functions, closed-form equations can be derived from trained KANs for a wide range of problems. This paper introduces a quantum-inspired variant of the KAN based on Quantum Data Re-uploading (DR) models. The Quantum-Inspired Re-uploading KAN (QuIRK) replaces B-splines with single-qubit DR models as the univariate function approximators, allowing it to match or outperform traditional KANs while using even fewer parameters; this is especially apparent for periodic functions. Additionally, since the model uses only single-qubit circuits, it remains classically tractable to simulate, with straightforward GPU acceleration. Finally, we also demonstrate that QuIRK retains KANs' interpretability advantages and the ability to produce closed-form solutions.
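To make the construction concrete, here is a minimal NumPy sketch of the core idea: a single-qubit data re-uploading circuit used as a learnable univariate function on a KAN edge, in place of a B-spline. The layer count, RY encoding, and Pauli-Z readout below are illustrative assumptions rather than the paper's exact circuit; the point is that the whole computation is 2x2 linear algebra, hence classically tractable.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def dr_activation(x, weights, biases):
    """Single-qubit data re-uploading model used as a univariate function.

    Each layer re-encodes the scalar input x as a rotation angle
    w_l * x + b_l; the output is the Pauli-Z expectation of the final
    state, so everything reduces to 2x2 matrix products.
    """
    state = np.array([1.0, 0.0])            # |0>
    for w, b in zip(weights, biases):
        state = ry(w * x + b) @ state       # re-upload x at every layer
    return state[0] ** 2 - state[1] ** 2    # <Z> = |amp0|^2 - |amp1|^2

# One such univariate model per KAN edge (illustrative initialization).
rng = np.random.default_rng(0)
L = 4                                        # number of re-uploading layers
w, b = rng.normal(size=L), rng.normal(size=L)
xs = np.linspace(-np.pi, np.pi, 5)
print([round(dr_activation(x, w, b), 3) for x in xs])
```

Because the output is a trigonometric polynomial in x, periodic targets are a natural fit, consistent with the abstract's remark about periodic functions.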
Related papers
- Q-RUN: Quantum-Inspired Data Re-uploading Networks [9.564540024568245]
Data re-uploading quantum circuits (DRQC) are a key approach to implementing quantum neural networks. We introduce the mathematical paradigm of DRQC into classical models by proposing a quantum-inspired data re-uploading network (Q-RUN). Q-RUN retains the Fourier-expressive advantages of quantum models without any quantum hardware.
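The Fourier expressivity referenced above has a standard closed form: a single-qubit re-uploading model with $L$ angle-encoding layers computes a truncated Fourier series in its input, with all trainable parameters living in the coefficients (the angle encoding and Pauli-Z readout are assumed conventions here):

```latex
f_{\theta}(x)
= \langle 0 |\, U^{\dagger}(x,\theta)\, Z\, U(x,\theta)\, | 0 \rangle
= \sum_{k=-L}^{L} c_k(\theta)\, e^{\mathrm{i} k x},
\qquad c_{-k} = \overline{c_k}.
```

A classical network that parameterizes the same coefficient family therefore reproduces the model without quantum hardware, which is the essence of the Q-RUN construction as summarized here.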
arXiv Detail & Related papers (2025-12-18T04:12:09Z) - Quantum Variational Activation Functions Empower Kolmogorov-Arnold Networks [33.03489665216646]
Variational quantum circuits (VQCs) are central to quantum machine learning. Recent progress in Kolmogorov-Arnold networks (KANs) highlights the power of learnable activation functions. We introduce quantum variational activation functions (QVAFs), realized through single-qubit data re-uploading circuits called DatA Re-Uploading ActivatioNs (DARUANs).
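For the circuit view of a DARUAN-style activation, a short PennyLane sketch may help; the Rot/RY gate choice and layer count are assumptions for illustration rather than the paper's exact design:

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def daruan(x, params):
    """Single-qubit re-uploading activation: trainable Rot gates
    interleaved with RY rotations that re-encode the scalar input."""
    for p in params:               # params: (L, 3) Euler angles per layer
        qml.Rot(*p, wires=0)       # trainable single-qubit block
        qml.RY(x, wires=0)         # re-upload the input
    return qml.expval(qml.PauliZ(0))

params = np.random.default_rng(1).normal(size=(3, 3))
print(daruan(0.5, params))
```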
arXiv Detail & Related papers (2025-09-17T14:28:42Z) - QuKAN: A Quantum Circuit Born Machine approach to Quantum Kolmogorov Arnold Networks [2.8192469953126262]
Kolmogorov Arnold Networks (KANs) have demonstrated promising capabilities in expressing complex functions with fewer neurons. We present an implementation of these KAN architectures in both hybrid and fully quantum forms using a Quantum Circuit Born Machine (QCBM). We demonstrate the feasibility, interpretability and performance of the proposed Quantum KAN (QuKAN) architecture.
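A Quantum Circuit Born Machine treats the measurement distribution of a parameterized circuit as a generative model. A toy NumPy illustration of the Born-rule sampling step, with the circuit output replaced by a random normalized state (QuKAN's actual circuits are not shown in this snippet):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                     # qubits -> 2**n basis states

# Stand-in for the output state of a parameterized circuit.
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

probs = np.abs(psi) ** 2                  # Born rule: p(x) = |<x|psi>|^2
samples = rng.choice(2**n, size=8, p=probs)
print([format(s, f"0{n}b") for s in samples])   # sampled bitstrings
```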
arXiv Detail & Related papers (2025-06-27T15:51:19Z) - Probing Quantum Spin Systems with Kolmogorov-Arnold Neural Network Quantum States [0.0]
We propose SineKAN, a neural network model to represent quantum mechanical wave functions. We find that SineKAN models can be trained to high precision and accuracy with minimal computational cost.
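A plausible minimal sketch of the model family in NumPy: each learnable univariate function is a small sum of sines, the sine analogue of a KAN's B-spline edge (the exact parameterization below is an assumption, not taken from the paper):

```python
import numpy as np

def sine_edge(x, amps, freqs, phases):
    """SineKAN-style learnable univariate function: a sum of sines."""
    return np.sum(amps * np.sin(freqs * x + phases))

rng = np.random.default_rng(0)
K = 8                                      # number of sine basis terms
amps, freqs, phases = rng.normal(size=(3, K))
print(sine_edge(0.3, amps, freqs, phases))
```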
arXiv Detail & Related papers (2025-06-02T17:18:40Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
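For context, the mechanism of an FNO layer, sketched in 1-D with PyTorch: transform to Fourier space, linearly mix only the lowest modes, transform back. The channel count and mode cutoff are placeholders, and the paper's actual architecture may differ:

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Core FNO operation: FFT -> mix the lowest Fourier modes -> inverse FFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = torch.nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                 # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)          # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bcm,com->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to the grid

layer = SpectralConv1d(channels=4, modes=8)
print(layer(torch.randn(2, 4, 64)).shape)   # torch.Size([2, 4, 64])
```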
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
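The POD reduction step described above has a compact linear-algebra form; here is a NumPy sketch with synthetic snapshot data standing in for the reacting-flow fields (in the real pipeline, a network then learns to advance the temporal coefficients):

```python
import numpy as np

# Synthetic snapshot matrix: each column is the flow state at one time.
x = np.linspace(0, 2 * np.pi, 400)
t = np.linspace(0, 10, 200)
snapshots = (np.sin(x[:, None] - t[None, :])             # traveling wave
             + 0.3 * np.sin(3 * x[:, None] + 2 * t[None, :]))

# POD via thin SVD: spatial modes in U, temporal content in S and Vt.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4                                     # retained POD modes
modes = U[:, :r]                          # spatial basis, (n_dof, r)
coeffs = np.diag(S[:r]) @ Vt[:r]          # temporal coefficients, (r, n_t)

# A ROM advances `coeffs` in time (two NN architectures in the paper)
# and reconstructs full fields as modes @ coeffs.
recon = modes @ coeffs
print(np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots))
```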
arXiv Detail & Related papers (2023-01-24T08:39:20Z) - Are Quantum Circuits Better than Neural Networks at Learning
Multi-dimensional Discrete Data? An Investigation into Practical Quantum
Circuit Generative Models [0.0]
We show that multi-layer parameterized quantum circuits (MPQCs) are more expressive than classical neural networks (NNs).
We organize available sources into a systematic proof of why MPQCs are able to generate probability distributions that cannot be efficiently simulated classically.
We address practical issues such as how to efficiently train a quantum circuit with only limited samples, how to efficiently calculate the gradient on quantum hardware, and how to alleviate mode collapse.
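The quantum gradient computation mentioned above is typically handled with the parameter-shift rule: for a gate $e^{-i\theta P/2}$ generated by a Pauli operator $P$, the derivative of an expectation value is exact using two shifted circuit evaluations (it is not stated in this snippet whether the paper uses precisely this rule):

```latex
\frac{\partial \langle O \rangle}{\partial \theta}
= \frac{1}{2}\left[\langle O \rangle\!\left(\theta + \tfrac{\pi}{2}\right)
                 - \langle O \rangle\!\left(\theta - \tfrac{\pi}{2}\right)\right].
```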
arXiv Detail & Related papers (2022-12-13T05:31:31Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed
Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
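For reference, the heavy-ball momentum dynamics analyzed in such NTK studies take the standard form (notation assumed here, not quoted from the paper):

```latex
v_{t+1} = \beta\, v_t + \nabla_{\theta} L(\theta_t), \qquad
\theta_{t+1} = \theta_t - \eta\, v_{t+1},
```

with momentum coefficient $\beta$ and learning rate $\eta$.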
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Neural Operator with Regularity Structure for Modeling Dynamics Driven
by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors built from the theory of regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on a variety of SPDEs, including the dynamic $\Phi^4_1$ model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z) - Neural Stochastic Partial Differential Equations [1.2183405753834562]
We introduce the Neural SPDE model, providing an extension to two important classes of physics-inspired neural architectures.
On the one hand, it extends all the popular neural differential equation models (ordinary, controlled, rough) in that it is capable of processing incoming information.
On the other hand, it extends Neural Operators -- recent generalizations of neural networks modelling mappings between functional spaces -- in that it can be used to learn complex SPDE solution operators.
arXiv Detail & Related papers (2021-10-19T20:35:37Z) - Rapid training of deep neural networks without skip connections or
normalization layers using Deep Kernel Shaping [46.083745557823164]
We identify the main pathologies present in deep networks that prevent them from training fast and generalizing to unseen data.
We show how these can be avoided by carefully controlling the "shape" of the network's kernel function.
arXiv Detail & Related papers (2021-10-05T00:49:36Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
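A minimal sketch of the random Fourier feature embedding such architectures build on, assuming the usual Gaussian construction; the bandwidth sigma is the knob that multi-scale variants vary across several embeddings at once:

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature embedding:
    gamma(x) = [sin(2*pi*B x), cos(2*pi*B x)].
    The bandwidth of B controls which frequencies the downstream
    PINN learns fastest."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

rng = np.random.default_rng(0)
sigma, m, d = 10.0, 64, 2             # bandwidth, feature count, input dim
B = sigma * rng.normal(size=(m, d))   # fixed (non-trainable) projection
x = rng.uniform(size=(5, d))          # sample input coordinates
print(fourier_features(x, B).shape)   # (5, 128)
```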
arXiv Detail & Related papers (2020-12-18T04:19:30Z) - Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate the transition from the kernel to the rich regime empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)