Ultrafast On-chip Online Learning via Spline Locality in Kolmogorov-Arnold Networks
- URL: http://arxiv.org/abs/2602.02056v1
- Date: Mon, 02 Feb 2026 12:57:15 GMT
- Title: Ultrafast On-chip Online Learning via Spline Locality in Kolmogorov-Arnold Networks
- Authors: Duc Hoang, Aarush Gupta, Philip Harris
- Abstract summary: Ultrafast online learning is essential for high-frequency systems, such as controls for quantum computing and nuclear fusion. Meeting these requirements demands low-latency, fixed-precision computation under strict memory constraints. We identify key properties of Kolmogorov-Arnold Networks (KANs) that align with these constraints. This work is the first to demonstrate model-free online learning at sub-microsecond latencies.
- Score: 2.3420342129506424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ultrafast online learning is essential for high-frequency systems, such as controls for quantum computing and nuclear fusion, where adaptation must occur on sub-microsecond timescales. Meeting these requirements demands low-latency, fixed-precision computation under strict memory constraints, a regime in which conventional Multi-Layer Perceptrons (MLPs) are both inefficient and numerically unstable. We identify key properties of Kolmogorov-Arnold Networks (KANs) that align with these constraints. Specifically, we show that: (i) KAN updates exploiting B-spline locality are sparse, enabling superior on-chip resource scaling, and (ii) KANs are inherently robust to fixed-point quantization. By implementing fixed-point online training on Field-Programmable Gate Arrays (FPGAs), a representative platform for on-chip computation, we demonstrate that KAN-based online learners are significantly more efficient and expressive than MLPs across a range of low-latency and resource-constrained tasks. To our knowledge, this work is the first to demonstrate model-free online learning at sub-microsecond latencies.
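To make claim (i) concrete, below is a minimal numpy sketch of an online SGD step on a single KAN spline edge. This is not the authors' FPGA implementation: the cubic degree, grid size, learning rate, and fixed-point word length are all illustrative assumptions, and floating-point numpy stands in for fixed-point hardware logic. The point it demonstrates is that at most degree + 1 B-spline basis functions are nonzero at any input, so each update reads and writes only that many coefficients; re-quantizing the touched coefficients after each step loosely mimics the fixed-point storage of claim (ii).

```python
# Minimal sketch, assuming cubic B-splines on a uniform grid over [0, 1].
# All constants are illustrative, not taken from the paper.
import numpy as np

DEGREE = 3        # spline degree (assumed cubic)
N_COEF = 16       # coefficients on one KAN edge
FRAC_BITS = 12    # emulated fixed-point fractional bits (assumption)

# Open uniform knot vector on [0, 1], giving N_COEF basis functions.
knots = np.concatenate([np.zeros(DEGREE),
                        np.linspace(0.0, 1.0, N_COEF - DEGREE + 1),
                        np.ones(DEGREE)])

def bspline_basis(x: float) -> np.ndarray:
    """Cox-de Boor recursion. Of the N_COEF values returned, at most
    DEGREE + 1 are nonzero at any x -- the locality the paper exploits."""
    B = np.where((knots[:-1] <= x) & (x < knots[1:]), 1.0, 0.0)
    if x >= knots[-1]:                       # include the right endpoint
        B[np.flatnonzero(knots[:-1] < knots[1:])[-1]] = 1.0
    for k in range(1, DEGREE + 1):
        Bn = np.zeros(len(B) - 1)
        for i in range(len(Bn)):
            d1 = knots[i + k] - knots[i]
            d2 = knots[i + k + 1] - knots[i + 1]
            if d1 > 0:
                Bn[i] += (x - knots[i]) / d1 * B[i]
            if d2 > 0:
                Bn[i] += (knots[i + k + 1] - x) / d2 * B[i + 1]
        B = Bn
    return B

coef = np.zeros(N_COEF)                      # spline coefficients (one edge)

def online_update(x: float, target: float, lr: float = 0.2):
    """One SGD step on a squared-error loss. Only the <= DEGREE + 1 active
    coefficients are read and written, so the update cost is O(DEGREE),
    independent of grid size; touched values are re-quantized to emulate
    fixed-point storage."""
    phi = bspline_basis(x)
    y = float(phi @ coef)                    # forward pass
    active = np.flatnonzero(phi)             # <= DEGREE + 1 indices
    coef[active] -= lr * (y - target) * phi[active]
    scale = 2.0 ** FRAC_BITS
    coef[active] = np.round(coef[active] * scale) / scale
    return y, active

rng = np.random.default_rng(0)
for _ in range(2000):                        # learn sin(2*pi*x) online
    u = rng.uniform(0.0, 1.0)
    online_update(u, np.sin(2 * np.pi * u))
_, touched = online_update(0.37, np.sin(2 * np.pi * 0.37))
print(f"coefficients touched per update: {len(touched)} of {N_COEF}")
```

For contrast, a dense MLP update touches every weight on every sample, which is the on-chip resource-scaling gap the abstract describes.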
Related papers
- When Learning Hurts: Fixed-Pole RNN for Real-Time Online Training [58.25341036646294]
We analytically examine why learning recurrent poles does not provide tangible benefits, and we evaluate this empirically in real-time online learning scenarios. We show that fixed-pole networks achieve superior performance with lower training complexity, making them more suitable for online real-time tasks.
arXiv Detail & Related papers (2026-02-25T00:15:13Z) - Deep Hierarchical Learning with Nested Subspace Networks [53.71337604556311]
We propose Nested Subspace Networks (NSNs) for large neural networks. NSNs enable a single model to be dynamically and granularly adjusted across a continuous spectrum of compute budgets. We show that NSNs can be surgically applied to pre-trained LLMs and unlock a smooth and predictable compute-performance frontier.
arXiv Detail & Related papers (2025-09-22T15:13:14Z) - Traces Propagation: Memory-Efficient and Scalable Forward-Only Learning in Spiking Neural Networks [1.6952253597549973]
Spiking Neural Networks (SNNs) provide an efficient framework for processing dynamic spatio-temporal signals. A key challenge in training SNNs is solving both the spatial and temporal credit assignment problems.
arXiv Detail & Related papers (2025-09-16T13:11:52Z) - Reinforcement Learning for Quantum Network Control with Application-Driven Objectives [53.03367590211247]
Dynamic programming and reinforcement learning offer promising tools for optimizing control strategies. We propose a novel RL framework that directly optimizes non-linear, differentiable objective functions. Our work constitutes a first step towards non-linear objective function optimization in quantum networks with RL, opening a path towards more advanced use cases.
arXiv Detail & Related papers (2025-09-12T18:41:10Z) - LCQNN: Linear Combination of Quantum Neural Networks [7.010027035873597]
We introduce the Linear Combination of Quantum Neural Networks (LCQNN) framework, which uses the linear-combination-of-unitaries concept to create a tunable design. We show how specific structural choices, such as adopting $k$ control unitaries or restricting the model to certain group-theoretic subspaces, prevent gradients from collapsing. In group action scenarios, we show that by exploiting symmetry and excluding exponentially large irreducible subspaces, the model circumvents barren plateaus.
arXiv Detail & Related papers (2025-07-03T17:43:10Z) - OLALa: Online Learned Adaptive Lattice Codes for Heterogeneous Federated Learning [24.595304301100047]
Federated learning (FL) enables collaborative training across distributed clients without sharing raw data. We propose Online Learned Adaptive Lattices (OLALa), a heterogeneous FL framework in which each client can adjust its quantizer online. OLALa consistently improves learning performance under various quantization rates, outperforming conventional fixed-codebook and non-adaptive schemes.
arXiv Detail & Related papers (2025-06-25T10:18:34Z) - FL-QDSNNs: Federated Learning with Quantum Dynamic Spiking Neural Networks [2.5435687567731926]
We present Federated Learning-Quantum Dynamic Spiking Neural Networks (FL-QDSNNs), a privacy-preserving framework that maintains high predictive accuracy on non-IID client data. Its key innovation is a dynamic-threshold spiking mechanism that triggers quantum gates only when local data drift requires added expressiveness.
arXiv Detail & Related papers (2024-12-03T09:08:33Z) - Want to train KANS at scale? Now UKAN! [2.9666099400348607]
We present Unbounded Kolmogorov-Arnold Networks (UKANs), a method that removes the need for bounded grids in traditional Kolmogorov-Arnold Networks (KANs). UKANs couple multilayer perceptrons with KANs by feeding the positional encoding of grid groups into a coefficient-generator (CG) model, enabling function approximation on unbounded domains without requiring data normalization.
arXiv Detail & Related papers (2024-08-20T21:20:38Z) - Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z) - On-Device Learning with Binary Neural Networks [2.7040098749051635]
We propose a Continual Learning (CL) solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as backbone is essential to meet the constraints of low power devices.
arXiv Detail & Related papers (2023-08-29T13:48:35Z) - Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks [44.37047471448793]
In this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL).
We propose an innovative PSL framework, namely efficient parallel split learning (EPSL), to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
arXiv Detail & Related papers (2023-03-26T16:09:48Z) - Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z) - Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)