Degree-Optimized Cumulative Polynomial Kolmogorov-Arnold Networks
- URL: http://arxiv.org/abs/2505.15228v1
- Date: Wed, 21 May 2025 07:59:12 GMT
- Title: Degree-Optimized Cumulative Polynomial Kolmogorov-Arnold Networks
- Authors: Mathew Vanherreweghe, Lirandë Pira, Patrick Rebentrost
- Abstract summary: Cumulative polynomial Kolmogorov-Arnold networks (CP-KAN) is a neural architecture combining Chebyshev polynomial basis functions and quadratic unconstrained binary optimization (QUBO). Our contribution involves reformulating the degree selection problem as a QUBO task, reducing the complexity from $O(D^N)$ to a single optimization step per layer. The architecture performs well in regression tasks with limited data, showing good robustness to input scales and natural regularization properties from its polynomial basis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce cumulative polynomial Kolmogorov-Arnold networks (CP-KAN), a neural architecture combining Chebyshev polynomial basis functions and quadratic unconstrained binary optimization (QUBO). Our primary contribution involves reformulating the degree selection problem as a QUBO task, reducing the complexity from $O(D^N)$ to a single optimization step per layer. This approach enables efficient degree selection across neurons while maintaining computational tractability. The architecture performs well in regression tasks with limited data, showing good robustness to input scales and natural regularization properties from its polynomial basis. Additionally, theoretical analysis establishes connections between CP-KAN's performance and properties of financial time series. Our empirical validation across multiple domains demonstrates competitive performance compared to several traditional architectures tested, especially in scenarios where data efficiency and numerical stability are important. Our implementation, including strategies for managing computational overhead in larger networks, is available in Ref.~\citep{cpkan_implementation}.
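To make the degree-selection reformulation concrete, the sketch below builds a toy QUBO in which a binary variable x[n, d] indicates that neuron n is assigned polynomial degree d; a one-hot penalty enforces exactly one degree per neuron, and the entries of `fit_cost` stand in for whatever per-(neuron, degree) fitting criterion a layer uses. This is a minimal illustration of the general QUBO encoding, not the authors' implementation (see Ref.~\citep{cpkan_implementation} for that); the cost values, the penalty weight, and the brute-force solver are all illustrative assumptions.

```python
import itertools
import numpy as np

def build_degree_qubo(fit_cost, penalty=10.0):
    """Build a QUBO for per-neuron degree selection.

    fit_cost[n, d] -- assumed cost (e.g. residual error) of assigning
                      polynomial degree d to neuron n; one binary
                      variable x[n, d] per (neuron, degree) pair.
    The penalty term enforces that exactly one degree is chosen per neuron.
    """
    n_neurons, n_degrees = fit_cost.shape
    n_vars = n_neurons * n_degrees
    Q = np.zeros((n_vars, n_vars))

    def idx(n, d):
        return n * n_degrees + d

    for n in range(n_neurons):
        for d in range(n_degrees):
            i = idx(n, d)
            # linear term: fit cost plus the linear part of the one-hot penalty
            Q[i, i] += fit_cost[n, d] - penalty
            # quadratic penalty between competing degrees of the same neuron
            for d2 in range(d + 1, n_degrees):
                Q[i, idx(n, d2)] += 2 * penalty
    return Q

def solve_qubo_bruteforce(Q):
    """Exhaustive solver, only for tiny instances; a real run would use
    a QUBO/annealing solver instead."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# toy usage: 2 neurons, candidate degrees {1, 2, 3}
costs = np.array([[0.9, 0.2, 0.4],
                  [0.5, 0.6, 0.1]])
x, _ = solve_qubo_bruteforce(build_degree_qubo(costs))
print(x.reshape(costs.shape))  # one-hot rows: chosen degree per neuron
```

The whole layer's degree assignment is thus read off from a single QUBO solve rather than an $O(D^N)$ search over all degree combinations.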
Related papers
- DeePoly: A High-Order Accuracy Scientific Machine Learning Framework for Function Approximation and Solving PDEs [5.483488375189695]
This work introduces a novel framework that recasts the solution process as a two-stage approach. The strategic combination leverages the strengths of both methods. The framework is also released as an open-source project accompanying the paper.
arXiv Detail & Related papers (2025-06-05T04:10:52Z) - Performance Analysis of Convolutional Neural Network By Applying Unconstrained Binary Quadratic Programming [0.0]
Convolutional Neural Networks (CNNs) are pivotal in computer vision and Big Data analytics but demand significant computational resources when trained on large-scale datasets. We propose a hybrid optimization method that combines Unconstrained Binary Quadratic Programming (UBQP) with Stochastic Gradient Descent (SGD) to accelerate CNN training. Our approach achieves a 10--15% accuracy improvement over a standard BP-CNN baseline while maintaining similar execution times.
arXiv Detail & Related papers (2025-05-30T21:25:31Z) - A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations [2.7755345520127936]
We propose a domain-decomposition-based deep learning (DL) framework, named CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs). The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers.
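As an illustration of the two components just listed, the following sketch pairs a small CNN autoencoder (which compresses a subdomain's solution field into a latent code) with a fully connected autoregressive model (which advances that code in time using neighbouring subdomains' codes). All layer sizes, the 32x32 subdomain resolution, and the number of neighbours are assumptions made for the sketch, not details taken from the CoMLSim paper.

```python
import torch
import torch.nn as nn

class SubdomainAutoencoder(nn.Module):
    """CNN autoencoder that compresses a 1x32x32 subdomain solution field
    into a low-dimensional latent code (component (a) above)."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # -> 8x16x16
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # -> 16x8x8
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x16x16
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),              # -> 1x32x32
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentTimeStepper(nn.Module):
    """Fully connected autoregressive model (component (b)): predicts a
    subdomain's latent code at t+1 from its own code and its neighbours'
    codes at t."""
    def __init__(self, latent_dim=16, n_neighbours=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim * (1 + n_neighbours), 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, own_code, neighbour_codes):
        return self.net(torch.cat([own_code, neighbour_codes], dim=-1))
```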
arXiv Detail & Related papers (2024-08-26T17:50:47Z) - Stochastic Q-learning for Large Discrete Action Spaces [79.1700188160944]
In complex environments with discrete action spaces, effective decision-making is critical in reinforcement learning (RL).
We present value-based RL approaches which, as opposed to optimizing over the entire set of $n$ actions, only consider a variable set of actions, possibly as small as $\mathcal{O}(\log(n))$.
The presented value-based RL methods include, among others, Stochastic Q-learning, StochDQN, and StochDDQN, all of which integrate this approach for both value-function updates and action selection.
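A minimal sketch of the sub-sampled maximization idea follows: instead of scanning all $n$ actions, both the greedy action choice and the bootstrap target take a max over a randomly sampled subset of roughly $\log_2 n$ actions. The tabular representation, the sampling rule, and the hyperparameters here are illustrative assumptions, not the exact procedure used by StochDQN/StochDDQN.

```python
import math
import random
from collections import defaultdict

def stoch_argmax(q_row, actions, k=None):
    """Approximate argmax: scan only a random subset of ~log2(n) actions
    instead of all n (illustrative sampling rule)."""
    k = k or max(1, math.ceil(math.log2(len(actions))))
    candidates = random.sample(actions, k)
    return max(candidates, key=lambda a: q_row[a])

def stoch_q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """Tabular Q-learning step whose bootstrap target also uses the
    stochastic maximization rather than a full max over all actions."""
    a_star = stoch_argmax(Q[s_next], actions)
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_star] - Q[s][a])

# toy usage with 1024 actions: each update touches only ~10 of them
actions = list(range(1024))
Q = defaultdict(lambda: defaultdict(float))
stoch_q_update(Q, s=0, a=3, r=1.0, s_next=1, actions=actions)
```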
arXiv Detail & Related papers (2024-05-16T17:58:44Z) - Chebyshev Polynomial-Based Kolmogorov-Arnold Networks: An Efficient Architecture for Nonlinear Function Approximation [0.0]
This paper presents the Chebyshev Kolmogorov-Arnold Network (Chebyshev KAN), a new neural network architecture inspired by the Kolmogorov-Arnold theorem.
By utilizing learnable functions parametrized by Chebyshev polynomials on the network's edges, Chebyshev KANs enhance flexibility, efficiency, and interpretability in function approximation tasks.
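For intuition, a minimal sketch of such a layer is given below: every input-output edge carries a learnable coefficient vector over the Chebyshev basis $T_0,\dots,T_D$, the input is squashed with tanh so it lies in $[-1,1]$, and the basis is evaluated with the standard recurrence $T_k(x) = 2xT_{k-1}(x) - T_{k-2}(x)$. The normalization, initialization, and default degree are assumptions of the sketch rather than details of the Chebyshev KAN paper.

```python
import torch
import torch.nn as nn

class ChebyshevKANLayer(nn.Module):
    """One KAN layer whose edge functions are learnable combinations of
    Chebyshev polynomials T_0..T_degree (a minimal sketch of the idea)."""
    def __init__(self, in_dim, out_dim, degree=4):
        super().__init__()
        self.degree = degree
        # one coefficient per (input, output, polynomial order) triple
        self.coeffs = nn.Parameter(
            torch.randn(in_dim, out_dim, degree + 1)
            / (in_dim * (degree + 1)) ** 0.5
        )

    def forward(self, x):
        # squash inputs into [-1, 1], the natural domain of Chebyshev polynomials
        x = torch.tanh(x)
        # build T_0(x) .. T_degree(x) via T_k = 2x T_{k-1} - T_{k-2}
        T = [torch.ones_like(x), x]
        for _ in range(2, self.degree + 1):
            T.append(2 * x * T[-1] - T[-2])
        basis = torch.stack(T[: self.degree + 1], dim=-1)  # (batch, in_dim, degree+1)
        # output_o = sum_i sum_k coeffs[i, o, k] * T_k(x_i)
        return torch.einsum("bik,iok->bo", basis, self.coeffs)

# toy usage
layer = ChebyshevKANLayer(in_dim=3, out_dim=2, degree=4)
y = layer(torch.randn(8, 3))  # -> shape (8, 2)
```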
arXiv Detail & Related papers (2024-05-12T07:55:43Z) - Partially Stochastic Infinitely Deep Bayesian Neural Networks [0.0]
We present a novel family of architectures that integrates partial stochasticity into the framework of infinitely deep neural networks.
We leverage the advantages of partial stochasticity in the infinite-depth limit, which include the benefits of full stochasticity.
We present a variety of architectural configurations, offering flexibility in network design.
arXiv Detail & Related papers (2024-02-05T20:15:19Z) - Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach; we also test the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - GloptiNets: Scalable Non-Convex Optimization with Certificates [61.50835040805378]
We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.
By exploiting the regularity of the target function, intrinsic in the decay of its spectrum, we are able both to obtain precise certificates and to leverage the advanced and powerful computational techniques developed for neural networks.
arXiv Detail & Related papers (2023-06-26T09:42:59Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Neural Spectrahedra and Semidefinite Lifts: Global Convex Optimization
of Polynomial Activation Neural Networks in Fully Polynomial-Time [31.94590517036704]
We develop exact convex optimization formulations for two-layer neural networks with second-degree polynomial activations.
We show that the semidefinite lifting is exact, and therefore the complexity of global optimization is polynomial in the input dimension and sample size for all input data.
The proposed approach is significantly faster and obtains better test accuracy compared to the standard backpropagation procedure.
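To see where the semidefinite lift comes from, consider the simplified case of a purely quadratic activation (the paper treats general degree-two activations; this special case is only illustrative). A two-layer network $f(x)=\sum_j \alpha_j (w_j^\top x)^2$ depends on its parameters only through the symmetric matrix $Z=\sum_j \alpha_j\, w_j w_j^\top$, so with a convex loss $\ell$ and a trace-type regularizer, training can be posed as a convex semidefinite program:
\[
f(x) = x^\top Z x, \qquad
\min_{Z_+,\,Z_- \succeq 0}\; \sum_i \ell\big(x_i^\top (Z_+ - Z_-)\, x_i,\; y_i\big) + \beta\,\big(\operatorname{tr} Z_+ + \operatorname{tr} Z_-\big),
\qquad Z = Z_+ - Z_- .
\]
The split $Z = Z_+ - Z_-$ into positive semidefinite parts is what makes the problem a semidefinite (spectrahedral) optimization; the exact formulation and regularization used in the paper differ in detail.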
arXiv Detail & Related papers (2021-01-07T08:43:01Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires a much smaller number of communication rounds in practice, with a theoretical bound on the number of communication rounds.
Our experiments on several datasets show the effectiveness of our method and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)