LSS-SKAN: Efficient Kolmogorov-Arnold Networks based on Single-Parameterized Function
- URL: http://arxiv.org/abs/2410.14951v1
- Date: Sat, 19 Oct 2024 02:44:35 GMT
- Title: LSS-SKAN: Efficient Kolmogorov-Arnold Networks based on Single-Parameterized Function
- Authors: Zhijie Chen, Xinglin Zhang
- Abstract summary: Kolmogorov-Arnold Networks (KANs) have attracted increasing attention due to their advantage of high visualizability.
We propose a superior KAN variant, termed SKAN, whose basis function uses only a single learnable parameter.
LSS-SKAN exhibited superior performance on the MNIST dataset compared to all tested pure KAN variants.
- Score: 4.198997497722401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently proposed Kolmogorov-Arnold Networks (KANs) have attracted increasing attention due to their advantage of high visualizability compared to MLPs. In this paper, based on a series of small-scale experiments, we propose the Efficient KAN Expansion Principle (EKE Principle): allocating parameters to expand network scale, rather than employing more complex basis functions, leads to more efficient performance improvements in KANs. Based on this principle, we propose a superior KAN, termed SKAN, whose basis function uses only a single learnable parameter. We then evaluated various single-parameterized functions for constructing SKANs, with learnable Shifted Softplus-based SKANs (LSS-SKANs) demonstrating superior accuracy. Subsequently, extensive experiments were performed comparing LSS-SKAN with other KAN variants on the MNIST dataset. In the final accuracy tests, LSS-SKAN exhibited superior performance on the MNIST dataset compared to all tested pure KAN variants. Regarding execution speed, LSS-SKAN outperformed all compared popular KAN variants. Our experimental code is available at https://github.com/chikkkit/LSS-SKAN , and the SKAN Python library (for quickly constructing SKANs in Python) is available at https://github.com/chikkkit/SKAN .
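For concreteness, below is a minimal PyTorch sketch of a single-parameterized KAN (SKAN) layer. It assumes the LSS basis takes the form $f(x; w) = \mathrm{softplus}(wx) - \ln 2$, i.e., a shifted softplus driven by one learnable scalar per edge; the paper's exact formulation may differ, and the authoritative implementation is in the linked repositories. The pluggable `basis` argument is an illustration device reused for the variants discussed below.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SKANLayer(nn.Module):
    """Sketch of a single-parameterized KAN (SKAN) layer.

    Every input-output edge carries exactly one learnable scalar w,
    and the edge function is basis(w * x). The shifted softplus used
    by default is an assumed form of the LSS basis, not a verified one.
    """

    def __init__(self, in_features: int, out_features: int, basis=None):
        super().__init__()
        # One learnable parameter per edge -- the defining SKAN property.
        self.w = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        # Shifted softplus: softplus(z) - ln 2, so that f(0) = 0.
        self.basis = basis if basis is not None else (
            lambda z: F.softplus(z) - math.log(2.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x.unsqueeze(1) * self.w        # (batch, out, in)
        return self.basis(z).sum(dim=-1)   # sum incoming edges per output


# Hypothetical usage: a small two-layer SKAN for flattened MNIST digits.
model = nn.Sequential(nn.Flatten(), SKANLayer(784, 64), SKANLayer(64, 10))
```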
Related papers
- LeanKAN: A Parameter-Lean Kolmogorov-Arnold Network Layer with Improved Memory Efficiency and Convergence Behavior [0.0]
The Kolmogorov-Arnold network (KAN) is a promising alternative to multi-layer perceptrons (MLPs) for data-driven modeling.
Here, we find that MultKAN layers suffer from limited applicability in output layers.
We propose LeanKANs, a direct and modular replacement for MultKAN and traditional AddKAN layers.
arXiv Detail & Related papers (2025-02-25T04:43:41Z)
- LArctan-SKAN: Simple and Efficient Single-Parameterized Kolmogorov-Arnold Networks using Learnable Trigonometric Function [4.198997497722401]
Three new SKAN variants are developed: LSin-SKAN, LCos-SKAN, and LArctan-SKAN.
LArctan-SKAN excels in both accuracy and computational efficiency.
Results confirm the effectiveness and potential of SKANs constructed with trigonometric functions.
arXiv Detail & Related papers (2024-10-25T07:41:56Z)
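Under the same one-parameter-per-edge pattern, the trigonometric variants named above can be sketched by swapping the basis in the `SKANLayer` sketch from the abstract section; the exact functional forms below are assumptions, not taken from the paper.

```python
import torch

# Hypothetical edge functions for the trigonometric SKAN variants,
# each applied to z = w * x so the single learnable parameter per
# edge is preserved. The paper's exact forms may differ.
lsin_basis = torch.sin      # LSin-SKAN:    sin(w * x)
lcos_basis = torch.cos      # LCos-SKAN:    cos(w * x)
larctan_basis = torch.atan  # LArctan-SKAN: arctan(w * x)

# e.g. layer = SKANLayer(784, 64, basis=larctan_basis)
```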
arXiv Detail & Related papers (2024-10-25T07:41:56Z) - Incorporating Arbitrary Matrix Group Equivariance into KANs [69.30866522377694]
We propose Equivariant Kolmogorov-Arnold Networks (EKAN), a method for incorporating arbitrary matrix group equivariance into KANs.
EKAN achieves higher accuracy with smaller datasets or fewer parameters on symmetry-related tasks, such as particle scattering and the three-body problem.
arXiv Detail & Related papers (2024-10-01T06:34:58Z)
- A preliminary study on continual learning in computer vision using Kolmogorov-Arnold Networks [43.70716358136333]
Kolmogorov-Arnold Networks (KAN) are based on a fundamentally different mathematical framework.
KANs address several major issues of MLPs, such as forgetting in continual learning scenarios.
We extend the investigation by evaluating the performance of KANs in continual learning tasks within computer vision.
arXiv Detail & Related papers (2024-09-20T14:49:21Z)
- FC-KAN: Function Combinations in Kolmogorov-Arnold Networks [48.39771439237495]
We introduce FC-KAN, a Kolmogorov-Arnold Network (KAN) that leverages popular mathematical functions on low-dimensional data.
We compare FC-KAN with multi-layer perceptrons (MLPs) and other existing KANs, such as BSRBF-KAN, EfficientKAN, FastKAN, and FasterKAN.
A variant of FC-KAN, which uses a combination of outputs from B-splines and Difference of Gaussians (DoG) in the form of a quadratic function, outperformed all other models on average over 5 independent training runs.
arXiv Detail & Related papers (2024-09-03T10:16:43Z)
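As one reading of that combination, the sketch below mixes a B-spline edge output with a Difference of Gaussians (DoG) through a quadratic form; the DoG widths and the particular quadratic are assumptions, not FC-KAN's verified definitions.

```python
import torch

def dog(x: torch.Tensor, s1: float = 1.0, s2: float = 2.0) -> torch.Tensor:
    # Difference of Gaussians with assumed widths s1 < s2.
    return torch.exp(-x**2 / (2 * s1**2)) - torch.exp(-x**2 / (2 * s2**2))

def quadratic_combine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # One plausible quadratic combination of two edge-function outputs,
    # e.g. a = B-spline output, b = DoG output. FC-KAN's exact form
    # may differ.
    return a + b + a * b
```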
- Activation Space Selectable Kolmogorov-Arnold Networks [29.450377034478933]
Kolmogorov-Arnold Network (KAN), based on nonlinear additive connections, has been proven to achieve performance comparable to MLP-based methods.
Despite this potential, the use of a single activation function space results in reduced performance of KAN and related works across different tasks.
This work contributes to the understanding of the data-centric design of new AI and provides a foundational reference for innovations in KAN-based network architectures.
arXiv Detail & Related papers (2024-08-15T11:34:05Z)
- KAN we improve on HEP classification tasks? Kolmogorov-Arnold Networks applied to an LHC physics example [0.08192907805418582]
Kolmogorov-Arnold Networks (KANs) have been proposed as an alternative to multilayer perceptrons.
We study a typical binary event classification task in high-energy physics.
We find that the learned activation functions of a one-layer KAN resemble the log-likelihood ratio of the input features.
arXiv Detail & Related papers (2024-08-05T18:01:07Z)
- Kolmogorov-Arnold Network for Satellite Image Classification in Remote Sensing [4.8951183832371]
We propose the first approach for integrating the Kolmogorov-Arnold Network (KAN) with pre-trained Convolutional Neural Network (CNN) models for remote sensing scene classification tasks.
Our novel methodology, named KCN, aims to replace traditional Multi-Layer Perceptrons (MLPs) with KAN to enhance classification performance.
We employed multiple CNN-based models, including VGG16, MobileNetV2, EfficientNet, ConvNeXt, ResNet101, and Vision Transformer (ViT), and evaluated their performance when paired with KAN.
arXiv Detail & Related papers (2024-06-02T03:11:37Z)
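The head-swap idea can be sketched as follows, reusing the `SKANLayer` from the earlier sketch as a stand-in for a generic KAN layer. The backbone choice, weight enum, and `num_scene_classes` value are illustrative assumptions, not KCN's actual configuration.

```python
import torch.nn as nn
import torchvision.models as models

num_scene_classes = 45  # hypothetical class count for a scene dataset

# Load a pretrained CNN and replace its MLP classifier head with a
# KAN-style layer (SKANLayer stands in for a generic KAN here).
backbone = models.mobilenet_v2(
    weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
backbone.classifier = SKANLayer(backbone.last_channel, num_scene_classes)
```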
- HOPE for a Robust Parameterization of Long-memory State Space Models [51.66430224089725]
State-space models (SSMs) that utilize linear, time-invariant (LTI) systems are known for their effectiveness in learning long sequences.
We develop a new parameterization scheme for LTI systems, called HOPE, which utilizes Markov parameters within Hankel operators.
Our new parameterization endows the SSM with non-decaying memory within a fixed time window, which is empirically corroborated by a sequential CIFAR-10 task with padded noise.
arXiv Detail & Related papers (2024-05-22T20:20:14Z)
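For context, the standard objects behind this scheme: a discrete LTI system $(A, B, C)$ has Markov parameters $h_k = C A^{k-1} B$, and its Hankel operator collects them as $[\mathcal{H}]_{ij} = h_{i+j-1}$; HOPE parameterizes the SSM through such quantities (the precise construction is in the paper).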
- A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z)
- SLLEN: Semantic-aware Low-light Image Enhancement Network [92.80325772199876]
We develop a semantic-aware LLE network (SLLEN) composed of an LLE main-network (LLEmN) and an SS auxiliary-network (SSaN).
Unlike currently available approaches, the proposed SLLEN is able to fully leverage semantic information, e.g., IEF, HSF, and the SS dataset, to assist LLE.
Comparisons between the proposed SLLEN and other state-of-the-art techniques demonstrate the superiority of SLLEN with respect to LLE quality.
arXiv Detail & Related papers (2022-11-21T15:29:38Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
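One way to see the generalization claim is through filter masking: fixing different binary sparsity patterns on a dense convolution recovers the familiar special cases. The sketch below is illustrative only; SSC's actual structured decomposition is more specific than a free-form mask.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Module):
    """Illustrative sketch: a fixed binary mask over a dense conv filter.

    A channel-diagonal mask recovers depthwise convolution, a block-
    diagonal mask recovers groupwise convolution, and a mask keeping
    only the 1x1 center tap recovers pointwise convolution. SSC itself
    uses a more structured decomposition than free-form masking.
    """

    def __init__(self, mask: torch.Tensor):
        super().__init__()
        c_out, c_in, k, _ = mask.shape
        self.weight = nn.Parameter(torch.randn(c_out, c_in, k, k) * 0.01)
        self.register_buffer("mask", mask.float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masked weights: only the unmasked positions carry parameters.
        return F.conv2d(x, self.weight * self.mask, padding="same")
```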
- ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution [57.635467829558664]
We introduce a structural regularization across convolutional kernels in a CNN.
We show that CNNs maintain performance with a dramatic reduction in parameters and computations.
arXiv Detail & Related papers (2020-09-04T20:41:47Z)
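The atom-coefficient idea can be sketched in a few lines: each convolutional kernel is expressed as a linear combination of a small shared dictionary of kernel atoms, so atoms can be shared across layers while only coefficients are layer-specific. The dictionary size and layer shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

num_atoms, k = 8, 3      # assumed dictionary size and kernel size
c_out, c_in = 64, 32     # assumed layer width

atoms = nn.Parameter(torch.randn(num_atoms, k, k))          # shared dictionary
coeffs = nn.Parameter(torch.randn(c_out, c_in, num_atoms))  # per-layer coefficients

# Reconstruct the full kernel bank as a linear combination of atoms;
# learnable parameters scale with num_atoms rather than k * k per kernel.
kernels = torch.einsum("oin,nkl->oikl", coeffs, atoms)      # (c_out, c_in, k, k)
```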
This list is automatically generated from the titles and abstracts of the papers on this site.