Bridging Function Approximation and Device Physics via Negative Differential Resistance Networks
- URL: http://arxiv.org/abs/2510.23638v1
- Date: Fri, 24 Oct 2025 15:38:22 GMT
- Title: Bridging Function Approximation and Device Physics via Negative Differential Resistance Networks
- Authors: Songyuan Li, Teng Wang, Jinrong Tang, Ruiqi Liu, Yuyao Lu, Feng Xu, Bin Gao, Xiangwei Zhu,
- Abstract summary: KANalogue is a fully analogue implementation of Kolmogorov-Arnold Networks (KANs) using negative differential resistance devices. By leveraging the intrinsic negative differential resistance characteristics of tunnel diodes fabricated from NbSi2N4/HfSi2N4 heterostructures, we construct coordinate-wise nonlinearities. Our results demonstrate that KANalogue can approximate complex functions with minimal parameters while maintaining classification accuracy competitive with digital baselines.
- Score: 17.356504841908826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving fully analog neural computation requires hardware that can natively implement both linear and nonlinear operations with high efficiency. While analogue matrix-vector multiplication has advanced via compute-in-memory architectures, nonlinear activation functions remain a bottleneck, often requiring digital or hybrid solutions. Inspired by the Kolmogorov-Arnold framework, we propose KANalogue, a fully analogue implementation of Kolmogorov-Arnold Networks (KANs) using negative differential resistance devices as physical realizations of learnable univariate basis functions. By leveraging the intrinsic negative differential resistance characteristics of tunnel diodes fabricated from NbSi2N4/HfSi2N4 heterostructures, we construct coordinate-wise nonlinearities with distinct curvature and support profiles. We extract I-V data from fabricated armchair and zigzag devices, fit high-order polynomials to emulate diode behavior in software, and train KANs on vision benchmarks using these learned basis functions. Our results demonstrate that KANalogue can approximate complex functions with minimal parameters while maintaining classification accuracy competitive with digital baselines. This work bridges device-level physics and function approximation theory, charting a path toward scalable, energy-efficient analogue machine learning systems.
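To make the described pipeline concrete, here is a minimal software sketch of it: fit a high-order polynomial to device I-V samples and use the fitted curve as a coordinate-wise basis function in a KAN-style layer. The I-V data below is synthetic (a cubic with an NDR-like negative-slope region) and the layer shapes are invented for illustration; this is not the authors' code.

```python
import numpy as np

# Synthetic stand-in for measured I-V samples from an NDR device
# (a cubic with a negative-slope region, loosely mimicking a tunnel diode).
rng = np.random.default_rng(0)
v = np.linspace(0.0, 1.0, 200)
i_meas = 2.5 * v - 7.0 * v**2 + 5.0 * v**3 + 0.01 * rng.standard_normal(v.size)

# Fit a high-order polynomial to emulate the diode response in software.
coeffs = np.polyfit(v, i_meas, deg=7)
phi = np.poly1d(coeffs)            # phi(x): univariate basis from the device

def kan_layer(x, w):
    """One KAN-style layer: apply the device nonlinearity coordinate-wise,
    then mix with learnable linear weights (Kolmogorov-Arnold form)."""
    return phi(x) @ w              # (batch, d_in) -> (batch, d_out)

x = rng.random((4, 3))             # batch of 4 inputs with 3 coordinates
w = rng.standard_normal((3, 2))    # trainable mixing weights
print(kan_layer(x, w).shape)       # (4, 2)
```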
Related papers
- Neural-POD: A Plug-and-Play Neural Operator Framework for Infinite-Dimensional Functional Nonlinear Proper Orthogonal Decomposition [7.1950116347185995]
We propose the Neural Proper Orthogonal Decomposition (Neural-POD), a plug-and-play neural operator framework. Neural-POD formulates basis construction as a sequence of residual minimization problems solved through neural network training. We demonstrate the robustness of Neural-POD on problems of varying complexity and resolution, including the Burgers' and Navier-Stokes equations.
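As a rough illustration of "basis construction as residual minimization", the toy sketch below greedily trains a small network to represent one basis vector at a time, each fitted to the residual left by the previous bases. The snapshot data, network sizes, and training loop are invented; the actual Neural-POD operator framework is more general.

```python
import torch

# Toy snapshot matrix: 64 snapshots sampled at 128 spatial points.
torch.manual_seed(0)
x = torch.linspace(0, 1, 128)[:, None]                  # spatial coordinates
snapshots = torch.sin(torch.randn(64, 1) * 3.14 * x.T)  # (64, 128)

residual, bases = snapshots.clone(), []
for k in range(3):                                      # one basis per round
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                              torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(300):                                # residual minimization
        opt.zero_grad()
        b = net(x).squeeze(-1)
        b = b / (b.norm() + 1e-8)                       # unit-norm basis vector
        c = residual @ b                                # per-snapshot coefficient
        loss = (residual - c[:, None] * b).pow(2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        b = net(x).squeeze(-1)
        b = b / (b.norm() + 1e-8)
        bases.append(b)
        residual = residual - (residual @ b)[:, None] * b   # deflate
    print(f"basis {k}: residual energy {residual.pow(2).mean():.4f}")
```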
arXiv Detail & Related papers (2026-02-17T15:01:40Z)
- Physical Analog Kolmogorov-Arnold Networks based on Reconfigurable Nonlinear-Processing Units [0.0]
Kolmogorov-Arnold Networks (KANs) shift neural computation from linear layers to learnable nonlinear edge functions. Here we introduce a physical analog KAN architecture in which edge functions are realized in materia using reconfigurable nonlinear-processing units (RNPUs). We establish a realistic system-level hardware implementation that enables compact KAN-style regression and classification with programmable nonlinear transformations.
arXiv Detail & Related papers (2026-02-07T12:33:11Z)
- Learning Nonlinear Heterogeneity in Physical Kolmogorov-Arnold Networks [0.7743990032107954]
Physical neural networks typically train linear synaptic weights while treating device nonlinearities as fixed. We show the opposite: training the synaptic nonlinearity itself yields markedly higher task performance per physical resource. We experimentally realise physical KANs in silicon-on-insulator devices we term 'Synaptic Elements'.
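A minimal sketch of "train the nonlinearity, not the weights": freeze random linear weights and optimize only the coefficients of a per-edge polynomial activation. The cubic parameterization and toy regression task are assumptions for illustration, not the paper's Synaptic Element model.

```python
import torch

torch.manual_seed(0)
x = torch.rand(256, 1) * 2 - 1                    # toy 1-D inputs
y = torch.sin(3 * x).squeeze(-1)                  # toy regression target

W = torch.randn(1, 8)                             # frozen "synaptic" weights
a = torch.zeros(8, 4, requires_grad=True)         # learnable cubic per edge
h = x @ W                                         # (256, 8) pre-activations

opt = torch.optim.Adam([a], lr=1e-2)              # only the nonlinearity trains
for _ in range(500):
    opt.zero_grad()
    powers = torch.stack([h**k for k in range(4)], dim=-1)   # (256, 8, 4)
    pred = (powers * a).sum(dim=(-1, -2))         # apply phi_j, sum over edges
    loss = (pred - y).pow(2).mean()
    loss.backward()
    opt.step()
print(loss.item())                                # fit improves despite frozen W
```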
arXiv Detail & Related papers (2026-01-20T16:00:26Z)
- DInf-Grid: A Neural Differential Equation Solver with Differentiable Feature Grids [73.28614344779076]
We present a differentiable grid-based representation for efficiently solving differential equations (DEs). Our results demonstrate a 5-20x speed-up over coordinate-based methods, solving differential equations in seconds or minutes while maintaining comparable accuracy and compactness.
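The sketch below illustrates the general idea of a differentiable grid for DE solving, under invented specifics: learnable grid values are optimized so a finite-difference residual of a toy boundary-value problem vanishes. It is a much-simplified stand-in for the paper's feature-grid representation.

```python
import math
import torch

n = 101
x = torch.linspace(0, math.pi, n)
h = x[1] - x[0]
u = torch.zeros(n, requires_grad=True)               # learnable grid values

opt = torch.optim.Adam([u], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2      # second-difference stencil
    loss = (upp + torch.sin(x[1:-1])).pow(2).mean()  # residual of u'' = -sin(x)
    loss = loss + u[0].pow(2) + u[-1].pow(2)         # enforce u(0) = u(pi) = 0
    loss.backward()
    opt.step()
# Exact solution is u(x) = sin(x); the midpoint value tends toward 1.
print(u[n // 2].item())
```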
arXiv Detail & Related papers (2026-01-15T18:59:57Z)
- Gradient Descent as a Perceptron Algorithm: Understanding Dynamics and Implicit Acceleration [67.12978375116599]
We show that the steps of gradient descent (GD) reduce to those of generalized perceptron algorithms. This helps explain the optimization dynamics and the implicit acceleration phenomenon observed in neural networks.
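In the spirit of that correspondence (a toy numerical check only, not the paper's analysis), the snippet below verifies that a gradient step on logistic loss for one example points along the perceptron update direction y·x, rescaled by a margin-dependent factor.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                        # current weights
x = rng.normal(size=3)                        # one example
y = 1.0                                       # its label
sigma = lambda t: 1 / (1 + np.exp(-t))

grad = -sigma(-y * (w @ x)) * y * x           # gradient of log(1 + e^{-y w.x})
# The perceptron direction is -y*x; the GD step is that direction rescaled,
# so the coordinate-wise ratio is a single constant, sigma(-y w.x):
print(grad / (-y * x))
```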
arXiv Detail & Related papers (2025-12-12T14:16:35Z)
- A Fully Hardware Implemented Accelerator Design in ReRAM Analog Computing without ADCs [5.6496088684920345]
ReRAM-based accelerators process neural networks via analog Computing-in-Memory (CiM) for ultra-high energy efficiency. This work explores the hardware implementation of the Sigmoid and SoftMax activation functions of neural networks with binarized neurons. We propose a complete ReRAM-based Analog Computing Accelerator (RACA) that accelerates neural network computation by leveraging binarized neurons.
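One textbook way binarized neurons can realize a smooth activation without ADCs is stochastic thresholding: a 1-bit comparator with logistic threshold noise fires with probability sigmoid(x), so averaging spikes recovers the activation. The check below is an illustrative mechanism, not necessarily the RACA circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-4, 4, 9)                      # pre-activation levels
noise = rng.logistic(size=(20000, x.size))     # logistic threshold noise
spikes = (x > noise).astype(float)             # 1-bit comparator outputs
# Empirical firing rate matches sigmoid(x) to within sampling error:
print(np.abs(spikes.mean(axis=0) - 1 / (1 + np.exp(-x))).max())
```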
arXiv Detail & Related papers (2024-12-27T09:38:19Z)
- DimINO: Dimension-Informed Neural Operator Learning [41.37905663176428]
DimINO is a framework inspired by dimensional analysis. It can be seamlessly integrated into existing neural operator architectures. It achieves up to 76.3% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Accelerating Fractional PINNs using Operational Matrices of Derivative [0.24578723416255746]
This paper presents a novel operational matrix method to accelerate the training of fractional Physics-Informed Neural Networks (fPINNs).
Our approach involves a non-uniform discretization of the fractional Caputo operator, facilitating swift computation of fractional derivatives within Caputo-type fractional differential problems with $0<\alpha<1$.
The effectiveness of our proposed method is validated across diverse differential equations, including Delay Differential Equations (DDEs) and Systems of Differential Algebraic Equations (DAEs).
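To ground the terminology, here is a compact sketch of an operational matrix for the Caputo derivative with $0<\alpha<1$ on a non-uniform grid, using the classical L1 discretization (assumed here; the paper constructs its own matrices). The L1 scheme is exact for linear functions, which the final check exploits.

```python
import numpy as np
from math import gamma

alpha = 0.5
t = np.linspace(0, 1, 41) ** 2               # non-uniform (graded) grid on [0, 1]
n = t.size

# Operational matrix M such that (M @ f) approximates the Caputo derivative
# of f sampled on the grid, via the L1 scheme on non-uniform steps.
M = np.zeros((n, n))
for i in range(1, n):
    for k in range(i):
        w = (t[i] - t[k]) ** (1 - alpha) - (t[i] - t[k + 1]) ** (1 - alpha)
        w /= gamma(2 - alpha) * (t[k + 1] - t[k])
        M[i, k] -= w                         # coefficient of f(t_k)
        M[i, k + 1] += w                     # coefficient of f(t_{k+1})

f = t                                        # test function f(t) = t
exact = t ** (1 - alpha) / gamma(2 - alpha)  # its Caputo derivative
print(np.max(np.abs(M @ f - exact)))         # ~machine precision for linear f
```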
arXiv Detail & Related papers (2024-01-25T11:00:19Z)
- Nonlinear functional regression by functional deep neural network with kernel embedding [18.927592350748682]
We introduce a functional deep neural network with an adaptive and discretization-invariant dimension reduction method. Explicit rates of approximating nonlinear smooth functionals across various input function spaces are derived. We conduct numerical experiments on both simulated and real datasets to demonstrate the effectiveness and benefits of our functional net.
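A loose sketch of such a pipeline under invented choices (Gaussian kernel, 8 centers, quadrature-sum inner products): embed a sampled input function as a few kernel inner products, which is largely insensitive to the sampling grid, then regress with an ordinary network. This illustrates the general shape of the approach, not the paper's construction.

```python
import torch

def embed(fx, grid, centers, gamma=10.0):
    """Approximate <f, k(c, .)> for each center c by a sum on the grid."""
    K = torch.exp(-gamma * (grid[None, :] - centers[:, None]) ** 2)
    return (K * fx[None, :]).mean(dim=1)          # (n_centers,)

grid = torch.linspace(0, 1, 200)                  # sampling locations
centers = torch.linspace(0, 1, 8)                 # kernel centers
f_values = torch.sin(4 * grid)                    # one sampled input function
features = embed(f_values, grid, centers)         # 8-dim embedding
net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))
print(net(features).shape)                        # scalar functional output
```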
arXiv Detail & Related papers (2024-01-05T16:43:39Z)
- Implementing Neural Network-Based Equalizers in a Coherent Optical Transmission System Using Field-Programmable Gate Arrays [3.1543509940301946]
We show the offline FPGA realization of both recurrent and feedforward neural network (NN)-based equalizers for nonlinearity compensation in coherent optical transmission systems.
The main results are divided into three parts: a performance comparison, an analysis of how activation functions are implemented, and a report on the complexity of the hardware.
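A common FPGA tactic for activation functions, and one axis of such complexity comparisons, is a small lookup table with interpolation. The sketch below measures the worst-case error of a 64-entry tanh LUT; the table size and range are arbitrary choices, not values from the paper.

```python
import numpy as np

table_x = np.linspace(-4, 4, 64)         # 64-entry LUT over [-4, 4]
table_y = np.tanh(table_x)

def tanh_lut(x):
    """Clamp to the table range, then linearly interpolate between entries."""
    x = np.clip(x, table_x[0], table_x[-1])
    return np.interp(x, table_x, table_y)

x = np.linspace(-6, 6, 1000)
print(np.max(np.abs(tanh_lut(x) - np.tanh(x))))   # worst-case LUT error
```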
arXiv Detail & Related papers (2022-12-09T07:28:45Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors derived from the theory of regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic $\Phi^4_1$ model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
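The min-max structure can be sketched as follows, with invented data and a standard adversarial moment objective (not necessarily the authors' exact game): f minimizes, and the adversarial test function g maximizes, a regularized moment violation for E[y - f(x) | z] = 0.

```python
import torch

torch.manual_seed(0)
z = torch.randn(512, 1)                           # instrument
x = z + 0.3 * torch.randn(512, 1)                 # endogenous regressor
y = 2.0 * x + 0.3 * torch.randn(512, 1)           # outcome; true effect is 2

f = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 1))   # structural function
g = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 1))   # adversarial test function
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

def objective():
    # Moment violation E[(y - f(x)) g(z)] minus a stabilizing penalty on g.
    return ((y - f(x)) * g(z)).mean() - 0.5 * g(z).pow(2).mean()

for _ in range(2000):
    opt_g.zero_grad(); (-objective()).backward(); opt_g.step()  # g ascends
    opt_f.zero_grad(); objective().backward(); opt_f.step()     # f descends
```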
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
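Below is a bare-bones sketch of the two-phase equilibrium propagation recipe on a toy two-weight scalar chain x -> h -> o with quadratic energy E = h^2/2 + o^2/2 - w1*x*h - w2*h*o. The network, target, and learning rate are stand-ins, not the paper's nonlinear resistive-network model.

```python
def relax(x, w1, w2, beta=0.0, y=0.0, steps=200, eta=0.1):
    """Settle (h, o) by gradient flow on the (possibly nudged) energy."""
    h = o = 0.0
    for _ in range(steps):
        dh = h - w1 * x - w2 * o
        do = o - w2 * h + beta * (o - y)   # nudging clamps o toward the target
        h, o = h - eta * dh, o - eta * do
    return h, o

x, y = 1.0, 0.5                            # single training pair
w1, w2, beta, lr = 0.3, 0.3, 0.1, 0.2
for _ in range(100):
    h0, o0 = relax(x, w1, w2)              # free phase
    hb, ob = relax(x, w1, w2, beta, y)     # weakly clamped phase
    w1 += lr / beta * (x * hb - x * h0)    # EP rule: contrast of dE/dw
    w2 += lr / beta * (hb * ob - h0 * o0)
print(relax(x, w1, w2)[1])                 # output o approaches the target 0.5
```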
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.