Parallel nonlinear neuromorphic computing with temporal encoding
- URL: http://arxiv.org/abs/2506.17261v1
- Date: Mon, 09 Jun 2025 14:55:05 GMT
- Title: Parallel nonlinear neuromorphic computing with temporal encoding
- Authors: Guangfeng You, Chao Qian, Hongsheng Chen
- Abstract summary: We introduce a parallel nonlinear neuromorphic processor that enables arbitrary superposition of information states in multi-dimensional channels. Our work opens up a flexible avenue for a variety of temporally-modulated neuromorphic processors tailored for complex scenarios.
- Score: 21.331015748341137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of deep learning applications has intensified the demand for electronic hardware with low energy consumption and fast computing speed. Neuromorphic photonics has emerged as a viable alternative that directly processes high-throughput information in the physical domain. However, the simultaneous attainment of high linear and nonlinear expressivity poses a considerable challenge, owing to the limited power efficiency and impaired manipulability of conventional nonlinear materials and optoelectronic conversion. Here we introduce a parallel nonlinear neuromorphic processor that enables arbitrary superposition of information states in multi-dimensional channels, solely by leveraging the temporal encoding of spatiotemporal metasurfaces to map the input data and trainable weights. The proposed temporal-encoding nonlinearity is theoretically proven to customize the nonlinearity flexibly while preserving quasi-static linear transformation capability within each time partition. We experimentally demonstrate the concept with distributed spatiotemporal metasurfaces, showcasing robust performance in multi-label recognition and multi-task parallelism with asynchronous modulation. Remarkably, our nonlinear processor exhibits dynamic memory capability in autonomous planning tasks and real-time responsiveness to the canonical maze-solving problem. Our work opens up a flexible avenue for a variety of temporally modulated neuromorphic processors tailored to complex scenarios.
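The key mechanism in the abstract, quasi-static linear transforms within each time partition that compose into an overall nonlinear map through temporal encoding, can be illustrated with a minimal numerical sketch. Everything below (the dwell-time function, the per-partition weight matrices, the dimensions) is an assumed toy illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: within each time partition k the system applies a quasi-static
# linear transform W[k], but the fraction of the modulation period spent in
# partition k depends on the input (temporal encoding). Summing the partition
# outputs weighted by these input-dependent dwell times yields an overall
# nonlinear input-output map, even though each W[k] acts linearly.

n_in, n_out, n_partitions = 4, 3, 5
W = rng.normal(size=(n_partitions, n_out, n_in))  # trainable per-partition weights

def dwell_times(x):
    """Input-dependent time partitioning (an illustrative choice)."""
    t = np.exp(x.sum() * np.linspace(-1.0, 1.0, n_partitions))
    return t / t.sum()  # fractions of the modulation period, summing to 1

def temporal_encoding_forward(x):
    tau = dwell_times(x)
    # Each partition contributes its linear response for a fraction tau[k] of the period.
    return sum(tau[k] * (W[k] @ x) for k in range(n_partitions))

x = rng.normal(size=n_in)
y1 = temporal_encoding_forward(x)
y2 = temporal_encoding_forward(2.0 * x)
# Nonlinearity check: doubling the input does not simply double the output,
# because the dwell times themselves change with the input.
print(np.allclose(y2, 2.0 * y1))  # False
```

Within a single fixed partitioning the map stays linear, which is the "quasi-static linear transformation capability" the abstract refers to.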
Related papers
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
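The contrast between Markovian first-order dynamics and fractional-order memory can be sketched numerically. The following is an assumed illustration using a standard Grünwald-Letnikov discretization of a fractional leaky membrane, not the fspikeDE implementation:

```python
import numpy as np

# Fractional relaxation d^alpha v / dt^alpha = -v + I, discretized with
# Grünwald-Letnikov weights. Unlike a first-order (Markovian) neuron, each
# update depends on the entire voltage history through slowly decaying
# binomial weights, which is what produces long-term dependencies.

def gl_weights(alpha, n):
    # w_j = (-1)^j * C(alpha, j), via the recurrence w_j = w_{j-1} * (1 - (alpha + 1) / j)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def simulate(alpha, steps=200, dt=0.05, current=1.0):
    w = gl_weights(alpha, steps + 1)
    v = np.zeros(steps + 1)
    for k in range(1, steps + 1):
        history = np.dot(w[1:k + 1], v[k - 1::-1])  # long-memory term over all past states
        v[k] = dt**alpha * (-v[k - 1] + current) - history
    return v

v_frac = simulate(alpha=0.6)   # fractional order: heavy-tailed memory kernel
v_int = simulate(alpha=1.0)    # alpha = 1 recovers the ordinary first-order (Euler) update
```

For alpha = 1 the weights reduce to (1, -1, 0, 0, ...), so the scheme collapses to the memoryless Euler step of a standard leaky integrator.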
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
- Uncovering the Functional Roles of Nonlinearity in Memory [2.315156126698557]
We go beyond performance comparisons to systematically dissect the functional role of nonlinearity in recurrent networks. We use Almost Linear Recurrent Neural Networks (AL-RNNs), which allow fine-grained control over nonlinearity. We find that minimal nonlinearity is not only sufficient but often optimal, yielding models that are simpler, more robust, and more interpretable than their fully nonlinear or linear counterparts.
arXiv Detail & Related papers (2025-06-09T16:32:19Z)
- Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
- Learnable Infinite Taylor Gaussian for Dynamic View Rendering [55.382017409903305]
This paper introduces a novel approach based on a learnable Taylor Formula to model the temporal evolution of Gaussians. The proposed method achieves state-of-the-art performance in this domain.
arXiv Detail & Related papers (2024-12-05T16:03:37Z)
- Nonlinear Neural Dynamics and Classification Accuracy in Reservoir Computing [3.196204482566275]
We study the accuracy of a reservoir computer in artificial classification tasks of varying complexity.
We find that, even for activation functions with extremely reduced nonlinearity, weak recurrent interactions and small input signals, the reservoir is able to compute useful representations.
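The finding that a nearly linear reservoir still computes useful representations can be sketched with a minimal echo state network. All parameters below (reservoir size, spectral radius, the delay-recall task) are assumed for illustration and are not taken from the paper:

```python
import numpy as np

# A reservoir with a weakly nonlinear activation, tanh(a*x)/a with small a
# (which approaches the identity as a -> 0), still builds a useful
# representation: here, enough memory for a linear readout to recover a
# delayed copy of the input stream.

rng = np.random.default_rng(2)
N, T, delay = 100, 2000, 3
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9 (echo state property)
w_in = rng.normal(size=N)

a = 0.1                                  # small a: activation is nearly linear
f = lambda z: np.tanh(a * z) / a

u = rng.uniform(-1, 1, size=T)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = f(W @ x + w_in * u[t])
    X[t] = x

y = np.roll(u, delay)                    # target: input delayed by 3 steps
X_tr, y_tr = X[100:1500], y[100:1500]    # discard washout, then train/test split
w_out = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
mse = np.mean((X[1500:] @ w_out - y[1500:]) ** 2)
print(mse)  # small: the near-linear reservoir retains the needed memory
```

The useful representation here comes from the recurrent mixing of past inputs, not from strong nonlinearity, which matches the qualitative claim above.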
arXiv Detail & Related papers (2024-11-15T08:52:12Z)
- Nonlinear Autoregression with Convergent Dynamics on Novel Computational Platforms [0.0]
Reservoir computing exploits nonlinear dynamical systems for temporal information processing.
This paper introduces reservoir computers with output feedback as stationary and ergodic infinite-order nonlinear autoregressive models.
arXiv Detail & Related papers (2021-08-18T07:01:16Z)
- Wave-based extreme deep learning based on non-linear time-Floquet entanglement [0.7614628596146599]
Complex neuromorphic computing tasks, which require strong non-linearities, have so far remained out of reach of wave-based solutions.
Here, we demonstrate the relevance of Time-Floquet physics to induce a strong non-linear entanglement between signal inputs at different frequencies.
We prove the efficiency of the method for extreme learning machines and reservoir computing to solve a range of challenging learning tasks.
arXiv Detail & Related papers (2021-07-19T00:18:09Z)
- Designing Kerr Interactions for Quantum Information Processing via Counterrotating Terms of Asymmetric Josephson-Junction Loops [68.8204255655161]
Static cavity nonlinearities typically limit the performance of bosonic quantum error-correcting codes.
Treating the nonlinearity as a perturbation, we derive effective Hamiltonians using the Schrieffer-Wolff transformation.
Results show that a cubic interaction makes it possible to increase the effective rates of both linear and nonlinear operations.
arXiv Detail & Related papers (2021-07-14T15:11:05Z)
- Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z)
- Reservoir Computing with Magnetic Thin Films [35.32223849309764]
New unconventional computing hardware has emerged with the potential to exploit natural phenomena and gain efficiency.
Physical reservoir computing demonstrates this with a variety of unconventional systems.
We perform an initial exploration of three magnetic materials in thin-film geometries via microscale simulation.
arXiv Detail & Related papers (2021-01-29T17:37:17Z)
- Accelerating Simulation of Stiff Nonlinear Systems using Continuous-Time Echo State Networks [1.1545092788508224]
We present a data-driven method for generating surrogates of nonlinear ordinary differential equations with dynamics at widely separated timescales.
We empirically demonstrate near-constant time performance using our CTESNs on a physically motivated scalable model of a heating system.
arXiv Detail & Related papers (2020-10-07T17:40:06Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
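The idea of networks built from linear first-order dynamical systems with input-modulated time constants can be sketched as follows. The cell below follows the general liquid time-constant form dx/dt = -(1/tau + f(x, u)) x + f(x, u) A, but all parameters and the gate are an assumed illustration, not the official implementation:

```python
import numpy as np

# Each unit is a linear first-order leaky integrator whose effective time
# constant 1 / (1/tau + f) is modulated by a learned nonlinear gate f(x, u).
# With f bounded in (0, 1) and A fixed, the state provably stays bounded,
# which is the stable-and-bounded behavior claimed above.

rng = np.random.default_rng(3)
n = 8
tau, A = 1.0, 1.0
Wx = rng.normal(scale=0.5, size=(n, n))
Wu = rng.normal(scale=0.5, size=n)
b = rng.normal(scale=0.1, size=n)

def gate(x, u):
    """Sigmoid gate in (0, 1), a function of state and input."""
    return 1.0 / (1.0 + np.exp(-(Wx @ x + Wu * u + b)))

def ltc_step(x, u, dt=0.05):
    f = gate(x, u)
    # Explicit Euler step of dx/dt = -(1/tau + f) * x + f * A
    return x + dt * (-(1.0 / tau + f) * x + f * A)

x = np.zeros(n)
for t in range(400):
    x = ltc_step(x, np.sin(0.1 * t))

print(np.max(np.abs(x)))  # stays below A = 1: bounded dynamics
```

Note that the system within any fixed gate value is a plain linear leaky integrator; the expressivity comes from letting the input and state reshape the time constants on the fly.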
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.