A Neuromorphic Paradigm for Online Unsupervised Clustering
- URL: http://arxiv.org/abs/2005.04170v1
- Date: Sat, 25 Apr 2020 14:02:34 GMT
- Title: A Neuromorphic Paradigm for Online Unsupervised Clustering
- Authors: James E. Smith
- Abstract summary: A computational paradigm based on neuroscientific concepts is proposed and shown to be capable of online unsupervised clustering.
All operations, both training and inference, are localized and efficient.
The prototype column is simulated with a semi-synthetic benchmark and is shown to have performance characteristics on par with classic k-means.
- Score: 0.6091702876917281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A computational paradigm based on neuroscientific concepts is proposed and
shown to be capable of online unsupervised clustering. Because it is an online
method, it is readily amenable to streaming real-time applications and is
capable of dynamically adjusting to macro-level input changes. All operations,
both training and inference, are localized and efficient. The paradigm is
implemented as a cognitive column that incorporates five key elements: 1)
temporal coding, 2) an excitatory neuron model for inference, 3)
winner-take-all inhibition, 4) a column architecture that combines excitation
and inhibition, and 5) localized training via spike-timing-dependent plasticity
(STDP). These elements are described and discussed, and a prototype column is
given. The prototype column is simulated with a semi-synthetic benchmark and is
shown to have performance characteristics on par with classic k-means.
Simulations reveal the inner operation and capabilities of the column with
emphasis on excitatory neuron response functions and STDP implementations.
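To make the column's data flow concrete, the sketch below is a minimal, illustrative Python mock-up of the elements listed in the abstract: latency-based temporal coding, a simple excitatory response, winner-take-all inhibition, and a localized STDP-like update applied only to the winning unit. It is not the paper's prototype column; the class name WTAColumn, the specific encoding, response function, and learning-rate choices are assumptions made purely for illustration.
```python
import numpy as np

# Minimal, illustrative mock-up of an online winner-take-all (WTA) clustering
# column with an STDP-like local update. This is NOT the paper's prototype
# column; the neuron model, temporal code, and update rule are assumptions.

class WTAColumn:
    def __init__(self, n_inputs, n_units, lr=0.05, t_max=8.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, size=(n_units, n_inputs))  # synaptic weights
        self.lr = lr        # learning rate for the local STDP-like update
        self.t_max = t_max  # latest possible spike time in the temporal code

    def encode(self, x):
        """Temporal (latency) coding: larger input values spike earlier."""
        x = np.clip(x, 0.0, 1.0)
        return self.t_max * (1.0 - x)  # spike times in [0, t_max]

    def infer(self, spike_times):
        """Excitatory response plus WTA inhibition: the unit whose strong
        synapses receive the earliest spikes wins; all others are silenced."""
        drive = self.w @ (self.t_max - spike_times)  # crude excitation proxy
        return int(np.argmax(drive))                 # index of the winning unit

    def train_step(self, x):
        """One online step: encode, infer, then apply a localized STDP-like
        update to the winner only (potentiate synapses with early presynaptic
        spikes, depress those with late ones)."""
        t = self.encode(x)
        k = self.infer(t)
        pre = (self.t_max - t) / self.t_max  # 1 = earliest spike, 0 = latest
        self.w[k] += self.lr * (pre - self.w[k])
        return k  # cluster assignment for this sample

# Toy usage: two Gaussian blobs streamed one sample at a time (online setting).
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (200, 4)),
                  rng.normal(0.8, 0.05, (200, 4))])
rng.shuffle(data)

col = WTAColumn(n_inputs=4, n_units=2)
labels = [col.train_step(x) for x in data]  # training and inference in one pass
```
Because every update touches only the winner's weights, training and inference can run in the same streaming pass, which mirrors the localized, online character claimed for the column.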
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation.
Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems (a generic discretization sketch is given after this list).
Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - KPFlow: An Operator Perspective on Dynamic Collapse Under Gradient Descent Training of Recurrent Networks [9.512147747894026]
We show how a gradient flow can be decomposed into a product that involves two operators.
We show how their interplay gives rise to low-dimensional latent dynamics under GD.
For multi-task training, we show that the operators can be used to measure how objectives relevant to individual sub-tasks align.
arXiv Detail & Related papers (2025-07-08T20:33:15Z) - Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba.
This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
arXiv Detail & Related papers (2025-06-12T17:32:02Z) - Generalizable, real-time neural decoding with hybrid state-space models [12.37704585793711]
We present POSSM, a novel hybrid architecture that combines individual spike tokenization via a cross-attention module with a recurrent state-space model (SSM) backbone.
We evaluate POSSM's decoding performance and inference speed on intracortical decoding of monkey motor tasks, and show that it extends to clinical applications.
In all of these tasks, we find that POSSM achieves decoding accuracy comparable to state-of-the-art Transformers, at a fraction of the inference cost.
arXiv Detail & Related papers (2025-06-05T17:57:08Z) - Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - BHViT: Binarized Hybrid Vision Transformer [53.38894971164072]
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNNs).
We propose BHViT, a binarization-friendly hybrid ViT architecture and its full binarization model with the guidance of three important observations.
Our proposed algorithm achieves SOTA performance among binary ViT methods.
arXiv Detail & Related papers (2025-03-04T08:35:01Z) - Sparse Brains are Also Adaptive Brains: Cognitive-Load-Aware Dynamic Activation for LLMs [20.66821663739342]
CLADA is a framework that synergizes statistical sparsity with semantic adaptability.
It achieves a 20% average speedup with a 2% accuracy drop, outperforming Griffin (5%+ degradation) and TT (negligible speedup).
arXiv Detail & Related papers (2025-02-26T12:11:16Z) - Integrating programmable plasticity in experiment descriptions for analog neuromorphic hardware [0.9217021281095907]
The BrainScaleS-2 neuromorphic architecture has been designed to support "hybrid" plasticity.
Observables that are expensive in numerical simulation, such as per-synapse correlation measurements, are implemented directly in the synapse circuits.
We introduce an integrated framework for describing spiking neural network experiments and plasticity rules in a unified high-level experiment description language.
arXiv Detail & Related papers (2024-12-04T08:46:06Z) - EulerFormer: Sequential User Behavior Modeling with Complex Vector Attention [88.45459681677369]
We propose a novel transformer variant with complex vector attention, named EulerFormer.
It provides a unified theoretical framework to formulate both semantic difference and positional difference.
It is more robust to semantic variations and possesses superior theoretical properties in principle.
arXiv Detail & Related papers (2024-03-26T14:18:43Z) - Enhancing Neural Training via a Correlated Dynamics Model [2.9302545029880394]
Correlation Mode Decomposition (CMD) is an algorithm that clusters the parameter space into groups that display synchronized behavior across epochs.
We introduce an efficient CMD variant, designed to run concurrently with training.
Our experiments indicate that CMD surpasses the state-of-the-art method for compactly modeled dynamics on image classification.
arXiv Detail & Related papers (2023-12-20T18:22:49Z) - Sparse Modular Activation for Efficient Sequence Modeling [94.11125833685583]
Recent models combining Linear State Space Models with self-attention mechanisms have demonstrated impressive results across a range of sequence modeling tasks.
Current approaches apply attention modules statically and uniformly to all elements in the input sequences, leading to sub-optimal quality-efficiency trade-offs.
We introduce Sparse Modular Activation (SMA), a general mechanism enabling neural networks to sparsely activate sub-modules for sequence elements in a differentiable manner.
arXiv Detail & Related papers (2023-06-19T23:10:02Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - A Comparison of Temporal Encoders for Neuromorphic Keyword Spotting with
Few Neurons [0.11726720776908518]
Two candidate neurocomputational elements for temporal encoding and feature extraction in SNNs are investigated.
Resource-efficient keyword spotting applications may benefit from the use of these encoders, but further work on methods for learning the time constants and weights is required.
arXiv Detail & Related papers (2023-01-24T12:50:54Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware [105.54048699217668]
We show competitive accuracy with a clear advantage in computational complexity for Event-based Three-factor Local Plasticity (ETLP).
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - Hippocampus-Inspired Cognitive Architecture (HICA) for Operant
Conditioning [1.2955718209635252]
We propose a Hippocampus-Inspired Cognitive Architecture (HICA) as a neural mechanism for operant conditioning.
HICA is composed of two different types of modules.
arXiv Detail & Related papers (2022-12-16T18:00:21Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z) - Theory of gating in recurrent neural networks [5.672132510411465]
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience.
Here, we show that gating offers flexible control of two salient features of the collective dynamics.
The gate controlling timescales leads to a novel, marginally stable state, where the network functions as a flexible integrator.
arXiv Detail & Related papers (2020-07-29T13:20:58Z)
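As referenced in the LangevinFlow entry above, the underdamped Langevin equation couples a position-like latent state z and a velocity v through inertia, damping, a potential U(z), and external forces. The snippet below is a generic Euler-Maruyama discretization of those dynamics, included only to make the equation named in that summary concrete; it is not the LangevinFlow model, and the potential, force, and coefficient choices are placeholder assumptions.
```python
import numpy as np

# Generic Euler-Maruyama discretization of underdamped Langevin dynamics:
#   dz = v dt
#   dv = (-gamma * v - grad_U(z) + f(t)) dt + sigma * dW
# Placeholder potential and force; NOT the LangevinFlow implementation.

def grad_U(z):
    return z                      # gradient of a quadratic potential U(z) = ||z||^2 / 2

def external_force(t):
    return 0.5 * np.sin(t)        # arbitrary time-varying (non-autonomous) drive

def simulate(z0, v0, dt=1e-2, steps=1000, gamma=1.0, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    z, v = np.array(z0, float), np.array(v0, float)
    traj = []
    for i in range(steps):
        t = i * dt
        noise = sigma * np.sqrt(dt) * rng.standard_normal(z.shape)
        v = v + dt * (-gamma * v - grad_U(z) + external_force(t)) + noise  # inertia + damping
        z = z + dt * v                                                     # position update
        traj.append(z.copy())
    return np.stack(traj)

trajectory = simulate(z0=np.zeros(3), v0=np.zeros(3))
```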