Deep Metric Learning with Locality Sensitive Angular Loss for
Self-Correcting Source Separation of Neural Spiking Signals
- URL: http://arxiv.org/abs/2110.07046v1
- Date: Wed, 13 Oct 2021 21:51:56 GMT
- Title: Deep Metric Learning with Locality Sensitive Angular Loss for
Self-Correcting Source Separation of Neural Spiking Signals
- Authors: Alexander Kenneth Clarke and Dario Farina
- Abstract summary: We propose a methodology based on deep metric learning to address the need for automated post-hoc cleaning and robust separation filters.
We validate this method with an artificially corrupted label set based on source-separated high-density surface electromyography recordings.
This approach enables a neural network to learn to accurately decode neurophysiological time series using any imperfect method of labelling the signal.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Neurophysiological time series, such as electromyographic signals and
intracortical recordings, are typically composed of many individual spiking
sources, the recovery of which can give fundamental insights into the
biological system of interest or provide neural information for man-machine
interfaces. For this reason, source separation algorithms have become an
increasingly important tool in neuroscience and neuroengineering. However, in
noisy or highly multivariate recordings these decomposition techniques often
make a large number of errors, which degrades human-machine interfacing
applications and often requires costly post-hoc manual cleaning of the output
label set of spike timestamps. To address both the need for automated post-hoc
cleaning and robust separation filters we propose a methodology based on deep
metric learning, using a novel loss function which maintains intra-class
variance, creating a rich embedding space suitable for both label cleaning and
the discovery of new activations. We then validate this method with an
artificially corrupted label set based on source-separated high-density surface
electromyography recordings, recovering the original timestamps even in extreme
degrees of feature and class-dependent label noise. This approach enables a
neural network to learn to accurately decode neurophysiological time series
using any imperfect method of labelling the signal.
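The abstract describes an angular metric-learning loss that pulls same-source spike embeddings together while deliberately preserving intra-class variance. As a rough illustration of that general idea (not the paper's actual locality-sensitive angular loss), the sketch below down-weights the attraction between same-class embeddings that are already far apart, so tight local structure forms without collapsing each class to a single point. The function name, the locality weighting, and all parameters are illustrative assumptions:

```python
import numpy as np

def locality_weighted_angular_loss(emb, labels, margin=0.5, tau=1.0):
    """Illustrative angular metric loss.

    Same-class pairs are pulled together in cosine angle, but the
    attraction is down-weighted for pairs that are already far apart
    (the 'locality' term), so intra-class variance is not collapsed.
    Different-class pairs are pushed below a cosine margin.
    NOTE: a sketch of the general idea, not the loss from the paper.
    """
    # L2-normalise so dot products are cosine similarities
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    cos = z @ z.T                      # pairwise cosine similarity
    n = len(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(n, dtype=bool)

    # locality weight: nearby positives attract strongly, distant ones weakly
    sq_dist = 2.0 - 2.0 * cos          # squared Euclidean distance on the sphere
    w = np.exp(-sq_dist / tau)

    pos = same & off_diag
    neg = ~same
    pull = np.sum(w[pos] * (1.0 - cos[pos])) / max(pos.sum(), 1)
    push = np.sum(np.maximum(cos[neg] - margin, 0.0)) / max(neg.sum(), 1)
    return pull + push
```

In a pipeline like the one the abstract outlines, a loss of this kind would be applied to the embeddings of a network trained on candidate spike windows, and the resulting embedding space could then be used to flag mislabelled timestamps as outliers and to discover previously unlabelled activations.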
Related papers
- Heterogeneous quantization regularizes spiking neural network activity [0.0]
We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations.
We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics.
arXiv Detail & Related papers (2024-09-27T02:25:44Z)
- Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z)
- Neuromorphic Auditory Perception by Neural Spiketrum [27.871072042280712]
We introduce a neural spike coding model called spiketrum to transform time-varying analog signals into efficient spike patterns.
The model provides a sparse and efficient coding scheme with precisely controllable spike rate that facilitates training of spiking neural networks in various auditory perception tasks.
arXiv Detail & Related papers (2023-09-11T13:06:19Z)
- A Comparison of Temporal Encoders for Neuromorphic Keyword Spotting with Few Neurons [0.11726720776908518]
Two candidate neurocomputational elements for temporal encoding and feature extraction in SNNs are investigated.
Resource-efficient keyword spotting applications may benefit from the use of these encoders, but further work on methods for learning the time constants and weights is required.
arXiv Detail & Related papers (2023-01-24T12:50:54Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals in each sensor separately, to take into account their correlation and hidden relationships with each other.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It was proposed to construct the graph during training of the graph neural network, which makes it possible to train models on data where the dependencies between the sensors are not known in advance.
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- Hybrid Artifact Detection System for Minute Resolution Blood Pressure Signals from ICU [1.8374319565577155]
This paper investigates the utilization of a hybrid artifact detection system that combines a Variational Autoencoder with a statistical detection component for the labeling of artifactual samples.
Our preliminary results indicate that the system is capable of consistently achieving sensitivity and specificity levels that surpass 90%.
arXiv Detail & Related papers (2022-03-11T14:36:52Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historical difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- TELESTO: A Graph Neural Network Model for Anomaly Classification in Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at the recognition of re-occurring anomaly types to enable remediation automation.
We propose a method that is invariant to dimensionality changes of the given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
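The last entry above describes replay samples that are generated on the fly from the implicit memory of the model being trained. A toy numpy sketch of that general idea (illustrative only, not the paper's algorithm; the function names and parameters are assumptions): train a small softmax classifier, then "recall" an input for a chosen class by gradient ascent on log p(class | x) starting from noise. Such recalled samples can be mixed into later training batches to counteract forgetting.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def train_linear_classifier(X, y, n_classes, lr=0.1, steps=200):
    """Fit a small softmax classifier with plain gradient descent."""
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.01, size=(X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)      # cross-entropy gradient
    return W

def recall_sample(W, target_class, lr=0.5, steps=100):
    """'Recall' an input from the model's implicit memory: start from
    noise and ascend the gradient of log p(target_class | x).
    A toy version of on-the-fly internal replay, not the paper's method."""
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 0.1, size=W.shape[0])
    for _ in range(steps):
        p = softmax(x @ W)
        grad = W[:, target_class] - W @ p     # d/dx log softmax_target(xW)
        x += lr * grad
    return x
```

The recalled inputs land in regions the classifier associates strongly with the target class, so interleaving them with new data approximates rehearsal without storing any past samples.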
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.