Limitations in odour recognition and generalisation in a neuromorphic
olfactory circuit
- URL: http://arxiv.org/abs/2309.11555v1
- Date: Wed, 20 Sep 2023 18:00:05 GMT
- Title: Limitations in odour recognition and generalisation in a neuromorphic
olfactory circuit
- Authors: Nik Dennler, André van Schaik, Michael Schmuker
- Abstract summary: We present an odour-learning algorithm that runs on a neuromorphic architecture and is inspired by circuits described in the mammalian olfactory bulb.
They assess the algorithm's performance in "rapid online learning and identification" of gaseous odorants and odorless gases.
- Score: 0.07589017023705934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic computing is one of the few current approaches that have the
potential to significantly reduce power consumption in Machine Learning and
Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm
that runs on a neuromorphic architecture and is inspired by circuits described
in the mammalian olfactory bulb. They assess the algorithm's performance in
"rapid online learning and identification" of gaseous odorants and odorless
gases (short "gases") using a set of gas sensor recordings of different odour
presentations and corrupting them by impulse noise. We replicated parts of the
study and discovered limitations that affect some of the conclusions drawn.
First, the dataset used suffers from sensor drift and a non-randomised
measurement protocol, rendering it of limited use for odour identification
benchmarks. Second, we found that the model is restricted in its ability to
generalise over repeated presentations of the same gas. We demonstrate that the
task the study refers to can be solved with a simple hash table approach,
matching or exceeding the reported results in accuracy and runtime. Therefore,
a validation of the model that goes beyond restoring a learned data sample
remains to be shown, in particular its suitability to odour identification
tasks.
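The abstract states that the benchmark task can be solved with a simple hash table. A minimal sketch of how such a baseline could work, assuming per-channel lookup tables with majority voting (the variable names and the toy data are illustrative, not taken from the paper's implementation): since the test samples are training samples corrupted by impulse noise on a subset of sensor channels, the uncorrupted channels can still identify the original sample by exact lookup.

```python
import random
from collections import Counter

def train_hash_tables(samples):
    """Build one hash table per sensor channel, mapping each observed
    sensor value to the label of the training sample it came from."""
    n_sensors = len(samples[0][0])
    tables = [dict() for _ in range(n_sensors)]
    for reading, label in samples:
        for i, value in enumerate(reading):
            tables[i][value] = label
    return tables

def classify(tables, reading):
    """Look each sensor value up in its channel table and take a
    majority vote; impulse noise corrupts only some channels, so the
    untouched channels still agree on the original label."""
    votes = [tables[i][v] for i, v in enumerate(reading) if v in tables[i]]
    return Counter(votes).most_common(1)[0][0] if votes else None

# Toy demo: 3 "odours", 8 sensor channels; a test sample is a training
# sample with impulse noise injected on two channels.
random.seed(0)
train = [([random.randint(0, 999) for _ in range(8)], lab) for lab in "ABC"]
tables = train_hash_tables(train)
reading, label = train[1]
noisy = list(reading)
noisy[2] = noisy[5] = -1  # impulse noise: values absent from the tables
assert classify(tables, noisy) == label
```

This illustrates the paper's point: when test data are memorised training samples plus noise, exact lookup with voting suffices, so passing this benchmark does not demonstrate generalisation.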
Related papers
- Neuromorphic circuit for temporal odor encoding in turbulent environments [0.48748194765816943]
We investigate Metal-Oxide (MOx) gas sensor recordings of constant airflow-embedded artificial odor plumes.
We design a neuromorphic electronic nose front-end circuit for extracting and encoding this feature into analog spikes for gas detection and concentration estimation.
The resulting neuromorphic nose could enable data-efficient, real-time robotic plume navigation systems.
arXiv Detail & Related papers (2024-12-28T11:12:18Z) - Neuromorphic Auditory Perception by Neural Spiketrum [27.871072042280712]
We introduce a neural spike coding model called spiketrum to transform time-varying analog signals into efficient spike patterns.
The model provides a sparse and efficient coding scheme with precisely controllable spike rate that facilitates training of spiking neural networks in various auditory perception tasks.
arXiv Detail & Related papers (2023-09-11T13:06:19Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Impact of spiking neurons leakages and network recurrences on
event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are getting increased interest for low-latency and low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z) - Graph Neural Networks with Trainable Adjacency Matrices for Fault
Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals in each sensor separately and to take into account their correlations and hidden relationships with each other.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It was proposed to construct the graph during the training of the graph neural network. This makes it possible to train models on data where the dependencies between the sensors are not known in advance.
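The idea of treating the adjacency matrix as a trainable parameter can be sketched as follows; this is a minimal illustrative forward pass in plain NumPy, not the paper's architecture, and all names and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_features = 5, 4

# Instead of a fixed, known graph, the adjacency matrix is a free
# parameter, initialised densely and updated by gradient descent
# alongside the layer weights, so sensor dependencies are learned.
A = rng.standard_normal((n_sensors, n_sensors)) * 0.1
W = rng.standard_normal((n_features, n_features)) * 0.1

def gnn_layer(X, A, W):
    """One graph-convolution step: aggregate neighbour features via the
    (learned) adjacency, then apply a shared linear map and ReLU."""
    adj = 1.0 / (1.0 + np.exp(-A))  # squash entries into (0, 1)
    return np.maximum(adj @ X @ W, 0.0)

X = rng.standard_normal((n_sensors, n_features))  # one reading per sensor
H = gnn_layer(X, A, W)
assert H.shape == (n_sensors, n_features)
```

In a full model, gradients would flow into `A` as well as `W`, which is what lets the graph structure emerge from the training data.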
arXiv Detail & Related papers (2022-10-20T11:03:21Z) - Neuro-BERT: Rethinking Masked Autoencoding for Self-supervised Neurological Pretraining [24.641328814546842]
We present Neuro-BERT, a self-supervised pre-training framework of neurological signals based on masked autoencoding in the Fourier domain.
We propose a novel pre-training task dubbed Fourier Inversion Prediction (FIP), which randomly masks out a portion of the input signal and then predicts the missing information.
By evaluating our method on several benchmark datasets, we show that Neuro-BERT improves downstream neurological-related tasks by a large margin.
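The masking step behind Fourier Inversion Prediction can be illustrated with a short sketch; this is a hedged toy example of the general mask-and-reconstruct setup described in the summary, with illustrative sizes and a placeholder loss rather than the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy "neurological" signal: a sinusoid plus noise.
signal = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.standard_normal(256)

# Randomly mask out a contiguous portion of the input signal (zeroed here).
mask = np.ones(256, dtype=bool)
start = int(rng.integers(0, 256 - 64))
mask[start:start + 64] = False
masked = np.where(mask, signal, 0.0)

# Pre-training target: predict the missing information; in the Fourier
# domain, the full signal's spectrum serves as the reconstruction target.
target_spectrum = np.fft.rfft(signal)
loss = np.mean(np.abs(np.fft.rfft(masked) - target_spectrum) ** 2)
```

A network pre-trained to minimise such a reconstruction loss would then be fine-tuned on the downstream tasks.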
arXiv Detail & Related papers (2022-04-20T16:48:18Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG
Signals [62.997667081978825]
We develop a novel statistical point process model called driven point processes (DriPP).
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
arXiv Detail & Related papers (2021-12-08T13:07:21Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.