Predicting the Transition from Short-term to Long-term Memory based on
Deep Neural Network
- URL: http://arxiv.org/abs/2012.03510v1
- Date: Mon, 7 Dec 2020 08:00:35 GMT
- Title: Predicting the Transition from Short-term to Long-term Memory based on
Deep Neural Network
- Authors: Gi-Hwan Shin, Young-Seok Kweon, Minji Lee
- Abstract summary: We aim to predict long-term memory using deep neural networks.
The spectral power of EEG signals of remembered items in short-term memory was calculated.
We show that long-term memory can be predicted with measured EEG signals during short-term memory.
- Score: 1.4502611532302039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memory is an essential element in people's daily life based on experience. So
far, many studies have analyzed electroencephalogram (EEG) signals at encoding
to predict later remembered items, but few studies have predicted long-term
memory only with EEG signals of successful short-term memory. Therefore, we aim
to predict long-term memory using deep neural networks. Specifically, the
spectral power of the EEG signals of remembered items in short-term memory was
calculated and inputted to the multilayer perceptron (MLP) and convolutional
neural network (CNN) classifiers to predict long-term memory. Seventeen
participants performed a visuo-spatial memory task consisting of picture and
location memory in the order of encoding, immediate retrieval (short-term
memory), and delayed retrieval (long-term memory). We applied
leave-one-subject-out cross-validation to evaluate the predictive models. As a
result, picture memory showed the highest kappa-value of 0.19 with the CNN, and
location memory showed the highest kappa-value of 0.32 with the MLP. These results
showed that long-term memory can be predicted with measured EEG signals during
short-term memory, which improves learning efficiency and helps people with
memory and cognitive impairments.
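The pipeline described in the abstract (band-limited spectral power features from EEG epochs, a classifier, and leave-one-subject-out cross-validation) can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the frequency bands, array shapes, and the nearest-centroid stand-in for the MLP/CNN classifiers are all assumptions.

```python
# Hedged sketch of the paper's pipeline: band-power features from EEG
# epochs, then leave-one-subject-out (LOSO) evaluation. The bands and
# the nearest-centroid classifier are illustrative assumptions only.
import numpy as np

def band_power(epoch, fs, bands):
    """Mean spectral power per frequency band for one EEG epoch.

    epoch: (n_channels, n_samples) array; fs: sampling rate in Hz.
    Returns a flat feature vector of length n_channels * n_bands.
    """
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.concatenate(feats)

def loso_accuracy(X, y, subjects):
    """Leave-one-subject-out CV: train on all subjects but one, test on
    the held-out subject, and average accuracy across folds."""
    accs = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        # Nearest-centroid stand-in for the paper's MLP/CNN classifiers.
        centroids = np.stack([X[train & (y == c)].mean(axis=0)
                              for c in np.unique(y)])
        pred = np.argmin(
            ((X[test][:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Toy demo: 4 subjects, 20 epochs each, 8 channels, 1 s at 256 Hz.
rng = np.random.default_rng(0)
fs, bands = 256, [(4, 8), (8, 13), (13, 30)]  # theta/alpha/beta (assumed)
subjects = np.repeat(np.arange(4), 20)
y = rng.integers(0, 2, size=subjects.size)  # remembered later vs. not
eeg = rng.standard_normal((subjects.size, 8, fs))
X = np.stack([band_power(ep, fs, bands) for ep in eeg])
print(round(loso_accuracy(X, y, subjects), 2))  # ~chance on random data
```

LOSO evaluation is the key design choice here: because every fold tests on a subject the model never saw, the reported kappa-values reflect cross-subject generalization rather than subject-specific fitting.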
Related papers
- Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory [66.88278207591294]
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data.
PANM integrates an external neural memory that uses novel physical addresses and pointer manipulation techniques to mimic human and computer symbol processing abilities.
arXiv Detail & Related papers (2024-04-18T03:03:46Z)
- Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture [5.9360953869782325]
We present Memoria, a memory system for artificial neural networks.
Results prove the effectiveness of Memoria in the diverse tasks of sorting, language modeling, and classification.
Engram analysis reveals that Memoria exhibits the primacy, recency, and temporal contiguity effects which are characteristics of human memory.
arXiv Detail & Related papers (2023-10-04T09:40:46Z)
- Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF.
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
arXiv Detail & Related papers (2023-07-14T08:51:03Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks [9.585985556876537]
Non-linear activation of Leaky-Integrate-and-Fire (LIF) neurons requires additional memory to store a membrane voltage to capture the temporal dynamics of spikes.
We propose a simple and effective solution, EfficientLIF-Net, which shares the LIF neurons across different layers and channels.
Our EfficientLIF-Net achieves comparable accuracy with the standard SNNs while bringing up to 4.3X forward memory efficiency and 21.9X backward memory efficiency for LIF neurons.
arXiv Detail & Related papers (2023-05-26T22:55:26Z)
- Sequential Memory with Temporal Predictive Coding [6.228559238589584]
We propose a PC-based model for sequential memory, called temporal predictive coding (tPC).
We show that our tPC models can memorize and retrieve sequential inputs accurately with a biologically plausible neural implementation.
arXiv Detail & Related papers (2023-05-19T20:03:31Z)
- Evaluating Long-Term Memory in 3D Mazes [10.224858246626171]
Memory Maze is a 3D domain of randomized mazes designed for evaluating long-term memory in agents.
Unlike existing benchmarks, Memory Maze measures long-term memory separate from confounding agent abilities.
We find that current algorithms benefit from training with truncated backpropagation through time and succeed on small mazes, but fall short of human performance on the large mazes.
arXiv Detail & Related papers (2022-10-24T16:32:28Z)
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still very computationally and energy expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letters reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy-efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network [52.77024349608834]
We show how a piece of information can be maintained as a robust activity pattern for several seconds then completely disappear if no other stimuli come.
This kind of short-term memory can keep operative information for seconds, then completely forget it to avoid overlapping with forthcoming patterns.
We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
arXiv Detail & Related papers (2021-08-31T16:13:15Z)
- Neural Oscillations for Encoding and Decoding Declarative Memory using EEG Signals [1.713291434132985]
This study investigates changes in neural oscillations related to memory processes.
During the encoding phase, there was a significant decrease of power in the low-beta and high-beta bands over the fronto-central area.
During the decoding phase, only significant decreases of alpha power were observed over the fronto-central area.
arXiv Detail & Related papers (2020-02-04T04:53:30Z)
- Encoding-based Memory Modules for Recurrent Neural Networks [79.42778415729475]
We study the memorization subtask from the point of view of the design and training of recurrent neural networks.
We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences.
arXiv Detail & Related papers (2020-01-31T11:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.