Memory Networks: Towards Fully Biologically Plausible Learning
- URL: http://arxiv.org/abs/2409.17282v1
- Date: Wed, 18 Sep 2024 06:01:35 GMT
- Title: Memory Networks: Towards Fully Biologically Plausible Learning
- Authors: Jacobo Ruiz, Manas Gupta
- Abstract summary: Current artificial neural networks rely on techniques like backpropagation and weight sharing, which do not align with the brain's natural information processing methods.
We propose the Memory Network, a model inspired by biological principles that avoids backpropagation and convolutions, and operates in a single pass.
- Score: 2.7013801448234367
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The field of artificial intelligence faces significant challenges in
achieving both biological plausibility and computational efficiency,
particularly in visual learning tasks. Current artificial neural networks, such
as convolutional neural networks, rely on techniques like backpropagation and
weight sharing, which do not align with the brain's natural information
processing methods. To address these issues, we propose the Memory Network, a
model inspired by biological principles that avoids backpropagation and
convolutions, and operates in a single pass. This approach enables rapid and
efficient learning, mimicking the brain's ability to adapt quickly with minimal
exposure to data. Our experiments demonstrate that the Memory Network achieves
efficient and biologically plausible learning, showing strong performance on
simpler datasets like MNIST. However, further refinement is needed for the
model to handle more complex datasets such as CIFAR10, highlighting the need to
develop new algorithms and techniques that closely align with biological
processes while maintaining computational efficiency.
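
The abstract gives only the high-level recipe: no backpropagation, no weight sharing, and a single pass over the data. As a rough illustration of what such a learner can look like (not the authors' actual model; the class-prototype memory, cosine readout, and synthetic data below are stand-in assumptions), here is a minimal single-pass classifier:

```python
# Hypothetical sketch of a single-pass, backpropagation-free classifier.
# The paper does not spell out its architecture here, so a simple
# class-prototype memory stands in for the idea: every training example is
# seen exactly once, and no gradients are ever computed.
import numpy as np

class PrototypeMemory:
    def __init__(self, n_classes: int, n_features: int):
        self.sums = np.zeros((n_classes, n_features))  # running sum per class
        self.counts = np.zeros(n_classes)              # examples seen per class

    def learn(self, x: np.ndarray, y: int) -> None:
        """One-shot, local update: fold the example into its class memory."""
        self.sums[y] += x
        self.counts[y] += 1

    def predict(self, x: np.ndarray) -> int:
        protos = self.sums / np.maximum(self.counts, 1)[:, None]
        # Read out by cosine similarity to each stored class prototype.
        sims = protos @ x / (np.linalg.norm(protos, axis=1) * np.linalg.norm(x) + 1e-9)
        return int(np.argmax(sims))

# Synthetic stand-in for flattened 28x28 MNIST digits.
rng = np.random.default_rng(0)
centers = rng.normal(size=(10, 784))
X = np.vstack([c + 0.3 * rng.normal(size=(50, 784)) for c in centers])
y = np.repeat(np.arange(10), 50)

model = PrototypeMemory(n_classes=10, n_features=784)
for xi, yi in zip(X, y):                           # a single pass over the data
    model.learn(xi, yi)

acc = np.mean([model.predict(xi) == yi for xi, yi in zip(X, y)])
print(f"single-pass accuracy on synthetic data: {acc:.2f}")
```

A memory of this kind learns each class from its first few examples and needs no gradient signal, which is the behavior the abstract attributes to the Memory Network; the authors' actual memory and readout mechanisms may differ substantially.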
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Towards Biologically Plausible Computing: A Comprehensive Comparison [24.299920289520013]
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning.
The biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training.
In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet.
arXiv Detail & Related papers (2024-06-23T09:51:20Z)
- Biological Neurons Compete with Deep Reinforcement Learning in Sample Efficiency in a Simulated Gameworld [2.003941363902692]
We compare the learning efficiency of in vitro biological neural networks to state-of-the-art deep reinforcement learning (RL) algorithms in a simplified simulation of the game 'Pong'.
Even when tested across multiple types of information input, biological neurons showcased faster learning than all deep reinforcement learning agents.
arXiv Detail & Related papers (2024-05-27T08:38:17Z)
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate overfitting and improve test performance with a small amount of data (a toy sketch of saliency-based pruning appears after this list).
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
- Advanced Computing and Related Applications Leveraging Brain-inspired Spiking Neural Networks [0.0]
Spiking neural networks are one of the core approaches in artificial intelligence for realizing brain-like computing.
This paper summarizes the strengths, weaknesses and applicability of five neuronal models and analyzes the characteristics of five network topologies.
arXiv Detail & Related papers (2023-09-08T16:41:08Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Brain-Inspired Learning on Neuromorphic Substrates [5.279475826661643]
This article provides a mathematical framework for the design of practical online learning algorithms for neuromorphic substrates.
Specifically, we show a direct connection between Real-Time Recurrent Learning (RTRL) and biologically plausible learning rules for training Spiking Neural Networks (SNNs).
We motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity (a toy sketch of this approximation appears after this list).
arXiv Detail & Related papers (2020-10-22T17:56:59Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
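
As referenced from the sparse multitask learning entry above, here is a minimal sketch of saliency-based pruning. The first-order score |w * dL/dw|, the keep ratio, and the function name are illustrative assumptions, not that paper's exact criterion:

```python
# Toy sketch of saliency-based connection pruning: score each weight by a
# first-order importance estimate and zero out the least salient fraction.
import numpy as np

def prune_by_saliency(weights: np.ndarray, grads: np.ndarray, keep: float) -> np.ndarray:
    """Zero out the lowest-saliency (1 - keep) fraction of connections."""
    saliency = np.abs(weights * grads)            # first-order importance score
    threshold = np.quantile(saliency, 1.0 - keep)
    return weights * (saliency >= threshold)      # keep only salient weights

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 32))                     # a dense layer's weights
g = rng.normal(size=(64, 32))                     # gradients from one batch
w_sparse = prune_by_saliency(w, g, keep=0.2)      # keep the top 20% of connections
print(f"nonzero fraction: {(w_sparse != 0).mean():.2f}")
```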
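
Similarly, as referenced from the neuromorphic-substrates entry, a toy sketch of sparsified RTRL for a vanilla tanh RNN. Full RTRL propagates the complete influence tensor dh_k/dW_ij through the recurrence; the sketch keeps only each unit's own term (a diagonal special case of the block-diagonal idea), so every weight carries a single local eligibility trace. The names and the delayed-recall toy task are assumptions, not the paper's formulation:

```python
# Toy sketch: diagonal (per-unit) approximation of RTRL for a tanh RNN.
# Full RTRL tracks dh_k/dW_ij for every unit k (a 3-index influence tensor,
# O(n^3) memory). Keeping only k = i, with recurrence through W[i, i] alone,
# leaves one local eligibility trace per weight: O(n^2) memory, fully online.
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 4                          # hidden units, input dimension
W = rng.normal(0, 0.3, (n, n))        # recurrent weights
U = rng.normal(0, 0.3, (n, m))        # input weights (fixed in this sketch)
V = rng.normal(0, 0.3, n)             # linear readout
h = np.zeros(n)
trace = np.zeros((n, n))              # trace[i, j] ~ dh_i/dW_ij
lr, prev_x0 = 0.01, 0.0

for t in range(2000):
    x = rng.normal(size=m)
    h_new = np.tanh(W @ h + U @ x)
    d = 1.0 - h_new**2                # tanh'(pre-activation)
    # Truncated influence update: immediate term + self-recurrent term only.
    trace = d[:, None] * (h[None, :] + np.diag(W)[:, None] * trace)
    err = V @ h_new - prev_x0         # toy task: recall last step's first input
    grad_h = err * V                  # dL/dh, taken before updating the readout
    V -= lr * err * h_new             # exact gradient for the readout
    W -= lr * grad_h[:, None] * trace # dL/dW_ij ~ (dL/dh_i) * trace[i, j]
    h, prev_x0 = h_new, x[0]

print(f"final squared error: {err**2:.4f}")
```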
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.