Advancing the Biological Plausibility and Efficacy of Hebbian Convolutional Neural Networks
- URL: http://arxiv.org/abs/2501.17266v2
- Date: Fri, 28 Mar 2025 16:11:52 GMT
- Title: Advancing the Biological Plausibility and Efficacy of Hebbian Convolutional Neural Networks
- Authors: Julian Jimenez Nimmo, Esther Mondragon
- Abstract summary: The research presented in this paper advances the integration of Hebbian learning into Convolutional Neural Networks (CNNs) for image processing. Hebbian learning operates on local unsupervised neural information to form feature representations. Results showed clear indications of sparse hierarchical learning through increasingly complex and abstract receptive fields.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The research presented in this paper advances the integration of Hebbian learning into Convolutional Neural Networks (CNNs) for image processing, systematically exploring different architectures to build an optimal configuration while adhering to biological plausibility. Hebbian learning operates on local, unsupervised neural information to form feature representations, providing an alternative to the popular but arguably biologically implausible and computationally intensive backpropagation algorithm. The proposed optimal architecture significantly extends recent work on integrating Hebbian learning with competition mechanisms and CNNs, expanding representational capability by combining hard Winner-Takes-All (WTA) competition, Gaussian lateral inhibition, and the Bienenstock-Cooper-Munro (BCM) learning rule in a single model. Mean classification accuracy over the last half of test epochs on CIFAR-10 showed that the resulting optimal model matched its end-to-end backpropagation variant at 75.2% each, surpassing the state-of-the-art hard-WTA performance in CNNs of the same network depth (64.6%) by 10.6 percentage points. It also achieved competitive performance on MNIST (98%) and STL-10 (69.5%). Moreover, results showed clear indications of sparse hierarchical learning through increasingly complex and abstract receptive fields. In summary, our implementation improves both the performance and the generalisability of the learnt representations and constitutes a crucial step towards more biologically realistic artificial neural networks.
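To ground the three mechanisms the abstract combines, here is a minimal NumPy sketch of a single plasticity step with hard-WTA competition, Gaussian lateral inhibition, and the BCM rule for one layer of linear units. It is an illustrative reconstruction under assumed shapes and hyperparameters, not the authors' implementation.

```python
import numpy as np

def hebbian_wta_bcm_step(W, x, theta, lr=0.01, tau=100.0, sigma=2.0):
    """One local plasticity step for a layer of linear units, combining
    hard-WTA competition, Gaussian lateral inhibition, and BCM learning.

    W:     (n_units, n_inputs) weight matrix
    x:     (n_inputs,) input vector (e.g. a flattened image patch)
    theta: (n_units,) sliding BCM modification thresholds
    """
    y = W @ x                                  # feedforward activations
    winner = np.argmax(y)                      # hard winner-takes-all

    # Gaussian lateral inhibition: the winner suppresses its neighbours
    # in proportion to a Gaussian of their distance from it (index
    # distance here; grid distance in a real feature map).
    dist = np.arange(len(y)) - winner
    suppression = np.exp(-dist**2 / (2 * sigma**2))
    suppression[winner] = 0.0                  # the winner is not inhibited
    y_eff = y * (1.0 - suppression)

    # BCM rule: activity above the sliding threshold potentiates,
    # activity below it depresses; theta tracks E[y^2] per unit.
    W += lr * (y_eff * (y_eff - theta))[:, None] * x[None, :]
    theta += (y_eff**2 - theta) / tau

    # Normalise rows so the Hebbian updates stay bounded.
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
    return W, theta

# Toy usage: 16 units competing over random 64-dimensional patches.
rng = np.random.default_rng(0)
W, theta = rng.normal(size=(16, 64)), np.ones(16)
for _ in range(1000):
    W, theta = hebbian_wta_bcm_step(W, rng.normal(size=64), theta)
```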
Related papers
- Stochastic Engrams for Efficient Continual Learning with Binarized Neural Networks [4.014396794141682]
We propose a novel approach that integrates stochastically activated engrams as a gating mechanism for metaplastic binarized neural networks (mBNNs).
Our findings demonstrate (A) an improved stability vs. plasticity trade-off, (B) reduced memory intensiveness, and (C) enhanced performance in binarized architectures.
arXiv Detail & Related papers (2025-03-27T12:21:00Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning. The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Enhancing Fast Feed Forward Networks with Load Balancing and a Master Leaf Node [49.08777822540483]
Fast feedforward networks (FFFs) exploit the observation that different regions of the input space activate distinct subsets of neurons in wide networks.
We propose the incorporation of load balancing and Master Leaf techniques into the FFF architecture to improve performance and simplify the training process (a routing sketch follows this entry).
arXiv Detail & Related papers (2024-05-27T05:06:24Z)
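As referenced in the FFF entry above, the core idea is a binary tree of lightweight routers that sends each input to a single leaf expert, so only a logarithmic fraction of the network is evaluated per sample. The sketch below is a hedged reconstruction of that routing idea; all class and parameter names are illustrative assumptions, not the paper's implementation, and the load-balancing and Master Leaf extensions are only indicated in comments.

```python
import numpy as np

class FastFeedForward:
    """Minimal fast-feedforward sketch: a depth-d binary tree of linear
    routers over 2**d leaf experts, so each input evaluates only d
    routers and a single small leaf MLP."""

    def __init__(self, in_dim, hidden, out_dim, depth, seed=0):
        rng = np.random.default_rng(seed)
        self.depth = depth
        self.routers = rng.normal(0, 0.1, size=(2**depth - 1, in_dim))
        self.W1 = rng.normal(0, 0.1, size=(2**depth, hidden, in_dim))
        self.W2 = rng.normal(0, 0.1, size=(2**depth, out_dim, hidden))

    def forward(self, x):
        node = 0
        for _ in range(self.depth):                # walk the router tree
            go_right = self.routers[node] @ x > 0.0
            node = 2 * node + (2 if go_right else 1)
        leaf = node - (2**self.depth - 1)          # map node id to leaf id
        h = np.maximum(self.W1[leaf] @ x, 0.0)     # routed leaf expert MLP
        return self.W2[leaf] @ h

# A Master Leaf variant would additionally evaluate one shared leaf for
# every input and add its output; load balancing would regularise the
# routers so that all leaves receive comparable traffic.
fff = FastFeedForward(in_dim=32, hidden=16, out_dim=10, depth=3)
y = fff.forward(np.random.default_rng(1).normal(size=32))
```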
- Leveraging Pre-trained CNNs for Efficient Feature Extraction in Rice Leaf Disease Classification [1.4874449172133892]
We rigorously evaluate the impact of integrating feature extraction methodologies within pre-trained convolutional neural networks (CNNs).
Histogram of Oriented Gradients (HOG) yielded substantial improvements across architectures, notably propelling the accuracy of EfficientNet-B7 from 92% to an impressive 97%.
Grad-CAM unveiled that HOG integration resulted in heightened attention to disease-specific features, corroborating the performance enhancements observed (a feature-fusion sketch follows this entry).
arXiv Detail & Related papers (2024-02-26T07:19:48Z)
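The rice-leaf entry above pairs hand-crafted HOG descriptors with features from a pre-trained CNN. Below is a minimal sketch of one such fusion, assuming scikit-image for HOG and a stand-in embedding function in place of the pre-trained backbone; the fusion strategy, image sizes, and classifier are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def cnn_embed(image):
    """Stand-in for a pre-trained CNN's penultimate-layer features
    (e.g. EfficientNet); replaced here by a fixed random projection."""
    rng = np.random.default_rng(42)
    P = rng.normal(size=(256, image.size))
    return P @ image.ravel()

def fused_features(image):
    # HOG captures local gradient structure: 9 orientation bins over
    # 8x8-pixel cells with 2x2-cell block normalisation.
    hog_feat = hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
    return np.concatenate([cnn_embed(image), hog_feat])

# Train a simple classifier on the fused descriptors (toy data).
rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.random((64, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```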
- Augmentation is AUtO-Net: Augmentation-Driven Contrastive Multiview Learning for Medical Image Segmentation [3.1002416427168304]
This thesis focuses on retinal blood vessel segmentation tasks.
It provides an extensive literature review of deep learning-based medical image segmentation approaches.
It proposes a novel, efficient, and simple multiview learning framework.
arXiv Detail & Related papers (2023-11-02T06:31:08Z)
- Efficient Deep Spiking Multi-Layer Perceptrons with Multiplication-Free Inference [13.924924047051782]
Deep convolution architectures for Spiking Neural Networks (SNNs) have significantly enhanced image classification performance and reduced computational burdens.
This research explores a new pathway, drawing inspiration from the progress made in Multi-Layer Perceptrons (MLPs).
We propose an innovative spiking architecture that uses batch normalization to retain MFI compatibility.
We establish an efficient multi-stage spiking network that effectively blends global receptive fields with local feature extraction.
arXiv Detail & Related papers (2023-06-21T16:52:20Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Multiobjective Evolutionary Pruning of Deep Neural Networks with Transfer Learning for improving their Performance and Robustness [15.29595828816055]
This work proposes MO-EvoPruneDeepTL, a multi-objective evolutionary pruning algorithm.
We use Transfer Learning to adapt the last layers of Deep Neural Networks, by replacing them with sparse layers evolved by a genetic algorithm.
Experiments show that our proposal achieves promising results in all the objectives, and direct relations among them are presented.
arXiv Detail & Related papers (2023-02-20T19:33:38Z)
- PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based on Predictive Coding in Spiking Neural Networks [1.6172800007896282]
We propose a novel learning algorithm inspired by predictive coding theory.
We show that it can perform supervised learning fully autonomously and as successfully as backprop.
This method achieves a favorable performance compared to the state-of-the-art multi-layer SNNs.
arXiv Detail & Related papers (2022-11-24T09:56:02Z)
- Convolutional Neural Generative Coding: Scaling Predictive Coding to Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC).
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising (a predictive-coding sketch follows this entry).
arXiv Detail & Related papers (2022-11-22T06:42:41Z)
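The Conv-NGC entry above rests on the standard predictive-coding loop: latent states are iteratively nudged to reduce the error between what each layer predicts and what the layer below actually reports. A minimal dense (non-convolutional) sketch of that refinement loop follows, with shapes, step sizes, and iteration counts as illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def predictive_coding_infer(x, W, n_steps=20, lr=0.1):
    """Iteratively refine latent states z so that the top-down
    prediction W @ z matches the observation x (Rao-Ballard style).

    x: (d_obs,) observation;  W: (d_obs, d_lat) generative weights.
    Returns the settled latent state and the final prediction error.
    """
    z = np.zeros(W.shape[1])
    for _ in range(n_steps):
        e = x - W @ z          # prediction error at the input layer
        z += lr * (W.T @ e)    # local update: push z to explain the error
    return z, x - W @ z

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(64, 16))
x = W @ rng.normal(size=16)            # an observation W can explain
z, err = predictive_coding_infer(x, W)
print(np.linalg.norm(err))             # error shrinks as z settles
```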
- A Robust Backpropagation-Free Framework for Images [47.97322346441165]
We present an error kernel driven activation alignment (EKDAA) algorithm for image data.
EKDAA accomplishes this through the introduction of locally derived error transmission kernels and error maps.
Results are presented for an EKDAA-trained CNN that employs a non-differentiable activation function.
arXiv Detail & Related papers (2022-06-03T21:14:10Z)
- Biologically inspired deep residual networks for computer vision applications [0.0]
We propose a biologically inspired deep residual neural network in which hexagonal convolutions are introduced along the skip connections.
The proposed approach advances the baseline image classification accuracy of vanilla ResNet architectures.
arXiv Detail & Related papers (2022-05-05T10:23:43Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing music performance assessment (MPA) systems.
We introduce a weighted contrastive loss suitable for regression tasks applied to a convolutional neural network.
Our results show that contrastive-based methods are able to match and exceed SoTA performance for MPA regression tasks (a loss sketch follows this entry).
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
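The MPA entry above adapts contrastive learning to a regression target. One plausible form of a weighted contrastive loss, where pairs are pulled together in embedding space in proportion to how close their quality scores are, is sketched below; the exact weighting and margin are assumptions, not the paper's definition.

```python
import numpy as np

def weighted_contrastive_loss(z, scores, margin=1.0):
    """Contrastive loss for regression: pairs with similar scores are
    pulled together, pairs with distant scores pushed past a margin,
    each term weighted by label (dis)similarity.

    z:      (n, d) embeddings;  scores: (n,) continuous labels in [0, 1].
    """
    n = len(scores)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(z[i] - z[j])          # embedding distance
            w = abs(scores[i] - scores[j])           # label dissimilarity
            attract = (1 - w) * d**2                 # similar labels: pull
            repel = w * max(0.0, margin - d)**2      # distant labels: push
            total += attract + repel
    return total / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
scores = rng.random(8)
print(weighted_contrastive_loss(z, scores))
```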
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting the network architecture tailored to each domain task, together with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Sampled Training and Node Inheritance for Fast Evolutionary Neural Architecture Search [22.483917379706725]
Evolutionary neural architecture search (ENAS) has received increasing attention due to the attractive global optimization capability of evolutionary algorithms.
This paper proposes a new framework for fast ENAS based on directed acyclic graphs, in which parents are randomly sampled and trained on each mini-batch of training data.
We evaluate the proposed algorithm on the widely used datasets, in comparison with 26 state-of-the-art peer algorithms.
arXiv Detail & Related papers (2020-03-07T12:33:01Z)