Energy-Efficient Information Representation in MNIST Classification Using Biologically Inspired Learning
- URL: http://arxiv.org/abs/2603.00588v1
- Date: Sat, 28 Feb 2026 10:38:57 GMT
- Title: Energy-Efficient Information Representation in MNIST Classification Using Biologically Inspired Learning
- Authors: Patrick Stricker, Florian Röhrbein, Andreas Knoblauch
- Abstract summary: We analyze our previously developed biologically inspired learning rule using information-theoretic concepts. It emulates the brain's structural plasticity and retains only the essential number of synapses. It also eliminates the need for pre-optimization of network architecture, enhances adaptability, and reflects the brain's ability to reserve 'space' for new memories.
- Score: 1.0787328610467803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient representation learning is essential for optimal information storage and classification. However, it is frequently overlooked in artificial neural networks (ANNs). This neglect results in networks that can become overparameterized by factors of up to 13, increasing redundancy and energy consumption. As the demand for large language models (LLMs) and their scale increase, these issues are further highlighted, raising significant ethical and environmental concerns. We analyze our previously developed biologically inspired learning rule using information-theoretic concepts, evaluating its efficiency on the MNIST classification task. The proposed rule, which emulates the brain's structural plasticity, naturally prevents overparameterization by optimizing synaptic usage and retaining only the essential number of synapses. Furthermore, it outperforms backpropagation (BP) in terms of efficiency and storage capacity. It also eliminates the need for pre-optimization of network architecture, enhances adaptability, and reflects the brain's ability to reserve 'space' for new memories. This approach advances scalable and energy-efficient AI and provides a promising framework for developing brain-inspired models that optimize resource allocation and adaptability.
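The abstract gives no implementation details, but the core idea it describes — keep the synapse count fixed, prune weak connections, and regrow them elsewhere to "reserve space" — can be sketched. Below is a minimal numpy illustration, assuming a single-layer classifier, magnitude-based pruning, and random regrowth (all assumptions, not the authors' actual rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for MNIST: 784-dim inputs, 10 classes.
X = rng.normal(size=(512, 784))
Y = np.eye(10)[rng.integers(0, 10, size=512)]   # one-hot labels

W = rng.normal(scale=0.01, size=(784, 10))
mask = rng.random(W.shape) < 0.2        # start sparse: only 20% of synapses exist

for epoch in range(20):
    # Plain delta-rule update, restricted to the synapses that currently exist.
    err = X @ (W * mask) - Y
    W -= 0.1 * (X.T @ err / len(X)) * mask

    # Structural plasticity sketch: prune the weakest 5% of live synapses...
    live_mag = np.abs(W * mask)
    cut = np.quantile(live_mag[mask], 0.05)
    pruned = mask & (live_mag <= cut)
    mask &= ~pruned
    # ...and regrow the same number at random unused positions, keeping the
    # total synapse count (and hence the "energy budget") constant.
    free = np.argwhere(~mask)
    grow = free[rng.choice(len(free), int(pruned.sum()), replace=False)]
    mask[grow[:, 0], grow[:, 1]] = True
    W[grow[:, 0], grow[:, 1]] = rng.normal(scale=0.01, size=len(grow))
```

Because pruning and regrowth conserve the number of live synapses, the parameter budget (a rough proxy for energy) stays constant while the connectivity pattern adapts.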
Related papers
- General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from its input-output history to modulate the membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, while also aligning with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
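As a rough illustration of the mechanism described above, here is a minimal sketch of a leaky integrate-and-fire neuron whose membrane potential is modulated by a prediction current derived from its own input-output history; the trace form and the coefficients alpha/beta are assumptions, not taken from the paper:

```python
import numpy as np

def lif_self_pred(inputs, tau=20.0, v_th=0.5, alpha=0.9, beta=0.3):
    # Leaky integrate-and-fire neuron with a self-prediction current: a leaky
    # trace of the neuron's recent input minus its own last output spike.
    v, pred, last_spike, spikes = 0.0, 0.0, 0.0, []
    for x in inputs:
        pred = alpha * pred + (1 - alpha) * (x - last_spike)
        v += (-v + x + beta * pred) / tau       # prediction current modulates v
        last_spike = 1.0 if v >= v_th else 0.0
        if last_spike:
            v = 0.0                             # hard reset after a spike
        spikes.append(last_spike)
    return np.array(spikes)

rng = np.random.default_rng(1)
print("firing rate:", lif_self_pred(rng.random(500) + 0.5).mean())
```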
arXiv Detail & Related papers (2026-01-29T15:08:48Z) - A Brain-like Synergistic Core in LLMs Drives Behaviour and Learning [50.68188138112555]
We show that large language models spontaneously develop synergistic cores. We find that areas in middle layers exhibit synergistic processing, while early and late layers rely on redundancy. This convergence suggests that synergistic information processing is a fundamental property of intelligence.
arXiv Detail & Related papers (2026-01-11T10:48:35Z) - Energy-based Autoregressive Generation for Neural Population Dynamics [12.867288040044501]
We introduce a novel Energy-based Autoregressive Generation (EAG) framework that employs an energy-based transformer to learn temporal dynamics in latent space. We show that EAG achieves state-of-the-art generation quality with substantial computational-efficiency improvements. These results demonstrate the effectiveness of energy-based modeling for neural population dynamics, with applications in neuroscience research and neural engineering.
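A toy sketch of the general energy-based autoregressive pattern the summary describes: score candidate next latents with an energy network and keep low-energy ones. The random MLP below stands in for a trained energy-based transformer, and greedy selection stands in for proper sampling (both assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative energy network E(context, candidate) -> scalar; random weights
# stand in for a trained energy-based transformer.
W1 = rng.normal(scale=0.1, size=(2 * 8, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))

def energy(ctx, cand):
    h = np.tanh(np.concatenate([ctx, cand]) @ W1)
    return float(h @ W2)

# Autoregressive generation in latent space: at each step, draw candidate
# latents and keep the lowest-energy one.
z = [rng.normal(size=8)]
for _ in range(10):
    cands = rng.normal(size=(64, 8))
    z.append(min(cands, key=lambda c: energy(z[-1], c)))
print("generated latent trajectory:", len(z), "steps of dim", z[0].shape[0])
```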
arXiv Detail & Related papers (2025-11-18T07:11:29Z) - Spatiotemporal Graph Learning with Direct Volumetric Information Passing and Feature Enhancement [62.91536661584656]
We propose a dual-module framework, the Cell-embedded and Feature-enhanced Graph Neural Network (CeFeGNN), for spatiotemporal learning. We embed learnable cell attributions into the common node-edge message-passing process, which better captures the spatial dependency of regional features. Experiments on various PDE systems and one real-world dataset demonstrate that CeFeGNN achieves superior performance compared with other baselines.
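A minimal sketch of what "cell-embedded" message passing could look like: ordinary node-edge messages plus a learnable per-cell embedding broadcast to the nodes of each cell. The mesh bookkeeping and update form are illustrative assumptions, not CeFeGNN's actual layers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_cells, d = 6, 4, 8

# Hypothetical mesh bookkeeping: which nodes border which cell.
cell_nodes = [rng.choice(n_nodes, 3, replace=False) for _ in range(n_cells)]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]

h = rng.normal(size=(n_nodes, d))              # node features
c = rng.normal(scale=0.1, size=(n_cells, d))   # learnable cell attributions
W = rng.normal(scale=0.1, size=(2 * d, d))

# One message-passing step: standard node-edge messages, plus each cell
# broadcasting its learnable embedding to the nodes it contains
# (the "direct volumetric information passing" of the title).
msg = np.zeros_like(h)
for i, j in edges:
    m = np.tanh(np.concatenate([h[i], h[j]]) @ W)
    msg[i] += m
    msg[j] += m
for k, nodes in enumerate(cell_nodes):
    msg[nodes] += c[k]
h = h + msg
```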
arXiv Detail & Related papers (2024-09-26T16:22:08Z) - Memory Networks: Towards Fully Biologically Plausible Learning [2.7013801448234367]
Current artificial neural networks rely on techniques like backpropagation and weight sharing, which do not align with the brain's natural information processing methods.
We propose the Memory Network, a model inspired by biological principles that avoids backpropagation and convolutions, and operates in a single pass.
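The summary does not specify the single-pass rule; one classical backprop-free scheme that matches the description is a Hebbian associative memory, sketched below on hypothetical binary data (an assumption, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: noisy binary class prototypes.
protos = rng.random((10, 784)) < 0.1
y = rng.integers(0, 10, size=1000)
flip = rng.random((1000, 784)) < 0.05
X = np.where(flip, ~protos[y], protos[y]).astype(float)

# Single-pass, backprop-free learning: one Hebbian accumulation of
# pattern-label co-occurrences, as in a simple associative memory.
M = np.zeros((10, 784))
for x, label in zip(X, y):
    M[label] += x                      # purely local update, one pass over data

pred = np.argmax(M @ X.T, axis=0)      # recall: best-matching class trace
print("accuracy:", (pred == y).mean())
```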
arXiv Detail & Related papers (2024-09-18T06:01:35Z) - Exploring Extreme Quantization in Spiking Language Models [7.986844499514244]
This paper proposes a novel binary/ternary (1/1.58-bit) spiking language model (LM) architecture.
Our proposed model represents a significant advancement as the first-of-its-kind 1/1.58-bit spiking LM.
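For context, "1.58-bit" refers to ternary weights in {-1, 0, +1}, since log2(3) ≈ 1.58. A minimal sketch of a common ternary quantization recipe (threshold at a fraction of the mean |w|, then rescale), which is not necessarily this paper's exact scheme:

```python
import numpy as np

def ternarize(w, thresh_ratio=0.75):
    # Quantize a weight tensor to {-1, 0, +1} plus one per-tensor scale.
    delta = thresh_ratio * np.abs(w).mean()
    q = np.where(w > delta, 1.0, np.where(w < -delta, -1.0, 0.0))
    scale = np.abs(w[q != 0]).mean() if (q != 0).any() else 1.0
    return q, scale

w = np.random.default_rng(0).normal(size=(256, 256))
q, s = ternarize(w)
print("nonzero fraction:", (q != 0).mean(), "scale:", round(s, 3))
```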
arXiv Detail & Related papers (2024-05-04T03:00:23Z) - The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth [5.869633234882029]
We propose an efficiency strategy that leverages prior knowledge transferred by large models.
We propose a simple but effective method relying on an Entropy-bASed Importance mEtRic (EASIER) to reduce the depth of over-parameterized deep neural networks.
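A toy sketch of the general idea of ranking layers by an entropy-based importance score and removing the least informative ones; the fixed-range histogram estimator below is an assumption, not EASIER's actual metric:

```python
import numpy as np

def layer_entropy(a, bins=32, lo=-3.0, hi=3.0):
    # Shannon entropy (bits) of a layer's activation histogram over a fixed
    # range, so near-constant layers score near zero.
    hist, _ = np.histogram(np.clip(a, lo, hi), bins=bins, range=(lo, hi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Hypothetical per-layer activations collected from a forward pass.
acts = {
    "conv1": rng.normal(size=5000),                # varied activations
    "conv2": rng.normal(size=5000),
    "conv3": np.tanh(5 * rng.normal(size=5000)),   # saturated, near +/-1
    "conv4": 0.3 + 0.01 * rng.normal(size=5000),   # almost constant
}
ranked = sorted(acts, key=lambda k: layer_entropy(acts[k]))
print("lowest-entropy layers (removal candidates):", ranked[:2])
```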
arXiv Detail & Related papers (2024-04-27T08:28:25Z) - EMN: Brain-inspired Elastic Memory Network for Quick Domain Adaptive Feature Mapping [57.197694698750404]
We propose a novel gradient-free Elastic Memory Network (EMN) to support quick fine-tuning of the mapping between features and predictions. EMN adopts randomly connected neurons to memorize the association of features and labels, where the signals in the network are propagated as impulses. EMN achieves up to a 10% performance enhancement while requiring less than 1% of the time cost of traditional domain adaptation methods.
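A minimal gradient-free sketch in the spirit of this summary: a fixed, randomly connected hidden layer plus a closed-form (ridge-regression) readout that "memorizes" the feature-label association and can be recomputed quickly for a new domain. The details are assumptions, not EMN's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source-domain features and one-hot labels.
F = rng.normal(size=(500, 64))
Y = np.eye(10)[rng.integers(0, 10, 500)]

# Randomly connected hidden layer: weights are fixed, never trained.
W_in = rng.normal(size=(64, 256))
H = np.maximum(F @ W_in, 0)            # sparse, impulse-like activity

# Memorize the feature-label association with a closed-form readout
# instead of gradient descent -- cheap to recompute when the feature
# distribution shifts to a new domain.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(256), H.T @ Y)
print("train accuracy:", (np.argmax(H @ W_out, 1) == np.argmax(Y, 1)).mean())
```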
arXiv Detail & Related papers (2024-02-04T09:58:17Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
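A toy sketch of the reward-decomposition idea: per-sample scalar feedback is split over hidden neurons in proportion to their contribution to the chosen output, then helpful units are reinforced and harmful ones weakened. The exact contribution rule here is an illustrative assumption, not LFP's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network on a hypothetical binary task.
X = rng.normal(size=(128, 20))
y = (X[:, 0] > 0).astype(int)
W1 = rng.normal(scale=0.1, size=(20, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

for step in range(50):
    H = np.maximum(X @ W1, 0)
    pred = np.argmax(H @ W2, axis=1)
    reward = np.where(pred == y, 1.0, -1.0)        # scalar feedback per sample

    # Decompose the reward onto hidden neurons in proportion to their
    # contribution to the chosen output, then update greedily.
    contrib = H * W2[:, pred].T                    # (batch, hidden)
    feedback = contrib * reward[:, None]
    W1 += 0.01 * X.T @ (feedback * (H > 0)) / len(X)
    W2 += 0.01 * H.T @ (np.eye(2)[pred] * reward[:, None]) / len(X)
```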
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)