Hebbian Learning with Global Direction
- URL: http://arxiv.org/abs/2601.21367v1
- Date: Thu, 29 Jan 2026 07:49:21 GMT
- Title: Hebbian Learning with Global Direction
- Authors: Wenjia Hua, Kejie Zhao, Luziwei Leng, Ran Cheng, Yuxin Ma, Qinghai Guo,
- Abstract summary: Hebbian learning has attracted considerable interest as a biologically plausible alternative to backpropagation. We introduce a novel model-agnostic Global-guided Hebbian Learning framework, which seamlessly integrates local and global information. Our method consistently outperforms existing Hebbian approaches.
- Score: 23.81487976986755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The backpropagation algorithm has driven the remarkable success of deep neural networks, but its lack of biological plausibility and high computational cost have motivated an ongoing search for alternative training methods. Hebbian learning has attracted considerable interest as a biologically plausible alternative to backpropagation. Nevertheless, its exclusive reliance on local information, without consideration of global task objectives, fundamentally limits its scalability. Inspired by the biological synergy between neuromodulators and local plasticity, we introduce a novel model-agnostic Global-guided Hebbian Learning (GHL) framework, which seamlessly integrates local and global information to scale across diverse networks and tasks. Specifically, the local component employs Oja's rule with competitive learning to ensure stable and effective local updates, while the global component introduces a sign-based signal that guides the direction of local Hebbian plasticity updates. Extensive experiments demonstrate that our method consistently outperforms existing Hebbian approaches. Notably, on large-scale networks and complex datasets like ImageNet, our framework achieves competitive results and significantly narrows the gap with standard backpropagation.
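The abstract describes local updates driven by Oja's rule, with a sign-based global signal steering the direction of those updates. The sketch below illustrates that general idea in NumPy; the function names, the single-linear-layer setup, and the exact way the sign signal gates the local term are assumptions for illustration, not the paper's formulation (the competitive-learning component is omitted).

```python
import numpy as np

def oja_update(W, x, lr=0.01):
    """One step of Oja's rule for a linear layer.
    W: (out, in) weight matrix, x: (in,) input vector."""
    y = W @ x  # post-synaptic activations
    # Hebbian term y_i * x_j plus a decay term -y_i^2 * W_ij that
    # keeps the weight vectors bounded (unlike plain Hebbian learning).
    return W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

def ghl_update(W, x, global_signal, lr=0.01):
    """Local Oja update whose direction is gated by the sign of a
    globally derived signal (e.g. the sign of a task-loss gradient).
    The combination rule here is an illustrative assumption."""
    y = W @ x
    local = np.outer(y, x) - (y ** 2)[:, None] * W  # Oja's local term
    # Keep the magnitude of the local Hebbian update, but take its
    # direction from the global signal.
    return W - lr * np.sign(global_signal) * np.abs(local)
```

Under this sketch, each synapse still learns from purely local pre- and post-synaptic activity; the only global information it receives is one sign bit per weight, which is what makes the scheme cheaper to broadcast than full backpropagated gradients.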
Related papers
- Power Grid Control with Graph-Based Distributed Reinforcement Learning [60.49805771047161]
This work advances a graph-based distributed reinforcement learning framework for real-time, scalable grid management. A Graph Neural Network (GNN) is employed to encode the network's topological information within the single low-level agent's observation. Experiments on the Grid2Op simulation environment show the effectiveness of the approach.
arXiv Detail & Related papers (2025-09-02T22:17:25Z) - Stochastic Layer-wise Learning: Scalable and Efficient Alternative to Backpropagation [1.0285749562751982]
Backpropagation underpins modern deep learning, yet its reliance on global synchronization limits scalability and incurs high memory costs. In contrast, fully local learning rules are more efficient but often struggle to maintain the cross-layer coordination needed for coherent global learning. We introduce Stochastic Layer-wise Learning (SLL), a layer-wise training algorithm that decomposes the global objective into coordinated layer-local updates.
arXiv Detail & Related papers (2025-05-08T12:32:29Z) - Towards Interpretable Deep Local Learning with Successive Gradient Reconciliation [70.43845294145714]
Relieving the reliance of neural network training on a global back-propagation (BP) has emerged as a notable research topic.
We propose a local training strategy that successively regularizes the gradient reconciliation between neighboring modules.
Our method can be integrated into both local-BP and BP-free settings.
arXiv Detail & Related papers (2024-06-07T19:10:31Z) - Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z) - A General Framework for Interpretable Neural Learning based on Local Information-Theoretic Goal Functions [1.5236380958983644]
We introduce 'infomorphic' neural networks to perform tasks from supervised, unsupervised and memory learning. By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.
arXiv Detail & Related papers (2023-06-03T16:34:25Z) - Locally Supervised Learning with Periodic Global Guidance [19.41730292017383]
We propose Periodically Guided local Learning (PGL) to reinstate the global objective repetitively into the local-loss based training of neural networks.
We show that a simple periodic guidance scheme begets significant performance gains while having a low memory footprint.
arXiv Detail & Related papers (2022-08-01T13:06:26Z) - Multi-Level Branched Regularization for Federated Learning [46.771459325434535]
We propose a novel architectural regularization technique that constructs multiple auxiliary branches in each local model by grafting local and global subnetworks at several different levels.
We demonstrate remarkable performance gains in terms of accuracy and efficiency compared to existing methods.
arXiv Detail & Related papers (2022-07-14T13:59:26Z) - An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning [77.72330187258498]
We propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet).
ERPCNet extracts and aggregates localities based on semantic relevance and visual correlations without human-annotated regions.
It not only discovers global-cooperative localities dynamically but also converges faster for policy gradient optimization.
arXiv Detail & Related papers (2021-11-03T11:13:13Z) - Locality Matters: A Scalable Value Decomposition Approach for Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training Decentralized Execution paradigm.
arXiv Detail & Related papers (2021-09-22T10:08:15Z) - Navigating the Kaleidoscope of COVID-19 Misinformation Using Deep Learning [0.76146285961466]
We propose an effective model to capture both the local and global context of the target domain.
We show that: (i) deep Transformer-based pre-trained models, utilized via mixed-domain transfer learning, are only good at capturing the local context and thus exhibit poor generalization; and (ii) a combination of shallow network-based domain-specific models and convolutional neural networks can efficiently extract local as well as global context directly from the target data in a hierarchical fashion, enabling a more generalizable solution.
arXiv Detail & Related papers (2021-09-19T15:49:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.