Accelerated Predictive Coding Networks via Direct Kolen-Pollack Feedback Alignment
- URL: http://arxiv.org/abs/2602.15571v1
- Date: Tue, 17 Feb 2026 13:29:14 GMT
- Title: Accelerated Predictive Coding Networks via Direct Kolen-Pollack Feedback Alignment
- Authors: Davide Casnici, Martin Lefebvre, Justin Dauwels, Charlotte Frenkel
- Abstract summary: Predictive coding (PC) is a biologically inspired algorithm for training neural networks that relies only on local updates. We propose direct Kolen-Pollack predictive coding (DKP-PC), which simultaneously addresses both feedback delay and exponential decay, yielding a more efficient and scalable variant of PC.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive coding (PC) is a biologically inspired algorithm for training neural networks that relies only on local updates, allowing parallel learning across layers. However, practical implementations face two key limitations: error signals must still propagate from the output to early layers through multiple inference-phase steps, and feedback decays exponentially during this process, leading to vanishing updates in early layers. We propose direct Kolen-Pollack predictive coding (DKP-PC), which simultaneously addresses both feedback delay and exponential decay, yielding a more efficient and scalable variant of PC while preserving update locality. Leveraging direct feedback alignment and direct Kolen-Pollack algorithms, DKP-PC introduces learnable feedback connections from the output layer to all hidden layers, establishing a direct pathway for error transmission. This yields an algorithm that reduces the theoretical error propagation time complexity from O(L), with L being the network depth, to O(1), removing depth-dependent delay in error signals. Moreover, empirical results demonstrate that DKP-PC achieves performance at least comparable to, and often exceeding, that of standard PC, while offering improved latency and computational performance, supporting its potential for custom hardware-efficient implementations.
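As a rough illustration of the mechanism described in the abstract, the following NumPy sketch trains a toy MLP with direct, learnable feedback matrices carrying the output error straight to every hidden layer. All names, sizes, and learning rates here are illustrative assumptions, not the paper's implementation: the feedback update is a simple Kolen-Pollack-style mirror of the corresponding forward update, and the iterative inference phase of predictive coding is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP: 8 -> 16 -> 16 -> 4, tanh hidden activations, linear output.
sizes = [8, 16, 16, 4]
W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
# Direct, learnable feedback matrices from the output error to each hidden
# layer (direct-feedback-alignment style): error reaches every layer in one
# hop, independent of depth, instead of through a layer-by-layer chain.
B = [rng.standard_normal((sizes[l + 1], sizes[-1])) * 0.1
     for l in range(len(sizes) - 2)]

def forward(x):
    hs = [x]
    for i, Wi in enumerate(W):
        a = Wi @ hs[-1]
        hs.append(np.tanh(a) if i < len(W) - 1 else a)
    return hs

lr = 0.02
x = rng.standard_normal(sizes[0])
y = rng.standard_normal(sizes[-1])
losses = []

for step in range(300):
    hs = forward(x)
    e = hs[-1] - y  # output error (gradient of 0.5 * ||out - y||^2)
    # Hidden-layer error signals: each layer reads the output error through
    # its own direct feedback matrix, modulated by the local tanh derivative.
    deltas = [(B[l] @ e) * (1 - hs[l + 1] ** 2) for l in range(len(B))]
    deltas.append(e)
    for l in range(len(W)):
        W[l] -= lr * np.outer(deltas[l], hs[l])
        if l < len(B):
            # Kolen-Pollack-style update: mirror the (transposed) forward-style
            # update so B[l] drifts toward alignment with the forward pathway.
            B[l] -= lr * np.outer(hs[l + 1], e)
    losses.append(0.5 * np.sum(e ** 2))
```

On this toy regression the loss decreases over training even though no hidden layer ever receives a backpropagated gradient, which is the point of the direct feedback pathway.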
Related papers
- When Learning Hurts: Fixed-Pole RNN for Real-Time Online Training [58.25341036646294]
We analytically examine why learning recurrent poles does not provide tangible benefits, and empirically evaluate real-time learning scenarios. We show that fixed-pole networks achieve superior performance with lower training complexity, making them more suitable for online real-time tasks.
arXiv Detail & Related papers (2026-02-25T00:15:13Z) - GNN-based Path-aware multi-view Circuit Learning for Technology Mapping [8.368416885163859]
We introduce GPA (graph neural network (GNN)-based Path-Aware multi-view circuit learning), a novel GNN framework that learns precise, data-driven delay predictions. GPA achieves 19.9%, 2.1%, and 4.1% average delay reduction over conventional methods.
arXiv Detail & Related papers (2026-01-14T02:15:48Z) - Efficient Online Learning with Predictive Coding Networks: Exploiting Temporal Correlations [26.073347035678342]
The Predictive Coding (PC) framework offers a biologically plausible alternative with local, Hebbian-like update rules. We present the Predictive Coding Network with Temporal Amortization (PCN-TA), which preserves latent states across temporal frames. Experiments on the COIL-20 robotic perception dataset demonstrate that PCN-TA requires 10% fewer weight updates compared to backpropagation.
arXiv Detail & Related papers (2025-10-29T22:09:53Z) - Scalable Equilibrium Propagation via Intermediate Error Signals for Deep Convolutional CRNNs [17.067785532606724]
Equilibrium Propagation (EP) is a biologically inspired local learning rule first proposed for convergent recurrent neural networks (CRNNs). EP estimates gradients that closely align with those computed by Backpropagation Through Time (BPTT) while significantly reducing computational demands. We propose a novel EP framework that incorporates intermediate error signals to enhance information flow and the convergence of neuron dynamics.
arXiv Detail & Related papers (2025-08-21T22:19:30Z) - Intra-DP: A High Performance Collaborative Inference System for Mobile Edge Computing [67.98609858326951]
Intra-DP is a high-performance collaborative inference system optimized for deep neural networks (DNNs) on mobile devices. The evaluation demonstrates that Intra-DP reduces per-inference latency by up to 50% and energy consumption by up to 75% compared to state-of-the-art baselines.
arXiv Detail & Related papers (2025-07-08T09:50:57Z) - Towards the Training of Deeper Predictive Coding Neural Networks [44.14001498773255]
Predictive coding networks are neural models that perform inference through an iterative energy minimization process. While effective in shallow architectures, they suffer significant performance degradation beyond five to seven layers. We show that this degradation is caused by exponentially imbalanced errors between layers during weight updates, and by predictions from previous layers not being effective in guiding updates in deeper layers.
arXiv Detail & Related papers (2025-06-30T12:44:47Z) - Geminet: Learning the Duality-based Iterative Process for Lightweight Traffic Engineering in Changing Topologies [53.38648279089736]
Geminet is a lightweight and scalable ML-based traffic engineering (TE) framework that can handle changing topologies. Its neural network size is only 0.04% to 7% of existing schemes. When trained on large-scale topologies, Geminet consumes under 10 GiB of memory, more than eight times less than the 80-plus GiB required by HARP.
arXiv Detail & Related papers (2025-06-30T09:09:50Z) - ePC: Overcoming Exponential Signal Decay in Deep Predictive Coding Networks [9.400040788307223]
Predictive Coding (PC) offers a biologically plausible alternative to backpropagation for neural network training. This paper identifies the root cause of exponential signal decay in deep PC networks and provides a principled solution.
arXiv Detail & Related papers (2025-05-26T15:39:16Z) - Fast Training of Recurrent Neural Networks with Stationary State Feedbacks [48.22082789438538]
Recurrent neural networks (RNNs) have recently demonstrated strong performance and faster inference than Transformers. We propose a novel method that replaces BPTT with a fixed gradient feedback mechanism.
arXiv Detail & Related papers (2025-03-29T14:45:52Z) - Joint Transmit and Pinching Beamforming for Pinching Antenna Systems (PASS): Optimization-Based or Learning-Based? [89.05848771674773]
A novel pinching antenna system (PASS)-enabled downlink multi-user multiple-input single-output (MISO) framework is proposed. It consists of multiple waveguides equipped with numerous low-cost antennas, named pinching antennas (PAs). The positions of the PAs can be reconfigured to span both the large-scale path and the spatial domain.
arXiv Detail & Related papers (2025-02-12T18:54:10Z) - Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks [44.37047471448793]
In this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL).
We propose an innovative PSL framework, namely, efficient parallel split learning (EPSL) to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
arXiv Detail & Related papers (2023-03-26T16:09:48Z) - Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP [62.81300791178381]
The bottleneck of distributed edge learning over wireless has shifted from computing to communication.
Existing TCP-based data networking schemes for DEL are application-agnostic and fail to deliver adjustments according to application layer requirements.
We develop a hybrid multipath TCP (MP TCP) by combining model-based and deep reinforcement learning (DRL) based MP TCP for DEL.
arXiv Detail & Related papers (2022-11-03T09:08:30Z)
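Several of the entries above (the main abstract, the deeper-PC-networks paper, and ePC) revolve around the same phenomenon: error signals that must traverse the network layer by layer decay exponentially with depth. A purely illustrative back-of-the-envelope sketch, assuming a fixed per-layer attenuation factor gamma (a made-up value, not a quantity reported by any of these papers):

```python
# If each backward step attenuates the error by gamma < 1, the signal
# reaching layer l of an L-layer network scales as gamma ** (L - l),
# so early layers receive exponentially vanishing updates.
gamma = 0.5   # assumed attenuation per layer, for illustration only
L = 20        # network depth
signal = {l: gamma ** (L - l) for l in (1, 5, 10, 15, 20)}
```

With these assumed numbers the first layer sees a signal roughly six orders of magnitude weaker than the output layer, which is the motivation for direct error pathways such as DKP-PC.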
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.