Hybrid Predictive Coding: Inferring, Fast and Slow
- URL: http://arxiv.org/abs/2204.02169v2
- Date: Wed, 6 Apr 2022 16:09:28 GMT
- Title: Hybrid Predictive Coding: Inferring, Fast and Slow
- Authors: Alexander Tschantz, Beren Millidge, Anil K Seth, Christopher L Buckley
- Abstract summary: We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimum computational expense.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictive coding is an influential model of cortical neural activity. It
proposes that perceptual beliefs are furnished by sequentially minimising
"prediction errors" - the differences between predicted and observed data.
Implicit in this proposal is the idea that perception requires multiple cycles
of neural activity. This is at odds with evidence that several aspects of
visual perception - including complex forms of object recognition - arise from
an initial "feedforward sweep" that occurs on fast timescales which preclude
substantial recurrent activity. Here, we propose that the feedforward sweep can
be understood as performing amortized inference and recurrent processing can be
understood as performing iterative inference. We propose a hybrid predictive
coding network that combines both iterative and amortized inference in a
principled manner by describing both in terms of a dual optimization of a
single objective function. We show that the resulting scheme can be implemented
in a biologically plausible neural architecture that approximates Bayesian
inference utilising local Hebbian update rules. We demonstrate that our hybrid
predictive coding model combines the benefits of both amortized and iterative
inference -- obtaining rapid and computationally cheap perceptual inference for
familiar data while maintaining the context-sensitivity, precision, and sample
efficiency of iterative inference schemes. Moreover, we show how our model is
inherently sensitive to its uncertainty and adaptively balances iterative and
amortized inference to obtain accurate beliefs using minimum computational
expense. Hybrid predictive coding offers a new perspective on the functional
relevance of the feedforward and recurrent activity observed during visual
perception and offers novel insights into distinct aspects of visual
phenomenology.
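As a concrete illustration of the scheme described in the abstract, here is a minimal sketch (not the authors' implementation) of hybrid inference: an amortized feedforward sweep supplies an initial belief, which recurrent iterations then refine by descending on prediction errors. The single linear generative layer, the variable names (W, A, mu), and the stopping tolerance are all illustrative assumptions; the paper's model is hierarchical and learns with local Hebbian rules, which this sketch omits.
```python
import numpy as np

# Minimal sketch of hybrid predictive coding inference. Assumptions (not from
# the paper): one latent layer, a linear generative model g(mu) = W @ mu,
# squared prediction error, and a flat prior over latents.

rng = np.random.default_rng(0)
D_OBS, D_LAT = 16, 8

W = rng.normal(scale=1 / np.sqrt(D_LAT), size=(D_OBS, D_LAT))  # generative (top-down) weights
A = rng.normal(scale=0.1, size=(D_LAT, D_OBS))                 # amortization (bottom-up) weights


def amortized_guess(x):
    """Feedforward sweep: one-shot amortized estimate of the latent belief."""
    return np.tanh(A @ x)


def hybrid_inference(x, lr=0.2, tol=1e-3, max_iters=500):
    """Amortized initialization followed by iterative refinement.

    The residual prediction error acts as a crude uncertainty signal: the
    recurrent loop runs only while the error stays above `tol`.
    """
    mu = amortized_guess(x)              # fast feedforward sweep
    for t in range(max_iters):
        eps = x - W @ mu                 # prediction error at the data layer
        if np.mean(eps ** 2) < tol:      # low error: stop recurrent processing
            return mu, t
        mu = mu + lr * (W.T @ eps)       # recurrent gradient step on the error
    return mu, max_iters


# Amortization learning (omitted above): after refinement, A would be nudged
# so the feedforward sweep predicts the refined belief on similar inputs,
# e.g. A += lr_a * np.outer(mu - amortized_guess(x), x)  # local delta rule

# Toy usage: infer latents for one synthetic observation.
x = W @ rng.normal(size=D_LAT) + 0.01 * rng.normal(size=D_OBS)
mu, n_iters = hybrid_inference(x)
print(f"recurrent iterations used: {n_iters}, "
      f"residual error: {np.mean((x - W @ mu) ** 2):.2e}")
```
The early-stopping rule is where the adaptive balance appears: for familiar inputs the amortized guess already yields a small prediction error and the loop exits almost immediately, while novel or ambiguous inputs trigger additional recurrent iterations.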
Related papers
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Adversarial robustness of amortized Bayesian inference [3.308743964406687]
Amortized Bayesian inference initially invests computational cost in training an inference network on simulated data, which can subsequently be used to rapidly perform inference.
We show that almost unrecognizable, targeted perturbations of the observations can lead to drastic changes in the predicted posterior and highly unrealistic posterior predictive samples.
We propose a computationally efficient regularization scheme based on penalizing the Fisher information of the conditional density estimator.
arXiv Detail & Related papers (2023-05-24T10:18:45Z) - Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z) - Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to consecutively regulate the intermediate representation to produce a representation that emphasizes the novel information in the frame at the current time-stamp.
SRL sharply outperforms the existing state-of-the-art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z) - Efficient Iterative Amortized Inference for Learning Symmetric and Disentangled Multi-Object Representations [8.163697683448811]
We introduce EfficientMORL, an efficient framework for the unsupervised learning of object-centric representations.
We show that optimization challenges caused by requiring both symmetry and disentanglement can be addressed by high-cost iterative amortized inference.
We demonstrate strong object decomposition and disentanglement on the standard multi-object benchmark while achieving nearly an order of magnitude faster training and test time inference.
arXiv Detail & Related papers (2021-06-07T14:02:49Z) - Parsimonious Inference [0.0]
Parsimonious inference is an information-theoretic formulation of inference over arbitrary architectures.
Our approaches combine efficient encodings with prudent sampling strategies to construct predictive ensembles without cross-validation.
arXiv Detail & Related papers (2021-03-03T04:13:14Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)