Differentiable Generalised Predictive Coding
- URL: http://arxiv.org/abs/2112.03378v2
- Date: Wed, 8 Dec 2021 20:06:44 GMT
- Title: Differentiable Generalised Predictive Coding
- Authors: André Ofner, Sebastian Stober
- Abstract summary: This paper deals with differentiable dynamical models congruent with neural process theories that cast brain function as the hierarchical refinement of an internal generative model explaining observations.
Our work extends existing implementations of gradient-based predictive coding and allows deep neural networks to be integrated for non-linear state parameterization.
- Score: 2.868176771215219
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper deals with differentiable dynamical models congruent with neural
process theories that cast brain function as the hierarchical refinement of an
internal generative model explaining observations. Our work extends existing
implementations of gradient-based predictive coding with automatic
differentiation and allows deep neural networks to be integrated for non-linear
state parameterization. Gradient-based predictive coding optimises inferred
states and weights locally for each layer by minimising precision-weighted
prediction errors that propagate from stimuli towards latent states.
Predictions flow backwards, from latent states towards lower layers. The model
suggested here optimises hierarchical and dynamical predictions of latent
states. Hierarchical predictions encode expected content and hierarchical
structure. Dynamical predictions capture changes in the encoded content along
with higher order derivatives. Hierarchical and dynamical predictions interact
and address different aspects of the same latent states. We apply the model to
various perception and planning tasks on sequential data and show their mutual
dependence. In particular, we demonstrate how learning sampling distances in
parallel addresses meaningful locations in data sampled at discrete time steps. We
discuss possibilities to relax the assumption of linear hierarchies in favor of
a more flexible graph structure with emergent properties. We compare the granular
structure of the model with canonical microcircuits describing predictive
coding in biological networks and review the connection to Markov Blankets as a
tool to characterize modularity. A final section sketches out ideas for
efficient perception and planning in nested spatio-temporal hierarchies.
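The local update rule described in the abstract — gradient descent on precision-weighted prediction errors, applied to both inferred states and weights — can be sketched in a minimal toy example. This is an illustrative sketch, not the authors' implementation; the variable names (`y`, `mu`, `W`, `Pi`) and the learning rates are our own assumptions, and only a single hierarchical layer without dynamical predictions is shown.

```python
# Minimal gradient-based predictive coding sketch (illustrative, not the paper's code).
# A single layer predicts an observation y from a latent state mu via weights W;
# both mu and W descend the precision-weighted prediction error locally.
import numpy as np

rng = np.random.default_rng(0)

y = rng.normal(size=4)                    # observed stimulus (lower layer)
mu = np.zeros(2)                          # inferred latent state (higher layer)
W = rng.normal(scale=0.5, size=(4, 2))    # generative weights: prediction = W @ mu
Pi = np.eye(4)                            # precision (inverse variance) of the error

lr_state, lr_weight = 0.05, 0.005

for step in range(500):
    eps = y - W @ mu                      # bottom-up prediction error
    F = 0.5 * eps @ Pi @ eps              # precision-weighted error (free energy term)
    mu += lr_state * (W.T @ Pi @ eps)     # state inference: descend F w.r.t. mu
    W += lr_weight * np.outer(Pi @ eps, mu)  # local, Hebbian-like weight update

eps = y - W @ mu
print("final precision-weighted error:", float(0.5 * eps @ Pi @ eps))
```

Both updates are exact gradients of the same quadratic error, so the prediction `W @ mu` is pulled towards the stimulus while predictions (here just `W @ mu`) flow in the opposite direction of the errors, mirroring the bottom-up/top-down message passing described above.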
Related papers
- Trajectory Forecasting through Low-Rank Adaptation of Discrete Latent Codes [36.12653178844828]
Trajectory forecasting is crucial for video surveillance analytics, as it enables the anticipation of future movements for a set of agents.
We introduce Vector Quantized Variational Autoencoders (VQ-VAEs), which utilize a discrete latent space to tackle the issue of posterior collapse.
We show that such a two-fold framework, augmented with instance-level discretization, leads to accurate and diverse forecasts.
arXiv Detail & Related papers (2024-05-31T10:13:17Z) - PDSketch: Integrated Planning Domain Programming and Learning [86.07442931141637]
We present a new domain definition language, named PDSketch.
It allows users to flexibly define high-level structures in the transition models.
Details of the transition model will be filled in by trainable neural networks.
arXiv Detail & Related papers (2023-03-09T18:54:12Z) - A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
arXiv Detail & Related papers (2023-01-24T08:39:20Z) - Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm performs distributed Bayesian filtering for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z) - Developing hierarchical anticipations via neural network-based event segmentation [14.059479351946386]
We model the development of hierarchical predictions via autonomously learned latent event codes.
We present a hierarchical recurrent neural network architecture whose inductive learning biases foster the development of sparsely changing latent states.
A higher level network learns to predict the situations in which the latent states tend to change.
arXiv Detail & Related papers (2022-06-04T18:54:31Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimal computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Predictive coding, precision and natural gradients [2.1601966913620325]
We show that hierarchical predictive coding networks with learnable precision are able to solve various supervised and unsupervised learning tasks.
When applied to unsupervised auto-encoding of image inputs, the deterministic network produces hierarchically organized and disentangled embeddings.
arXiv Detail & Related papers (2021-11-12T21:05:03Z) - Variational Predictive Routing with Nested Subjective Timescales [1.6114012813668934]
We present Variational Predictive Routing (VPR) - a neural inference system that organizes latent video features in a temporal hierarchy.
We show that VPR is able to detect event boundaries, disentangle temporal features, adapt to the dynamics hierarchy of the data, and produce accurate time-agnostic rollouts of the future.
arXiv Detail & Related papers (2021-10-21T16:12:59Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Hierarchical regularization networks for sparsification based learning on noisy datasets [0.0]
The hierarchy follows from approximation spaces identified at successively finer scales.
For promoting model generalization at each scale, we also introduce a novel, projection-based penalty operator across multiple dimensions.
Results show the performance of the approach as a data reduction and modeling strategy on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-09T18:32:24Z) - Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.