Pyramidal Predictive Network: A Model for Visual-frame Prediction Based
on Predictive Coding Theory
- URL: http://arxiv.org/abs/2208.07021v1
- Date: Mon, 15 Aug 2022 06:28:34 GMT
- Title: Pyramidal Predictive Network: A Model for Visual-frame Prediction Based
on Predictive Coding Theory
- Authors: Chaofan Ling, Junpei Zhong and Weihua Li
- Abstract summary: We propose a novel neural network model for the task of visual-frame prediction.
The model is composed of a series of recurrent and convolutional units forming the top-down and bottom-up streams.
It learns to predict future frames in a visual sequence, with ConvLSTMs on each layer of the network making local predictions from the top down.
- Score: 1.4610038284393165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the well-known predictive coding theory in cognitive science, we
propose a novel neural network model for the task of visual-frame prediction.
In this paper, we combine the theoretical framework of
predictive coding with deep learning architectures to design an efficient
predictive network model for visual-frame prediction. The model is composed of
a series of recurrent and convolutional units forming the top-down and
bottom-up streams, respectively. It learns to predict future frames in a visual
sequence, with ConvLSTMs on each layer of the network making local predictions
from the top down. The main innovation of our model is that the update frequency
of the neural units on each layer decreases as the network level increases,
so that the model resembles a pyramid along the time dimension; we therefore
call it the Pyramidal Predictive Network (PPNet).
Notably, this pyramid-like design is consistent with the neuronal activities
reported in neuroscience findings on the predictive coding framework.
According to the experimental results, this model shows better compactness and
comparable predictive performance relative to existing works, implying lower
computational cost with competitive prediction accuracy. Code will be available at
https://github.com/Ling-CF/PPNet.
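The pyramidal timing scheme described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification: the real PPNet uses ConvLSTM units in top-down and bottom-up streams, while here each "layer" is only an index, and the choice of a power-of-two update interval (`base ** l`) is an assumption made for illustration, not taken from the paper.

```python
def pyramidal_schedule(num_layers, num_steps, base=2):
    """Return, for each timestep, the indices of layers that update.

    Layer l is assumed to update every base**l steps, so higher layers
    run less often -- viewed along the time dimension, the pattern of
    active units forms a pyramid.
    """
    schedule = []
    for t in range(num_steps):
        active = [l for l in range(num_layers) if t % (base ** l) == 0]
        schedule.append(active)
    return schedule

# Example: 3 layers over 8 timesteps. Layer 0 fires every step,
# layer 1 every 2 steps, layer 2 every 4 steps.
for t, active in enumerate(pyramidal_schedule(3, 8)):
    print(t, active)
```

Under this schedule the lowest layer makes a local prediction at every frame, while higher layers intervene only periodically, which is one way to realize the lower computational cost the abstract attributes to the pyramid design.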
Related papers
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Predictive Coding Based Multiscale Network with Encoder-Decoder LSTM for Video Prediction [1.2537993038844142]
We present a multi-scale predictive coding model for future video frames prediction.
Our model employs a multi-scale approach (coarse to fine), in which higher-level neurons generate coarser, lower-resolution predictions.
We propose several improvements to the training strategy to mitigate the accumulation of prediction errors in long-term prediction.
arXiv Detail & Related papers (2022-12-22T12:15:37Z)
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical EDNN has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently at the two phases will cause the mismatch between training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks that neglect calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics [0.5284812806199193]
We take inspiration from a popular framework in neuroscience: 'predictive coding'
We show that implementing this strategy into two popular networks, VGG16 and EfficientNetB0, improves their robustness against various corruptions.
arXiv Detail & Related papers (2021-06-04T22:48:13Z)
- Perceptron Theory Can Predict the Accuracy of Neural Networks [6.136302173351179]
Multilayer neural networks set the current state of the art for many technical classification problems.
But, these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance.
Here, we develop a statistical theory for the one-layer perceptron and show that it can predict performances of a surprisingly large variety of neural networks.
arXiv Detail & Related papers (2020-12-14T19:02:26Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Hierarchical Predictive Coding Models in a Deep-Learning Framework [1.370633147306388]
We review some of the more well known models of predictive coding.
We also survey some recent attempts to cast these models within a deep learning framework.
arXiv Detail & Related papers (2020-05-07T03:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.