Active Predictive Coding Networks: A Neural Solution to the Problem of
Learning Reference Frames and Part-Whole Hierarchies
- URL: http://arxiv.org/abs/2201.08813v1
- Date: Fri, 14 Jan 2022 21:22:48 GMT
- Authors: Dimitrios C. Gklezakos, Rajesh P. N. Rao
- Abstract summary: We introduce Active Predictive Coding Networks (APCNs).
APCNs are a new class of neural networks that solve a major problem posed by Hinton and others in the fields of artificial intelligence and brain modeling.
We demonstrate that APCNs can (a) learn to parse images into part-whole hierarchies, (b) learn compositional representations, and (c) transfer their knowledge to unseen classes of objects.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Active Predictive Coding Networks (APCNs), a new class of neural
networks that solve a major problem posed by Hinton and others in the fields of
artificial intelligence and brain modeling: how can neural networks learn
intrinsic reference frames for objects and parse visual scenes into part-whole
hierarchies by dynamically allocating nodes in a parse tree? APCNs address this
problem by using a novel combination of ideas: (1) hypernetworks are used for
dynamically generating recurrent neural networks that predict parts and their
locations within intrinsic reference frames conditioned on higher object-level
embedding vectors, and (2) reinforcement learning is used in conjunction with
backpropagation for end-to-end learning of model parameters. The APCN
architecture lends itself naturally to multi-level hierarchical learning and is
closely related to predictive coding models of cortical function. Using the
MNIST, Fashion-MNIST and Omniglot datasets, we demonstrate that APCNs can (a)
learn to parse images into part-whole hierarchies, (b) learn compositional
representations, and (c) transfer their knowledge to unseen classes of objects.
With their ability to dynamically generate parse trees with part locations for
objects, APCNs offer a new framework for explainable AI that leverages advances
in deep learning while retaining interpretability and compositionality.
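The first of the two ideas above, a hypernetwork that dynamically generates the weights of a lower-level recurrent network conditioned on a higher object-level embedding, which then predicts parts and their locations, can be sketched in a few lines. This is a minimal illustration, not the paper's actual architecture: the dimensions, the single linear hypernetwork, the constant input drive, and the location readout `W_loc` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB = 8   # object-level embedding size (assumed)
HID = 16  # hidden-state size of the generated RNN (assumed)
LOC = 2   # (x, y) part location within the object's reference frame

# Hypernetwork: a single linear map from the object embedding to the
# flattened recurrent weight matrix of a lower-level RNN.
H = rng.normal(0, 0.1, size=(HID * HID, EMB))

def generate_rnn_weights(z_obj):
    """Generate recurrent weights conditioned on the object embedding."""
    return (H @ z_obj).reshape(HID, HID)

# Readout mapping the RNN state to a predicted part location.
W_loc = rng.normal(0, 0.1, size=(LOC, HID))

def predict_parts(z_obj, n_parts=3):
    """Unroll the generated RNN, emitting one part location per step."""
    W_rec = generate_rnn_weights(z_obj)
    h = np.zeros(HID)
    locations = []
    for _ in range(n_parts):
        h = np.tanh(W_rec @ h + 0.1)  # constant input drive for the sketch
        locations.append(W_loc @ h)
    return np.stack(locations)

z = rng.normal(size=EMB)   # a hypothetical object embedding
locs = predict_parts(z)
print(locs.shape)  # (3, 2): three predicted part locations
```

Because the RNN's weights are a function of the object embedding, different objects induce different part-prediction dynamics, which is what allows nodes of the parse tree to be allocated dynamically; in the actual APCN these parameters are trained end-to-end with backpropagation combined with reinforcement learning, which the sketch omits.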
Related papers
- Towards Scalable and Versatile Weight Space Learning
  This paper introduces the SANE approach to weight-space learning. Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
  arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks
  We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning. The model was tested on a diverse set of popular machine learning benchmarks.
  arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Learning Object-Centric Representation via Reverse Hierarchy Guidance
  Object-Centric Learning (OCL) seeks to enable neural networks to identify individual objects in visual scenes. RHGNet introduces a top-down pathway that works differently in the training and inference processes. Our model achieves SOTA performance on several commonly used datasets.
  arXiv Detail & Related papers (2024-05-17T07:48:27Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition
  How humans understand and recognize the actions of others is a complex neuroscientific problem. LA-GCN proposes a graph convolutional network assisted by knowledge from large language models (LLMs).
  arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Recursive Neural Programs: Variational Learning of Image Grammars and Part-Whole Hierarchies
  We introduce Recursive Neural Programs (RNPs), the first neural generative model to address the part-whole hierarchy learning problem. Our results show that RNPs provide an intuitive and explainable way of composing objects and scenes.
  arXiv Detail & Related papers (2022-06-16T22:02:06Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning
  We present PredRNN, a new recurrent network for learning visual dynamics from historical context. We show that our approach obtains highly competitive results on three standard datasets.
  arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Evolutionary Architecture Search for Graph Neural Networks
  We propose a novel AutoML framework based on the evolution of individual models in a large Graph Neural Network (GNN) architecture space. To the best of our knowledge, this is the first work to introduce and evaluate evolutionary architecture search for GNN models.
  arXiv Detail & Related papers (2020-09-21T22:11:53Z)
- Neural Networks Enhancement with Logical Knowledge
  We propose an extension of KENN for relational data. The results show that KENN can increase the performance of the underlying neural network even in the presence of relational data.
  arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Understanding the Role of Individual Units in a Deep Neural Network
  We present an analytic framework to systematically identify hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) trained to generate scenes.
  arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Locality Guided Neural Networks for Explainable Artificial Intelligence
  We propose a novel backpropagation algorithm called Locality Guided Neural Network (LGNN). LGNN preserves locality between neighbouring neurons within each layer of a deep network. In our experiments, we train various VGG and Wide ResNet (WRN) networks for image classification on CIFAR100.
  arXiv Detail & Related papers (2020-07-12T23:45:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.