Learning representations in Bayesian Confidence Propagation neural
networks
- URL: http://arxiv.org/abs/2003.12415v1
- Date: Fri, 27 Mar 2020 13:47:16 GMT
- Title: Learning representations in Bayesian Confidence Propagation neural
networks
- Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
- Abstract summary: Unsupervised learning of hierarchical representations has been one of the most vibrant research directions in deep learning.
In this work we study biologically inspired unsupervised strategies in neural networks based on local Hebbian learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unsupervised learning of hierarchical representations has been one of the
most vibrant research directions in deep learning during recent years. In this
work we study biologically inspired unsupervised strategies in neural networks
based on local Hebbian learning. We propose new mechanisms to extend the
Bayesian Confidence Propagating Neural Network (BCPNN) architecture, and
demonstrate their capability for unsupervised learning of salient hidden
representations when tested on the MNIST dataset.
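To make the local Hebbian learning concrete, here is a minimal NumPy sketch assuming the standard BCPNN log-odds formulation (weights as log probability ratios, biases as log priors, softmax readout). Function and variable names are illustrative, not the paper's code.

```python
# Minimal sketch of a BCPNN-style Hebbian update (illustrative, not the
# authors' code): weights are log-odds of estimated co-activation
# probabilities and biases are log priors, with a softmax readout.
import numpy as np

def bcpnn_weights(x, y, eps=1e-6):
    """Estimate BCPNN weights/biases from activity samples.

    x : (n_samples, n_pre) presynaptic activations in [0, 1]
    y : (n_samples, n_post) postsynaptic activations in [0, 1]
    eps : floor keeping probability estimates away from zero
    """
    p_i = np.clip(x.mean(axis=0), eps, 1.0)        # P(x_i)
    p_j = np.clip(y.mean(axis=0), eps, 1.0)        # P(y_j)
    p_ij = np.clip(x.T @ y / len(x), eps, 1.0)     # P(x_i, y_j)
    w = np.log(p_ij / np.outer(p_i, p_j))          # log-odds weights
    b = np.log(p_j)                                # log-prior biases
    return w, b

def bcpnn_forward(x, w, b):
    """Inference: linear support plus bias, normalized by a softmax so
    hidden activations behave like posterior probabilities."""
    s = x @ w + b
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

The batch averages above are purely expository; the mechanisms proposed in the paper operate with local estimates and extend the architecture beyond this sketch.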
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Topological Representations of Heterogeneous Learning Dynamics of Recurrent Spiking Neural Networks [16.60622265961373]
Spiking Neural Networks (SNNs) have become an essential paradigm in neuroscience and artificial intelligence.
Recent work has studied the internal representations learned by deep neural networks.
arXiv Detail & Related papers (2024-03-19T05:37:26Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Spiking neural networks with Hebbian plasticity for unsupervised representation learning [0.0]
We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure.
We incorporate an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, previously shown to perform representation learning, into a spiking neural network (an illustrative sketch of such trace-based updates follows this list).
We show performance close to the non-spiking BCPNN, and competitive with other Hebbian-based spiking networks when trained on MNIST and F-MNIST machine learning benchmarks.
arXiv Detail & Related papers (2023-05-05T22:34:54Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Wide Neural Networks Forget Less Catastrophically [39.907197907411266]
We study the impact of neural network "width" on catastrophic forgetting.
We study the learning dynamics of the network from various perspectives.
arXiv Detail & Related papers (2021-10-21T23:49:23Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- D2RL: Deep Dense Architectures in Reinforcement Learning [47.67475810050311]
We take inspiration from successful architectural choices in computer vision and generative modelling.
We investigate the use of deeper networks and dense connections for reinforcement learning on a variety of simulated robotic learning benchmark environments.
arXiv Detail & Related papers (2020-10-19T01:27:07Z)
- DRL-FAS: A Novel Framework Based on Deep Reinforcement Learning for Face Anti-Spoofing [34.68682691052962]
We propose a novel framework based on the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN).
In particular, we model the behavior of exploring face-spoofing-related information from image sub-patches by leveraging deep reinforcement learning.
For the classification purpose, we fuse the local information with the global one, which can be learned from the original input image through a CNN.
arXiv Detail & Related papers (2020-09-16T07:58:01Z)
- Brain-like approaches to unsupervised learning of hidden representations -- a comparative study [0.0]
We study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations.
We study the usefulness and class-dependent separability of the hidden representations when trained on the MNIST and Fashion-MNIST datasets.
arXiv Detail & Related papers (2020-05-06T11:20:21Z)
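As referenced in the spiking-BCPNN entry above, the online Hebbian-Bayesian mechanism maintains running probability estimates. The sketch below assumes exponential moving-average traces and rate-coded (non-spiking) activities for simplicity; the class name, time constant, and API are hypothetical, not the authors' implementation.

```python
# Illustrative sketch of online Hebbian-Bayesian learning with exponential
# moving-average probability traces, in the spirit of the spiking-BCPNN
# entry above. Time constant, names, and rate-based inputs are assumptions.
import numpy as np

class OnlineBCPNN:
    def __init__(self, n_pre, n_post, tau=1000.0, eps=1e-6):
        self.alpha = 1.0 / tau                       # EMA rate
        self.p_i = np.full(n_pre, eps)               # running P(x_i)
        self.p_j = np.full(n_post, eps)              # running P(y_j)
        self.p_ij = np.full((n_pre, n_post), eps)    # running P(x_i, y_j)

    def update(self, x, y):
        """One online step; x (n_pre,) and y (n_post,) are activities in [0, 1]."""
        a = self.alpha
        self.p_i += a * (x - self.p_i)
        self.p_j += a * (y - self.p_j)
        self.p_ij += a * (np.outer(x, y) - self.p_ij)

    def weights(self):
        """Log-odds weights and log-prior biases from the current traces.
        Meaningful only after enough updates for the traces to settle."""
        w = np.log(self.p_ij / np.outer(self.p_i, self.p_j))
        b = np.log(self.p_j)
        return w, b
```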
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.