Brain-like approaches to unsupervised learning of hidden representations -- a comparative study
- URL: http://arxiv.org/abs/2005.03476v2
- Date: Fri, 16 Apr 2021 13:22:54 GMT
- Title: Brain-like approaches to unsupervised learning of hidden representations -- a comparative study
- Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
- Abstract summary: We study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations.
The usefulness and class-dependent separability of the hidden representations when trained on the MNIST and Fashion-MNIST datasets are studied.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Unsupervised learning of hidden representations has been one of the most
vibrant research directions in machine learning in recent years. In this work
we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN)
model, recently extended to extract sparse distributed high-dimensional
representations. The usefulness and class-dependent separability of the hidden
representations when trained on the MNIST and Fashion-MNIST datasets are studied
using an external linear classifier and compared with other unsupervised
learning methods that include restricted Boltzmann machines and autoencoders.
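A minimal sketch of that evaluation protocol, assuming the trained unsupervised model is exposed as an `encode` function (a hypothetical stand-in for BCPNN, RBM, or autoencoder inference; the paper's own implementation is not reproduced here):
```python
# Linear-probe evaluation: train a linear classifier on frozen hidden
# representations and report test accuracy. `encode` is a hypothetical
# stand-in for the trained unsupervised model (BCPNN, RBM, autoencoder).
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(encode, X_train, y_train, X_test, y_test):
    H_train = encode(X_train)   # hidden representations, shape (n_samples, n_hidden)
    H_test = encode(X_test)
    clf = LogisticRegression(max_iter=1000)  # the external linear classifier
    clf.fit(H_train, y_train)
    return clf.score(H_test, y_test)

# Baseline on raw pixels, using the identity as a trivial "encoder":
# acc = linear_probe_accuracy(lambda X: X, X_train, y_train, X_test, y_test)
```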
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
Self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show nearly a twofold increase in accuracy (a textbook SOM update sketch follows the link below).
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
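For context, a minimal sketch of the classical SOM update that the continual SOM above generalizes; the paper's decay schedules and low-memory mechanism are not reproduced, so this is the textbook algorithm only:
```python
# Classical online SOM update: find the best-matching unit (BMU) for each
# input, then pull the BMU and its grid neighbours toward the input.
import numpy as np

def som_step(weights, grid, x, lr=0.1, sigma=1.0):
    """weights: (n_units, dim) codebook; grid: (n_units, 2) unit coordinates."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)           # squared grid distance
    h = np.exp(-d2 / (2 * sigma ** 2))                     # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)             # move toward input
    return weights

# Example: a 5x5 map on 784-dim inputs (e.g., flattened MNIST digits).
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
weights = np.random.rand(25, 784)
# for x in data: weights = som_step(weights, grid, x)
```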
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Non-Parametric Representation Learning with Kernels [6.944372188747803]
We introduce and analyze several kernel-based representation learning approaches.
We argue that the classical representer theorems for supervised kernel machines are not always applicable for (self-supervised) representation learning.
We empirically evaluate the performance of these methods both in small-data regimes and in comparison with neural network based models (a generic kernel PCA sketch follows the link below).
arXiv Detail & Related papers (2023-09-05T08:14:25Z)
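The summary above does not detail the paper's specific kernel methods; as a generic, assumed illustration of non-parametric representation learning with kernels, here is standard kernel PCA via scikit-learn:
```python
# Standard kernel PCA as a generic example of kernel-based representation
# learning; an illustration only, not the paper's proposed methods.
import numpy as np
from sklearn.decomposition import KernelPCA

X = np.random.rand(200, 784)                 # placeholder data (e.g., flattened images)
kpca = KernelPCA(n_components=32, kernel="rbf", gamma=1e-3)
Z = kpca.fit_transform(X)                    # 32-dim non-parametric representations
print(Z.shape)                               # (200, 32)
```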
- Spiking neural networks with Hebbian plasticity for unsupervised representation learning [0.0]
We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure.
We incorporate an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, previously shown to perform representation learning, into a spiking neural network (a sketch of the Hebbian-Bayesian update follows the link below).
We show performance close to the non-spiking BCPNN and competitive with other Hebbian-based spiking networks when trained on the MNIST and F-MNIST machine learning benchmarks.
arXiv Detail & Related papers (2023-05-05T22:34:54Z)
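A minimal sketch of the non-spiking Hebbian-Bayesian (BCPNN-style) update that the spiking model approximates: running averages of pre-, post-, and joint activation probabilities are maintained, and weights are set to the log ratio of joint to marginal probabilities. The time constant value, spiking dynamics, and structural rewiring are assumptions or omitted here:
```python
# Hebbian-Bayesian (BCPNN-style) update sketch: exponential running averages
# of pre (p_i), post (p_j), and joint (p_ij) activation probabilities, with
# weights w_ij = log(p_ij / (p_i * p_j)) and biases b_j = log(p_j).
import numpy as np

eps = 1e-6  # keeps probabilities away from zero

def bcpnn_update(p_i, p_j, p_ij, x_pre, x_post, tau=1000.0):
    a = 1.0 / tau                                   # learning rate from time constant
    p_i += a * (x_pre - p_i)                        # presynaptic running average
    p_j += a * (x_post - p_j)                       # postsynaptic running average
    p_ij += a * (np.outer(x_pre, x_post) - p_ij)    # joint running average
    w = np.log((p_ij + eps) / (np.outer(p_i, p_j) + eps))  # weights
    b = np.log(p_j + eps)                           # biases
    return p_i, p_j, p_ij, w, b
```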
- Entity-Conditioned Question Generation for Robust Attention Distribution in Neural Information Retrieval [51.53892300802014]
We show that supervised neural information retrieval models are prone to learning sparse attention patterns over passage tokens.
Using a novel targeted synthetic data generation method, we teach neural IR to attend more uniformly and robustly to all entities in a given passage.
arXiv Detail & Related papers (2022-04-24T22:36:48Z)
- Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework for generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation (an illustrative sketch follows the link below).
arXiv Detail & Related papers (2021-03-02T17:47:58Z)
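The paper's exact framework is not specified in this summary; one common recipe for label-free models, shown here as an assumed illustration, perturbs the input so as to maximally displace its embedding:
```python
# Illustrative (not the paper's exact method): craft a perturbation that
# maximizes the distance between clean and perturbed embeddings, using an
# FGSM-like single gradient step under an L-infinity budget.
import torch

def embedding_attack(encoder, x, eps=0.03):
    x = x.clone().detach()
    z_clean = encoder(x).detach()                 # frozen clean embedding
    x_adv = x.clone().requires_grad_(True)
    loss = torch.norm(encoder(x_adv) - z_clean)   # displacement in feature space
    loss.backward()
    return (x + eps * x_adv.grad.sign()).clamp(0, 1)
```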
- A Convolutional Deep Markov Model for Unsupervised Speech Representation Learning [32.59760685342343]
Probabilistic Latent Variable Models provide an alternative to self-supervised learning approaches for linguistic representation learning from speech.
In this work, we propose ConvDMM, a Gaussian state-space model with non-linear emission and transition functions modelled by deep neural networks (a generative-step sketch follows the link below).
When trained on a large scale speech dataset (LibriSpeech), ConvDMM produces features that significantly outperform multiple self-supervised feature extracting methods.
arXiv Detail & Related papers (2020-06-03T21:50:20Z)
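A minimal generative-step sketch of a Gaussian state-space model with neural transition and emission functions, in the spirit of ConvDMM; the convolutional encoder and ELBO training objective are omitted, and all layer sizes are assumptions:
```python
# Gaussian state-space model with neural transition/emission, sketching one
# generative step: z_t ~ N(mu(z_{t-1}), sigma(z_{t-1})), x_t = emit(z_t).
import torch
import torch.nn as nn

class DeepMarkovStep(nn.Module):
    def __init__(self, z_dim=64, x_dim=80):
        super().__init__()
        self.trans = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 2 * z_dim))   # -> (mu, log sigma)
        self.emit = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                  nn.Linear(128, x_dim))        # feature frame

    def forward(self, z_prev):
        mu, log_sigma = self.trans(z_prev).chunk(2, dim=-1)
        z_t = mu + log_sigma.exp() * torch.randn_like(mu)       # reparameterized sample
        return z_t, self.emit(z_t)
```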
- Learning representations in Bayesian Confidence Propagation neural networks [0.0]
Unsupervised learning of hierarchical representations has been one of the most vibrant research directions in deep learning.
In this work we study biologically inspired unsupervised strategies in neural networks based on local Hebbian learning.
arXiv Detail & Related papers (2020-03-27T13:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.