Representation Learning in a Decomposed Encoder Design for Bio-inspired
Hebbian Learning
- URL: http://arxiv.org/abs/2401.08603v1
- Date: Wed, 22 Nov 2023 07:58:14 GMT
- Title: Representation Learning in a Decomposed Encoder Design for Bio-inspired
Hebbian Learning
- Authors: Achref Jaziri, Sina Ditzel, Iuliia Pliushch, Visvanathan Ramesh
- Abstract summary: We propose a modular framework trained with a bio-inspired variant of contrastive predictive coding (Hinge CLAPP Loss).
Our findings indicate that this form of inductive bias can be beneficial in closing the gap between models with local plasticity rules and backpropagation models.
- Score: 6.199300239433395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern data-driven machine learning system designs exploit inductive biases
on architectural structure, invariance and equivariance requirements, task
specific loss functions, and computational optimization tools. Previous works
have illustrated that inductive bias in the early layers of the encoder in the
form of human specified quasi-invariant filters can serve as a powerful
inductive bias to attain better robustness and transparency in learned
classifiers. This paper explores this further in the context of representation
learning with local plasticity rules, i.e., bio-inspired Hebbian learning. We
propose a modular framework trained with a bio-inspired variant of contrastive
predictive coding (Hinge CLAPP Loss). Our framework is composed of parallel
encoders each leveraging a different invariant visual descriptor as an
inductive bias. We evaluate the representation learning capacity of our system
in a classification scenario on image datasets of varying difficulty (GTSRB,
STL10, CODEBRIM) as well as video data (UCF101). Our findings indicate that
this form of inductive bias can be beneficial in closing the gap between models
with local plasticity rules and backpropagation models, as well as in learning more
robust representations in general.
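To make the training signal concrete, here is a minimal sketch of a hinge-based, CLAPP-style contrastive predictive loss applied to two parallel encoder branches, each fed a different fixed descriptor view of the input. The Sobel-edge and raw-intensity views, the layer sizes, and the use of ordinary backpropagation inside each small branch are illustrative assumptions rather than the paper's exact design; the paper's point is that an objective of this form can be optimized with layer-local plasticity rules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(x):
    """Fixed edge-descriptor front-end (an illustrative stand-in for the
    paper's invariant visual descriptors). x: (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, kx.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class LocalEncoder(nn.Module):
    """One encoder branch with a CLAPP-style hinge contrastive loss."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        self.w_pred = nn.Linear(dim, dim, bias=False)  # prediction weights W

    def clapp_loss(self, x_ctx, x_pos, x_neg):
        pred = self.w_pred(self.net(x_ctx).detach())  # predict from context
        s_pos = (self.net(x_pos) * pred).sum(1)       # score, matching pair
        s_neg = (self.net(x_neg) * pred).sum(1)       # score, mismatched pair
        # hinge: push positive scores above +1 and negative scores below -1
        return F.relu(1 - s_pos).mean() + F.relu(1 + s_neg).mean()

# Parallel branches, each seeing a different descriptor view of the input.
branches = {"intensity": LocalEncoder(), "edges": LocalEncoder()}
views = {"intensity": lambda x: x, "edges": sobel_edges}

x_ctx, x_pos, x_neg = (torch.rand(8, 1, 64, 64) for _ in range(3))
for name, enc in branches.items():
    loss = enc.clapp_loss(views[name](x_ctx), views[name](x_pos), views[name](x_neg))
    loss.backward()  # each branch trains on its own objective
```

Because each branch optimizes its own loss and the context representation is detached, no error signal crosses between branches, which is what makes the decomposed design compatible with local learning rules.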
Related papers
- Engineered Ordinary Differential Equations as Classification Algorithm (EODECA): thorough characterization and testing [0.9786690381850358]
We present EODECA, a novel approach at the intersection of machine learning and dynamical systems theory.
EODECA's design incorporates the ability to embed stable attractors in the phase space, enhancing reliability and allowing for reversible dynamics.
We demonstrate EODECA's effectiveness on the MNIST and Fashion MNIST datasets, achieving impressive accuracies of $98.06\%$ and $88.21\%$, respectively.
arXiv Detail & Related papers (2023-12-22T13:34:18Z) - UniDiff: Advancing Vision-Language Models with Generative and
- UniDiff: Advancing Vision-Language Models with Generative and Discriminative Learning [86.91893533388628]
This paper presents UniDiff, a unified multi-modal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff demonstrates versatility in both multi-modal understanding and generative tasks.
arXiv Detail & Related papers (2023-06-01T15:39:38Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Interpretable Sentence Representation with Variational Autoencoders and
- Interpretable Sentence Representation with Variational Autoencoders and Attention [0.685316573653194]
We develop methods to enhance the interpretability of recent representation learning techniques in natural language processing (NLP).
We leverage Variational Autoencoders (VAEs) due to their efficiency in relating observations to latent generative factors.
We build two models with inductive bias to separate information in latent representations into understandable concepts without annotated data.
arXiv Detail & Related papers (2023-05-04T13:16:15Z) - Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z) - Fair Interpretable Representation Learning with Correction Vectors [60.0806628713968]
We propose a new framework for fair representation learning that is centered around the learning of "correction vectors"
We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance.
arXiv Detail & Related papers (2022-02-07T11:19:23Z) - Entropy optimized semi-supervised decomposed vector-quantized
- Entropy optimized semi-supervised decomposed vector-quantized variational autoencoder model based on transfer learning for multiclass text classification and generation [3.9318191265352196]
We propose a semi-supervised discrete latent variable model for multi-class text classification and text generation.
The proposed model employs the concept of transfer learning for training a quantized transformer model.
Experimental results indicate that the proposed model substantially outperforms state-of-the-art models.
arXiv Detail & Related papers (2021-11-10T07:07:54Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of a VAE's encoder failing to consistently encode samples drawn from its own decoder, as well as the consequences of fixing this behaviour by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to input perturbations introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Robust Training of Vector Quantized Bottleneck Models [21.540133031071438]
We demonstrate methods for reliable and efficient training of discrete representations using Vector-Quantized Variational Auto-Encoder (VQ-VAE) models (a minimal sketch of the underlying quantization bottleneck follows this list).
For unsupervised representation learning, they have become viable alternatives to continuous latent variable models such as the Variational Auto-Encoder (VAE).
arXiv Detail & Related papers (2020-05-18T08:23:41Z) - Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.