Sparse, Geometric Autoencoder Models of V1
- URL: http://arxiv.org/abs/2302.11162v1
- Date: Wed, 22 Feb 2023 06:07:20 GMT
- Title: Sparse, Geometric Autoencoder Models of V1
- Authors: Jonathan Huml, Abiy Tasissa, Demba Ba
- Abstract summary: We propose an autoencoder architecture whose latent representations are implicitly, locally organized for spectral clustering.
We show that the autoencoder objective function maintains core ideas of the sparse coding framework, yet also offers a promising path to describe the differentiation of receptive fields.
- Score: 2.491226380993217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The classical sparse coding model represents visual stimuli as a linear
combination of a handful of learned basis functions that are Gabor-like when
trained on natural image data. However, the Gabor-like filters learned by
classical sparse coding far overpredict well-tuned simple cell receptive field
(SCRF) profiles. A number of subsequent models have either discarded the sparse
dictionary learning framework entirely or have yet to take advantage of the
surge in unrolled, neural dictionary learning architectures. A key missing
theme of these updates is a stronger notion of \emph{structured sparsity}. We
propose an autoencoder architecture whose latent representations are
implicitly, locally organized for spectral clustering, which begets artificial
neurons better matched to observed primate data. The weighted-$\ell_1$ (WL)
constraint in the autoencoder objective function maintains core ideas of the
sparse coding framework, yet also offers a promising path to describe the
differentiation of receptive fields in terms of a discriminative hierarchy in
future work.
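For concreteness, the contrast the abstract draws can be written out; the abstract does not specify the exact weighting scheme, so the weighted term below is only a generic sketch of its form. Classical sparse coding fits a dictionary $\Phi$ and a code $a$ for a stimulus $x$ by solving
$$\min_{\Phi, a}\ \tfrac{1}{2}\lVert x - \Phi a\rVert_2^2 + \lambda\lVert a\rVert_1,$$
whereas a weighted-$\ell_1$ penalty replaces the single sparsity level $\lambda$ with per-coefficient weights,
$$\min_{\Phi, a}\ \tfrac{1}{2}\lVert x - \Phi a\rVert_2^2 + \sum_i w_i\lvert a_i\rvert,\qquad w_i \ge 0,$$
so that the pattern of weights, rather than one global penalty, determines which latent units are encouraged to stay active together.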
Related papers
- Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning [86.15009879251386]
We propose a novel architecture and method of explainable classification with Concept Bottleneck Models (CBM)
CBMs require an additional set of concepts to be leveraged.
We show a significant increase in accuracy using sparse hidden layers in CLIP-based bottleneck models.
arXiv Detail & Related papers (2024-04-04T09:43:43Z)
- Clustering Inductive Biases with Unrolled Networks [4.47196217712431]
We propose an autoencoder architecture (WLSC) whose latent representations are implicitly, locally organized for spectral clustering through a Laplacian quadratic form of a bipartite graph (a generic sketch of such a penalty appears after this list).
We show that our regularization can be interpreted as early-stage specialization of receptive fields to certain classes of stimuli.
arXiv Detail & Related papers (2023-11-30T02:02:30Z)
- Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment [53.2701026843921]
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification.
In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary.
We propose the Self Structural Semantic Alignment (S3A) framework, which extracts structural semantic information from unlabeled data while simultaneously self-learning.
arXiv Detail & Related papers (2023-08-24T17:56:46Z)
- Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
arXiv Detail & Related papers (2023-05-30T01:38:54Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined canonicalization functions.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Towards Disentangling Information Paths with Coded ResNeXt [11.884259630414515]
We take a novel approach to enhance the transparency of the function of the whole network.
We propose a neural network architecture for classification, in which the information that is relevant to each class flows through specific paths.
arXiv Detail & Related papers (2022-02-10T21:45:49Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Improved Training of Sparse Coding Variational Autoencoder via Weight Normalization [0.0]
We focus on a recently proposed model, the sparse coding variational autoencoder (SVAE).
We show that projection of the filters onto unit norm drastically increases the number of active filters.
Our results highlight the importance of weight normalization for learning sparse representation from data.
arXiv Detail & Related papers (2021-01-23T08:07:20Z)
- The Interpretable Dictionary in Sparse Coding [4.205692673448206]
In our work, we illustrate that an ANN, trained using sparse coding under specific sparsity constraints, yields a more interpretable model than the standard deep learning model.
The dictionary learned by sparse coding can be more easily understood, and the activations of these elements create a selective feature output.
arXiv Detail & Related papers (2020-11-24T00:26:40Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
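The WLSC entry above cites a Laplacian quadratic form of a bipartite graph as the mechanism that locally organizes the latent code for spectral clustering. Below is a minimal sketch of that generic penalty, not the authors' implementation; the summary does not say how stimuli and latent units are wired into the graph, so the construction here is a hypothetical stand-in.

```python
# Minimal, hypothetical sketch of a Laplacian quadratic-form penalty on latent
# codes, of the kind the WLSC summary mentions; NOT the authors' code.
import numpy as np

def laplacian_from_biadjacency(B):
    """Unnormalized Laplacian of a bipartite graph with m x n biadjacency B.

    Nodes 0..m-1 form one side, nodes m..m+n-1 the other; the full adjacency
    is [[0, B], [B^T, 0]].
    """
    m, n = B.shape
    A = np.zeros((m + n, m + n))
    A[:m, m:] = B
    A[m:, :m] = B.T
    return np.diag(A.sum(axis=1)) - A

def laplacian_penalty(Z, L):
    """Sum of z^T L z over a batch of codes Z (shape: batch x num_nodes).

    For an unnormalized Laplacian this equals 0.5 * sum_ij A_ij (z_i - z_j)^2
    per code, so it is small when each code varies smoothly over the graph --
    the local organization that spectral clustering exploits.
    """
    return float(np.einsum('bi,ij,bj->', Z, L, Z))

# Toy usage: 3 nodes on one side, 4 on the other, batch of 5 random codes.
rng = np.random.default_rng(0)
B = (rng.random((3, 4)) > 0.5).astype(float)
L = laplacian_from_biadjacency(B)
Z = rng.standard_normal((5, 3 + 4))
print(laplacian_penalty(Z, L))
```

In an unrolled autoencoder, a term like this (or the $\ell_1$ weights it induces) would simply be added to the reconstruction loss; the snippet only illustrates the quadratic form itself.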