Contrastive Similarity Matching for Supervised Learning
- URL: http://arxiv.org/abs/2002.10378v5
- Date: Sun, 6 Dec 2020 02:09:28 GMT
- Title: Contrastive Similarity Matching for Supervised Learning
- Authors: Shanshan Qin, Nayantara Mudur and Cengiz Pehlevan
- Abstract summary: We propose a biologically-plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks.
In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar.
We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections.
- Score: 13.750624267664156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel biologically-plausible solution to the credit assignment
problem motivated by observations in the ventral visual pathway and trained
deep neural networks. In both, representations of objects in the same category
become progressively more similar, while objects belonging to different
categories become less similar. We use this observation to motivate a
layer-specific learning goal in a deep network: each layer aims to learn a
representational similarity matrix that interpolates between previous and later
layers. We formulate this idea using a contrastive similarity matching
objective function and derive from it deep neural networks with feedforward,
lateral, and feedback connections, and neurons that exhibit
biologically-plausible Hebbian and anti-Hebbian plasticity. Contrastive
similarity matching can be interpreted as an energy-based learning algorithm,
but with significant differences from others in how a contrastive function is
constructed.
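As an illustrative aid only (not the paper's exact contrastive formulation), the layer-wise goal described in the abstract — each layer learning a representational similarity matrix that interpolates between the previous and later layers — can be sketched with Gram matrices of activations. The interpolation weight `alpha` and the squared-Frobenius loss below are assumptions of this sketch:

```python
import numpy as np

def similarity_matrix(h):
    """Representational similarity (Gram) matrix of activations h: (batch, features)."""
    return h @ h.T

def interpolation_loss(h_prev, h_curr, h_next, alpha=0.5):
    """Squared-Frobenius distance between the current layer's similarity matrix
    and an interpolation of its neighbors' similarity matrices.
    Illustrative stand-in for the paper's contrastive similarity matching objective."""
    target = (1.0 - alpha) * similarity_matrix(h_prev) + alpha * similarity_matrix(h_next)
    diff = similarity_matrix(h_curr) - target
    return float(np.sum(diff ** 2))

# Toy activations for three consecutive layers (same batch, different widths).
rng = np.random.default_rng(0)
h_prev = rng.standard_normal((8, 16))
h_curr = rng.standard_normal((8, 10))
h_next = rng.standard_normal((8, 4))
loss = interpolation_loss(h_prev, h_curr, h_next)
```

Note that because the loss depends only on batch-by-batch similarity matrices, layers of different widths can be compared, which is what makes a layer-local target of this kind possible.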
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Decoupling Semantic Similarity from Spatial Alignment for Neural Networks [4.801683210246596]
We argue that the spatial location of semantic objects influences neither human perception nor deep learning classifiers.
This should be reflected in the definition of similarity between image responses for computer vision systems.
We measure semantic similarity between input responses by formulating it as a set-matching problem.
arXiv Detail & Related papers (2024-10-30T15:17:58Z)
- Deep Neural Networks Can Learn Generalizable Same-Different Visual Relations [22.205838756057314]
We study whether deep neural networks can acquire and generalize same-different relations both within and out-of-distribution.
We find that certain pretrained transformers can learn a same-different relation that generalizes with near perfect accuracy to out-of-distribution stimuli.
arXiv Detail & Related papers (2023-10-14T16:28:57Z)
- Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging.
arXiv Detail & Related papers (2023-10-10T16:27:12Z)
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z)
- Grounding Psychological Shape Space in Convolutional Neural Networks [0.0]
We use convolutional neural networks to learn a generalizable mapping between perceptual inputs and a recently proposed psychological similarity space for the shape domain.
Our results indicate that a classification-based multi-task learning scenario yields the best results, but that its performance is relatively sensitive to the dimensionality of the similarity space.
arXiv Detail & Related papers (2021-11-16T12:21:07Z)
- Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z)
- Comparing Deep Neural Nets with UMAP Tour [12.910602784766562]
UMAP Tour is built to visually inspect and compare internal behavior of real-world neural network models.
We find concepts learned in state-of-the-art models, as well as dissimilarities between models such as GoogLeNet and ResNet.
arXiv Detail & Related papers (2021-10-18T15:59:13Z)
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms state-of-the-art methods, with larger gains when training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.