Contrastive Similarity Matching for Supervised Learning
- URL: http://arxiv.org/abs/2002.10378v5
- Date: Sun, 6 Dec 2020 02:09:28 GMT
- Title: Contrastive Similarity Matching for Supervised Learning
- Authors: Shanshan Qin, Nayantara Mudur and Cengiz Pehlevan
- Abstract summary: We propose a biologically-plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks.
In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar.
We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections.
- Score: 13.750624267664156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel biologically-plausible solution to the credit assignment
problem motivated by observations in the ventral visual pathway and trained
deep neural networks. In both, representations of objects in the same category
become progressively more similar, while objects belonging to different
categories become less similar. We use this observation to motivate a
layer-specific learning goal in a deep network: each layer aims to learn a
representational similarity matrix that interpolates between previous and later
layers. We formulate this idea using a contrastive similarity matching
objective function and derive from it deep neural networks with feedforward,
lateral, and feedback connections, and neurons that exhibit
biologically-plausible Hebbian and anti-Hebbian plasticity. Contrastive
similarity matching can be interpreted as an energy-based learning algorithm,
but with significant differences from others in how a contrastive function is
constructed.
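The layer-wise objective described above can be illustrated with a toy sketch. The paper's actual algorithm derives online Hebbian/anti-Hebbian circuit dynamics; the snippet below only demonstrates the core idea that a layer's representational similarity (Gram) matrix is driven toward an interpolation between the similarity matrices of the previous and later layers. All function and parameter names here (`similarity_matching`, `alpha`, `dim`) are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def similarity_matching(X_prev, X_next, alpha=0.5, dim=16,
                        lr=0.005, steps=2000, seed=0):
    """Toy sketch of a layer-specific similarity matching goal.

    The target similarity matrix interpolates between the Gram matrices
    of the previous and next layers' representations (columns = samples):
        S_target = (1 - alpha) * X_prev.T @ X_prev + alpha * X_next.T @ X_next
    We then fit a hidden representation Z by minimizing
        || S_target - Z.T @ Z ||_F^2
    with plain batch gradient descent (not the paper's online circuit).
    """
    rng = np.random.default_rng(seed)
    n = X_prev.shape[1]  # number of samples
    S_target = (1 - alpha) * X_prev.T @ X_prev + alpha * X_next.T @ X_next
    Z = 0.1 * rng.standard_normal((dim, n))  # small random init
    for _ in range(steps):
        # Gradient of ||S_target - Z.T Z||_F^2 w.r.t. Z is -4 Z (S_target - Z.T Z)
        grad = -4.0 * Z @ (S_target - Z.T @ Z)
        Z -= lr * grad
    return Z, S_target
```

Because `S_target` is a convex combination of Gram matrices it is positive semidefinite, so when `dim` is at least the number of samples an (essentially) exact factorization `Z.T @ Z ≈ S_target` exists and gradient descent drives the residual toward zero.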
Related papers
- Deep Neural Networks Can Learn Generalizable Same-Different Visual Relations [22.205838756057314]
We study whether deep neural networks can acquire and generalize same-different relations both in- and out-of-distribution.
We find that certain pretrained transformers can learn a same-different relation that generalizes with near perfect accuracy to out-of-distribution stimuli.
arXiv Detail & Related papers (2023-10-14T16:28:57Z) - Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging.
arXiv Detail & Related papers (2023-10-10T16:27:12Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Similarity of Neural Architectures using Adversarial Attack Transferability [47.66096554602005]
We design a quantitative and scalable similarity measure between neural architectures.
We conduct a large-scale analysis on 69 state-of-the-art ImageNet classifiers.
Our results provide insights into why developing diverse neural architectures with distinct components is necessary.
arXiv Detail & Related papers (2022-10-20T16:56:47Z) - Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by how humans perceive semantic similarity, we propose a generalized similarity learning paradigm that represents the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z) - Grounding Psychological Shape Space in Convolutional Neural Networks [0.0]
We use convolutional neural networks to learn a generalizable mapping between perceptual inputs and a recently proposed psychological similarity space for the shape domain.
Our results indicate that a classification-based multi-task learning scenario yields the best results, but that its performance is relatively sensitive to the dimensionality of the similarity space.
arXiv Detail & Related papers (2021-11-16T12:21:07Z) - Similarity and Matching of Neural Network Representations [0.0]
We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks.
We aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer.
arXiv Detail & Related papers (2021-10-27T17:59:46Z) - Comparing Deep Neural Nets with UMAP Tour [12.910602784766562]
UMAP Tour is built to visually inspect and compare the internal behavior of real-world neural network models.
We find concepts learned in state-of-the-art models, as well as dissimilarities between models such as GoogLeNet and ResNet.
arXiv Detail & Related papers (2021-10-18T15:59:13Z) - Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z) - Seeing eye-to-eye? A comparison of object recognition performance in humans and deep convolutional neural networks under image manipulation [0.0]
This study presents a behavioral comparison of visual core object recognition performance between humans and feedforward deep convolutional neural networks.
Analyses of accuracy reveal that humans not only outperform DCNNs in all conditions, but also display significantly greater robustness to shape and, most notably, color alterations.
arXiv Detail & Related papers (2020-07-13T10:26:30Z) - Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between pairs of patterns.
We show that similarity learning can solve binary classification by directly eliciting a decision boundary.
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.