Visual analogy: Deep learning versus compositional models
- URL: http://arxiv.org/abs/2105.07065v1
- Date: Fri, 14 May 2021 20:56:02 GMT
- Title: Visual analogy: Deep learning versus compositional models
- Authors: Nicholas Ichien, Qing Liu, Shuhao Fu, Keith J. Holyoak, Alan Yuille, Hongjing Lu
- Abstract summary: We compare human performance on visual analogies with the performance of alternative computational models.
Human reasoners achieved above-chance accuracy for all problem types, but made more errors in several conditions.
The compositional model based on part representations, but not the deep learning models, generated qualitative performance similar to that of human reasoners.
- Score: 3.2435333321661983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Is analogical reasoning a task that must be learned to solve from scratch by
applying deep learning models to massive numbers of reasoning problems? Or are
analogies solved by computing similarities between structured representations
of analogs? We address this question by comparing human performance on visual
analogies created using images of familiar three-dimensional objects (cars and
their subregions) with the performance of alternative computational models.
Human reasoners achieved above-chance accuracy for all problem types, but made
more errors in several conditions (e.g., when relevant subregions were
occluded). We compared human performance to that of two recent deep learning
models (Siamese Network and Relation Network) directly trained to solve these
analogy problems, as well as to that of a compositional model that assesses
relational similarity between part-based representations. The compositional
model based on part representations, but not the deep learning models,
generated qualitative performance similar to that of human reasoners.
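As a rough illustration of the kind of computation a compositional account involves (this is a minimal sketch, not the paper's model; the part feature vectors below are hypothetical toy values), an analogy A:B :: C:D can be scored by how similar the A-to-B relation is to the C-to-D relation, with relations derived from part-based representations:

```python
# Minimal sketch of scoring a visual analogy A:B :: C:D via relational
# similarity between part-based representations. Part features are toy
# hand-picked vectors, not outputs of any real vision model.
import math

def relation(a, b):
    # Represent the relation between two parts as the difference of
    # their feature vectors.
    return [bi - ai for ai, bi in zip(a, b)]

def cosine(u, v):
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return dot / (nu * nv) if nu and nv else 0.0

def analogy_score(a, b, c, d):
    # Higher score: the C->D relation better matches the A->B relation.
    return cosine(relation(a, b), relation(c, d))

# Toy part features: car -> wheel should be analogous to truck -> tire,
# and less analogous to truck -> door.
car, wheel = [1.0, 0.0, 2.0], [1.0, 1.0, 0.5]
truck, tire = [2.0, 0.0, 3.0], [2.0, 1.0, 1.5]
truck_door = [2.5, -1.0, 3.0]

assert analogy_score(car, wheel, truck, tire) > analogy_score(car, wheel, truck, truck_door)
```

Representing a relation as a difference of embeddings is one simple choice (familiar from word-vector analogies); the point is only that the score is computed over structured part representations rather than learned end-to-end from raw pixels.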
Related papers
- Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance [0.0]
We test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans.
Our experiments find that models are able to learn analogical reasoning, even with a small amount of data.
arXiv Detail & Related papers (2023-10-09T10:34:38Z)
- Counting Like Human: Anthropoid Crowd Counting on Modeling the Similarity of Objects [92.80955339180119]
Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by how humans count by modeling the similarity of objects, we propose a rational and anthropoid crowd counting framework.
arXiv Detail & Related papers (2022-12-02T07:00:53Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- Similarity between Units of Natural Language: The Transition from Coarse to Fine Estimation [0.0]
Capturing the similarities between human language units is crucial for explaining how humans associate different objects.
My research goal in this thesis is to develop regression models that account for similarities between language units in a more refined way.
arXiv Detail & Related papers (2022-10-25T18:54:32Z)
- Similarity of Neural Architectures using Adversarial Attack Transferability [47.66096554602005]
We design a quantitative and scalable similarity measure between neural architectures.
We conduct a large-scale analysis on 69 state-of-the-art ImageNet classifiers.
Our results provide insights into why developing diverse neural architectures with distinct components is necessary.
arXiv Detail & Related papers (2022-10-20T16:56:47Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand a model's properties and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Towards Visually Explaining Similarity Models [29.704524987493766]
We present a method to generate gradient-based visual attention for image similarity predictors.
By relying solely on the learned feature embedding, we show that our approach can be applied to any kind of CNN-based similarity architecture.
We show that our resulting attention maps serve more than just interpretability; they can be infused into the model learning process itself with new trainable constraints.
arXiv Detail & Related papers (2020-08-13T17:47:41Z)
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, where it outperforms the state-of-the-art method, with larger gains when training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
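The basic idea can be illustrated with a toy example (a simplified sketch, not the paper's construction; the Gaussian similarity function and anchor points are assumptions standing in for a learned pairwise predictor): a pairwise similarity function induces a binary decision rule by comparing a query's average similarity to each labeled anchor set.

```python
# Toy illustration of a pairwise similarity function inducing a binary
# decision boundary. The Gaussian kernel stands in for a learned
# same/different predictor; anchors stand in for labeled examples.
import math

def similarity(x, y, bandwidth=1.0):
    return math.exp(-((x - y) ** 2) / (2 * bandwidth ** 2))

def classify(x, pos_anchors, neg_anchors):
    # Predict +1 if x is, on average, more similar to the positive
    # anchors than to the negative ones.
    pos = sum(similarity(x, a) for a in pos_anchors) / len(pos_anchors)
    neg = sum(similarity(x, a) for a in neg_anchors) / len(neg_anchors)
    return 1 if pos > neg else -1

pos_anchors = [2.0, 2.5, 3.0]    # labeled examples of class +1
neg_anchors = [-2.0, -2.5, -3.0]  # labeled examples of class -1

# For these symmetric anchors the induced boundary sits near x = 0.
assert classify(1.0, pos_anchors, neg_anchors) == 1
assert classify(-1.0, pos_anchors, neg_anchors) == -1
```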
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
- Building and Interpreting Deep Similarity Models [0.0]
We propose to make similarities interpretable by augmenting them with an explanation in terms of input features.
We develop BiLRP, a scalable and theoretically founded method to systematically decompose similarity scores on pairs of input features.
arXiv Detail & Related papers (2020-03-11T17:46:55Z)
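To give a flavor of what decomposing a similarity score onto pairs of input features means (a much-simplified sketch: BiLRP propagates relevance through deep networks, whereas here the similarity is a plain bilinear form and all values are toy assumptions), each feature pair receives a contribution, and the contributions sum exactly to the score:

```python
# Simplified sketch of decomposing a bilinear similarity score
# s(x, y) = x^T W y onto feature pairs (i, j), whose contributions
# x_i * W[i][j] * y_j sum exactly to the score. Toy values only.
def bilinear_similarity(x, W, y):
    return sum(x[i] * W[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def decompose(x, W, y):
    # Contribution of each input-feature pair to the overall score.
    return {(i, j): x[i] * W[i][j] * y[j]
            for i in range(len(x)) for j in range(len(y))}

x = [1.0, 2.0]
y = [0.5, -1.0]
W = [[1.0, 0.0], [0.5, 2.0]]

score = bilinear_similarity(x, W, y)
contribs = decompose(x, W, y)
# Conservation: the decomposition accounts for the full score.
assert abs(score - sum(contribs.values())) < 1e-9
```

The conservation property (contributions summing to the score) is the key requirement such explanation methods aim to preserve as the similarity model becomes deep.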
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.