Zero-shot visual reasoning through probabilistic analogical mapping
- URL: http://arxiv.org/abs/2209.15087v1
- Date: Thu, 29 Sep 2022 20:29:26 GMT
- Title: Zero-shot visual reasoning through probabilistic analogical mapping
- Authors: Taylor W. Webb, Shuhao Fu, Trevor Bihl, Keith J. Holyoak, and Hongjing Lu
- Abstract summary: We present visiPAM (visual Probabilistic Analogical Mapping), a model of visual reasoning that synthesizes two approaches.
We show that without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task.
In addition, visiPAM closely matches the pattern of human performance on a novel task involving mapping of 3D objects across disparate categories.
- Score: 2.049767929976436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human reasoning is grounded in an ability to identify highly abstract
commonalities governing superficially dissimilar visual inputs. Recent efforts
to develop algorithms with this capacity have largely focused on approaches
that require extensive direct training on visual reasoning tasks, and yield
limited generalization to problems with novel content. In contrast, a long
tradition of research in cognitive science has focused on elucidating the
computational principles underlying human analogical reasoning; however, this
work has generally relied on manually constructed representations. Here we
present visiPAM (visual Probabilistic Analogical Mapping), a model of visual
reasoning that synthesizes these two approaches. VisiPAM employs learned
representations derived directly from naturalistic visual inputs, coupled with
a similarity-based mapping operation derived from cognitive theories of human
reasoning. We show that without any direct training, visiPAM outperforms a
state-of-the-art deep learning model on an analogical mapping task. In
addition, visiPAM closely matches the pattern of human performance on a novel
task involving mapping of 3D objects across disparate categories.
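The abstract describes visiPAM's core operation only at a high level: learned node representations from two visual inputs are placed in correspondence via a similarity-based mapping. As a loose, hypothetical sketch only (not the paper's actual probabilistic graph-matching algorithm), pairing the parts of one object with the parts of another by feature similarity might look like:

```python
import numpy as np

def map_by_similarity(source_feats, target_feats):
    """Greedily map each source node to its most similar unused target node.

    source_feats, target_feats: (n, d) arrays of learned node embeddings.
    Returns a dict {source_index: target_index}.
    """
    # Cosine similarity between every source/target node pair.
    s = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sim = s @ t.T

    # Greedy one-to-one assignment: most confident source nodes choose first.
    mapping, used = {}, set()
    for i in np.argsort(-sim.max(axis=1)):
        for j in np.argsort(-sim[i]):
            if int(j) not in used:
                mapping[int(i)] = int(j)
                used.add(int(j))
                break
    return mapping
```

A proper probabilistic analogical mapping would also weigh relational (edge) similarity and return a distribution over correspondences rather than a single greedy assignment; this sketch illustrates only the node-similarity component.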
Related papers
- Dual Thinking and Perceptual Analysis of Deep Learning Models using Human Adversarial Examples [5.022336433202968]
The perception of dual thinking in vision requires images where inferences from intuitive and logical processing differ.
We introduce an adversarial dataset to provide evidence for the dual thinking framework in human vision.
Our study also addresses a major criticism of using classification models as computational models of human vision.
arXiv Detail & Related papers (2024-06-11T05:50:34Z)
- Automatic Discovery of Visual Circuits [66.99553804855931]
We explore scalable methods for extracting the subgraph of a vision model's computational graph that underlies recognition of a specific visual concept.
We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
arXiv Detail & Related papers (2024-04-22T17:00:57Z)
- Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption [64.07607726562841]
Existing multi-person human reconstruction approaches mainly focus on recovering accurate poses or avoiding penetration.
In this work, we tackle the task of reconstructing closely interactive humans from a monocular video.
We propose to leverage knowledge from proxemic behavior and physics to compensate for the lack of visual information.
arXiv Detail & Related papers (2024-04-17T11:55:45Z)
- Motion Mapping Cognition: A Nondecomposable Primary Process in Human Vision [2.7195102129095003]
I present a basic cognitive process, motion mapping cognition (MMC), which I argue is a nondecomposable primary function in human vision.
MMC can explain most human visual functions at a fundamental level, but cannot be effectively modelled by traditional visual processing methods.
I state that MMC may be viewed as an extension of Chen's theory of topological perception in human vision, and appears unsolvable with existing intelligent algorithm techniques.
arXiv Detail & Related papers (2024-02-02T10:11:25Z)
- Advancing Ante-Hoc Explainable Models through Generative Adversarial Networks [24.45212348373868]
This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks.
Our approach appends an unsupervised explanation generator to the primary classifier network and makes use of adversarial training.
This work presents a significant step towards building inherently interpretable deep vision models with task-aligned concept representations.
arXiv Detail & Related papers (2024-01-09T16:16:16Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- In-Context Analogical Reasoning with Pre-Trained Language Models [10.344428417489237]
We explore the use of intuitive language-based abstractions to support analogy in AI systems.
Specifically, we apply large pre-trained language models (PLMs) to visual Raven's Progressive Matrices (RPM).
We find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods.
arXiv Detail & Related papers (2023-05-28T04:22:26Z)
- Guiding Visual Attention in Deep Convolutional Neural Networks Based on Human Eye Movements [0.0]
Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision.
Recent advances in deep learning seem to decrease this similarity.
We investigate a purely data-driven approach to obtain useful models.
arXiv Detail & Related papers (2022-06-21T17:59:23Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming the challenges of the existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Causal Navigation by Continuous-time Neural Networks [108.84958284162857]
We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
arXiv Detail & Related papers (2021-06-15T17:45:32Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.