A Cognitively-Inspired Neural Architecture for Visual Abstract Reasoning
Using Contrastive Perceptual and Conceptual Processing
- URL: http://arxiv.org/abs/2309.10532v3
- Date: Fri, 20 Oct 2023 09:02:22 GMT
- Title: A Cognitively-Inspired Neural Architecture for Visual Abstract Reasoning
Using Contrastive Perceptual and Conceptual Processing
- Authors: Yuan Yang, Deepayan Sanyal, James Ainooson, Joel Michelson, Effat
Farhana, Maithilee Kunda
- Abstract summary: We introduce a new neural architecture for solving visual abstract reasoning tasks, inspired by observations that human abstract reasoning interleaves perceptual and conceptual processing.
Building on this principle, our architecture models visual abstract reasoning as an iterative, self-contrasting learning process.
Experiments on the machine learning dataset RAVEN show that CPCNet achieves higher accuracy than all previously published models.
- Score: 14.201935774784632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new neural architecture for solving visual abstract reasoning
tasks inspired by human cognition, specifically by observations that human
abstract reasoning often interleaves perceptual and conceptual processing as
part of a flexible, iterative, and dynamic cognitive process. Inspired by this
principle, our architecture models visual abstract reasoning as an iterative,
self-contrasting learning process that pursues consistency between perceptual
and conceptual processing of visual stimuli. We explain how this new
Contrastive Perceptual-Conceptual Network (CPCNet) works using matrix reasoning
problems in the style of the well-known Raven's Progressive Matrices
intelligence test. Experiments on the machine learning dataset RAVEN show that
CPCNet achieves higher accuracy than all previously published models while also
using the weakest inductive bias. We also point out a substantial and
previously unremarked class imbalance in the original RAVEN dataset, and we
propose a new variant of RAVEN -- AB-RAVEN -- that is more balanced in terms of
abstract concepts.
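To make the architecture's core idea concrete, below is a minimal sketch, in PyTorch, of an iterative, self-contrasting loop of the kind the abstract describes. The branch modules, network shapes, iteration count, and cosine-distance consistency objective are all illustrative assumptions, not the authors' published CPCNet implementation.

```python
# Hedged sketch of an iterative, self-contrasting perceptual/conceptual loop.
# All module designs and the consistency objective are assumptions for
# illustration; they are not the published CPCNet architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptualBranch(nn.Module):
    """Bottom-up stream: encodes raw panels into feature vectors (assumed CNN)."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, panels):          # panels: (batch, 1, H, W)
        return self.encoder(panels)     # -> (batch, dim)

class ConceptualBranch(nn.Module):
    """Top-down stream: refines an abstract-concept state from percepts (assumed MLP)."""
    def __init__(self, dim=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, percept, concept):
        return self.refine(torch.cat([percept, concept], dim=-1))

def self_contrasting_pass(perceptual, conceptual, panels, steps=3):
    """Iterate the two streams, accumulating a consistency (contrast) loss."""
    percept = perceptual(panels)
    concept = torch.zeros_like(percept)          # initial conceptual state
    consistency_loss = panels.new_zeros(())
    for _ in range(steps):
        concept = conceptual(percept, concept)
        # Pursue agreement between the two views of the same stimuli;
        # cosine distance stands in for the paper's actual objective.
        consistency_loss = consistency_loss + (
            1 - F.cosine_similarity(percept, concept, dim=-1)
        ).mean()
    return concept, consistency_loss

# Toy usage: a batch of 8 single-channel 32x32 panels.
panels = torch.randn(8, 1, 32, 32)
concept, loss = self_contrasting_pass(PerceptualBranch(), ConceptualBranch(), panels)
loss.backward()  # the consistency term can be trained like any other loss
```

The class-imbalance observation behind AB-RAVEN can likewise be checked with a few lines: tally the abstract-concept label of each training item and compare shares. The label names below are made up for illustration; the real RAVEN annotations differ.

```python
# Hedged sketch: measuring per-concept class balance in a RAVEN-style dataset.
from collections import Counter

def concept_balance(labels):
    """Return each abstract-concept label's share of the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.items()}

# Example with made-up labels:
print(concept_balance(["progression", "progression", "XOR", "constant"]))
```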
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Advancing Ante-Hoc Explainable Models through Generative Adversarial Networks [24.45212348373868]
This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks.
Our approach appends an unsupervised explanation generator to the primary classifier network and makes use of adversarial training.
This work presents a significant step towards building inherently interpretable deep vision models with task-aligned concept representations.
arXiv Detail & Related papers (2024-01-09T16:16:16Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - The Relational Bottleneck as an Inductive Bias for Efficient Abstraction [3.19883356005403]
In this approach, neural networks are constrained via their architecture to focus on relations between perceptual inputs rather than on the attributes of individual inputs.
We review a family of models that employ this approach to induce abstractions in a data-efficient manner.
arXiv Detail & Related papers (2023-09-12T22:44:14Z) - Learning Differentiable Logic Programs for Abstract Visual Reasoning [18.82429807065658]
Differentiable forward reasoning has been developed to integrate reasoning with gradient-based machine learning paradigms.
NEUMANN is a graph-based differentiable forward reasoner, passing messages in a memory-efficient manner and handling structured programs with functors.
We demonstrate that NEUMANN solves visual reasoning tasks efficiently, outperforming neural, symbolic, and neuro-symbolic baselines.
arXiv Detail & Related papers (2023-07-03T11:02:40Z) - Concept-Based Explanations for Tabular Data [0.0]
We propose a concept-based explainability method for Deep Neural Networks (DNNs).
We show the validity of our method in generating interpretability results that match human-level intuitions.
We also propose a notion of fairness based on TCAV that quantifies which layer of a DNN has learned representations that lead to biased predictions (a minimal TCAV sketch appears after the list below).
arXiv Detail & Related papers (2022-09-13T02:19:29Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - AIGenC: An AI generalisation model via creativity [1.933681537640272]
Inspired by cognitive theories of creativity, this paper introduces a computational model (AIGenC).
It lays down the necessary components to enable artificial agents to learn, use and generate transferable representations.
We discuss the model's capability to yield better out-of-distribution generalisation in artificial agents.
arXiv Detail & Related papers (2022-05-19T17:43:31Z) - Learning Algebraic Representation for Systematic Generalization in
Abstract Reasoning [109.21780441933164]
We propose a hybrid approach to improve systematic generalization in reasoning.
We showcase a prototype with algebraic representation for the abstract spatial-temporal task of Raven's Progressive Matrices (RPM).
We show that the algebraic representation learned can be decoded by isomorphism to generate an answer.
arXiv Detail & Related papers (2021-11-25T09:56:30Z) - Interpretable Visual Reasoning via Induced Symbolic Space [75.95241948390472]
We study the problem of concept induction in visual reasoning, i.e., identifying concepts and their hierarchical relationships from question-answer pairs associated with images.
We first design a new framework named object-centric compositional attention model (OCCAM) to perform the visual reasoning task with object-level visual features.
We then come up with a method to induce concepts of objects and relations using clues from the attention patterns between objects' visual features and question words.
arXiv Detail & Related papers (2020-11-23T18:21:49Z)
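As referenced in the concept-based-explanations entry above, here is a minimal sketch of the TCAV idea: learn a concept direction in a layer's activation space, then score how sensitive class predictions are to that direction. The variable names and toy data are illustrative assumptions, not the cited paper's implementation.

```python
# Hedged sketch of TCAV (Testing with Concept Activation Vectors).
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """CAV = normal to a linear boundary separating concept vs. random activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(logit_grads, cav):
    """Fraction of inputs whose class logit increases along the concept direction."""
    return float(np.mean(logit_grads @ cav > 0))

# Toy usage with random stand-ins for layer activations and logit gradients:
rng = np.random.default_rng(0)
cav = concept_activation_vector(rng.normal(1, 1, (50, 16)), rng.normal(0, 1, (50, 16)))
print(tcav_score(rng.normal(0, 1, (50, 16)), cav))
```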