A Feature-based Generalizable Prediction Model for Both Perceptual and
Abstract Reasoning
- URL: http://arxiv.org/abs/2403.05641v1
- Date: Fri, 8 Mar 2024 19:26:30 GMT
- Authors: Quan Do, Thomas M. Morin, Chantal E. Stern, Michael E. Hasselmo
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A hallmark of human intelligence is the ability to infer abstract rules from
limited experience and apply these rules to unfamiliar situations. This
capacity is widely studied in the visual domain using the Raven's Progressive
Matrices. Recent advances in deep learning have led to multiple artificial
neural network models matching or even surpassing human performance. However,
while humans can identify and express the rule underlying these tasks with
little to no exposure, contemporary neural networks often rely on massive
pattern-based training and cannot express or extrapolate the rule inferred from
the task. Furthermore, most Raven's Progressive Matrices or Raven-like tasks
used for neural network training rely on symbolic representations, whereas humans
can flexibly switch between symbolic and continuous perceptual representations.
In this work, we present an algorithmic approach to rule detection and
application using feature detection, affine transformation estimation and
search. We applied our model to a simplified Raven's Progressive Matrices task,
previously designed for behavioral testing and neuroimaging in humans. The
model exhibited one-shot learning and achieved near human-level performance in
the symbolic reasoning condition of the simplified task. Furthermore, the model
can express the relationships discovered and generate multi-step predictions in
accordance with the underlying rule. Finally, the model can reason using
continuous patterns. We discuss our results and their relevance to studying
abstract reasoning in humans, as well as their implications for improving
intelligent machines.
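The abstract describes a pipeline of feature detection, affine transformation estimation, and search. A minimal illustrative sketch of the middle step (not the authors' code; function names and the toy data are hypothetical): fit the 2D affine map relating matched features of consecutive panels by least squares, then reapply the same map to extrapolate the next panel, mirroring the paper's one-shot, multi-step prediction behavior.

```python
# Illustrative sketch (not the authors' implementation): estimate the
# affine transformation mapping features of one Raven's panel onto the
# next, then reapply it to extrapolate the following panel.
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of a 2D affine map dst ~ A @ src + t.

    src, dst: (n, 2) arrays of matched feature coordinates.
    Returns the 2x3 matrix [A | t].
    """
    n = src.shape[0]
    # Homogeneous design matrix: append a column of ones for the translation.
    X = np.hstack([src, np.ones((n, 1))])        # (n, 3)
    # Solve X @ M = dst in the least-squares sense; M is (3, 2).
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M.T                                   # (2, 3) = [A | t]

def apply_affine(M, pts):
    """Apply the 2x3 affine map [A | t] to (n, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]

# Toy rule: every feature is translated by (3, 1) between panels.
panel1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
panel2 = panel1 + np.array([3.0, 1.0])

M = estimate_affine(panel1, panel2)              # one-shot rule estimation
panel3_pred = apply_affine(M, panel2)            # multi-step extrapolation
print(panel3_pred)
```

Because the fitted map is an explicit matrix rather than learned weights, the discovered relationship can be read off directly (here, a pure translation), which is the kind of expressible rule the abstract emphasizes.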
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Distilling Symbolic Priors for Concept Learning into Neural Networks [9.915299875869046]
We show that inductive biases can be instantiated in artificial neural networks by distilling a prior distribution from a symbolic Bayesian model via meta-learning.
We use this approach to create a neural network with an inductive bias towards concepts expressed as short logical formulas.
arXiv Detail & Related papers (2024-02-10T20:06:26Z)
- Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Deep Non-Monotonic Reasoning for Visual Abstract Reasoning Tasks [3.486683381782259]
This paper proposes a non-monotonic computational approach to solving visual abstract reasoning tasks.
We implement a deep learning model using this approach and test it on the RAVEN dataset -- a dataset inspired by the Raven's Progressive Matrices test.
arXiv Detail & Related papers (2023-02-08T16:35:05Z)
- Learning to Reason With Relational Abstractions [65.89553417442049]
We study how to build stronger reasoning capability in language models using the idea of relational abstractions.
We find that models that are supplied with such sequences as prompts can solve tasks with a significantly higher accuracy.
arXiv Detail & Related papers (2022-10-06T00:27:50Z)
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks [62.48782506095565]
We show that due to the greedy nature of learning in deep neural networks, models tend to rely on just one modality while under-fitting the other modalities.
We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning.
arXiv Detail & Related papers (2022-02-10T20:11:21Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision [44.32874972577682]
We investigate the extent to which neural models can reason about natural language rationales that explain model predictions.
We use pre-trained language models, neural knowledge models, and distant supervision from related tasks.
Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information.
arXiv Detail & Related papers (2020-12-14T23:50:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.