A Feature-based Generalizable Prediction Model for Both Perceptual and
Abstract Reasoning
- URL: http://arxiv.org/abs/2403.05641v1
- Date: Fri, 8 Mar 2024 19:26:30 GMT
- Title: A Feature-based Generalizable Prediction Model for Both Perceptual and
Abstract Reasoning
- Authors: Quan Do, Thomas M. Morin, Chantal E. Stern, Michael E. Hasselmo
- Abstract summary: A hallmark of human intelligence is the ability to infer abstract rules from limited experience.
Recent advances in deep learning have led to multiple artificial neural network models matching or even surpassing human performance.
We present an algorithmic approach to rule detection and application using feature detection, affine transformation estimation and search.
- Score: 1.0650780147044159
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A hallmark of human intelligence is the ability to infer abstract rules from
limited experience and apply these rules to unfamiliar situations. This
capacity is widely studied in the visual domain using the Raven's Progressive
Matrices. Recent advances in deep learning have led to multiple artificial
neural network models matching or even surpassing human performance. However,
while humans can identify and express the rule underlying these tasks with
little to no exposure, contemporary neural networks often rely on massive
pattern-based training and cannot express or extrapolate the rule inferred from
the task. Furthermore, most Raven's Progressive Matrices or Raven-like tasks
used for neural network training rely on symbolic representations, whereas humans
can flexibly switch between symbolic and continuous perceptual representations.
In this work, we present an algorithmic approach to rule detection and
application using feature detection, affine transformation estimation and
search. We applied our model to a simplified Raven's Progressive Matrices task,
previously designed for behavioral testing and neuroimaging in humans. The
model exhibited one-shot learning and achieved near human-level performance in
the symbolic reasoning condition of the simplified task. Furthermore, the model
can express the relationships discovered and generate multi-step predictions in
accordance with the underlying rule. Finally, the model can reason using
continuous patterns. We discuss our results and their relevance to studying
abstract reasoning in humans, as well as their implications for improving
intelligent machines.
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Distilling Symbolic Priors for Concept Learning into Neural Networks [9.915299875869046]
We show that inductive biases can be instantiated in artificial neural networks by distilling a prior distribution from a symbolic Bayesian model via meta-learning.
We use this approach to create a neural network with an inductive bias towards concepts expressed as short logical formulas.
arXiv Detail & Related papers (2024-02-10T20:06:26Z)
- Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Evaluating alignment between humans and neural network representations in image-based learning tasks [5.657101730705275]
We tested how well the representations of 86 pretrained neural network models mapped to human learning trajectories.
We found that while training dataset size was a core determinant of alignment with human choices, contrastive training with multi-modal data (text and imagery) was a common feature of currently publicly available models that predicted human generalisation.
In conclusion, pretrained neural networks can serve to extract representations for cognitive models, as they appear to capture some fundamental aspects of cognition that are transferable across tasks.
arXiv Detail & Related papers (2023-06-15T08:18:29Z)
- Learning to Reason With Relational Abstractions [65.89553417442049]
We study how to build stronger reasoning capability in language models using the idea of relational abstractions.
We find that models that are supplied with such sequences as prompts can solve tasks with a significantly higher accuracy.
arXiv Detail & Related papers (2022-10-06T00:27:50Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision [44.32874972577682]
We investigate the extent to which neural models can reason about natural language rationales that explain model predictions.
We use pre-trained language models, neural knowledge models, and distant supervision from related tasks.
Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information.
arXiv Detail & Related papers (2020-12-14T23:50:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.