Systematic human learning and generalization from a brief tutorial with
explanatory feedback
- URL: http://arxiv.org/abs/2107.06994v2
- Date: Wed, 29 Mar 2023 02:15:18 GMT
- Title: Systematic human learning and generalization from a brief tutorial with
explanatory feedback
- Authors: Andrew J. Nam and James L. McClelland (Stanford University)
- Abstract summary: We investigate human adults' ability to learn an abstract reasoning task based on Sudoku.
We find that participants who master the task do so within a small number of trials and generalize well to puzzles outside of the training range.
We also find that most of those who master the task can describe a valid solution strategy, and such participants perform better on transfer puzzles than those whose strategy descriptions are vague or incomplete.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural networks have long been used to model human intelligence, capturing
elements of behavior and cognition, and their neural basis. Recent advancements
in deep learning have enabled neural network models to reach and even surpass
human levels of intelligence in many respects; yet, unlike humans, they still
struggle to learn new tasks quickly. People can reason not only in
familiar domains, but can also rapidly learn to reason through novel problems
and situations, raising the question of how well modern neural network models
capture human intelligence and in which ways they diverge. In this work, we
explore this gap by investigating human adults' ability to learn an abstract
reasoning task based on Sudoku from a brief instructional tutorial that uses a
narrow range of training examples and provides explanatory feedback for
incorrect responses. We find that participants who master the task do so within a small
number of trials and generalize well to puzzles outside of the training range.
We also find that most of those who master the task can describe a valid
solution strategy, and such participants perform better on transfer puzzles
than those whose strategy descriptions are vague or incomplete. Interestingly,
fewer than half of our human participants were successful in acquiring a valid
solution strategy, and this ability is associated with high school mathematics
education. We consider the challenges these findings pose for building
computational models that capture all aspects of our findings and point toward
a possible role for learning to engage in explanation-based reasoning to
support rapid learning and generalization.
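The paper does not publish its task materials here, so the following is only an illustrative sketch of the kind of Sudoku-style constraint reasoning the task is based on: checking whether a candidate digit can legally occupy a cell. The grid representation and function name are assumptions for illustration, not the authors' implementation.

```python
# Illustrative only: a minimal Sudoku constraint check on a standard
# 9x9 grid, where 0 marks an empty cell. The study's task is an
# abstract reasoning task *based on* Sudoku; this sketch just shows
# the row/column/box constraints such reasoning manipulates.

def is_valid(grid, row, col, digit):
    """Return True if `digit` can be placed at (row, col) without
    duplicating a digit in that row, column, or 3x3 box."""
    if digit in grid[row]:                       # row constraint
        return False
    if any(grid[r][col] == digit for r in range(9)):  # column constraint
        return False
    br, bc = 3 * (row // 3), 3 * (col // 3)      # top-left of the 3x3 box
    return all(grid[r][c] != digit
               for r in range(br, br + 3)
               for c in range(bc, bc + 3))       # box constraint
```

A solver (or a human applying an exclusion strategy) would use such a check to rule out candidates cell by cell.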
Related papers
- Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses.
Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots.
arXiv Detail & Related papers (2024-03-11T18:11:00Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration
A potential solution to the limitations of purely neural approaches is Neuro-Symbolic Integration (NeSy), where neural methods are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Neural Amortized Inference for Nested Multi-agent Reasoning
We propose a novel approach to bridge the gap between human-like inference capabilities and computational limitations.
We evaluate our method in two challenging multi-agent interaction domains.
arXiv Detail & Related papers (2023-08-21T22:40:36Z)
- Learning to solve arithmetic problems with a virtual abacus
We introduce a deep reinforcement learning framework that simulates how cognitive agents could learn to solve arithmetic problems.
The proposed model successfully learns to perform multi-digit additions and subtractions, achieving an error rate below 1%.
We analyze the most common error patterns to better understand the limitations and biases resulting from our design choices.
arXiv Detail & Related papers (2023-01-17T13:25:52Z)
- Are Deep Neural Networks SMARTer than Second Graders?
We evaluate the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed for children in the 6--8 age group.
Our dataset consists of 101 unique puzzles; each puzzle comprises a picture-based question, and solving it requires a mix of several elementary skills, including arithmetic, algebra, and spatial reasoning.
Experiments reveal that while powerful deep models offer reasonable performance on puzzles in a supervised setting, they are no better than random accuracy when analyzed for generalization.
arXiv Detail & Related papers (2022-12-20T04:33:32Z)
- Teachable Reinforcement Learning via Advice Distillation
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience
The innate capacity of humans and other animals to learn a diverse, and often interfering, range of knowledge is a hallmark of natural intelligence.
The ability of artificial neural networks to learn across a range of tasks and domains is a clear goal of artificial intelligence.
arXiv Detail & Related papers (2021-12-28T13:50:51Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue there shall be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- Thinking Deeply with Recurrence: Generalizing from Easy to Hard Sequential Reasoning Problems
We observe that recurrent networks have the uncanny ability to closely emulate the behavior of non-recurrent deep models.
We show that recurrent networks trained to solve simple mazes with few recurrent steps can solve much more complex problems simply by performing additional recurrences during inference.
arXiv Detail & Related papers (2021-02-22T14:09:20Z)
- Learning Transferable Concepts in Deep Reinforcement Learning
We show that learning discrete representations of sensory inputs can provide a high-level abstraction that is common across multiple tasks.
In particular, we show that it is possible to learn such representations by self-supervision, following an information-theoretic approach.
Our method learns concepts in locomotion and optimal-control tasks that increase sample efficiency in both known and unknown tasks.
arXiv Detail & Related papers (2020-05-16T04:45:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.