What are the mechanisms underlying metacognitive learning?
- URL: http://arxiv.org/abs/2302.04840v1
- Date: Thu, 9 Feb 2023 18:49:10 GMT
- Title: What are the mechanisms underlying metacognitive learning?
- Authors: Ruiqi He, Falk Lieder
- Abstract summary: We postulate that people learn this ability from trial and error (metacognitive reinforcement learning).
Here, we systematize models of the underlying learning mechanisms and enhance them with more sophisticated additional mechanisms.
Our results suggest that a gradient ascent through the space of cognitive strategies can explain most of the observed qualitative phenomena.
- Score: 5.787117733071415
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: How is it that humans can solve complex planning tasks so efficiently despite
limited cognitive resources? One reason is their ability to know how to use their
limited computational resources to make clever choices. We postulate that
people learn this ability from trial and error (metacognitive reinforcement
learning). Here, we systematize models of the underlying learning mechanisms
and enhance them with more sophisticated additional mechanisms. We fit the
resulting 86 models to human data collected in previous experiments where
different phenomena of metacognitive learning were demonstrated and performed
Bayesian model selection. Our results suggest that a gradient ascent through
the space of cognitive strategies can explain most of the observed qualitative
phenomena, and is therefore a promising candidate for explaining the mechanism
underlying metacognitive learning.
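To make the claimed mechanism concrete, the sketch below shows one minimal instance of gradient ascent through a space of cognitive strategies: a strategy that decides how many costly planning computations to perform is parameterized by two weights, and those weights are updated with a REINFORCE-style score-function gradient of reward net of computation cost. The toy task, features, cost values, and all names are assumptions made for illustration; this is not one of the 86 models fitted in the paper.

```python
# A toy instance of metacognitive reinforcement learning as gradient ascent
# through a parameterized space of cognitive strategies (REINFORCE-style).
# The task, feature set, costs, and all names below are illustrative
# assumptions, not the models fitted in the paper.
import numpy as np

rng = np.random.default_rng(0)

N_OPTIONS = 3          # options whose payoffs can be inspected before choosing
INSPECT_COST = 0.2     # cost of one planning computation (assumed)
ALPHA = 0.05           # learning rate for gradient ascent (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def features(n_inspected):
    # Simple features of the current metacognitive state: a bias term and
    # how many computations have been performed so far.
    return np.array([1.0, float(n_inspected)])

def run_episode(w):
    """Run one trial with strategy parameters w; return the net reward and
    the accumulated score-function gradient (d log pi / d w)."""
    payoffs = rng.normal(0.0, 1.0, size=N_OPTIONS)   # true option values
    known = np.full(N_OPTIONS, np.nan)               # values revealed so far
    grad_logpi = np.zeros_like(w)
    cost = 0.0
    for i in range(N_OPTIONS):
        phi = features(i)
        p_continue = sigmoid(w @ phi)                # prob. of one more computation
        continue_planning = rng.random() < p_continue
        # Score-function gradient of the Bernoulli "plan more?" decision.
        grad_logpi += ((1.0 if continue_planning else 0.0) - p_continue) * phi
        if not continue_planning:
            break
        known[i] = payoffs[i]                        # pay to reveal option i
        cost += INSPECT_COST
    # Act: pick the best revealed option, or guess at random if none revealed.
    if np.all(np.isnan(known)):
        choice = rng.integers(N_OPTIONS)
    else:
        choice = int(np.nanargmax(known))
    return payoffs[choice] - cost, grad_logpi

# Gradient ascent through the strategy space: w moves in the direction that
# increases expected reward net of computational cost.
w = np.zeros(2)
baseline = 0.0
for episode in range(5000):
    net_reward, grad_logpi = run_episode(w)
    baseline += 0.01 * (net_reward - baseline)       # running average as baseline
    w += ALPHA * (net_reward - baseline) * grad_logpi

print("learned strategy weights:", w)
```

Under these assumptions, the learned weights come to trade off the benefit of revealing more option values against the cost of inspecting them, which is the kind of qualitative adjustment metacognitive reinforcement learning is meant to capture.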
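The abstract also mentions fitting 86 candidate models to human data and performing Bayesian model selection. The sketch below illustrates one common approximation of that step, comparing fitted models via the BIC estimate of the log model evidence; the model names, log-likelihoods, and sample size are placeholders, and the paper's actual selection procedure may differ (for example, random-effects group-level selection).

```python
# A minimal sketch of Bayesian model selection: candidate models are fitted
# by maximum likelihood and compared via the BIC approximation to the log
# model evidence. The three toy "models", their log-likelihoods, and the
# sample size below are placeholders, not the paper's 86 models.
import numpy as np

def bic(log_likelihood, n_params, n_observations):
    # Schwarz's Bayesian Information Criterion:
    # BIC = k * ln(n) - 2 * ln(L_hat); -BIC/2 approximates the log evidence.
    return n_params * np.log(n_observations) - 2.0 * log_likelihood

# Hypothetical fitted results (log-likelihood, number of free parameters)
# for three candidate learning mechanisms on n = 200 observations.
n_obs = 200
fits = {
    "gradient_ascent": (-310.4, 3),
    "model_free_rl":   (-325.1, 2),
    "random_baseline": (-400.0, 0),
}

log_evidence = {m: -0.5 * bic(ll, k, n_obs) for m, (ll, k) in fits.items()}

# Posterior model probabilities under a uniform prior over models.
vals = np.array(list(log_evidence.values()))
posterior = np.exp(vals - vals.max())
posterior /= posterior.sum()

for model, p in zip(log_evidence, posterior):
    print(f"{model:>16s}: P(model | data) = {p:.3f}")
```

Under a uniform prior, the model with the highest approximate log evidence receives the largest posterior probability, which is the sense in which one candidate learning mechanism is favoured over the others.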
Related papers
- Non-equilibrium physics: from spin glasses to machine and neural learning [0.0]
Disordered many-body systems exhibit a wide range of emergent phenomena across different scales.
We aim to characterize such emergent intelligence in disordered systems through statistical physics.
We uncover relationships between learning mechanisms and physical dynamics that could serve as guiding principles for designing intelligent systems.
arXiv Detail & Related papers (2023-08-03T04:56:47Z)
- On Physical Origins of Learning [0.0]
We propose that learning may have a non-biological and non-evolutionary origin.
It turns out that key properties of learning can be observed, explained, and accurately reproduced within simple physical models.
arXiv Detail & Related papers (2023-07-27T19:45:19Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Memory-Augmented Theory of Mind Network [59.9781556714202]
Social reasoning requires the capacity of theory of mind (ToM) to contextualise and attribute mental states to others.
Recent machine learning approaches to ToM have demonstrated that we can train the observer to read the past and present behaviours of other agents.
We tackle these challenges by equipping the observer with novel neural memory mechanisms to encode, and hierarchical attention to selectively retrieve, information about others.
This results in ToMMY, a theory of mind model that learns to reason while making few assumptions about the underlying mental processes.
arXiv Detail & Related papers (2023-01-17T14:48:58Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanisms of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Towards Benchmarking Explainable Artificial Intelligence Methods [0.0]
We use theories from the philosophy of science as an analytical lens with the goal of revealing what can, and more importantly cannot, be expected from methods that aim to explain decisions promoted by a neural network.
Through a case study, we investigate the performance of a selection of explainability methods in two mundane domains, animals and headgear.
We lay bare that the usefulness of these methods relies on human domain knowledge and our ability to understand, generalise and reason.
arXiv Detail & Related papers (2022-08-25T14:28:30Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from its past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias to model the character traits of agents and hence improve mindreading ability.
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Have I done enough planning or should I plan more? [0.7734726150561086]
We show that people acquire this ability through learning and reverse-engineer the underlying learning mechanisms.
We find that people quickly adapt how much planning they perform to the cost and benefit of planning.
Our results suggest that the metacognitive ability to adjust the amount of planning might be learned through a policy-gradient mechanism.
arXiv Detail & Related papers (2022-01-03T17:11:07Z)
- Hierarchical principles of embodied reinforcement learning: A review [11.613306236691427]
We show that all important cognitive mechanisms have been implemented independently in isolated computational architectures.
We expect our results to guide the development of more sophisticated cognitively inspired hierarchical methods.
arXiv Detail & Related papers (2020-12-18T10:19:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.