Meta-Learned Models of Cognition
- URL: http://arxiv.org/abs/2304.06729v1
- Date: Wed, 12 Apr 2023 16:30:51 GMT
- Title: Meta-Learned Models of Cognition
- Authors: Marcel Binz, Ishita Dasgupta, Akshay Jagadish, Matthew Botvinick, Jane
X. Wang, Eric Schulz
- Abstract summary: Meta-learning is a framework for learning learning algorithms through repeated interactions with an environment, as opposed to designing them by hand.
This article aims to establish a coherent research program around meta-learned models of cognition.
- Score: 11.488249464936422
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning is a framework for learning learning algorithms through
repeated interactions with an environment as opposed to designing them by hand.
In recent years, this framework has established itself as a promising tool for
building models of human cognition. Yet, a coherent research program around
meta-learned models of cognition is still missing. The purpose of this article
is to synthesize previous work in this field and establish such a research
program. We rely on three key pillars to accomplish this goal. We first point
out that meta-learning can be used to construct Bayes-optimal learning
algorithms. This result not only implies that any behavioral phenomenon that
can be explained by a Bayesian model can also be explained by a meta-learned
model but also allows us to draw strong connections to the rational analysis of
cognition. We then discuss several advantages of the meta-learning framework
over traditional Bayesian methods. In particular, we argue that meta-learning
can be applied to situations where Bayesian inference is impossible and that it
enables us to make rational models of cognition more realistic, either by
incorporating limited computational resources or neuroscientific knowledge.
Finally, we reexamine prior studies from psychology and neuroscience that have
applied meta-learning and put them into the context of these new insights. In
summary, our work highlights that meta-learning considerably extends the scope
of rational analysis and thereby of cognitive theories more generally.
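To make the first pillar concrete, here is a minimal sketch, not taken from the paper, of how meta-learning across a task distribution can yield an approximately Bayes-optimal learning algorithm. A recurrent network is meta-trained on many coin-flip tasks whose bias is drawn uniformly at random; after training, its in-context predictions can be compared against the Bayes-optimal posterior predictive for this task distribution, which is Laplace's rule, (heads + 1) / (trials + 2). The model, task distribution, and hyperparameters below are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (illustrative, not the paper's code): meta-train a GRU across
# coin-flip tasks so its in-context predictions approximate the Bayes-optimal
# predictor for this task distribution (Laplace's rule under a uniform prior).
import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, 1) of 0/1 flips
        h, _ = self.rnn(x)
        return torch.sigmoid(self.head(h))   # predicted P(next flip = 1)

def sample_tasks(batch=128, T=20):
    p = torch.rand(batch, 1, 1)                    # one latent bias per task
    return (torch.rand(batch, T, 1) < p).float()   # observed flips

model = MetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):                           # meta-training across tasks
    flips = sample_tasks()
    inputs, targets = flips[:, :-1], flips[:, 1:]  # predict the next flip
    loss = loss_fn(model(inputs), targets)
    opt.zero_grad(); loss.backward(); opt.step()

# After meta-training, compare the network's in-context predictions with the
# Bayes-optimal (Laplace) predictor (heads + 1) / (trials + 2) on a new task.
with torch.no_grad():
    flips = sample_tasks(batch=1, T=10)
    preds = model(flips[:, :-1]).squeeze()
    counts = flips[:, :-1].cumsum(dim=1).squeeze()
    trials = torch.arange(1, flips.shape[1], dtype=torch.float)
    laplace = (counts + 1) / (trials + 2)
    print(torch.stack([preds, laplace], dim=1))
```

With enough meta-training, the two columns printed at the end should roughly agree, which is the sense in which the meta-learned network implements a Bayes-optimal learning algorithm for the distribution of tasks it was trained on.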
Related papers
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z) - Learning Interpretable Concepts: Unifying Causal Representation Learning
and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z) - More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms [15.621144215664769]
We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory.
Its main advantage is that it allows for more flexibility in how the transfer of knowledge between tasks is realized.
arXiv Detail & Related papers (2024-02-06T15:00:08Z) - When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm itself.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - A Metamodel and Framework for Artificial General Intelligence From
Theory to Practice [11.756425327193426]
This paper introduces a new metamodel-based knowledge representation that significantly improves autonomous learning and adaptation.
We have applied the metamodel to problems ranging from time series analysis and computer vision to natural language understanding.
One surprising consequence of the metamodel is that it enables a new level of autonomous learning and optimal functioning for machine intelligences.
arXiv Detail & Related papers (2021-02-11T16:45:58Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z) - Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning (a toy sketch of this reduction is given below).
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
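As a companion to the last entry above ("Revisiting Meta-Learning as Supervised Learning"), here is a minimal sketch, under toy assumptions of my own choosing, of the reduction it describes: each task contributes one (feature, label) sample, where the feature is a fixed-size summary of the task's dataset and the label is that task's target model, and an ordinary supervised learner is then fit on those samples. The linear-regression tasks, hand-picked feature map, and linear meta-regressor below are illustrative, not the paper's method.

```python
# Toy sketch (my illustration, not the paper's code) of meta-learning reduced
# to supervised learning: one (dataset-feature, target-model) sample per task.
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n=20):
    w = rng.normal()                        # latent task parameter
    x = rng.normal(size=n)
    y = w * x + 0.1 * rng.normal(size=n)
    return x, y, w

def featurize(x, y):
    # Fixed-size summary of a task's dataset (simple hand-picked statistics).
    return np.array([x.mean(), y.mean(), (x * y).mean(), (x * x).mean()])

# Build the meta-dataset: one (feature, label) row per task, where the label
# is the task-specific target model (here, its least-squares slope).
features, labels = [], []
for _ in range(500):
    x, y, w = sample_task()
    features.append(featurize(x, y))
    labels.append(np.polyfit(x, y, deg=1)[0])
F, t = np.stack(features), np.array(labels)

# Meta-learning as supervised learning: linear regression from dataset
# features to target-model parameters.
coef, *_ = np.linalg.lstsq(np.c_[F, np.ones(len(F))], t, rcond=None)

# Predict the target model for a new task directly from its dataset features.
x, y, w_true = sample_task()
w_pred = (np.c_[featurize(x, y)[None], [[1.0]]] @ coef).item()
print(w_true, w_pred)
```

In this toy setting the meta-learner only has to map simple dataset statistics to a slope; richer featurizations or learned set encoders would play the same role in more realistic instantiations of the reduction.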