A Theory of Human-Like Few-Shot Learning
- URL: http://arxiv.org/abs/2301.01047v1
- Date: Tue, 3 Jan 2023 11:22:37 GMT
- Title: A Theory of Human-Like Few-Shot Learning
- Authors: Zhiying Jiang, Rui Wang, Dongbo Bu, Ming Li
- Abstract summary: We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle.
We find that a deep generative model such as the variational autoencoder (VAE) can be used to approximate our theory.
- Score: 14.271690184738205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We aim to bridge the gap between common-sense, few-sample human learning
and large-data machine learning. We derive a theory of human-like few-shot
learning from the von Neumann-Landauer principle. Modelling human learning is
difficult, as how people learn varies from person to person. Under commonly
accepted definitions, we prove that all human or animal few-shot learning, and
major models of such learning, including the Free Energy Principle and Bayesian
Program Learning, approximate our theory under the Church-Turing thesis. We
find that a deep generative model such as the variational autoencoder (VAE) can
be used to approximate our theory, and that it performs significantly better
than baseline models, including deep neural networks, on image recognition,
low-resource language processing, and character recognition.
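The von Neumann-Landauer principle the abstract builds on sets a thermodynamic floor on the energy cost of erasing information: at least k_B * T * ln(2) joules per bit. As a quick illustration of that bound (standard physics, not the paper's own derivation), it can be computed directly:

```python
import math

# Boltzmann constant in J/K (an exact defined value in SI units since 2019).
K_B = 1.380649e-23

def landauer_bound(temperature_kelvin: float, bits: float = 1.0) -> float:
    """Minimum energy in joules to erase `bits` bits of information
    at the given temperature, per Landauer's principle."""
    return bits * K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K), erasing a single bit costs at least
# roughly 2.87e-21 J.
print(landauer_bound(300.0))
```

The bound scales linearly in both temperature and the number of bits erased, which is why the principle can be used as a resource-counting argument about learning systems.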
Related papers
- The Role of Higher-Order Cognitive Models in Active Learning [8.847360368647752]
We advocate for a new paradigm for active learning for human feedback.
We discuss how increasing level of agency results in qualitatively different forms of rational communication between an active learning system and a teacher.
arXiv Detail & Related papers (2024-01-09T07:39:36Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning [13.64577704565643]
We argue that these models are too simplistic and that RL researchers need to develop more realistic human models to design and evaluate their algorithms.
This paper calls for research from different disciplines to address key questions about how humans provide feedback to AIs and how we can build more robust human-in-the-loop RL systems.
arXiv Detail & Related papers (2022-06-27T13:58:51Z)
- Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
Next generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
arXiv Detail & Related papers (2022-05-13T07:33:49Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- Deep Learning is Singular, and That's Good [31.985399645173022]
In singular models, the optimal set of parameters forms an analytic set with singularities and classical statistical inference cannot be applied.
This is significant for deep learning as neural networks are singular and thus "dividing" by the determinant of the Hessian or employing the Laplace approximation are not appropriate.
Despite its potential for addressing fundamental issues in deep learning, singular learning theory appears to have made little inroads into the developing canon of deep learning theory.
arXiv Detail & Related papers (2020-10-22T09:33:59Z)
- Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark Bongard-LOGO for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.