General Intelligence Requires Rethinking Exploration
- URL: http://arxiv.org/abs/2211.07819v1
- Date: Tue, 15 Nov 2022 00:46:15 GMT
- Title: General Intelligence Requires Rethinking Exploration
- Authors: Minqi Jiang, Tim Rocktäschel, Edward Grefenstette
- Abstract summary: We argue that exploration is essential to all learning systems, including supervised learning.
Generalized exploration serves as a necessary objective for maintaining open-ended learning processes.
- Score: 24.980249597326985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are at the cusp of a transition from "learning from data" to "learning
what data to learn from" as a central focus of artificial intelligence (AI)
research. While the first-order learning problem is not completely solved,
large models under unified architectures, such as transformers, have shifted
the learning bottleneck from how to effectively train our models to how to
effectively acquire and use task-relevant data. This problem, which we frame as
exploration, is a universal aspect of learning in open-ended domains, such as
the real world. Although the study of exploration in AI is largely limited to
the field of reinforcement learning, we argue that exploration is essential to
all learning systems, including supervised learning. We propose the problem of
generalized exploration to conceptually unify exploration-driven learning
between supervised learning and reinforcement learning, allowing us to
highlight key similarities across learning settings and open research
challenges. Importantly, generalized exploration serves as a necessary
objective for maintaining open-ended learning processes, which, by continually
learning to discover and solve new problems, provide a promising path to more
general intelligence.
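The abstract's claim that exploration applies to supervised learning can be illustrated with a toy active-learning loop. The sketch below is a hypothetical minimal example, not from the paper: a 1-D logistic model repeatedly queries the unlabeled point it is least certain about, i.e., it "learns what data to learn from". All function names and parameters here are illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, lr=0.5, epochs=200):
    """Fit a 1-D logistic regression y ~ sigmoid(w*x + b) by SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def most_uncertain(pool, w, b):
    """Pick the unlabeled point whose prediction is closest to 0.5."""
    return min(pool, key=lambda x: abs(sigmoid(w * x + b) - 0.5))

random.seed(0)
label = lambda x: 1 if x > 0 else 0   # hidden ground-truth rule
labeled = [(-2.0, 0), (2.0, 1)]       # tiny seed set
pool = [x / 10 for x in range(-30, 31)]

for _ in range(5):                    # 5 rounds of active querying
    w, b = train_logistic(labeled)
    x = most_uncertain(pool, w, b)    # explore where the model is unsure
    pool.remove(x)
    labeled.append((x, label(x)))     # query the "oracle" for its label

queried = [x for x, _ in labeled[2:]]
print(queried)  # queried points cluster near the decision boundary at 0
```

Uncertainty sampling is only one of many data-selection strategies, but it captures the paper's framing: the learner's choice of training data is itself an exploration problem.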
Related papers
- O1 Replication Journey: A Strategic Progress Report -- Part 1 [52.062216849476776]
This paper introduces a pioneering approach to artificial intelligence research, embodied in our O1 Replication Journey.
Our methodology addresses critical challenges in modern AI research, including the insularity of prolonged team-based projects.
We propose the journey learning paradigm, which encourages models to learn not just shortcuts, but the complete exploration process.
arXiv Detail & Related papers (2024-10-08T15:13:01Z)
- Causal Reinforcement Learning: A Survey [57.368108154871]
Reinforcement learning is an essential paradigm for solving sequential decision problems under uncertainty.
One of the main obstacles is that reinforcement learning agents lack a fundamental understanding of the world.
Causality offers a notable advantage as it can formalize knowledge in a systematic manner.
arXiv Detail & Related papers (2023-07-04T03:00:43Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and an adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to acquire novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Sharing to learn and learning to share; Fitting together Meta-Learning, Multi-Task Learning, and Transfer Learning: A meta review [4.462334751640166]
This article reviews research studies that combine two or more of these learning paradigms.
Based on the knowledge accumulated from the literature, we hypothesize a generic task-agnostic and model-agnostic learning network.
arXiv Detail & Related papers (2021-11-23T20:41:06Z)
- A Survey of Exploration Methods in Reinforcement Learning [64.01676570654234]
Reinforcement learning agents depend crucially on exploration to obtain informative data for the learning process.
In this article, we provide a survey of modern exploration methods in sequential reinforcement learning, as well as a taxonomy of exploration methods.
arXiv Detail & Related papers (2021-09-01T02:36:14Z)
- Open-world Machine Learning: Applications, Challenges, and Opportunities [0.7734726150561086]
Open-world machine learning deals with arbitrary inputs (data with unseen classes) to machine learning systems.
Traditional machine learning is static and therefore ill-suited to dynamic, open environments.
This paper presents a systematic review of various techniques for open-world machine learning.
arXiv Detail & Related papers (2021-05-27T21:05:10Z)
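The exploration methods surveyed in the related work above include simple dithering strategies; a minimal, hypothetical sketch of one of them, epsilon-greedy action selection on a Bernoulli multi-armed bandit, is shown below. The arm means, step count, and epsilon are illustrative choices, not values from any of the listed papers.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli multi-armed bandit.

    With probability epsilon, explore a uniformly random arm;
    otherwise exploit the arm with the highest running mean reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    values = [0.0] * n_arms   # running mean reward estimate per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

counts, values = epsilon_greedy_bandit([0.2, 0.5, 0.8])
best = max(range(3), key=lambda a: counts[a])
print(best)  # the agent concentrates its pulls on the highest-mean arm
```

The constant epsilon here never decays, so the agent keeps paying a small exploration cost forever; more sophisticated methods in the survey (count-based bonuses, posterior sampling) adapt how much they explore as uncertainty shrinks.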
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.