Prospective Learning: Principled Extrapolation to the Future
- URL: http://arxiv.org/abs/2201.07372v2
- Date: Thu, 13 Jul 2023 09:49:53 GMT
- Title: Prospective Learning: Principled Extrapolation to the Future
- Authors: Ashwin De Silva, Rahul Ramesh, Lyle Ungar, Marshall Hussain Shuler,
Noah J. Cowan, Michael Platt, Chen Li, Leyla Isik, Seung-Eon Roh, Adam
Charles, Archana Venkataraman, Brian Caffo, Javier J. How, Justus M
Kebschull, John W. Krakauer, Maxim Bichuch, Kaleab Alemayehu Kinfu, Eva
Yezerets, Dinesh Jayaraman, Jong M. Shin, Soledad Villar, Ian Phillips, Carey
E. Priebe, Thomas Hartung, Michael I. Miller, Jayanta Dey, Ningyuan (Teresa)
Huang, Eric Eaton, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Randal Burns,
Onyema Osuagwu, Brett Mensh, Alysson R. Muotri, Julia Brown, Chris White,
Weiwei Yang, Andrei A. Rusu, Timothy Verstynen, Konrad P. Kording, Pratik
Chaudhari, Joshua T. Vogelstein
- Abstract summary: Learning is a process which can update decision rules based on past experience such that future performance improves.
Traditionally, machine learning is often evaluated under the assumption that the future will be identical to the past in distribution or will change adversarially.
Here we reformulate the learning problem to one that centers around this idea of dynamic futures that are partially learnable.
We argue that prospective learning more accurately characterizes many real-world problems that currently lack adequate explanations for how natural intelligences solve them.
- Score: 45.287871145154135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning is a process which can update decision rules, based on past
experience, such that future performance improves. Traditionally, machine
learning is often evaluated under the assumption that the future will be
identical to the past in distribution or will change adversarially. But these
assumptions can be either too optimistic or too pessimistic for many problems in
the real world. Real-world scenarios evolve over multiple spatiotemporal scales
with partially predictable dynamics. Here we reformulate the learning problem
to one that centers around this idea of dynamic futures that are partially
learnable. We conjecture that certain sequences of tasks are not
retrospectively learnable (where the data distribution is assumed fixed), but are
prospectively learnable (where distributions may be dynamic), suggesting
that prospective learning differs in kind from, and is harder than, retrospective
learning. We argue that prospective learning more accurately characterizes many
real-world problems that (1) currently stymie existing artificial intelligence
solutions and/or (2) lack adequate explanations for how natural intelligences
solve them. Thus, studying prospective learning will lead to deeper insights
and solutions to currently vexing challenges in both natural and artificial
intelligences.
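To make the conjecture concrete, the following is a minimal toy sketch (ours, not code from the paper; the sign-flipping drift pattern, the period, and both estimators are illustrative assumptions): when the target's mean alternates on a fixed schedule, a retrospective learner that pools all past data and assumes a fixed distribution extrapolates poorly, while a prospective learner that models the time dependence tracks the future well.

```python
"""Minimal toy contrast of retrospective vs. prospective learning.

Illustrative sketch only: the drift pattern (a mean that flips sign
every PERIOD steps) and both estimators are assumptions made for
this example, not constructions from the paper.
"""
import numpy as np

rng = np.random.default_rng(0)
PERIOD, T_TRAIN, T_TEST = 50, 500, 200

def mean_at(t: int) -> float:
    # Partially predictable dynamics: the target mean alternates +1/-1.
    return 1.0 if (t // PERIOD) % 2 == 0 else -1.0

def sample(t: int) -> float:
    return mean_at(t) + 0.1 * rng.standard_normal()

train = np.array([sample(t) for t in range(T_TRAIN)])

# Retrospective learner: assumes the future is distributed like the
# pooled past, so it commits to one time-invariant estimate (~0 here,
# because the two phases cancel out).
retro_pred = train.mean()

# Prospective learner: models the time dependence by fitting one
# estimate per phase and extrapolating the phase schedule forward.
phase = (np.arange(T_TRAIN) // PERIOD) % 2
pro_pred = {p: train[phase == p].mean() for p in (0, 1)}

future = range(T_TRAIN, T_TRAIN + T_TEST)
targets = np.array([sample(t) for t in future])
pro_preds = np.array([pro_pred[(t // PERIOD) % 2] for t in future])

print(f"retrospective MSE: {np.mean((targets - retro_pred) ** 2):.3f}")  # ~1.0
print(f"prospective MSE:   {np.mean((targets - pro_preds) ** 2):.3f}")   # ~0.01
```

In this toy setting the retrospective MSE sits near 1.0 while the prospective MSE approaches the noise floor, illustrating how partially predictable dynamics reward learners that model time explicitly.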
Related papers
- Continual Learning: Applications and the Road Forward [119.03464063873407]
Continual learning aims to allow machine learning models to continuously learn on new data, by accumulating knowledge without forgetting what was learned in the past.
This work is the result of the many discussions the authors had at the Dagstuhl seminar on Deep Continual Learning, in March 2023.
arXiv Detail & Related papers (2023-11-20T16:40:29Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- Bayesian Learning for Dynamic Inference [2.2843885788439793]
In some sequential estimation problems, the future values of the quantity to be estimated depend on the estimate of its current value.
We formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn.
We derive the optimal Bayesian learning rules, both offline and online, to minimize the inference loss.
arXiv Detail & Related papers (2022-12-30T19:16:23Z)
- Reinforcement Learning in System Identification [0.0]
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition both in science and engineering.
Here we explore the use of Reinforcement Learning for this problem.
We elaborate on why and how system identification fits naturally as a Reinforcement Learning problem, and present experimental results demonstrating that RL is a promising technique for solving this kind of problem.
arXiv Detail & Related papers (2022-12-14T09:20:42Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper is the first to explore Bayesian deep learning on learner text posts, using two methods: Monte Carlo Dropout and Variational Inference.
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration [0.0]
We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We examine the role hard-to-vary explanations play in intelligence by looking at the human brain.
arXiv Detail & Related papers (2020-12-16T23:23:22Z)