Decomposed Inductive Procedure Learning
- URL: http://arxiv.org/abs/2110.13233v1
- Date: Mon, 25 Oct 2021 19:36:03 GMT
- Title: Decomposed Inductive Procedure Learning
- Authors: Daniel Weitekamp, Christopher MacLellan, Erik Harpstead, Kenneth Koedinger
- Abstract summary: We formalize a theory of Decomposed Inductive Procedure Learning (DIPL).
DIPL outlines how different forms of inductive symbolic learning can be used to build agents that learn educationally relevant tasks.
We demonstrate that DIPL enables the creation of agents that exhibit human-like learning performance.
- Score: 2.421459418045937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in machine learning have made it possible to train
artificially intelligent agents that perform with super-human accuracy on a
great diversity of complex tasks. However, the process of training these
capabilities often necessitates millions of annotated examples -- far more than
humans typically need in order to achieve a passing level of mastery on similar
tasks. Thus, while contemporary methods in machine learning can produce agents
that exhibit super-human performance, their rate of learning per opportunity in
many domains is decidedly lower than that of human learners. In this work we
formalize a theory of Decomposed Inductive Procedure Learning (DIPL) that
outlines how different forms of inductive symbolic learning can be used in
combination to build agents that learn educationally relevant tasks, such as
mathematical and scientific procedures, at a rate similar to human learners. We
motivate the
construction of this theory along Marr's concepts of the computational,
algorithmic, and implementation levels of cognitive modeling, and outline at
the computational level six learning capacities that must be achieved to
accurately model human learning. We demonstrate that agents built according to
the DIPL theory are amenable to satisfying these capacities, and show, both
empirically and theoretically, that DIPL enables the creation of agents that
exhibit human-like learning performance.
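The decomposition itself can be made concrete with a toy sketch. The Python below is a hedged illustration only: it assumes, roughly in the spirit of the how/where/when factoring used in this line of work, that a procedural skill is learned by several separate inductive mechanisms, each needing only a few examples. All class, function, and feature names are hypothetical, not the paper's API.

```python
class DecomposedSkill:
    """Toy skill learned by separate inductive mechanisms:
    how-learning   -> the function that computes the action (given here),
    where-learning -> generalize which interface elements the skill binds to,
    when-learning  -> induce preconditions from yes/no correctness feedback."""

    def __init__(self, how_fn):
        self.how = how_fn       # assumed already induced by a how-learner
        self.where = None       # generalized argument features
        self.when_pos = []      # states where applying the skill was correct
        self.when_neg = []      # states where it was incorrect

    def train_where(self, example_args):
        # where-learning: keep only features shared by all worked examples
        feats = set(example_args.items())
        self.where = feats if self.where is None else self.where & feats

    def train_when(self, state, correct):
        # when-learning: accumulate labeled states for the precondition
        (self.when_pos if correct else self.when_neg).append(state)

    def applicable(self, state):
        # crude precondition: the state must resemble a positive example
        # more than it resembles any negative one
        def overlap(s, t):
            return len(set(s.items()) & set(t.items()))
        best_pos = max((overlap(state, s) for s in self.when_pos), default=0)
        best_neg = max((overlap(state, s) for s in self.when_neg), default=-1)
        return best_pos > best_neg

# Toy use: a skill for "add the two visible digits" in a tutoring interface.
skill = DecomposedSkill(how_fn=lambda a, b: a + b)
skill.train_where({"field": "answer", "editable": True})
skill.train_when({"step": "sum", "filled": False}, correct=True)
skill.train_when({"step": "carry", "filled": False}, correct=False)
if skill.applicable({"step": "sum", "filled": False}):
    print(skill.how(3, 4))   # -> 7
```

Because each mechanism generalizes over a handful of demonstrations and feedback signals rather than millions of labels, the per-opportunity learning rate stays in a human-like regime.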
Related papers
- Towards a Formal Theory of the Need for Competence via Computational Intrinsic Motivation [6.593505830504729]
We focus on the "need for competence", postulated as a key basic human need within Self-Determination Theory (SDT).
We propose that these inconsistencies may be alleviated by drawing on computational models from the field of reinforcement learning (RL).
Our work can support a cycle of theory development by inspiring new computational models formalising aspects of the theory, which can then be tested empirically to refine the theory.
arXiv Detail & Related papers (2025-02-11T10:03:40Z)
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
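For context, the mutual-information objective the summary refers to can be written as the classic variational lower bound used in skill-discovery methods. The sketch below shows that bound with a toy Monte Carlo estimate; it illustrates the objective LPE is said to replace, not LPE itself, and every name in it is invented for illustration.

```python
import math
import random

def empowerment_lower_bound(rollouts, q_logprob, p_logprob):
    """Monte Carlo estimate of the classic variational bound
    I(Z; S') >= E[ log q(z | s') - log p(z) ],
    where z is a skill and s' is the state it leads to."""
    return sum(q_logprob(z, s_next) - p_logprob(z)
               for z, s_next in rollouts) / len(rollouts)

# Toy check: two skills that deterministically reach two distinct states
# yield about one bit (log 2 nats) under a near-perfect discriminator q.
rollouts = [(z, z) for z in random.choices([0, 1], k=1000)]
q = lambda z, s_next: math.log(0.999 if z == s_next else 0.001)
p = lambda z: math.log(0.5)
print(empowerment_lower_bound(rollouts, q, p))   # ~0.692, i.e. about log 2
```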
arXiv Detail & Related papers (2024-10-15T00:41:18Z)
- Reconciling Different Theories of Learning with an Agent-based Model of Procedural Learning [0.27624021966289597]
We propose a new computational model of human learning, Procedural ABICAP, that reconciles the ICAP, Knowledge-Learning-Instruction, and cognitive load theory frameworks for learning procedural knowledge.
ICAP assumes that constructive learning generally yields better learning outcomes, while theories such as KLI and CLT claim that this is not always true.
arXiv Detail & Related papers (2024-08-23T20:45:14Z)
- Large Language Models Need Consultants for Reasoning: Becoming an Expert in a Complex Human System Through Behavior Simulation [5.730580726163518]
Large language models (LLMs) have demonstrated remarkable capabilities, comparable to those of humans, in fields such as mathematics, law, coding, common sense, and world knowledge.
We propose a novel reasoning framework, termed "Mosaic Expert Observation Wall" (MEOW), which exploits a generative-agents-based simulation technique.
arXiv Detail & Related papers (2024-03-27T03:33:32Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
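The core signal is simple enough to sketch. Below is a hedged, toy illustration of a rollout in which expert interventions are recorded as negative rewards; the environment, policy, and intervention rule are invented stand-ins, not the paper's implementation.

```python
import random

class Ledge:
    """Toy 1-D corridor; x <= 0 is a fall, x >= 6 is the goal."""
    def reset(self):
        self.x = 3
        return self.x
    def step(self, action):          # action in {-1, +1}
        self.x += action
        done = self.x <= 0 or self.x >= 6
        return self.x, done

def collect_rlif_episode(env, policy, intervene, expert_act, max_steps=50):
    """RLIF-style rollout: expert interventions become -1 rewards."""
    traj, obs = [], env.reset()
    for _ in range(max_steps):
        a = policy(obs)
        if intervene(obs, a):        # expert judges the action unsafe
            a, r = expert_act(obs), -1.0
        else:
            r = 0.0                  # no explicit task reward assumed
        nxt, done = env.step(a)
        traj.append((obs, a, r, nxt, done))
        obs = nxt
        if done:
            break
    return traj                      # feed to any off-policy RL learner

novice = lambda obs: random.choice([-1, 1])
stop_fall = lambda obs, a: obs + a <= 0   # intervene just before a fall
push_right = lambda obs: +1
print(collect_rlif_episode(Ledge(), novice, stop_fall, push_right)[:3])
```

Since the policy is penalized only for provoking interventions, it never needs the expert to be optimal; it just learns to avoid the states that make the expert step in.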
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- NeuroCERIL: Robotic Imitation Learning via Hierarchical Cause-Effect Reasoning in Programmable Attractor Neural Networks [2.0646127669654826]
We present NeuroCERIL, a brain-inspired neurocognitive architecture that uses a novel hypothetico-deductive reasoning procedure.
We show that NeuroCERIL can learn various procedural skills in a simulated robotic imitation learning domain.
We conclude that NeuroCERIL is a viable neural model of human-like imitation learning.
arXiv Detail & Related papers (2022-11-11T19:56:11Z)
- Autonomous Reinforcement Learning: Formalism and Benchmarking [106.25788536376007]
Real-world embodied learning, such as that performed by humans and animals, is situated in a continual, non-episodic world.
Common benchmark tasks in RL are episodic, with the environment resetting between trials to provide the agent with multiple attempts.
This discrepancy presents a major challenge when attempting to take RL algorithms developed for episodic simulated environments and run them on real-world platforms.
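The episodic/non-episodic distinction can be made concrete. The toy sketch below contrasts the two training loops; the environment and agent are invented stand-ins, and the code is illustrative rather than the paper's benchmark.

```python
import random

class Chain:
    """Toy chain; reward at x == 3, a trial 'ends' when |x| >= 3."""
    def reset(self):
        self.x = 0
        return self.x
    def step(self, a):
        self.x += a
        reward = 1.0 if self.x == 3 else 0.0
        done = abs(self.x) >= 3          # honored only in the episodic loop
        return self.x, reward, done

def episodic_training(env, act, episodes=10):
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False   # a free reset every trial
        while not done:
            obs, r, done = env.step(act(obs))
            total += r
    return total

def autonomous_training(env, act, steps=100):
    obs, total = env.reset(), 0.0        # a single reset, ever: the agent
    for _ in range(steps):               # must recover from bad states itself
        obs, r, _ = env.step(act(obs))
        total += r
    return total

act = lambda obs: random.choice([-1, 1])
print(episodic_training(Chain(), act), autonomous_training(Chain(), act))
```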
arXiv Detail & Related papers (2021-12-17T16:28:06Z)
- Active Hierarchical Imitation and Reinforcement Learning [0.0]
In this project, we explored different imitation learning algorithms and designed active learning algorithms on top of the hierarchical imitation and reinforcement learning framework we developed.
Our experimental results showed that using DAgger and a reward-based active learning method achieves better performance while reducing the physical and mental effort required of humans during training.
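As a reminder of the base algorithm the entry names, here is a minimal generic DAgger loop with toy stand-ins; this sketches the classic algorithm, not this project's hierarchical or active-learning variant.

```python
import random

class Corridor:
    """Toy task: walk right from 0 to 5."""
    def reset(self):
        self.x = 0
        return self.x
    def step(self, a):
        self.x += a
        return self.x, self.x >= 5

def dagger(env, learner_act, learner_fit, expert_act, rounds=5, horizon=20):
    """Classic DAgger: the learner drives, the expert labels, and the
    learner is retrained on the aggregated dataset every round."""
    dataset = []
    for _ in range(rounds):
        obs, done = env.reset(), False
        for _ in range(horizon):
            dataset.append((obs, expert_act(obs)))   # expert labels the states
            obs, done = env.step(learner_act(obs))   # that the learner visits
            if done:
                break
        learner_fit(dataset)                         # retrain on the aggregate
    return dataset

labels = {}                                          # a lookup-table "policy"
dagger(Corridor(),
       learner_act=lambda o: labels.get(o, random.choice([-1, 1])),
       learner_fit=lambda d: labels.update(dict(d)),
       expert_act=lambda o: +1)
print(labels)   # expert actions memorized on the states the learner reached
```

Active-learning variants like the one described reduce human effort by querying the expert only on the states where the learner is least certain, rather than at every step as above.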
arXiv Detail & Related papers (2020-12-14T08:27:27Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
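One way to see what "optimizing the combined performance" can mean is a simple deferral loss; the sketch below is an invented illustration of the team-objective idea, not the paper's actual formulation, and the names and cost value are hypothetical.

```python
def team_loss(p_model_correct, p_human_correct, defer, query_cost=0.1):
    """Expected loss of the human-machine team on one input: the team errs
    with whoever handles the input, and consulting the human costs extra."""
    p_correct = p_human_correct if defer else p_model_correct
    return (1.0 - p_correct) + (query_cost if defer else 0.0)

# Defer exactly when the human's accuracy edge outweighs the query cost.
print(team_loss(0.6, 0.9, defer=False))  # 0.4: the model handles it
print(team_loss(0.6, 0.9, defer=True))   # 0.2: deferring wins here
```

Training the machine end-to-end against a team objective like this (rather than its solo accuracy) is what lets it specialize on the cases where the human is weakest.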
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.