Skillearn: Machine Learning Inspired by Humans' Learning Skills
- URL: http://arxiv.org/abs/2012.04863v2
- Date: Fri, 12 Mar 2021 06:38:40 GMT
- Title: Skillearn: Machine Learning Inspired by Humans' Learning Skills
- Authors: Pengtao Xie, Xuefeng Du, Hao Ban
- Abstract summary: We are interested in investigating whether humans' learning skills can be borrowed to help machines learn better.
Specifically, we aim to formalize these skills and leverage them to train better machine learning (ML) models.
To achieve this goal, we develop a general framework -- Skillearn, which provides a principled way to represent humans' learning skills mathematically.
In two case studies, we apply Skillearn to formalize two learning skills of humans: learning by passing tests and interleaving learning, and use the formalized skills to improve neural architecture search.
- Score: 15.125072827275766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans, as the most powerful learners on the planet, have accumulated a lot
of learning skills, such as learning through tests, interleaving learning,
self-explanation, active recalling, to name a few. These learning skills and
methodologies enable humans to learn new topics more effectively and
efficiently. We are interested in investigating whether humans' learning skills
can be borrowed to help machines learn better. Specifically, we aim to
formalize these skills and leverage them to train better machine learning (ML)
models. To achieve this goal, we develop a general framework -- Skillearn,
which provides a principled way to represent humans' learning skills
mathematically and use the formally-represented skills to improve the training
of ML models. In two case studies, we apply Skillearn to formalize two learning
skills of humans: learning by passing tests and interleaving learning, and use
the formalized skills to improve neural architecture search. Experiments on
various datasets show that, when trained using the skills formalized by Skillearn,
ML models achieve significantly better performance.
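As an illustration of the kind of formalization the abstract describes, below is a minimal, hypothetical sketch of "learning by passing tests" cast as an alternating optimization between a learner and a tester. The toy linear-regression setup, variable names, and update rules are editorial assumptions for illustration only, not Skillearn's actual formulation (which targets neural architecture search).

```python
# Hypothetical sketch only (assumed names and update rules, not Skillearn's
# actual algorithm): "learning by passing tests" viewed as an alternating
# optimization between a learner, which fits the training data, and a tester,
# which re-weights held-out examples so the learner must also pass the
# currently hardest "test" questions.
import numpy as np

rng = np.random.default_rng(0)
X_train, X_val = rng.normal(size=(100, 5)), rng.normal(size=(40, 5))
true_w = rng.normal(size=5)
y_train = X_train @ true_w + 0.1 * rng.normal(size=100)
y_val = X_val @ true_w + 0.1 * rng.normal(size=40)

w = np.zeros(5)                  # learner parameters
test_weights = np.ones(40) / 40  # tester's distribution over held-out examples
lr, lam = 0.05, 0.5              # step size and test-loss weight (assumed)

for step in range(200):
    # Learner step: descend the training loss plus the tester-weighted test loss.
    grad_train = X_train.T @ (X_train @ w - y_train) / len(y_train)
    grad_test = X_val.T @ (test_weights * (X_val @ w - y_val))
    w -= lr * (grad_train + lam * grad_test)

    # Tester step: concentrate weight on the examples the learner currently
    # gets most wrong, keeping the "test" challenging (softmax over losses).
    losses = (X_val @ w - y_val) ** 2
    shifted = losses - losses.max()
    test_weights = np.exp(shifted) / np.exp(shifted).sum()

print("held-out MSE:", float(np.mean((X_val @ w - y_val) ** 2)))
```

A skill such as interleaving learning could similarly be expressed as a training schedule that cycles the learner across multiple datasets or tasks rather than finishing one before starting the next.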
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation [17.222197596599685]
We propose a Skill Learning approach that discovers composable behaviors by solving a large number of autonomously generated tasks.
Our method learns skills allowing the robot to consistently and robustly interact with objects in its environment.
The learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.
arXiv Detail & Related papers (2024-10-07T09:19:13Z) - Skill Reinforcement Learning and Planning for Open-World Long-Horizon
Tasks [31.084848672383185]
We study building multi-task agents in open-world environments.
We convert the multi-task learning problem into learning basic skills and planning over the skills.
Our method accomplishes 40 diverse Minecraft tasks, where many tasks require sequentially executing more than 10 skills.
arXiv Detail & Related papers (2023-03-29T09:45:50Z)
- Choreographer: Learning and Adapting Skills in Imagination [60.09911483010824]
We present Choreographer, a model-based agent that exploits its world model to learn and adapt skills in imagination.
Our method decouples the exploration and skill learning processes and is able to discover skills in the latent state space of the model.
Choreographer can learn skills both from offline data and by collecting data simultaneously with an exploration policy.
arXiv Detail & Related papers (2022-11-23T23:31:14Z)
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z)
- Discovering Generalizable Skills via Automated Generation of Diverse Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
arXiv Detail & Related papers (2021-06-26T03:41:51Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- Active Hierarchical Imitation and Reinforcement Learning [0.0]
In this project, we explored different imitation learning algorithms and designed active learning algorithms on top of the hierarchical imitation and reinforcement learning framework we developed.
Our experimental results showed that using DAgger and a reward-based active learning method achieves better performance while reducing the physical and mental effort required from humans during training.
arXiv Detail & Related papers (2020-12-14T08:27:27Z)
- Accelerating Reinforcement Learning with Learned Skill Priors [20.268358783821487]
Most modern reinforcement learning approaches learn every task from scratch.
One approach for leveraging prior knowledge is to transfer skills learned on prior tasks to the new task.
We show that learned skill priors are essential for effective skill transfer from rich datasets.
arXiv Detail & Related papers (2020-10-22T17:59:51Z)
- Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning [81.12201426668894]
We develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks.
We show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible.
We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.
arXiv Detail & Related papers (2020-04-27T17:38:53Z)
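Several entries above (the diversity objective estimated by a task discriminator, and the reward-free skill discovery work) share a common recipe: reward a skill-conditioned policy for visiting states from which a discriminator can recover the skill, which lower-bounds the mutual information between skills and states. The sketch below illustrates only that generic idea; the array shapes, the linear discriminator, and the stand-in rollouts are assumptions for illustration and do not reproduce any specific paper's algorithm.

```python
# Hedged sketch (assumed names and shapes, not any listed paper's code): the
# generic recipe behind discriminator-based skill diversity objectives. A policy
# conditioned on a latent skill z produces states s; a discriminator q(z | s) is
# trained to recover z, and log q(z | s) - log p(z) serves as a variational
# lower bound on the mutual information I(S; Z), used as an intrinsic reward.
import numpy as np

rng = np.random.default_rng(0)
n_skills, state_dim = 4, 2
W = np.zeros((state_dim, n_skills))                   # linear discriminator logits
log_p_z = np.log(np.full(n_skills, 1.0 / n_skills))   # uniform skill prior

def discriminator_probs(states):
    logits = states @ W
    logits -= logits.max(axis=1, keepdims=True)
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

# Stand-in for policy rollouts: each skill drifts the agent in its own direction.
skill_dirs = rng.normal(size=(n_skills, state_dim))

for step in range(500):
    z = rng.integers(n_skills, size=64)
    states = skill_dirs[z] + 0.3 * rng.normal(size=(64, state_dim))

    # Discriminator update: gradient ascent on log q(z | s).
    probs = discriminator_probs(states)
    one_hot = np.eye(n_skills)[z]
    W += 0.1 * states.T @ (one_hot - probs) / len(z)

    # Intrinsic reward for the (here, fixed) skill-conditioned policy.
    reward = np.log(probs[np.arange(len(z)), z] + 1e-8) - log_p_z[z]

print("mean intrinsic reward:", float(reward.mean()))
```

In a full method this intrinsic reward would drive a reinforcement learning update of the skill-conditioned policy; that step is omitted here to keep the sketch self-contained.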
This list is automatically generated from the titles and abstracts of the papers on this site.