Optimizing piano practice with a utility-based scaffold
- URL: http://arxiv.org/abs/2106.12937v1
- Date: Mon, 21 Jun 2021 14:05:00 GMT
- Title: Optimizing piano practice with a utility-based scaffold
- Authors: Alexandra Moringen, Sören Rüttgers, Luisa Zintgraf, Jason Friedman, Helge Ritter
- Abstract summary: A typical part of learning to play the piano is the progression through a series of practice units that focus on individual dimensions of the skill.
Because we each learn differently, and because there are many choices for possible piano practice tasks and methods, the set of practice tasks should be dynamically adapted to the human learner.
We present a modeling framework to guide the human learner through the learning process by choosing practice modes that have the highest expected utility.
- Score: 59.821144959060305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A typical part of learning to play the piano is the progression through a
series of practice units that focus on individual dimensions of the skill, such
as hand coordination, correct posture, or correct timing. Ideally, a focus on a
particular practice method should be made in a way to maximize the learner's
progress in learning to play the piano. Because we each learn differently, and
because there are many choices for possible piano practice tasks and methods,
the set of practice tasks should be dynamically adapted to the human learner.
However, having a human teacher guide individual practice is not always
feasible since it is time consuming, expensive, and not always available.
Instead, we propose optimizing over the space of practice methods, the so-called
practice modes. The proposed optimization process takes into account the skills
of the individual learner and their history of learning. In this work we
present a modeling framework to guide the human learner through the learning
process by choosing practice modes that have the highest expected utility
(i.e., improvement in piano playing skill). To this end, we propose a human
learner utility model based on a Gaussian process, and demonstrate its training
and its application to practice scaffolding with simulated human learners.
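The selection scheme described above can be illustrated with a minimal sketch (not the authors' implementation): the expected utility (skill improvement) of each practice mode is modeled with its own Gaussian process over the learner's current skill level, and the scaffold picks the mode with the highest predicted utility. The mode names, the RBF kernel, and the UCB-style exploration bonus are illustrative assumptions.

```python
# Minimal sketch of utility-based practice-mode selection with per-mode
# Gaussian processes. Kernel choice and mode names are assumptions.
import numpy as np

def rbf_kernel(a, b, length=0.5):
    """Squared-exponential kernel between 1-D skill levels."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

class UtilityGP:
    """GP posterior mean/variance of utility (skill improvement) vs. skill."""
    def __init__(self, noise=1e-3):
        self.noise = noise
        self.X = np.empty(0)   # observed skill levels
        self.y = np.empty(0)   # observed improvements

    def update(self, skill, improvement):
        self.X = np.append(self.X, skill)
        self.y = np.append(self.y, improvement)

    def predict(self, skill):
        if self.X.size == 0:
            return 0.0, 1.0    # uninformative prior before any observation
        K = rbf_kernel(self.X, self.X) + self.noise * np.eye(self.X.size)
        k = rbf_kernel(np.array([skill]), self.X)[0]
        alpha = np.linalg.solve(K, self.y)
        mean = k @ alpha
        var = 1.0 - k @ np.linalg.solve(K, k)
        return mean, max(var, 0.0)

def choose_mode(models, skill, beta=1.0):
    """Pick the practice mode with the highest upper-confidence utility."""
    scores = {m: mu + beta * np.sqrt(var)
              for m, (mu, var) in ((m, g.predict(skill)) for m, g in models.items())}
    return max(scores, key=scores.get)

# Hypothetical practice modes standing in for the paper's setting.
modes = {m: UtilityGP() for m in ("slow_tempo", "hands_separate", "metronome")}
modes["slow_tempo"].update(skill=0.2, improvement=0.15)
modes["metronome"].update(skill=0.2, improvement=0.05)
print(choose_mode(modes, skill=0.25, beta=0.0))  # → slow_tempo
```

With beta greater than zero, modes that have never been tried (here, `hands_separate`) keep high posterior variance and are occasionally explored, which is one simple way to trade off exploration against exploiting the mode with the best observed improvement.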
Related papers
- Generating Piano Practice Policy with a Gaussian Process [42.41481706562645]
We present a modeling framework to guide the human learner through the learning process by choosing the practice modes generated by a policy model.
The proposed policy model is trained to approximate the expert-learner interaction during a practice session.
arXiv Detail & Related papers (2024-06-07T10:27:07Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- How To Guide Your Learner: Imitation Learning with Active Adaptive Expert Involvement [20.91491585498749]
We propose a novel active imitation learning framework based on a teacher-student interaction model.
We show that AdapMen can improve the error bound and avoid compounding error under mild conditions.
arXiv Detail & Related papers (2023-03-03T16:44:33Z)
- Skill-based Model-based Reinforcement Learning [18.758245582997656]
Model-based reinforcement learning (RL) is a sample-efficient way of learning complex behaviors.
We propose a Skill-based Model-based RL framework (SkiMo) that enables planning in the skill space.
We harness the learned skill dynamics model to accurately simulate and plan over long horizons in the skill space.
arXiv Detail & Related papers (2022-07-15T16:06:33Z)
- ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z)
- Continual Predictive Learning from Videos [100.27176974654559]
We study a new continual learning problem in the context of video prediction.
We propose the continual predictive learning (CPL) approach, which learns a mixture world model via predictive experience replay.
We construct two new benchmarks based on RoboNet and KTH, in which different tasks correspond to different physical robotic environments or human actions.
arXiv Detail & Related papers (2022-04-12T08:32:26Z)
- Interleaving Learning, with Application to Neural Architecture Search [12.317568257671427]
We propose a novel machine learning framework referred to as interleaving learning (IL)
In our framework, a set of models collaboratively learn a data encoder in an interleaving fashion.
We apply interleaving learning to search neural architectures for image classification on CIFAR-10, CIFAR-100, and ImageNet.
arXiv Detail & Related papers (2021-03-12T00:54:22Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm can autonomously discover, learn, and adapt interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
- Accelerating Reinforcement Learning for Reaching using Continuous Curriculum Learning [6.703429330486276]
We focus on accelerating reinforcement learning (RL) training and improving the performance of multi-goal reaching tasks.
Specifically, we propose a precision-based continuous curriculum learning (PCCL) method in which the requirements are gradually adjusted during the training process.
This approach is tested using a UR5e robot in both simulation and real-world multi-goal reach experiments.
arXiv Detail & Related papers (2020-02-07T10:08:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.