Improving Artificial Teachers by Considering How People Learn and Forget
- URL: http://arxiv.org/abs/2102.04174v1
- Date: Mon, 8 Feb 2021 13:05:58 GMT
- Authors: Aurélien Nioche, Pierre-Alexandre Murena, Carlos de la Torre-Ortiz, Antti Oulasvirta
- Score: 32.74828727144865
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The paper presents a novel model-based method for intelligent tutoring, with
particular emphasis on the problem of selecting teaching interventions in
interaction with humans. Whereas previous work has focused on either
personalization of teaching or optimization of teaching intervention sequences,
the proposed individualized model-based planning approach represents a
convergence of these two lines of research. Model-based planning picks the best
interventions via interactive learning of a user memory model's parameters. The
approach is novel in its use of a cognitive model that can account for several
key individual- and material-specific characteristics related to
recall/forgetting, along with a planning technique that considers users'
practice schedules. Taking a rule-based approach as a baseline, the authors
evaluated the method's benefits in a controlled study of artificial teaching in
second-language vocabulary learning (N=53).
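The planning idea above can be sketched with a common exponential-forgetting learner model, in which each past review slows the decay of a memory trace, paired with a myopic planner that reviews the item closest to being forgotten. The parameter names (`alpha`, `beta`), the selection rule, and the recall `threshold` below are illustrative assumptions, not the exact model or planner evaluated in the paper.

```python
import math

def recall_probability(alpha, beta, n_reviews, delta_t):
    """Exponential forgetting: recall decays as exp(-rate * time), and each
    past review shrinks the forgetting rate by a factor (1 - beta)."""
    if n_reviews == 0:
        return 0.0  # item never studied
    rate = alpha * (1.0 - beta) ** (n_reviews - 1)
    return math.exp(-rate * delta_t)

def pick_next_item(items, now, alpha, beta, threshold=0.9):
    """Myopic model-based planner: show the studied item closest to being
    forgotten; if all items are still above threshold, introduce a new one."""
    seen = [(recall_probability(alpha, beta, it["n"], now - it["last"]), i)
            for i, it in enumerate(items) if it["n"] > 0]
    if seen and min(seen)[0] < threshold:
        return min(seen)[1]
    for i, it in enumerate(items):
        if it["n"] == 0:
            return i  # nothing urgent to review: teach a new item
    return min(seen)[1] if seen else 0
```

Fitting `alpha` and `beta` per learner and per item from observed recall outcomes (e.g. by maximum likelihood after each response) is what would make such a planner individualized.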
Related papers
- Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning [41.13568563835089]
We find that applying human habits of organizing and connecting information can serve as an efficient strategy when training deep learning models.
We propose a novel regularization loss function that encourages models to focus more on challenging knowledge areas.
arXiv Detail & Related papers (2024-10-06T01:30:40Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
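The probe-based estimate described above can be sketched as a variational lower bound: the cross-entropy of a probe q(y|z) trained on representations bounds the mutual information from below via I(Y; Z) >= H(Y) - CE(q). The synthetic features below stand in for learned speech representations and are purely illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-ins for learned representations Z and a target label Y;
# in the paper these would be outputs of a self-supervised speech model.
n, d = 500, 8
y = rng.integers(0, 2, size=n)
Z = rng.normal(size=(n, d)) + y[:, None] * 1.5  # the label shifts the features

# Train a logistic-regression probe q(y|z) by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    g = p - y
    w -= 0.1 * Z.T @ g / n
    b -= 0.1 * g.mean()

# Variational lower bound: I(Y; Z) >= H(Y) - CE(q), measured in nats.
p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
eps = 1e-12
ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
py = y.mean()
h_y = -(py * np.log(py) + (1 - py) * np.log(1 - py))
mi_lower_bound = h_y - ce
```

A better probe can only tighten the bound, which is why probe quality matters when comparing representations this way.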
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- "You might think about slightly revising the title": identifying hedges in peer-tutoring interactions [1.0466434989449724]
Hedges play an important role in the management of conversational interaction.
We use a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges.
We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations.
arXiv Detail & Related papers (2023-06-18T12:47:54Z)
- Deep Generative Models for Decision-Making and Control [4.238809918521607]
The dual purpose of this thesis is to study the reasons for these shortcomings and to propose solutions for the uncovered problems.
We highlight how inference techniques from the contemporary generative modeling toolbox, including beam search, can be reinterpreted as viable planning strategies for reinforcement learning problems.
arXiv Detail & Related papers (2023-06-15T01:54:30Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Unsupervised Domain Adaptive Person Re-Identification via Human Learning Imitation [67.52229938775294]
In past years, researchers have proposed using the teacher-student framework to decrease the domain gap between different person re-identification datasets.
Inspired by recent teacher-student framework based methods, we propose to conduct further exploration to imitate the human learning process from different aspects.
arXiv Detail & Related papers (2021-11-28T01:14:29Z)
- RLTutor: Reinforcement Learning Based Adaptive Tutoring System by Modeling Virtual Student with Fewer Interactions [10.34673089426247]
We propose a framework for optimizing teaching strategies by constructing a virtual model of the student.
Our results can serve as a buffer between theoretical instructional optimization and practical applications in e-learning systems.
arXiv Detail & Related papers (2021-07-31T15:42:03Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
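The probing setup can be sketched as follows: representations from a frozen encoder are held fixed, and only a small classifier on top is trained. Random, linearly separable features below stand in for the hidden states of a pre-trained language model; the dimensions and data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen pre-trained representations: in the paper these would be
# hidden states of a fixed language model; here we fabricate separable features.
n, d = 200, 16
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

# Classifier probe: a single trainable layer on top of the frozen features.
# Only the probe's weights are updated; the "encoder" output X never changes.
w = np.zeros(d)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - y) / n

probe_accuracy = (((X @ w) > 0) == (y > 0.5)).mean()
```

High probe accuracy is read as evidence that the target information is linearly decodable from the frozen representation, which is the sense in which a representation "carries" task-relevant information.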
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Knowledge-Grounded Dialogue Generation with Pre-trained Language Models [74.09352261943911]
We study knowledge-grounded dialogue generation with pre-trained language models.
We propose equipping response generation defined by a pre-trained language model with a knowledge selection module.
arXiv Detail & Related papers (2020-10-17T16:49:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.