Model Human Learners: Computational Models to Guide Instructional Design
- URL: http://arxiv.org/abs/2502.02456v3
- Date: Sun, 09 Feb 2025 21:50:20 GMT
- Title: Model Human Learners: Computational Models to Guide Instructional Design
- Authors: Christopher J. MacLellan
- Abstract summary: This paper shows that a computational model can accurately predict the outcomes of two human A/B experiments.
It also demonstrates that such a model can generate learning curves without requiring human data.
- Abstract: Instructional designers face an overwhelming array of design choices, making it challenging to identify the most effective interventions. To address this issue, I propose the concept of a Model Human Learner, a unified computational model of learning that can aid designers in evaluating candidate interventions. This paper presents the first successful demonstration of this concept, showing that a computational model can accurately predict the outcomes of two human A/B experiments -- one testing a problem sequencing intervention and the other testing an item design intervention. It also demonstrates that such a model can generate learning curves without requiring human data and provide theoretical insights into why an instructional intervention is effective. These findings lay the groundwork for future Model Human Learners that integrate cognitive and learning theories to support instructional design across diverse tasks and interventions.
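To make the idea concrete, here is a minimal sketch of how a simulated learner could be used to compare two instructional conditions. The power-law form and every parameter value below are assumptions for illustration only, not the cognitive model from the paper, which would derive its predictions from learning theory rather than assume them.

```python
# Hypothetical sketch: a simulated learner compares two instructional
# conditions without any human data. All parameters are invented.
import numpy as np

def learning_curve(n_opportunities, initial_error, learning_rate):
    """Power-law of practice: error falls with practice opportunities."""
    trials = np.arange(1, n_opportunities + 1)
    return initial_error * trials ** (-learning_rate)

# Condition A: baseline sequencing. Condition B: an intervention that
# (by assumption) speeds learning without changing initial error.
curve_a = learning_curve(20, initial_error=0.8, learning_rate=0.3)
curve_b = learning_curve(20, initial_error=0.8, learning_rate=0.5)

# Predicted A/B outcome: mean error across practice opportunities.
print(f"Condition A mean error: {curve_a.mean():.3f}")
print(f"Condition B mean error: {curve_b.mean():.3f}")
```

Running such a simulation for each candidate intervention yields predicted learning curves that a designer could compare before committing to a costly human experiment.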
Related papers
- Evaluating Alternative Training Interventions Using Personalized Computational Models of Learning [0.0]
Evaluating different training interventions to determine which produce the best learning outcomes is one of the main challenges faced by instructional designers.
We present an approach for automatically tuning models to specific individuals and show that personalized models make better predictions of students' behavior than generic ones.
Our approach makes predictions that align with previous human findings, as well as testable predictions that might be evaluated with future human experiments.
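As a rough illustration of per-student tuning, the sketch below fits power-law learning-curve parameters to one synthetic student's error rates. The functional form and the data are assumptions for illustration, not the paper's personalization method.

```python
# Minimal sketch of tuning a learning-curve model to an individual.
# The synthetic data stands in for a real student's error rates.
import numpy as np

rng = np.random.default_rng(0)
trials = np.arange(1, 16)
observed = 0.7 * trials ** (-0.4) + rng.normal(0, 0.02, trials.size)

# Fit log(error) = log(a) - b * log(trial) by ordinary least squares.
X = np.column_stack([np.ones_like(trials, dtype=float), np.log(trials)])
coef, *_ = np.linalg.lstsq(X, np.log(np.clip(observed, 1e-6, None)),
                           rcond=None)
a_hat, b_hat = np.exp(coef[0]), -coef[1]
print(f"Personalized fit: initial error {a_hat:.2f}, learning rate {b_hat:.2f}")
```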
arXiv Detail & Related papers (2024-08-24T22:51:57Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
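The optimization loop might look roughly like the following sketch, where `generate_worksheet` and `judge_score` are hypothetical stand-ins for calls to the generator LM and the judge LM; neither is an API from the paper.

```python
# Hedged sketch of a generate-and-judge instruction optimization loop.
import random

def generate_worksheet(instruction: str) -> str:
    # Placeholder: a real system would prompt a generator LM here.
    return f"Worksheet written under the instruction: {instruction!r}"

def judge_score(worksheet: str) -> float:
    # Placeholder: a real system would ask a judge LM for a rating.
    return random.random()

candidate_instructions = [
    "Use a worked example before each practice problem.",
    "Interleave problem types within the worksheet.",
    "Add self-explanation prompts after each step.",
]

# Treat the judge's rating as a reward and keep the best instruction.
best = max(candidate_instructions,
           key=lambda ins: judge_score(generate_worksheet(ins)))
print("Selected instruction:", best)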
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Designing Optimal Behavioral Experiments Using Machine Learning [8.759299724881219]
We provide a tutorial on leveraging recent advances in BOED and machine learning to find optimal experiments for any kind of model.
We consider theories of how people balance exploration and exploitation in multi-armed bandit decision-making tasks.
Compared with experimental designs commonly used in the literature, we show that our optimal designs more efficiently determine which of a set of models best accounts for individual human behavior.
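A minimal sketch of the underlying idea, assuming two invented choice models and a uniform prior: score each candidate design by the expected information gain (the mutual information between model identity and one observed choice), and prefer the design that maximizes it.

```python
# Illustrative sketch of Bayesian optimal experimental design: pick the
# bandit design that best discriminates two hypothetical choice models.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Each model maps a design (reward-probability gap between two arms) to
# the probability that a participant picks the better arm.
def explorer(gap):  return 0.5 + 0.2 * gap   # explores heavily
def exploiter(gap): return 0.5 + 0.9 * gap   # exploits the better arm

for gap in np.linspace(0.0, 0.5, 6):         # candidate designs
    p1, p2 = explorer(gap), exploiter(gap)
    # Expected information gain about the model from one observed
    # choice, with a uniform prior over the two models.
    eig = entropy(0.5 * (p1 + p2)) - 0.5 * (entropy(p1) + entropy(p2))
    print(f"gap={gap:.1f}  EIG={eig:.4f} bits")
```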
arXiv Detail & Related papers (2023-05-12T18:24:30Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
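For reference, a linear bag-of-words classifier of this kind, along with the word coefficients that would be shown to participants, can be built in a few lines. The toy corpus below is invented; the study used genuine and fake hotel reviews.

```python
# Small sketch of a linear bag-of-words deception classifier whose
# per-word coefficients serve as the explanation shown to participants.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["the room was clean and the staff friendly",
           "amazing perfect wonderful best hotel ever ever",
           "quiet location and the breakfast was decent",
           "unbelievable flawless magical dream stay"]
labels = [0, 1, 0, 1]  # 0 = genuine, 1 = fake (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

# The per-word coefficients are the "explanation" participants saw.
for word, coef in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                         key=lambda t: -abs(t[1]))[:5]:
    print(f"{word:12s} {coef:+.2f}")
```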
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning [114.1830997893756]
This work focuses on learning a model to plan goal-directed actions in real-life videos.
We propose novel algorithms to model human behaviors through Bayesian Inference and model-based Imitation Learning.
arXiv Detail & Related papers (2021-10-05T01:06:53Z)
- A Survey of Human-in-the-loop for Machine Learning [7.056132067948671]
Human-in-the-loop aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience.
This survey intends to provide a high-level summarization for human-in-the-loop and motivates interested readers to consider approaches for designing effective human-in-the-loop solutions.
arXiv Detail & Related papers (2021-08-02T14:42:28Z)
- Human-Understandable Decision Making for Visual Recognition [30.30163407674527]
We propose a new framework to train a deep neural network by incorporating the prior of human perception into the model learning process.
The effectiveness of our proposed model is evaluated on two classical visual recognition tasks.
arXiv Detail & Related papers (2021-03-05T02:07:33Z)
- A Theoretical Approach for a Novel Model to Realizing Empathy [0.0]
This paper introduces a theoretical model that visualizes the process of realizing empathy.
The intended purpose of this proposed model is to create an initial blueprint that may be applicable to a range of disciplines.
arXiv Detail & Related papers (2020-09-03T17:21:49Z)
- A Competence-aware Curriculum for Visual Concepts Learning via Question Answering [95.35905804211698]
We propose a competence-aware curriculum for visual concept learning in a question-answering manner.
We design a neural-symbolic concept learner for learning the visual concepts and a multi-dimensional Item Response Theory (mIRT) model for guiding the learning process.
Experimental results on CLEVR show that with a competence-aware curriculum, the proposed method achieves state-of-the-art performances.
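A hedged sketch of competence-aware item selection, using a one-parameter Rasch model in place of the paper's multi-dimensional IRT; the item names, difficulties, and ability estimate are all invented for illustration.

```python
# Sketch: pick the next concept-learning item using a 1PL (Rasch) model.
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

items = {"count-objects": -1.0, "compare-colors": 0.0,
         "spatial-relation": 1.2, "logic-chain": 2.5}
ability = 0.3  # current competence estimate for the learner

# Choose the item whose success probability is closest to 50%: under
# the Rasch model this is the most informative item for this learner.
best = min(items, key=lambda k: abs(p_correct(ability, items[k]) - 0.5))
print("Next item:", best)
```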
arXiv Detail & Related papers (2020-07-03T05:08:09Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased in a human face classification task, unveiling the enormous potential of the proposed framework.
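A toy sketch of latent-space counterfactual search in this spirit appears below; a fixed linear map stands in for the trained GAN generator and a simple threshold rule for the audited classifier, both invented for illustration.

```python
# Toy latent-space counterfactual search: perturb the latent code until
# the classifier flips, keeping the flip closest to the original so the
# counterfactual remains plausible (near the data manifold).
import numpy as np

rng = np.random.default_rng(1)
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5], [-0.5, 0.5]])
generator = lambda z: W @ z                   # stand-in for a GAN
classifier = lambda x: float(x.sum() > 0.0)   # stand-in audited model

z = np.array([-0.5, -0.5])                    # latent code of original
target = 1.0 - classifier(generator(z))       # label we want to reach

best = None
for _ in range(2000):
    cand = z + rng.normal(scale=0.4, size=2)
    if classifier(generator(cand)) == target:
        if best is None or (np.linalg.norm(cand - z)
                            < np.linalg.norm(best - z)):
            best = cand
print("Counterfactual latent code:", best)
```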
arXiv Detail & Related papers (2020-03-25T11:08:56Z)