Variational Temporal IRT: Fast, Accurate, and Explainable Inference of
Dynamic Learner Proficiency
- URL: http://arxiv.org/abs/2311.08594v1
- Date: Tue, 14 Nov 2023 23:36:39 GMT
- Title: Variational Temporal IRT: Fast, Accurate, and Explainable Inference of
Dynamic Learner Proficiency
- Authors: Yunsung Kim, Sreechan Sankaranarayanan, Chris Piech, Candace Thille
- Abstract summary: We propose Variational Temporal IRT (VTIRT) for fast and accurate inference of learner proficiency.
VTIRT offers orders of magnitude speedup in inference runtime while still providing accurate inference.
When applied to 9 real student datasets, VTIRT consistently yields improvements in predicting future learner performance.
- Score: 5.715502630272047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Item Response Models extend the standard Item Response Theory (IRT)
to capture temporal dynamics in learner ability. While these models have the
potential to allow instructional systems to actively monitor the evolution of
learner proficiency in real time, existing dynamic item response models rely on
expensive inference algorithms that scale poorly to massive datasets. In this
work, we propose Variational Temporal IRT (VTIRT) for fast and accurate
inference of dynamic learner proficiency. VTIRT offers orders of magnitude
speedup in inference runtime while still providing accurate inference.
Moreover, the proposed algorithm is intrinsically interpretable by virtue of
its modular design. When applied to 9 real student datasets, VTIRT consistently
yields improvements in predicting future learner performance over other learner
proficiency models.
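To make the setup concrete, below is a minimal sketch of the kind of dynamic IRT generative process the abstract describes: a learner's latent proficiency drifts over time (here modeled as a Gaussian random walk) and each binary response follows a Rasch-style (1PL) item response model. The specific choices (random-walk dynamics, 1PL likelihood, the drift_scale and difficulty values) are illustrative assumptions, not the exact VTIRT model or its inference algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def irt_prob(theta, difficulty):
    """Rasch/1PL response probability: P(correct) = sigmoid(theta - b)."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# --- Illustrative dynamic IRT generative process (assumed, not the authors' exact model) ---
T = 20                                         # number of items answered over time
difficulties = rng.normal(0.0, 1.0, size=T)    # hypothetical item difficulties b_t
drift_scale = 0.3                              # std. dev. of the ability random walk (assumed)

theta = np.empty(T)
theta[0] = rng.normal(0.0, 1.0)                # initial proficiency theta_1 ~ N(0, 1)
for t in range(1, T):
    # Proficiency evolves as a Gaussian random walk: theta_t ~ N(theta_{t-1}, drift_scale^2)
    theta[t] = theta[t - 1] + rng.normal(0.0, drift_scale)

# Binary responses y_t ~ Bernoulli(sigmoid(theta_t - b_t))
responses = rng.binomial(1, irt_prob(theta, difficulties))
print(responses)
```

Per the abstract, VTIRT's contribution is fast, accurate, and interpretable approximate inference of the latent proficiency trajectory from observed responses; the sketch above only shows the generative side that such an inference procedure would invert.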
Related papers
- Temporal receptive field in dynamic graph learning: A comprehensive analysis [15.161255747900968]
We present a comprehensive analysis of the temporal receptive field in dynamic graph learning.
Our results demonstrate that an appropriately chosen temporal receptive field can significantly enhance model performance.
For some models, overly large windows may introduce noise and reduce accuracy.
arXiv Detail & Related papers (2024-07-17T07:46:53Z)
- Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z)
- Learning PDE Solution Operator for Continuous Modeling of Time-Series [1.39661494747879]
This work presents a partial differential equation (PDE)-based framework that improves dynamics modeling capability.
We propose a neural operator that can handle time continuously without requiring iterative operations or specific grids of temporal discretization.
Our framework opens up a new way for a continuous representation of neural networks that can be readily adopted for real-world applications.
arXiv Detail & Related papers (2023-02-02T03:47:52Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear, both theoretically and empirically, how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Learning Differential Operators for Interpretable Time Series Modeling [34.32259687441212]
We propose a learning framework that can automatically obtain interpretable PDE models from sequential data.
Our model can provide valuable interpretability and achieve comparable performance to state-of-the-art models.
arXiv Detail & Related papers (2022-09-03T20:14:31Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Variational Predictive Routing with Nested Subjective Timescales [1.6114012813668934]
We present Variational Predictive Routing (VPR) - a neural inference system that organizes latent video features in a temporal hierarchy.
We show that VPR is able to detect event boundaries, disentangle temporal features, adapt to the dynamics hierarchy of the data, and produce accurate time-agnostic rollouts of the future.
arXiv Detail & Related papers (2021-10-21T16:12:59Z)
- Physics-Coupled Spatio-Temporal Active Learning for Dynamical Systems [15.923190628643681]
One of the major challenges is to infer the underlying causes that generate the perceived data stream.
The success of machine-learning-based predictive models requires massive annotated data for model training.
Our experiments on both synthetic and real-world datasets exhibit that the proposed ST-PCNN with active learning converges to optimal accuracy with substantially fewer instances.
arXiv Detail & Related papers (2021-08-11T18:05:55Z)
- Meta-learning using privileged information for dynamics [66.32254395574994]
We extend the Neural ODE Process model to use additional information within the Learning Using Privileged Information setting.
We validate our extension with experiments showing improved accuracy and calibration on simulated dynamics tasks.
arXiv Detail & Related papers (2021-04-29T12:18:02Z)
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.