Modeling the EdNet Dataset with Logistic Regression
- URL: http://arxiv.org/abs/2105.08150v1
- Date: Mon, 17 May 2021 20:30:36 GMT
- Title: Modeling the EdNet Dataset with Logistic Regression
- Authors: Philip I. Pavlik Jr, Luke G. Eglington
- Abstract summary: We describe our experience with the competition from the perspective of educational data mining.
We discuss some basic results in the Kaggle system and our thoughts on how those results may have been improved.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many of these challenges are won by neural network models created by
full-time artificial intelligence scientists. Due to this origin, they have a
black-box character that makes their use and application less clear to learning
scientists. We describe our experience with the competition from the perspective of
educational data mining, a field founded in the learning sciences and connected
with roots in psychology and statistics. We describe our efforts from the
perspectives of learning scientists and the challenges to our methods, some
real and some imagined. We also discuss some basic results in the Kaggle system
and our thoughts on how those results may have been improved. Finally, we
describe how learner model predictions are used to make pedagogical decisions
for students. Their practical use entails a) model predictions and b) a
decision rule (based on the predictions). We point out that increased model
accuracy can be of limited practical utility, especially when paired with
simple decision rules, and we argue instead for the need to further investigate
optimal decision rules.
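To make the a) predictions / b) decision-rule pairing concrete, here is a minimal sketch, assuming invented practice-history features and an illustrative 0.95 mastery threshold (neither value comes from the paper): a logistic regression learner model estimates P(correct), and a simple threshold rule turns that estimate into a pedagogical decision.

```python
# Minimal sketch: a logistic-regression learner model (a) paired with a
# simple threshold decision rule (b). Feature names and the 0.95 mastery
# threshold are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy practice history: one row per (student, item) attempt.
# Columns: prior successes, prior failures, log(seconds since last attempt).
X = rng.normal(size=(500, 3))
y = (rng.random(500) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.6 * X[:, 1])))).astype(int)

# (a) Model predictions: fit and estimate P(correct) for new attempts.
model = LogisticRegression().fit(X, y)
p_correct = model.predict_proba(X[:5])[:, 1]

# (b) Decision rule: schedule more practice while predicted mastery
# is below a fixed threshold.
MASTERY_THRESHOLD = 0.95
for p in p_correct:
    print(f"P(correct) = {p:.2f} -> "
          f"{'practice again' if p < MASTERY_THRESHOLD else 'move on'}")
```

On the abstract's argument, further gains in the accuracy of `p_correct` may matter less in practice than the choice of `MASTERY_THRESHOLD` and the rule built around it.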
Related papers
- Leveraging Pedagogical Theories to Understand Student Learning Process with Graph-based Reasonable Knowledge Tracing [11.082908318943248]
We introduce GRKT, a graph-based reasonable knowledge tracing method, to address these issues.
We propose a fine-grained, psychologically grounded three-stage modeling process comprising knowledge retrieval, memory strengthening, and knowledge learning/forgetting.
arXiv Detail & Related papers (2024-06-07T10:14:30Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Evaluating the Explainers: Black-Box Explainable Machine Learning for Student Success Prediction in MOOCs [5.241055914181294]
We implement five state-of-the-art methodologies for explaining black-box machine learning models.
We examine the strengths of each approach on the downstream task of student performance prediction.
Our results lead to the concerning conclusion that the choice of explainer is an important decision (one candidate explainer is sketched below).
arXiv Detail & Related papers (2022-07-01T17:09:17Z)
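As a hedged illustration of the kind of post-hoc explainer such comparisons cover, this sketch applies scikit-learn's permutation importance to a black-box student-success classifier. The MOOC feature names and synthetic data are invented, and the paper's five compared explainers may differ from this one.

```python
# Sketch: one possible post-hoc explainer (permutation importance)
# applied to a black-box student-success model. Feature names and
# data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["videos_watched", "forum_posts", "quiz_mean", "days_active"]
X = rng.normal(size=(400, len(features)))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

black_box = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>15}: {imp:.3f}")
```

Because different explainers can rank the same features differently, which explainer is used is itself a consequential choice, as the paper concludes.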
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Interpretable Knowledge Tracing: Simple and Efficient Student Modeling with Causal Relations [21.74631969428855]
Interpretable Knowledge Tracing (IKT) is a simple model that relies on three meaningful latent features.
IKT predicts future student performance using a Tree-Augmented Naive Bayes (TAN) classifier (a simplified stand-in is sketched below).
IKT has great potential for providing adaptive and personalized instruction with causal reasoning in real-world educational systems.
arXiv Detail & Related papers (2021-12-15T19:05:48Z)
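Scikit-learn has no TAN implementation, so the following sketch substitutes a plain categorical Naive Bayes over three discretized latent features as a rough stand-in; the feature meanings and data are invented for illustration, not taken from IKT.

```python
# Sketch: predicting next-attempt correctness from three discrete latent
# features, using CategoricalNB as a plain-Bayes stand-in for IKT's
# Tree-Augmented Naive Bayes (TAN). Feature values are invented.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(0)
# Columns: skill-mastery level, student-ability bin, problem-difficulty bin,
# each discretized to {0, 1, 2}.
X = rng.integers(0, 3, size=(300, 3))
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.integers(-1, 2, size=300) > 0).astype(int)

tan_stand_in = CategoricalNB().fit(X, y)
print(tan_stand_in.predict_proba([[2, 1, 0]]))  # [P(incorrect), P(correct)]
```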
- Mixture of Linear Models Co-supervised by Deep Neural Networks [14.831346286039151]
We propose an approach to fill the gap between relatively simple explainable models and deep neural network (DNN) models.
Our main idea is a mixture of discriminative models trained with guidance from a DNN (sketched below).
arXiv Detail & Related papers (2021-08-05T02:08:35Z)
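As a hedged sketch of the general co-supervision idea (not the paper's actual algorithm), the snippet below trains a small MLP teacher, then fits one linear model per input region found by k-means, using the teacher's predictions as targets; the clustering gate and the toy target function are our own simplifications.

```python
# Sketch: a mixture of linear models, each owning a region of the input
# space, trained to mimic a DNN teacher's predictions. The k-means gate
# and MLP teacher are simplifications chosen for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2           # nonlinear ground truth

teacher = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(X, y)
soft_targets = teacher.predict(X)                   # DNN guidance

gate = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
experts = [LinearRegression().fit(X[gate.labels_ == k],
                                  soft_targets[gate.labels_ == k])
           for k in range(4)]

# Predict with the expert the gate selects: locally linear, so each
# region is explainable through its own coefficients.
x_new = np.array([[1.0, -0.5]])
k = gate.predict(x_new)[0]
print(experts[k].coef_, experts[k].predict(x_new))
```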
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores, for the first time, Bayesian deep learning on learners' text posts with two methods: Monte Carlo Dropout and Variational Inference (the former is sketched below).
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
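Monte Carlo Dropout is standard enough to sketch: keep dropout active at inference and read predictive uncertainty off repeated stochastic forward passes. The tiny urgent-post classifier below, including its bag-of-words input size, is an invented stand-in rather than the paper's architecture.

```python
# Sketch of Monte Carlo Dropout, one of the paper's two methods: keep
# dropout stochastic at test time and treat the spread of repeated
# forward passes as predictive uncertainty.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 2),                  # urgent vs. non-urgent post
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                      # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # predictive mean and uncertainty

x = torch.randn(1, 100)                # one (fake) featurized forum post
mean, std = mc_dropout_predict(model, x)
print(f"P(urgent) = {mean[0, 1].item():.2f} +/- {std[0, 1].item():.2f}")
```

High `std` flags posts the model is unsure about, which is exactly where instructor attention (or abstention) is most valuable.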
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- A Competence-aware Curriculum for Visual Concepts Learning via Question Answering [95.35905804211698]
We propose a competence-aware curriculum for visual concept learning in a question-answering manner.
We design a neural-symbolic concept learner for learning the visual concepts and a multi-dimensional Item Response Theory (mIRT) model for guiding the learning process.
Experimental results on CLEVR show that with a competence-aware curriculum, the proposed method achieves state-of-the-art performance (a basic IRT model is sketched below).
arXiv Detail & Related papers (2020-07-03T05:08:09Z)
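The mIRT component builds on Item Response Theory; as a reference point, here is the classical two-parameter logistic (2PL) IRT model that multidimensional IRT generalizes. The ability and difficulty values are illustrative only.

```python
# Sketch: the standard 2PL Item Response Theory model that mIRT
# generalizes; ability/difficulty values below are illustrative.
import numpy as np

def p_correct(ability, difficulty, discrimination=1.0):
    """2PL IRT: probability a learner answers an item correctly."""
    return 1.0 / (1.0 + np.exp(-discrimination * (ability - difficulty)))

# A competence-aware curriculum can present items whose difficulty
# tracks the learner's estimated ability, keeping P(correct) moderate.
for theta in (-1.0, 0.0, 1.0):
    print(f"ability={theta:+.1f}: P(correct on b=0 item) = "
          f"{p_correct(theta, difficulty=0.0):.2f}")
```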
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL) by introducing techniques from the rapidly growing field of explainable AI (XAI) into an active learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) and potential drawbacks (an anchoring effect on model judgments and added cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)