Evaluating the Explainers: Black-Box Explainable Machine Learning for
Student Success Prediction in MOOCs
- URL: http://arxiv.org/abs/2207.00551v1
- Date: Fri, 1 Jul 2022 17:09:17 GMT
- Authors: Vinitra Swamy, Bahar Radmehr, Natasa Krco, Mirko Marras, Tanja Käser
- Abstract summary: We implement five state-of-the-art methodologies for explaining black-box machine learning models.
We examine the strengths of each approach on the downstream task of student performance prediction.
Our results lead to the concerning conclusion that the choice of explainer is an important decision.
- Score: 5.241055914181294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are ubiquitous in applied machine learning for education.
Their pervasive success in predictive performance comes alongside a severe
weakness: the lack of explainability of their decisions, which is especially
relevant in human-centric fields. We implement five state-of-the-art methodologies for
explaining black-box machine learning models (LIME, PermutationSHAP,
KernelSHAP, DiCE, CEM) and examine the strengths of each approach on the
downstream task of student performance prediction for five massive open online
courses. Our experiments demonstrate that the families of explainers do not
agree with each other on feature importance for the same Bidirectional LSTM
models with the same representative set of students. We use Principal Component
Analysis, Jensen-Shannon distance, and Spearman's rank-order correlation to
quantitatively cross-examine explanations across methods and courses.
Furthermore, we validate explainer performance across curriculum-based
prerequisite relationships. Our results lead to the concerning conclusion that
the choice of explainer is an important decision: it is in fact paramount to
the interpretation of the predictive results, even more so than the course the
model is trained on. Source code and models are released at
http://github.com/epfl-ml4ed/evaluating-explainers.
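As an illustration of the cross-examination step, the sketch below compares two explainers' feature-importance scores for the same model using Spearman's rank-order correlation and the Jensen-Shannon distance. The importance vectors here are hypothetical placeholders; the actual pipeline is in the released repository above.

    # Minimal sketch with hypothetical importance scores, not the
    # authors' released pipeline.
    import numpy as np
    from scipy.stats import spearmanr
    from scipy.spatial.distance import jensenshannon

    # Hypothetical per-feature importances from two explainers
    # (e.g., LIME and KernelSHAP) over the same features.
    lime_scores = np.array([0.40, 0.25, 0.20, 0.10, 0.05])
    shap_scores = np.array([0.35, 0.10, 0.30, 0.15, 0.10])

    # Rank agreement: do the explainers order features the same way?
    rho, p_value = spearmanr(lime_scores, shap_scores)

    # Distributional agreement: normalize absolute importances into
    # probability vectors, then take the Jensen-Shannon distance.
    p = np.abs(lime_scores) / np.abs(lime_scores).sum()
    q = np.abs(shap_scores) / np.abs(shap_scores).sum()
    jsd = jensenshannon(p, q)

    print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
    print(f"Jensen-Shannon distance = {jsd:.3f}")

A low rho or a high Jensen-Shannon distance between explainers run on the same model and students reproduces, in miniature, the disagreement the paper reports.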
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
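A dataset-level view of explanations can be approximated by aggregating instance-level attributions; the sketch below is an illustrative simplification of that idea, not the SOXAI procedure itself.

    # Illustrative only: aggregate instance-level attributions
    # (n_instances x n_features) into dataset-level importance.
    import numpy as np

    def dataset_level_importance(attributions: np.ndarray) -> np.ndarray:
        """Mean absolute attribution per feature across the dataset."""
        return np.abs(attributions).mean(axis=0)

    # Hypothetical attributions for 1000 instances and 4 features.
    rng = np.random.default_rng(0)
    attributions = rng.normal(size=(1000, 4)) * np.array([1.0, 0.5, 0.05, 0.8])

    importance = dataset_level_importance(attributions)
    # Features with negligible global importance are candidates for
    # removal from the training set, in the spirit of SOXAI's insights.
    candidates = np.where(importance < 0.1 * importance.max())[0]
    print("global importance:", importance.round(3))
    print("pruning candidates (feature indices):", candidates)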
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
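For contrast, the sketch below shows the conventional per-instance minimisation that VCNet's amortized generator avoids: gradient search for a single counterfactual that flips the classifier's prediction while staying close to the original input. All names are illustrative, assuming a differentiable PyTorch classifier.

    # Illustrative per-instance counterfactual search (the kind of
    # extra optimisation VCNet does not need at test time).
    import torch
    import torch.nn.functional as F

    def counterfactual_search(model, x, target_class, steps=200,
                              lr=0.05, dist_weight=0.1):
        x_cf = x.clone().detach().requires_grad_(True)
        optimizer = torch.optim.Adam([x_cf], lr=lr)
        target = torch.tensor([target_class])
        for _ in range(steps):
            optimizer.zero_grad()
            logits = model(x_cf.unsqueeze(0))
            # Push the prediction toward the target class while
            # penalizing distance from the original input.
            loss = (F.cross_entropy(logits, target)
                    + dist_weight * torch.norm(x_cf - x))
            loss.backward()
            optimizer.step()
        return x_cf.detach()

VCNet instead trains the counterfactual generator jointly with the predictor, so producing a counterfactual at test time is a single forward pass.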
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design [5.725477071353353]
This work focuses on the context of online and blended learning and the use case of student success prediction models.
We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses.
We quantitatively compare the distances between the explanations across courses and methods.
We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators.
arXiv Detail & Related papers (2022-12-17T21:26:22Z)
- Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach [6.244905619201076]
In this work, model explanations are fed back to the feed-forward training to help the model generalize better.
The framework incorporates the custom weighted loss with Elastic Weight Consolidation (EWC) to maintain performance in sequential testing sets.
The proposed custom training procedure yields a consistent accuracy improvement of 0.5% to 1.5% across all phases of the incremental learning setup.
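The EWC component can be summarized in a few lines: a quadratic penalty anchors parameters that were important in earlier phases (as measured by the Fisher information) to their old values. A minimal sketch, with hypothetical fisher and old_params dictionaries standing in for quantities the framework would compute:

    # Minimal Elastic Weight Consolidation (EWC) penalty sketch;
    # the paper combines its custom weighted loss with a term like this.
    import torch

    def ewc_penalty(model, old_params, fisher, lam=100.0):
        # lam/2 * sum_i F_i * (theta_i - theta_i_old)^2
        penalty = torch.tensor(0.0)
        for name, param in model.named_parameters():
            penalty = penalty + (fisher[name]
                                 * (param - old_params[name]) ** 2).sum()
        return 0.5 * lam * penalty

    # Usage inside a training step on a new phase:
    #   loss = task_loss + ewc_penalty(model, old_params, fisher)
    #   loss.backward()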
arXiv Detail & Related papers (2022-11-02T18:16:17Z)
- Machine Learning in Sports: A Case Study on Using Explainable Models for Predicting Outcomes of Volleyball Matches [0.0]
This paper explores a two-phased Explainable Artificial Intelligence (XAI) approach to predicting outcomes of matches in the Brazilian volleyball league (SuperLiga).
In the first phase, we directly use the interpretable rule-based ML models that provide a global understanding of the model's behaviors.
In the second phase, we construct non-linear models such as Support Vector Machine (SVM) and Deep Neural Network (DNN) to obtain predictive performance on the volleyball matches' outcomes.
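The two phases map naturally onto standard tooling; below is a hedged sketch on synthetic data (not the SuperLiga features), with an interpretable rule model first and a higher-capacity model second.

    # Illustrative two-phase setup: interpretable rules first, then a
    # non-linear model for predictive performance.
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    # Phase 1: a shallow decision tree yields global, human-readable rules.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree))

    # Phase 2: a non-linear model (here an SVM) for predictive accuracy.
    svm = SVC(kernel="rbf").fit(X, y)
    print("SVM training accuracy:", svm.score(X, y))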
arXiv Detail & Related papers (2022-06-18T18:09:15Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
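For a linear bag-of-words model, the explanation shown to participants is essentially the per-token coefficient vector. A minimal sketch of how such coefficients are exposed (toy reviews and labels, not the study's data):

    # Sketch: per-token coefficients of a linear bag-of-words model,
    # the kind of explanation participants could inspect.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    reviews = ["clean room and friendly staff",
               "amazing stay would return",
               "best hotel ever totally perfect",
               "noisy room and dirty lobby"]
    labels = [1, 0, 0, 1]  # toy genuine(1) / fake(0) labels

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(reviews)
    clf = LogisticRegression().fit(X, labels)

    # Each coefficient weights one token toward genuine or fake.
    for token, coef in zip(vectorizer.get_feature_names_out(),
                           clf.coef_[0]):
        print(f"{token:10s} {coef:+.3f}")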
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
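Monte Carlo Dropout is simple to sketch: keep dropout active at inference and average several stochastic forward passes; the spread across passes serves as an uncertainty estimate. A minimal illustration with a hypothetical classifier (not the paper's model):

    # Minimal Monte Carlo Dropout sketch: dropout stays on at
    # inference; variance across passes estimates uncertainty.
    import torch

    def mc_dropout_predict(model, x, n_passes=50):
        model.train()  # keeps dropout layers stochastic
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=-1)
                                 for _ in range(n_passes)])
        return probs.mean(dim=0), probs.std(dim=0)

    # Usage: a high std on the "urgent" class for a post would flag
    # it for instructor review rather than an automatic decision.
    # mean, std = mc_dropout_predict(classifier, post_features)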
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
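A diversity-enforcing loss can be sketched as a penalty that pushes a set of candidate perturbations apart; the term below is an illustrative stand-in for the paper's loss, rewarding pairwise distance between latent perturbations.

    # Illustrative diversity term: encourage K latent perturbations
    # to stay mutually distinct (not the paper's exact loss).
    import torch

    def diversity_loss(perturbations):
        # perturbations: (K, latent_dim); lower loss = more diverse.
        dists = torch.cdist(perturbations, perturbations)  # (K, K)
        k = perturbations.shape[0]
        off_diag = dists[~torch.eye(k, dtype=torch.bool)]
        return -off_diag.mean()  # maximize mean pairwise distance

    z = torch.randn(5, 16, requires_grad=True)  # 5 candidate perturbations
    diversity_loss(z).backward()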
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- A framework for predicting, interpreting, and improving Learning Outcomes [0.0]
We develop an Embibe Score Quotient model (ESQ) to predict test scores based on observed academic, behavioral and test-taking features of a student.
ESQ can be used to predict the future scoring potential of a student as well as offer personalized learning nudges.
arXiv Detail & Related papers (2020-10-06T11:22:27Z)
- Adversarial Infidelity Learning for Model Interpretation [43.37354056251584]
We propose a Model-agnostic Effective Efficient Direct (MEED) instance-wise feature selection (IFS) framework for model interpretation.
Our framework mitigates concerns about sanity, shortcuts, model identifiability, and information transmission.
Our Adversarial Infidelity Learning (AIL) mechanism can help learn the desired conditional distribution between selected features and targets.
arXiv Detail & Related papers (2020-06-09T16:27:17Z)