Equitable Ability Estimation in Neurodivergent Student Populations with
Zero-Inflated Learner Models
- URL: http://arxiv.org/abs/2203.10170v2
- Date: Mon, 9 May 2022 12:21:16 GMT
- Title: Equitable Ability Estimation in Neurodivergent Student Populations with
Zero-Inflated Learner Models
- Authors: Niall Twomey, Sarah McMullan, Anat Elhalal, Rafael Poyiadzi, Luis
Vaquero
- Abstract summary: This paper attempts to model the relationships between context (delivery and response types) and performance of ND students with zero-inflated learner models.
This approach facilitates simulation of several expected ND behavioural traits, provides equitable ability estimates across all student groups from generated datasets, increases interpretability confidence, and can significantly increase the quality of learning opportunities for ND students.
- Score: 3.418206750929592
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: At present, the educational data mining community lacks many tools needed for
ensuring equitable ability estimation for Neurodivergent (ND) learners. On one
hand, most learner models are susceptible to under-estimating ND ability since
confounding contexts cannot be accounted for (e.g., consider dyslexia and
text-heavy assessments), and on the other, few (if any) existing datasets are
suited for appraising model and data bias in ND contexts. In this paper we
attempt to model the relationships between context (delivery and response
types) and performance of ND students with zero-inflated learner models. This
approach facilitates simulation of several expected ND behavioural traits,
provides equitable ability estimates across all student groups from generated
datasets, increases interpretability confidence, and can significantly increase
the quality of learning opportunities for ND students. Our approach
consistently outperforms baselines in our experiments and can also be applied
to many other learner modelling frameworks.
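The zero-inflated idea can be illustrated with a minimal sketch: mix a standard Rasch (1PL) item response model with a context-driven "zero" component that can suppress a response regardless of ability. The parameterisation below (the names `pi_zero`, `ability`, `difficulty`) is illustrative only and is not taken from the paper, whose actual model may be structured differently.

```python
import math

def sigmoid(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def p_correct(ability: float, difficulty: float, pi_zero: float) -> float:
    """Zero-inflated Rasch (1PL) probability of a correct response.

    pi_zero is the probability that the assessment context (delivery or
    response type) blocks the learner's response outright, independent of
    ability -- e.g. a text-heavy item given to a dyslexic learner.
    With pi_zero = 0 this reduces to the plain Rasch model.
    """
    return (1.0 - pi_zero) * sigmoid(ability - difficulty)

# A plain Rasch fit that ignores context attributes the whole drop in
# expected score to low ability, under-estimating the learner.
baseline = p_correct(0.0, 0.0, 0.0)    # ability matches difficulty
confounded = p_correct(0.0, 0.0, 0.4)  # same learner, blocking context
```

Under this sketch, estimating `pi_zero` per delivery/response type is what keeps the ability estimate equitable: the score drop is attributed to the confounding context rather than to the learner.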
Related papers
- Self-Regulated Data-Free Knowledge Amalgamation for Text Classification [9.169836450935724]
We develop a lightweight student network that can learn from multiple teacher models without accessing their original training data.
To accomplish this, we propose STRATANET, a modeling framework that produces text data tailored to each teacher.
We evaluate our method on three benchmark text classification datasets with varying labels or domains.
arXiv Detail & Related papers (2024-06-16T21:13:30Z)
- Neural Additive Models for Location Scale and Shape: A Framework for
Interpretable Neural Regression Beyond the Mean [1.0923877073891446]
Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks.
Despite this success, the inner workings of DNNs are often not transparent.
This lack of interpretability has led to increased research on inherently interpretable neural networks.
arXiv Detail & Related papers (2023-01-27T17:06:13Z)
- Near-Negative Distinction: Giving a Second Life to Human Evaluation
Datasets [95.4182455942628]
We propose Near-Negative Distinction (NND) that repurposes prior human annotations into NND tests.
In an NND test, an NLG model must place higher likelihood on a high-quality output candidate than on a near-negative candidate with a known error.
We show that NND achieves higher correlation with human judgments than standard NLG evaluation metrics.
arXiv Detail & Related papers (2022-05-13T20:02:53Z)
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph structured data (DFAD-GNN)
To be specific, our DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator derives training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Learning to be a Statistician: Learned Estimator for Number of Distinct
Values [54.629042119819744]
Estimating the number of distinct values (NDV) in a column is useful for many tasks in database systems.
In this work, we focus on how to derive accurate NDV estimations from random (online/offline) samples.
We propose to formulate the NDV estimation task in a supervised learning framework, and aim to learn a model as the estimator.
arXiv Detail & Related papers (2022-02-06T15:42:04Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need
in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Understanding Robustness in Teacher-Student Setting: A New Perspective [42.746182547068265]
Adversarial examples are inputs to machine learning models on which a bounded adversarial perturbation can mislead the models into arbitrarily incorrect predictions.
Extensive studies try to explain the existence of adversarial examples and provide ways to improve model robustness.
Our studies could shed light on future exploration of adversarial examples and on enhancing model robustness via principled data augmentation.
arXiv Detail & Related papers (2021-02-25T20:54:24Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.