Identifying Critical LMS Features for Predicting At-risk Students
- URL: http://arxiv.org/abs/2204.13700v1
- Date: Wed, 27 Apr 2022 22:43:45 GMT
- Title: Identifying Critical LMS Features for Predicting At-risk Students
- Authors: Ying Guo, Cengiz Gunay, Sairam Tangirala, David Kerven, Wei Jin, Jamye Curry Savage and Seungjin Lee
- Abstract summary: Learning management systems (LMSs) have become essential in higher education.
We present an additional use of the LMS, using its data logs to perform data analytics and identify academically at-risk students.
- Score: 4.718094586237028
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning management systems (LMSs) have become essential in higher education and play an important role in helping educational institutions promote student success. Traditionally, LMSs have been used by postsecondary institutions for administration, reporting, and delivery of educational content. In this paper, we present an additional use of the LMS: analyzing its data logs to identify academically at-risk students. The resulting data-driven insights would allow educational institutions and educators to develop and implement pedagogical interventions targeting academically at-risk students. We used anonymized data logs created by the Brightspace LMS during the fall 2019, spring 2020, and fall 2020 semesters at our college. Supervised machine learning algorithms were used to predict students' final course performance, and several algorithms performed well, with accuracy above 90%. The SHAP value method was used to assess the relative importance of the features used in the predictive models. Unsupervised learning was also used to group students into clusters based on similarities in their interaction/involvement with the LMS. In both supervised and unsupervised learning, we identified the same two most important features (Number_Of_Assignment_Submissions and Content_Completed). More importantly, our study lays a foundation and provides a framework for developing a real-time data-analytics metric that may be incorporated into an LMS.
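A rough sketch of the analysis pipeline described in the abstract (supervised prediction of final course performance from LMS activity features, SHAP-based feature ranking, and unsupervised clustering of students by LMS interaction) is given below. It is a minimal illustration, not the authors' code: apart from Number_Of_Assignment_Submissions and Content_Completed, which are named in the abstract, the column names, the At_Risk label, the input file, and the choice of RandomForestClassifier and KMeans are assumptions made for the example.

```python
# Hedged sketch: supervised at-risk prediction, SHAP feature ranking, and
# unsupervised clustering of students from LMS activity logs.
# Feature names other than the two cited in the abstract, the At_Risk label,
# the file name, and the model choices are illustrative assumptions.
import pandas as pd
import shap
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical anonymized LMS export (one row per student per course).
df = pd.read_csv("brightspace_logs.csv")

features = [
    "Number_Of_Assignment_Submissions",  # named in the abstract
    "Content_Completed",                 # named in the abstract
    "Number_Of_Logins",                  # assumed additional activity feature
    "Discussion_Posts",                  # assumed additional activity feature
]
X = df[features]
y = df["At_Risk"]  # assumed binary label derived from final course grade

# Supervised learning: predict final course performance (at-risk vs. not).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# SHAP values: rank the relative importance of the predictive features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=features)

# Unsupervised learning: group students by similarity of LMS interaction.
clusters = KMeans(n_clusters=3, random_state=42).fit_predict(
    StandardScaler().fit_transform(X)
)
df["Cluster"] = clusters
print(df.groupby("Cluster")[features].mean())
```

In practice, a pipeline of this shape could be re-run on each semester's anonymized Brightspace export, which is the kind of real-time data-analytics metric the abstract envisions incorporating into an LMS.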
Related papers
- Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset [94.13848736705575]
We introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms.
We apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels.
Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance.
arXiv Detail & Related papers (2024-11-05T23:26:10Z)
- Learning to Love Edge Cases in Formative Math Assessment: Using the AMMORE Dataset and Chain-of-Thought Prompting to Improve Grading Accuracy [0.0]
This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori.
We conduct two experiments to evaluate the use of large language models (LLM) for grading challenging student answers.
arXiv Detail & Related papers (2024-09-26T14:51:40Z)
- What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find CLIP pre-trained thereupon exhibits notable robustness to the data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z)
- Analyzing LLM Usage in an Advanced Computing Class in India [4.580708389528142]
This study examines the use of large language models (LLMs) by undergraduate and graduate students for programming assignments in advanced computing classes.
We conducted a comprehensive analysis involving 411 students from a Distributed Systems class at an Indian university.
arXiv Detail & Related papers (2024-04-06T12:06:56Z)
- Characterization of Large Language Model Development in the Datacenter [55.9909258342639]
Large Language Models (LLMs) have presented impressive performance across several transformative tasks.
However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs.
We present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme.
arXiv Detail & Related papers (2024-03-12T13:31:14Z)
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z)
- Enhancing E-Learning System Through Learning Management System (LMS) Technologies: Reshape The Learner Experience [0.0]
This e-learning system can address the following educational needs: chat, virtual classes, supportive resources for students, individual and group monitoring, and assessment, using the LMS at maximum efficiency.
arXiv Detail & Related papers (2023-09-01T02:19:08Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- A Machine Learning system to monitor student progress in educational institutes [0.0]
We propose a data-driven approach that makes use of Machine Learning techniques to generate a classifier called credit score.
The proposal to use credit score as progress indicator is well suited to be used in a Learning Management System.
arXiv Detail & Related papers (2022-11-02T08:24:08Z)
- Student-centric Model of Learning Management System Activity and Academic Performance: from Correlation to Causation [2.169383034643496]
In recent years, there has been a lot of interest in modeling students' digital traces in Learning Management Systems (LMS) to understand their learning behavior patterns.
This paper explores a student-centric analytical framework for LMS activity data that can provide not only correlational but causal insights mined from observational data.
We envision that those insights will provide convincing evidence for college student support groups to launch student-centered and targeted interventions.
arXiv Detail & Related papers (2022-10-27T14:08:25Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.