Identifying Critical LMS Features for Predicting At-risk Students
- URL: http://arxiv.org/abs/2204.13700v1
- Date: Wed, 27 Apr 2022 22:43:45 GMT
- Title: Identifying Critical LMS Features for Predicting At-risk Students
- Authors: Ying Guo, Cengiz Gunay, Sairam Tangirala, David Kerven, Wei Jin, Jamye
Curry Savage and Seungjin Lee
- Abstract summary: Learning management systems (LMSs) have become essential in higher education.
We present an additional use of the LMS: mining its data logs to perform data analytics and identify academically at-risk students.
- Score: 4.718094586237028
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning management systems (LMSs) have become essential in higher education
and play an important role in helping educational institutions to promote
student success. Traditionally, LMSs have been used by postsecondary
institutions in administration, reporting, and delivery of educational content.
In this paper, we present an additional use of the LMS: mining its data logs to
perform data analytics and identify academically at-risk students. The
data-driven insights would allow educational institutions and educators to
develop and implement pedagogical interventions targeting academically at-risk
students. We used anonymized data logs created by Brightspace LMS during fall
2019, spring 2020, and fall 2020 semesters at our college. Supervised machine
learning algorithms were used to predict the final course performance of
students, and several algorithms were found to perform well with accuracy above
90%. The SHAP value method was used to assess the relative importance of features
used in the predictive models. Unsupervised learning was also used to group
students into different clusters based on the similarities in their
interaction/involvement with the LMS. In both supervised and unsupervised
learning, we identified the two most important features
(Number_Of_Assignment_Submissions and Content_Completed). More importantly, our
study lays a foundation and provides a framework for developing a real-time
data analytics metric that may be incorporated into an LMS.
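The pipeline the abstract describes (train a supervised classifier on per-student LMS activity features, then rank feature importance) can be sketched in plain Python. Everything below is illustrative only: the synthetic data, the hand-rolled logistic regression (the paper evaluates several algorithms, not this one), and the standardized-weight importance proxy standing in for the SHAP values the paper actually uses. Only the two feature names come from the paper.

```python
import math
import random

FEATURES = ["Number_Of_Assignment_Submissions", "Content_Completed"]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def make_synthetic_students(n=200, seed=0):
    """Toy (features, at_risk) pairs: low LMS activity -> at risk (label 1)."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        subs = rng.randint(0, 20)      # assignments submitted
        content = rng.randint(0, 50)   # content items completed
        rows.append(((subs, content), 1 if subs + 0.4 * content < 12 else 0))
    return rows

def standardize(rows):
    n = len(rows)
    means = [sum(x[i] for x, _ in rows) / n for i in range(2)]
    stds = [max(1e-9, (sum((x[i] - means[i]) ** 2 for x, _ in rows) / n) ** 0.5)
            for i in range(2)]
    return means, stds

def train_logreg(rows, lr=0.05, epochs=300):
    """Plain SGD logistic regression, no external libraries."""
    means, stds = standardize(rows)
    w, b = [0.0, 0.0], 0.0
    scaled = [([(x[i] - means[i]) / stds[i] for i in range(2)], y)
              for x, y in rows]
    for _ in range(epochs):
        for x, y in scaled:
            g = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
            w = [w[i] - lr * g * x[i] for i in range(2)]
            b -= lr * g
    return w, b, means, stds

def predict(model, x):
    w, b, means, stds = model
    z = sum(w[i] * (x[i] - means[i]) / stds[i] for i in range(2)) + b
    return 1 if sigmoid(z) >= 0.5 else 0

rows = make_synthetic_students()
model = train_logreg(rows)
accuracy = sum(predict(model, x) == y for x, y in rows) / len(rows)
# With standardized inputs, |weight| is a crude importance proxy
# (the paper itself uses SHAP values for this step).
importance = {name: abs(wi) for name, wi in zip(FEATURES, model[0])}
print(f"training accuracy: {accuracy:.2f}")
```

A real replication would substitute a tree-based model plus `shap` on the actual Brightspace logs; the sketch only shows the shape of the predict-then-explain workflow.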
Related papers
- Meta-Statistical Learning: Supervised Learning of Statistical Inference [59.463430294611626]
This work demonstrates that the tools and principles driving the success of large language models (LLMs) can be repurposed to tackle distribution-level tasks.
We propose meta-statistical learning, a framework inspired by multi-instance learning that reformulates statistical inference tasks as supervised learning problems.
arXiv Detail & Related papers (2025-02-17T18:04:39Z)
- From Selection to Generation: A Survey of LLM-based Active Learning [153.8110509961261]
Large Language Models (LLMs) have been employed for generating entirely new data instances and providing more cost-effective annotations.
This survey aims to serve as an up-to-date resource for researchers and practitioners seeking to gain an intuitive understanding of LLM-based AL techniques.
arXiv Detail & Related papers (2025-02-17T12:58:17Z)
- LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS.
It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset.
GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z)
- LLM-SEM: A Sentiment-Based Student Engagement Metric Using LLMs for E-Learning Platforms [0.0]
LLM-SEM (Language Model-Based Student Engagement Metric) is a novel approach that leverages video metadata and sentiment analysis of student comments to measure engagement.
We generate high-quality sentiment predictions to mitigate text fuzziness and normalize key features such as views and likes.
Our holistic method combines comprehensive metadata with sentiment polarity scores to gauge engagement at both the course and lesson levels.
arXiv Detail & Related papers (2024-12-18T12:01:53Z)
- Scalable Early Childhood Reading Performance Prediction [5.413138072912236]
There are no suitable publicly available educational datasets for modeling and predicting future reading performance.
In this work, we introduce the Enhanced Core Reading Instruction (ECRI) dataset.
We leverage the dataset to empirically evaluate the ability of state-of-the-art machine learning models to recognize early childhood educational patterns.
arXiv Detail & Related papers (2024-12-05T18:59:50Z)
- Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset [94.13848736705575]
We introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms.
We apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels.
Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance.
arXiv Detail & Related papers (2024-11-05T23:26:10Z)
- Analyzing LLM Usage in an Advanced Computing Class in India [4.580708389528142]
This study examines the use of large language models (LLMs) by undergraduate and graduate students for programming assignments in advanced computing classes.
We conducted a comprehensive analysis involving 411 students from a Distributed Systems class at an Indian university.
arXiv Detail & Related papers (2024-04-06T12:06:56Z)
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z)
- Enhancing E-Learning System Through Learning Management System (LMS) Technologies: Reshape The Learner Experience [0.0]
This e-learning system can accommodate a range of educational needs: chat, virtual classes, supportive resources for students, individual and group monitoring, and assessment, using the LMS at maximum efficiency.
arXiv Detail & Related papers (2023-09-01T02:19:08Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Student-centric Model of Learning Management System Activity and Academic Performance: from Correlation to Causation [2.169383034643496]
In recent years, there has been considerable interest in modeling students' digital traces in Learning Management Systems (LMSs) to understand students' learning behavior patterns.
This paper explores a student-centric analytical framework for LMS activity data that can provide not only correlational but causal insights mined from observational data.
We envision that those insights will provide convincing evidence for college student support groups to launch student-centered and targeted interventions.
arXiv Detail & Related papers (2022-10-27T14:08:25Z)
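The LLM-SEM entry above describes normalizing metadata features such as views and likes and combining them with sentiment polarity scores. A minimal sketch of such a combined score follows; the min-max normalization, the fixed weights, and the lesson records are illustrative assumptions, not the paper's actual formula.

```python
# Hypothetical engagement score: normalized metadata + sentiment polarity.

def min_max(values):
    """Scale a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1
    return [(v - lo) / span for v in values]

def engagement_scores(lessons, w_views=0.3, w_likes=0.3, w_sent=0.4):
    """Weighted combination of normalized views, likes, and sentiment."""
    views = min_max([l["views"] for l in lessons])
    likes = min_max([l["likes"] for l in lessons])
    # sentiment polarity assumed already in [-1, 1]; rescale to [0, 1]
    sent = [(l["sentiment"] + 1) / 2 for l in lessons]
    return [w_views * v + w_likes * k + w_sent * s
            for v, k, s in zip(views, likes, sent)]

lessons = [
    {"views": 1200, "likes": 90, "sentiment": 0.6},
    {"views": 300, "likes": 10, "sentiment": -0.2},
    {"views": 800, "likes": 55, "sentiment": 0.1},
]
scores = engagement_scores(lessons)
print([round(s, 3) for s in scores])
```

With the weights summing to 1 and every component scaled to [0, 1], the resulting score also lies in [0, 1], which makes lesson-level and course-level comparisons straightforward.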
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.