Insights into undergraduate pathways using course load analytics
- URL: http://arxiv.org/abs/2212.09974v1
- Date: Tue, 20 Dec 2022 03:28:41 GMT
- Title: Insights into undergraduate pathways using course load analytics
- Authors: Conrad Borchers and Zachary A. Pardos
- Abstract summary: We produce and evaluate the first machine-learned predictions of student course load ratings.
Students who maintain a semester load that is low as measured by credit hours but high as measured by CLA are more likely to leave their program of study.
- Score: 5.2432156904895155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Course load analytics (CLA) inferred from LMS and enrollment features can
offer a more accurate representation of course workload to students than credit
hours and potentially aid in their course selection decisions. In this study,
we produce and evaluate the first machine-learned predictions of student course
load ratings and generalize our model to the full 10,000 course catalog of a
large public university. We then retrospectively analyze longitudinal
differences in the semester load of student course selections throughout their
degree. CLA by semester shows that a student's first semester at the university
is among their highest load semesters, as opposed to a credit hour-based
analysis, which would indicate it is among their lowest. Investigating what
role predicted course load may play in program retention, we find that students
who maintain a semester load that is low as measured by credit hours but high
as measured by CLA are more likely to leave their program of study. This
discrepancy in course load is particularly pertinent in STEM and associated
with courses that carry heavy prerequisite requirements. Our findings have implications for academic
advising, institutional handling of the freshman experience, and student-facing
analytics to help students better plan, anticipate, and prepare for their
selected courses.
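The retention signal described in the abstract (semesters that look light in credit hours but heavy under CLA) can be sketched in a few lines. A minimal illustration, assuming invented thresholds and toy data; the paper's actual model predicts CLA ratings from LMS and enrollment features:

```python
# Sketch of the credit-hour vs. CLA load comparison from the abstract.
# The cutoffs and the semester records are illustrative assumptions,
# not values from the study.

def flag_discrepant_semesters(semesters, credit_cutoff=13, cla_cutoff=3.5):
    """Return semesters whose load is low in credit hours but high in
    predicted CLA, the pattern the study associates with attrition."""
    return [
        s["term"]
        for s in semesters
        if s["credit_hours"] <= credit_cutoff and s["cla"] >= cla_cutoff
    ]

semesters = [
    {"term": "Fall 1", "credit_hours": 12, "cla": 4.1},   # low credits, high CLA
    {"term": "Spring 1", "credit_hours": 15, "cla": 3.0},
    {"term": "Fall 2", "credit_hours": 13, "cla": 3.8},   # low credits, high CLA
]

print(flag_discrepant_semesters(semesters))  # → ['Fall 1', 'Fall 2']
```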
Related papers
- Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors [74.04775677110179]
In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs).
In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt.
Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead.
arXiv Detail & Related papers (2024-10-17T17:16:00Z)
- Students Success Modeling: Most Important Factors [0.47829670123819784]
The model identifies students likely to graduate, those likely to transfer to a different school, and those likely to drop out without finishing their higher education.
Our experiments demonstrate that distinguishing likely graduates from at-risk students is achievable at the earliest stages.
The model is notably accurate at predicting outcomes for students who remain enrolled for three years.
arXiv Detail & Related papers (2023-09-06T19:23:10Z)
- Impacts of Students Academic Performance Trajectories on Final Academic Success [0.0]
We apply a Hidden Markov Model (HMM) to provide a standard and intuitive classification over students' academic-performance levels.
Based on student transcript data from University of Central Florida, our proposed HMM is trained using sequences of students' course grades for each semester.
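The HMM classification described above can be illustrated with standard Viterbi decoding over grade sequences. A toy sketch: the two latent performance levels, the grade alphabet, and every probability below are invented assumptions, not the trained UCF model.

```python
# Minimal Viterbi decoding for a two-state HMM over letter-grade
# observations. All parameters here are toy assumptions.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("high", "low")                       # latent performance levels
start_p = {"high": 0.6, "low": 0.4}
trans_p = {"high": {"high": 0.8, "low": 0.2},
           "low":  {"high": 0.3, "low": 0.7}}
emit_p = {"high": {"A": 0.6, "B": 0.3, "C": 0.1},
          "low":  {"A": 0.1, "B": 0.3, "C": 0.6}}

# Decode a student's per-semester grades into performance levels.
print(viterbi(["A", "B", "C", "C"], states, start_p, trans_p, emit_p))
# → ['high', 'high', 'low', 'low']
```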
arXiv Detail & Related papers (2022-01-21T15:32:35Z)
- Identifying Hubs in Undergraduate Course Networks Based on Scaled Co-Enrollments: Extended Version [2.0796330979420836]
This study uses undergraduate student enrollment data to form networks of courses where connections are based on student co-enrollments.
The networks are analyzed to identify "hub" courses often taken with many other courses.
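A toy sketch of this hub analysis: build a course graph where an edge links two courses a student took together, then rank courses by weighted degree. The enrollment records are invented, and plain degree is a simplification of the paper's scaled co-enrollment measure.

```python
# Build a co-enrollment course network and rank courses by weighted
# degree to find "hub" courses. Data below is invented for illustration.
from itertools import combinations
from collections import Counter

enrollments = {                       # student -> courses taken
    "s1": ["MATH1", "CS1", "PHYS1"],
    "s2": ["MATH1", "CS1"],
    "s3": ["MATH1", "HIST1"],
}

edge_weights = Counter()
for courses in enrollments.values():
    for a, b in combinations(sorted(courses), 2):
        edge_weights[(a, b)] += 1     # co-enrollment count per course pair

degree = Counter()                    # weighted degree per course
for (a, b), w in edge_weights.items():
    degree[a] += w
    degree[b] += w

print(degree.most_common(1))          # → [('MATH1', 4)]: MATH1 is the hub
```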
arXiv Detail & Related papers (2021-04-27T16:26:29Z)
- Interleaving Computational and Inferential Thinking: Data Science for Undergraduates at Berkeley [81.01051375191828]
The undergraduate data science curriculum at the University of California, Berkeley is anchored in five new courses.
These courses emphasize computational thinking, inferential thinking, and working on real-world problems.
These courses have become some of the most popular on campus and have led to a surging interest in a new undergraduate major and minor program in data science.
arXiv Detail & Related papers (2021-02-13T22:51:24Z)
- What's the worth of having a single CS teacher program aimed at teachers with heterogeneous profiles? [68.8204255655161]
We discuss the results of a 400-hour teacher training program conducted in Argentina aimed at K-12 teachers with no Computer Science background.
Our research aims at understanding whether a single teacher training program can be effective in teaching CS contents and specific pedagogy to teachers with very heterogeneous profiles.
arXiv Detail & Related papers (2020-11-09T15:03:31Z)
- Using a Binary Classification Model to Predict the Likelihood of Enrolment to the Undergraduate Program of a Philippine University [0.0]
This study covered an analysis of various characteristics of freshmen applicants affecting their admission status in a Philippine university.
A predictive model was developed using Logistic Regression to estimate the probability that an admitted student will proceed to enroll in the institution.
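The logistic regression approach summarized above can be sketched end to end in pure Python. The features (an exam percentile and a scaled distance from campus), the data, and the training loop are illustrative assumptions, not the study's model.

```python
# Hedged sketch of logistic-regression enrollment prediction: fit
# weights by stochastic gradient descent on log loss. Features and
# data are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit weights (plus a bias term) by gradient descent on log loss."""
    w = [0.0] * (len(X[0]) + 1)              # last weight is the bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi + [1.0])))
            for j, xj in enumerate(xi + [1.0]):
                w[j] -= lr * (p - yi) * xj
    return w

def predict(w, xi):
    """Probability that the applicant enrolls."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi + [1.0])))

# toy features: [entrance-exam percentile, distance-from-campus (scaled)]
X = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.9], [0.2, 0.8]]
y = [1, 1, 0, 0]                             # 1 = enrolled
w = train(X, y)
print(predict(w, [0.85, 0.15]) > 0.5)        # True: likely to enroll
```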
arXiv Detail & Related papers (2020-10-26T06:58:03Z)
- A Survey on Curriculum Learning [48.36129047271622]
Curriculum learning (CL) is a training strategy that trains a machine learning model from easier data to harder data.
As an easy-to-use plug-in, the CL strategy has demonstrated its power in improving the generalization capacity and convergence rate of various models.
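The easy-to-hard idea can be shown as a data scheduler: sort examples by a difficulty score and release them to the learner in growing stages. The word-length difficulty proxy and the stage schedule below are invented for illustration.

```python
# Minimal curriculum-learning scheduler: order training examples from
# easy to hard and yield cumulative training sets per stage.

def curriculum_batches(samples, difficulty, stages=3):
    """Yield cumulative training sets, easiest examples first."""
    ordered = sorted(samples, key=difficulty)
    step = -(-len(ordered) // stages)        # ceiling division
    for end in range(step, len(ordered) + step, step):
        yield ordered[:min(end, len(ordered))]   # each stage adds harder items

# toy corpus with word length as a stand-in difficulty score
samples = ["cat", "onomatopoeia", "dog", "sesquipedalian", "sun"]
for batch in curriculum_batches(samples, difficulty=len):
    print(batch)
```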
arXiv Detail & Related papers (2020-10-25T17:15:04Z)
- Counterfactual Representation Learning with Balancing Weights [74.67296491574318]
Key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.
Recent literature has explored representation learning to achieve this goal.
We develop an algorithm for flexible, scalable and accurate estimation of causal effects.
arXiv Detail & Related papers (2020-10-23T19:06:03Z)
- Predicting MOOCs Dropout Using Only Two Easily Obtainable Features from the First Week's Activities [56.1344233010643]
Several features are thought to contribute to learner attrition or loss of interest, which may lead to disengagement or complete dropout.
This study aims to predict dropout early-on, from the first week, by comparing several machine-learning approaches.
arXiv Detail & Related papers (2020-08-12T10:44:49Z)
- Context-aware Non-linear and Neural Attentive Knowledge-based Models for Grade Prediction [12.592903558338444]
Grade prediction for future courses not yet taken by students is important as it can help them and their advisers during the process of course selection.
One of the successful approaches for accurately predicting a student's grades in future courses is Cumulative Knowledge-based Regression Models (CKRM).
CKRM learns shallow linear models that predict a student's grades as the similarity between his/her knowledge state and the target course.
We propose context-aware non-linear and neural attentive models that can potentially better estimate a student's knowledge state from his/her prior course information.
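The CKRM idea summarized above, scoring a future course by the similarity between an accumulated knowledge state and the course's content, can be reduced to a toy sketch. The vectors, the three-topic space, and the dot-product similarity are illustrative assumptions.

```python
# Toy version of the knowledge-state similarity idea: score each
# candidate course by the dot product between the student's knowledge
# state and the course's content vector. All vectors are invented.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# knowledge state accumulated from prior courses (toy 3-topic space)
knowledge_state = [0.8, 0.4, 0.1]
courses = {"Algorithms": [0.9, 0.3, 0.0], "Databases": [0.1, 0.2, 0.9]}

scores = {c: dot(knowledge_state, v) for c, v in courses.items()}
print(max(scores, key=scores.get))  # → Algorithms (highest predicted fit)
```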
arXiv Detail & Related papers (2020-03-09T20:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.