Evaluation of Machine Learning Models in Student Academic Performance Prediction
- URL: http://arxiv.org/abs/2506.08047v1
- Date: Sun, 08 Jun 2025 13:33:49 GMT
- Title: Evaluation of Machine Learning Models in Student Academic Performance Prediction
- Authors: A. G. R. Sandeepa, Sanka Mohottala
- Abstract summary: This research investigates the use of machine learning methods to forecast students' academic performance in a school setting. Student data with behavioral, academic, and demographic details were used in implementations of standard classical machine learning models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This research investigates the use of machine learning methods to forecast students' academic performance in a school setting. Student data with behavioral, academic, and demographic details were used in implementations of standard classical machine learning models, including a multi-layer perceptron classifier (MLPC). MLPC obtained the maximum test-set accuracy of 86.46% across all implementations. Under 10-fold cross-validation, MLPC obtained an average test-set accuracy of 79.58%, while the average train-set accuracy was 99.65%. The MLPC's better performance over the other machine learning models strongly suggests the potential of neural networks as data-efficient models. The feature selection approach played a crucial role in improving performance, and multiple evaluation approaches were used to allow comparison with the existing literature. Explainable machine learning methods were utilized to demystify the black-box models and to validate the feature selection approach.
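As a rough illustration of the evaluation protocol described above (10-fold cross-validation of an MLP classifier, reporting both train and test accuracy), the sketch below uses scikit-learn. The file name, column names, preprocessing, and MLP hyperparameters are assumptions for illustration; the paper's exact configuration is not given in this summary.

```python
# Minimal sketch: 10-fold cross-validation of an MLP classifier on a tabular
# student-performance dataset. File name, column names, preprocessing, and
# hyperparameters are illustrative assumptions, not the paper's exact setup.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Hypothetical data file and target column.
df = pd.read_csv("students.csv")
X = pd.get_dummies(df.drop(columns=["grade_class"]))  # one-hot encode categoricals
y = df["grade_class"]

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=42),
)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_validate(model, X, y, cv=cv, scoring="accuracy",
                        return_train_score=True)

print(f"mean test accuracy:  {scores['test_score'].mean():.4f}")
print(f"mean train accuracy: {scores['train_score'].mean():.4f}")
```

For the explainability step mentioned in the abstract, permutation importance (sklearn.inspection.permutation_importance) or SHAP values are common choices, though the specific method used in the paper is not stated in this summary.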
Related papers
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z) - Machine Learning-Based Prediction of Metal-Organic Framework Materials: A Comparative Analysis of Multiple Models [2.089191490381739]
Metal-organic frameworks (MOFs) have emerged as promising materials for various applications. This study presents a comprehensive investigation of machine learning approaches for predicting MOF material properties.
arXiv Detail & Related papers (2025-07-06T18:10:00Z) - A Hybrid Model for Few-Shot Text Classification Using Transfer and Meta-Learning [0.0]
This paper proposes a few-shot text classification model based on transfer learning and meta-learning. Under few-sample and medium-sample conditions, the model significantly outperforms traditional machine learning and deep learning methods.
arXiv Detail & Related papers (2025-02-13T09:00:32Z) - Evaluating Sample Utility for Efficient Data Selection by Mimicking Model Weights [11.237906163959908]
Multimodal models are trained on large-scale web-crawled datasets. These datasets often contain noise, bias, and irrelevant information. We propose an efficient, model-based approach using the Mimic Score.
arXiv Detail & Related papers (2025-01-12T04:28:14Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the effect of a small "forget set" of training data on a pre-trained machine learning model -- has recently attracted interest.
Recent research shows, however, that existing machine unlearning techniques do not hold up in more challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Large Language Model Enhanced Machine Learning Estimators for Classification [24.391150322835713]
Pre-trained large language models (LLMs) have emerged as a powerful tool for simulating various scenarios.
We propose several approaches to integrate an LLM into a classical machine learning estimator to further enhance prediction performance.
arXiv Detail & Related papers (2024-05-08T22:28:57Z) - A Mixture of Exemplars Approach for Efficient Out-of-Distribution Detection with Foundation Models [0.0]
This paper presents an efficient approach to tackling OOD detection that is designed to maximise the benefit of training with a high-quality, frozen, pretrained foundation model. MoLAR provides strong OOD performance when only comparing the similarity of OOD examples to the exemplars, a small set of images chosen to be representative of the dataset.
arXiv Detail & Related papers (2023-11-28T06:12:28Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [65.57123249246358]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT. On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt. On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Studying Drowsiness Detection Performance while Driving through Scalable Machine Learning Models using Electroencephalography [0.0]
Driver drowsiness is one of the leading causes of traffic accidents.
Brain-Computer Interfaces (BCIs) and Machine Learning (ML) have enabled the detection of drivers' drowsiness.
This work presents an intelligent framework employing BCIs and features based on electroencephalography for detecting drowsiness in driving scenarios.
arXiv Detail & Related papers (2022-09-08T22:14:33Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth-order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses (a minimal zeroth-order gradient sketch follows this list).
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
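The BAR entry above relies on zeroth-order optimization, i.e. estimating gradients from input-output queries alone. Below is a minimal, generic sketch of a two-point zeroth-order gradient estimator applied to a toy quadratic loss; the loss function, step size, and query budget are illustrative assumptions, not BAR's actual objective or multi-label mapping.

```python
# Generic two-point zeroth-order gradient estimate, usable in black-box
# settings where only function values (e.g. a model's input-output loss)
# are available. The quadratic loss below is a toy stand-in.
import numpy as np

def zeroth_order_grad(f, x, mu=1e-3, n_dirs=50, rng=None):
    """Estimate grad f(x) by averaging two-point finite differences
    along random Gaussian directions."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / n_dirs

# Toy usage: minimise a quadratic with gradient descent on the estimate.
f = lambda x: float(np.sum((x - 1.0) ** 2))
x = np.zeros(5)
for step in range(200):
    x -= 0.1 * zeroth_order_grad(f, x)
print(np.round(x, 3))  # approaches the minimiser [1, 1, 1, 1, 1]
```

The same estimator plugs into any gradient-style update loop, which is what makes this family of methods usable when only black-box model queries are available.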