Predicting students' learning styles using regression techniques
- URL: http://arxiv.org/abs/2209.12691v1
- Date: Mon, 12 Sep 2022 16:04:51 GMT
- Title: Predicting students' learning styles using regression techniques
- Authors: Ahmad Mousa Altamimi, Mohammad Azzeh, Mahmoud Albashayreh
- Abstract summary: Online learning requires a personalization method because the interaction between learners and instructors is minimal.
One of the personalization methods is detecting the learners' learning style.
Current detection models become ineffective when learners have no dominant style or a mix of learning styles.
- Score: 0.4125187280299248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional learning systems have responded quickly to the COVID pandemic and
moved to online or distance learning. Online learning requires a
personalization method because the interaction between learners and instructors
is minimal, and learners have a specific learning method that works best for
them. One of the personalization methods is detecting the learners' learning
style. To detect learning styles, several works have been proposed using
classification techniques. However, the current detection models become
ineffective when learners have no dominant style or a mix of learning styles.
Thus, the objective of this study is twofold: first, to construct a
prediction model based on regression analysis that provides a probabilistic
approach for inferring the preferred learning style; second, to compare
regression models and classification models for detecting learning styles. To
ground our conceptual model, a set of machine learning algorithms was
implemented on a dataset collected from a sample of 72 students using the
visual, auditory, reading/writing, and kinesthetic (VARK) inventory
questionnaire. Results show that regression techniques are more accurate and
more representative of real-world scenarios than classification algorithms,
since students might have multiple learning styles with different
probabilities. We believe that this research will help educational institutes
incorporate learning styles into the teaching process.
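The core distinction the abstract draws can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' dataset or models: the raw VARK scores and both helper functions below are assumptions made for the example. A classifier emits one hard label, while a regression-style view keeps a probability per style, so a learner with no dominant style is still represented faithfully.

```python
# Sketch: hard "classification" label vs. probabilistic "regression" view
# of VARK questionnaire scores. Scores and functions are illustrative
# assumptions, not the paper's actual dataset or models.

def vark_probabilities(scores):
    """Regression-style view: normalize raw per-style scores into a
    probability distribution over the four VARK styles."""
    total = sum(scores.values())
    return {style: s / total for style, s in scores.items()}

def dominant_style(scores):
    """Classification-style view: a single hard label, which discards
    information when a learner has a mix of styles."""
    return max(scores, key=scores.get)

# Hypothetical learner with no clearly dominant style.
raw = {"visual": 7, "auditory": 6, "reading/writing": 6, "kinesthetic": 5}

label = dominant_style(raw)      # hard label: "visual"
probs = vark_probabilities(raw)  # visual 7/24, auditory 6/24, ...
```

Note how the hard label hides that "auditory" and "reading/writing" are nearly as strong as "visual"; the probability vector preserves that mix, which is the abstract's argument for regression.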
Related papers
- Preview-based Category Contrastive Learning for Knowledge Distillation [53.551002781828146]
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD)
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z)
- Learning Style Identification Using Semi-Supervised Self-Taught Labeling [0.0]
Education must be adaptable to sudden changes and disruptions caused by events like pandemics, war, and natural disasters related to climate change.
While learning management systems support teachers' productivity and creativity, they typically provide the same content to all learners in a course.
We propose a semi-supervised machine learning approach that detects students' learning styles using a data mining technique.
arXiv Detail & Related papers (2024-02-04T11:56:49Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU)
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Continual Learning in Open-vocabulary Classification with Complementary Memory Systems [19.337633598158778]
We introduce a method for flexible and efficient continual learning in open-vocabulary image classification.
We combine predictions from a CLIP zero-shot model and the exemplar-based model, using the zero-shot estimated probability that a sample's class is within the exemplar classes.
We also propose a "tree probe" method, an adaptation of lazy learning principles, which enables fast learning from new examples with accuracy competitive with batch-trained linear models.
arXiv Detail & Related papers (2023-07-04T01:47:34Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning [0.0]
We introduce a flexible and scalable approach towards the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Interleaving Learning, with Application to Neural Architecture Search [12.317568257671427]
We propose a novel machine learning framework referred to as interleaving learning (IL)
In our framework, a set of models collaboratively learn a data encoder in an interleaving fashion.
We apply interleaving learning to search neural architectures for image classification on CIFAR-10, CIFAR-100, and ImageNet.
arXiv Detail & Related papers (2021-03-12T00:54:22Z)
- Learning to Reweight with Deep Interactions [104.68509759134878]
We propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model.
Experiments on image classification with clean/noisy labels and on neural machine translation empirically demonstrate that our algorithm makes significant improvements over previous methods.
arXiv Detail & Related papers (2020-07-09T09:06:31Z)
- Deep Reinforcement Learning for Adaptive Learning Systems [4.8685842576962095]
We formulate the problem of finding an individualized learning plan based on the learner's latent traits.
We apply a model-free deep reinforcement learning algorithm that can effectively find the optimal learning policy.
We also develop a transition model estimator that emulates the learner's learning process using neural networks.
arXiv Detail & Related papers (2020-04-17T18:04:03Z)
- Three Approaches for Personalization with Applications to Federated Learning [68.19709953755238]
We present a systematic learning-theoretic study of personalization.
We provide learning-theoretic guarantees and efficient algorithms for which we also demonstrate the performance.
All of our algorithms are model-agnostic and work for any hypothesis class.
arXiv Detail & Related papers (2020-02-25T01:36:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.