Analyzing Student Strategies In Blended Courses Using Clickstream Data
- URL: http://arxiv.org/abs/2006.00421v1
- Date: Sun, 31 May 2020 03:01:00 GMT
- Title: Analyzing Student Strategies In Blended Courses Using Clickstream Data
- Authors: Nil-Jana Akpinar, Aaditya Ramdas, Umut Acar
- Abstract summary: We use pattern mining and models borrowed from Natural Language Processing to understand student interactions.
Fine-grained clickstream data is collected through Diderot, a non-commercial educational support system.
Our results suggest that the proposed hybrid NLP methods can provide valuable insights even in the low-data setting of blended courses.
- Score: 32.81171098036632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Educational software data promises unique insights into students' study
behaviors and drivers of success. While much work has been dedicated to
performance prediction in massive open online courses, it is unclear if the
same methods can be applied to blended courses and a deeper understanding of
student strategies is often missing. We use pattern mining and models borrowed
from Natural Language Processing (NLP) to understand student interactions and
extract frequent strategies from a blended college course. Fine-grained
clickstream data is collected through Diderot, a non-commercial educational
support system that spans a wide range of functionalities. We find that
interaction patterns differ considerably based on the assessment type students
are preparing for, and many of the extracted features can be used for reliable
performance prediction. Our results suggest that the proposed hybrid NLP
methods can provide valuable insights even in the low-data setting of blended
courses given enough data granularity.
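The abstract does not spell out the method in code, but the core idea of treating a student's clickstream like a sentence and mining frequent n-grams as candidate study strategies can be sketched as follows. This is a minimal illustration with hypothetical event names, not Diderot's actual logging schema or the paper's full pipeline.

```python
from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical clickstream sessions: each student has an ordered list of
# interaction events (event names are illustrative, not Diderot's schema).
sessions: Dict[str, List[str]] = {
    "student_01": ["view_notes", "view_notes", "post_question", "view_notes", "submit_code"],
    "student_02": ["view_notes", "submit_code", "view_feedback", "submit_code"],
    "student_03": ["post_question", "view_notes", "view_notes", "submit_code"],
}

def extract_ngrams(events: List[str], n: int) -> List[Tuple[str, ...]]:
    """Slide a window of length n over one student's event sequence."""
    return [tuple(events[i:i + n]) for i in range(len(events) - n + 1)]

# Count bigram "strategies" across all students, exactly as one would count
# word bigrams in a corpus of sentences.
bigram_counts: Counter = Counter()
for events in sessions.values():
    bigram_counts.update(extract_ngrams(events, n=2))

# The most frequent n-grams are candidate strategies.
for pattern, count in bigram_counts.most_common(3):
    print(" -> ".join(pattern), count)
```

Per-student frequencies of the most common patterns would then serve as features for the performance-prediction step mentioned in the abstract.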
Related papers
- Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability [67.77534983324229]
In this paper, we investigate the ability of Large Language Models to develop a unified compression method that discretizes uninformative tokens.
Experiments show Selection-p achieves state-of-the-art performance across numerous classification tasks.
It exhibits superior transferability to different models compared to prior work.
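As a loose illustration of the prompt-compression idea summarized above, the sketch below drops low-scoring tokens while preserving order; the scoring function is a stand-in for learned selection probabilities, not Selection-p's actual mechanism.

```python
from typing import Callable, List

def compress_prompt(tokens: List[str],
                    informativeness: Callable[[str], float],
                    keep_ratio: float = 0.5) -> List[str]:
    """Keep the most informative fraction of tokens, preserving original order.

    `informativeness` is a placeholder for the per-token selection scores a
    method like Selection-p would learn.
    """
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)),
                    key=lambda i: informativeness(tokens[i]), reverse=True)
    kept_positions = sorted(ranked[:k])
    return [tokens[i] for i in kept_positions]

# Toy scorer: longer tokens count as more informative (purely illustrative).
toy_score = lambda tok: len(tok)
print(compress_prompt("please classify the overall sentiment of this review".split(), toy_score))
```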
arXiv Detail & Related papers (2024-10-15T17:05:25Z)
- C-ICL: Contrastive In-context Learning for Information Extraction [54.39470114243744]
c-ICL is a novel few-shot technique that leverages both correct and incorrect sample constructions to create in-context learning demonstrations.
Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods.
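A rough sketch of what pairing correct and incorrect in-context demonstrations can look like for information extraction; the prompt format and examples below are invented for illustration, not taken from the paper.

```python
# Build a few-shot prompt that pairs a correct demonstration with an
# incorrect one, so the model can contrast them (illustrative format only).
correct_demo = (
    "Sentence: Marie Curie was born in Warsaw.\n"
    "Entities: [Marie Curie: PERSON, Warsaw: LOCATION]   # correct"
)
incorrect_demo = (
    "Sentence: Marie Curie was born in Warsaw.\n"
    "Entities: [Warsaw: PERSON]   # incorrect -- wrong span and type"
)
query = "Sentence: Alan Turing studied at Cambridge.\nEntities:"

prompt = "\n\n".join([
    "Extract named entities. Learn from both the correct and the incorrect example.",
    correct_demo,
    incorrect_demo,
    query,
])
print(prompt)  # this string would be sent to the LLM
```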
arXiv Detail & Related papers (2024-02-17T11:28:08Z)
- One-Shot Learning as Instruction Data Prospector for Large Language Models [108.81681547472138]
Nuggets uses one-shot learning to select high-quality instruction data from extensive datasets.
We show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
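A hypothetical sketch of the selection step described above: every candidate instruction example receives a score from a one-shot evaluation routine (stubbed out here), and only the top 1% is kept for instruction tuning.

```python
import random
from typing import Dict, List

def one_shot_score(example: Dict[str, str]) -> float:
    """Stub for the real scoring step: in Nuggets-style selection this would
    measure how much using `example` as a one-shot demonstration improves the
    model on a small anchor set. Here it is random, purely for illustration."""
    return random.random()

def select_top_fraction(pool: List[Dict[str, str]], fraction: float = 0.01) -> List[Dict[str, str]]:
    scored = sorted(pool, key=one_shot_score, reverse=True)
    k = max(1, int(len(pool) * fraction))
    return scored[:k]

candidate_pool = [{"instruction": f"task {i}", "response": f"answer {i}"} for i in range(1000)]
curated = select_top_fraction(candidate_pool, fraction=0.01)
print(len(curated), "examples kept for instruction tuning")
```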
arXiv Detail & Related papers (2023-12-16T03:33:12Z)
- Enhancing the Performance of Automated Grade Prediction in MOOC using Graph Representation Learning [3.4882560718166626]
Massive Open Online Courses (MOOCs) have gained significant traction as a rapidly growing phenomenon in online learning.
Current automated assessment approaches overlook the structural links between different entities involved in the downstream tasks.
We construct a unique knowledge graph for a large MOOC dataset, which will be publicly available to the research community.
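The summary does not give the schema of the released knowledge graph; as an assumed illustration, a graph linking students, courses, and problems could be built with networkx like this.

```python
import networkx as nx

# Hypothetical MOOC knowledge graph linking students, courses, and problems.
kg = nx.MultiDiGraph()

kg.add_node("student_42", type="student")
kg.add_node("course_ml101", type="course")
kg.add_node("problem_7", type="problem")

kg.add_edge("student_42", "course_ml101", relation="enrolled_in")
kg.add_edge("problem_7", "course_ml101", relation="belongs_to")
kg.add_edge("student_42", "problem_7", relation="attempted", grade=0.8)

# Graph representation learning would embed these nodes; here we only inspect
# the structural links that flat feature tables would miss.
print(kg.number_of_nodes(), "nodes,", kg.number_of_edges(), "edges")
print(list(kg.successors("student_42")))
```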
arXiv Detail & Related papers (2023-10-18T19:27:39Z)
- Enhancing Student Performance Prediction on Learnersourced Questions with SGNN-LLM Synergy [11.735587384038753]
We introduce a strategy that integrates Signed Graph Neural Networks (SGNNs) with Large Language Model (LLM) embeddings.
Our methodology employs a signed bipartite graph to comprehensively model student answers, complemented by a contrastive learning framework that enhances noise resilience.
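A minimal sketch of the signed bipartite modelling idea with made-up identifiers: correct answers become +1 edges and incorrect answers -1 edges between students and questions (the LLM-embedding and contrastive-learning components are omitted).

```python
import networkx as nx

# Signed bipartite graph: students on one side, learnersourced questions on the other.
g = nx.Graph()
g.add_nodes_from(["stu_a", "stu_b"], bipartite="student")
g.add_nodes_from(["q_1", "q_2", "q_3"], bipartite="question")

# sign = +1 for a correct answer, -1 for an incorrect one (toy data).
answers = [("stu_a", "q_1", +1), ("stu_a", "q_2", -1),
           ("stu_b", "q_2", +1), ("stu_b", "q_3", -1)]
for student, question, sign in answers:
    g.add_edge(student, question, sign=sign)

# A simple per-question statistic an SGNN would refine: the mean edge sign,
# i.e. how often the question is answered correctly.
for q in ["q_1", "q_2", "q_3"]:
    signs = [g.edges[q, nbr]["sign"] for nbr in g.neighbors(q)]
    print(q, sum(signs) / len(signs))
```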
arXiv Detail & Related papers (2023-09-23T23:37:55Z)
- Reinforcement Learning Based Multi-modal Feature Fusion Network for Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains using the OS-MN40, OS-MN40-Miss, and CIFAR-10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
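The sketch below is a generic uncertainty-sampling loop with scikit-learn rather than the paper's Peer Study Learning protocol, and the human annotation step is simulated by revealing held-back labels; it only illustrates how an active learner can drive which samples the task learner sees.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labelled = list(range(20))            # small seed set with known labels
unlabelled = list(range(20, len(X)))  # pool the active learner queries from

task_learner = LogisticRegression(max_iter=1000)
for _ in range(5):
    task_learner.fit(X[labelled], y[labelled])
    # Uncertainty sampling: pick the pool point whose predicted class
    # probability is closest to 0.5 (the task learner is least sure).
    proba = task_learner.predict_proba(X[unlabelled])[:, 1]
    query = unlabelled[int(np.argmin(np.abs(proba - 0.5)))]
    # "Human-in-the-loop" annotation, simulated by revealing the true label.
    labelled.append(query)
    unlabelled.remove(query)

print("toy accuracy on the full set:", task_learner.score(X, y))
```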
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
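For reference, the textbook finite-hypothesis-class PAC bound that this kind of analysis builds on (a standard result, not one specific to the survey): with n i.i.d. labelled samples, a finite hypothesis class H, and confidence level 1 - delta,

```latex
% Uniform convergence for a finite hypothesis class (Hoeffding + union bound):
% with probability at least 1 - \delta over the n samples,
\forall h \in \mathcal{H}: \quad
R(h) \;\le\; \widehat{R}_n(h) + \sqrt{\frac{\ln\lvert\mathcal{H}\rvert + \ln(2/\delta)}{2n}}
```

where R(h) is the true risk and R̂_n(h) the empirical risk on the n samples; the small-data question is how far the second term can be driven down when n is limited.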
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Process-BERT: A Framework for Representation Learning on Educational Process Data [68.8204255655161]
We propose a framework for learning representations of educational process data.
Our framework consists of a pre-training step that uses BERT-type objectives to learn representations from sequential process data.
We apply our framework to the 2019 Nation's Report Card data mining competition dataset.
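A rough sketch of a BERT-type masked-prediction objective on tokenized action sequences using Hugging Face transformers; the tiny vocabulary, model size, and event names are placeholders, not the paper's configuration.

```python
import torch
from transformers import BertConfig, BertForMaskedLM

# Tiny vocabulary of process events (ids are illustrative only).
event_vocab = {"[PAD]": 0, "[MASK]": 1, "open_item": 2, "answer": 3, "revise": 4, "submit": 5}

config = BertConfig(vocab_size=len(event_vocab), hidden_size=64,
                    num_hidden_layers=2, num_attention_heads=2,
                    intermediate_size=128, max_position_embeddings=32)
model = BertForMaskedLM(config)

# One student's action sequence, with one position masked for the MLM objective.
sequence = torch.tensor([[2, 3, 4, 3, 5]])
labels = sequence.clone()
masked = sequence.clone()
masked[0, 2] = event_vocab["[MASK]"]            # hide the "revise" action
labels[masked != event_vocab["[MASK]"]] = -100  # only score the masked position

loss = model(input_ids=masked, labels=labels).loss
loss.backward()  # one pretraining step; the trained encoder yields sequence representations
print(float(loss))
```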
arXiv Detail & Related papers (2022-04-28T16:07:28Z)
- DMCNet: Diversified Model Combination Network for Understanding Engagement from Video Screengrabs [0.4397520291340695]
Engagement plays a major role in developing intelligent educational interfaces.
The non-deep-learning models are based on combinations of popular algorithms such as Histogram of Oriented Gradients (HOG), Support Vector Machine (SVM), Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust Features (SURF).
The deep learning methods include Densely Connected Convolutional Networks (DenseNet-121), Residual Network (ResNet-18) and MobileNetV1.
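As a toy illustration of the non-deep-learning branch listed above (HOG features fed to an SVM), with random arrays standing in for real screengrab frames:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Random grayscale "screengrabs" stand in for real frames; labels 0/1 = disengaged/engaged.
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

# Histogram of Oriented Gradients turns each frame into a fixed-length descriptor.
features = np.array([hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                     for img in images])

clf = SVC(kernel="rbf").fit(features[:30], labels[:30])
print("held-out accuracy on toy data:", clf.score(features[30:], labels[30:]))
```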
arXiv Detail & Related papers (2022-04-13T15:24:38Z)
- Multi-Pretext Attention Network for Few-shot Learning with Self-supervision [37.6064643502453]
We propose a novel augmentation-free method for self-supervised learning, which does not rely on any auxiliary sample.
In addition, we propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine traditional augmentation-based methods and our GC.
We evaluate our MAN extensively on miniImageNet and tieredImageNet datasets and the results demonstrate that the proposed method outperforms the state-of-the-art (SOTA) relevant methods.
arXiv Detail & Related papers (2021-03-10T10:48:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.