Evaluating Two Approaches to Assessing Student Progress in Cybersecurity Exercises
- URL: http://arxiv.org/abs/2112.02053v1
- Date: Fri, 3 Dec 2021 18:08:27 GMT
- Title: Evaluating Two Approaches to Assessing Student Progress in Cybersecurity Exercises
- Authors: Valdemar Švábenský, Richard Weiss, Jack Cook, Jan Vykopal, Pavel Čeleda, Jens Mache, Radoslav Chudovský, Ankur Chattopadhyay
- Abstract summary: Cybersecurity students need to develop practical skills such as using command-line tools.
Hands-on exercises are the most direct way to assess students' mastery, but assessment is challenging for instructors.
We aim to alleviate this issue by modeling and visualizing student progress automatically throughout the exercise.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cybersecurity students need to develop practical skills such as using
command-line tools. Hands-on exercises are the most direct way to assess these
skills, but assessing students' mastery is a challenging task for instructors.
We aim to alleviate this issue by modeling and visualizing student progress
automatically throughout the exercise. The progress is summarized by graph
models based on the shell commands students typed to achieve discrete tasks
within the exercise. We implemented two types of models and compared them using
data from 46 students at two universities. To evaluate our models, we surveyed
22 experienced computing instructors and qualitatively analyzed their
responses. The majority of instructors interpreted the graph models effectively
and identified strengths, weaknesses, and assessment use cases for each model.
Based on the evaluation, we provide recommendations to instructors and explain
how our graph models innovate teaching and promote further research. The impact
of this paper is threefold. First, it demonstrates how multiple institutions
can collaborate to share approaches to modeling student progress in hands-on
exercises. Second, our modeling techniques generalize to data from different
environments to support student assessment, even outside the cybersecurity
domain. Third, we share the acquired data and open-source software so that
others can use the models in their classes or research.
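The abstract describes summarizing student progress with graph models built from the shell commands students typed. As a minimal illustrative sketch (not the authors' released implementation; the function name and sample history are hypothetical), one could model a student's session as a directed transition graph whose edge weights count how often one command followed another:

```python
from collections import defaultdict

def build_command_graph(command_log):
    """Build a directed transition graph from a student's shell history.

    command_log: ordered list of command names typed during the exercise.
    Returns a dict mapping (prev, next) command pairs to transition counts.
    """
    edges = defaultdict(int)
    # Each consecutive pair of commands contributes one directed edge.
    for prev_cmd, next_cmd in zip(command_log, command_log[1:]):
        edges[(prev_cmd, next_cmd)] += 1
    return dict(edges)

# Hypothetical session log from one student.
history = ["nmap", "ssh", "ls", "cat", "ssh", "ls"]
graph = build_command_graph(history)
```

Such a graph can then be rendered for instructors, with heavily weighted edges highlighting the command sequences a student relied on for each task.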
Related papers
- Detecting Unsuccessful Students in Cybersecurity Exercises in Two Different Learning Environments [0.37729165787434493]
This paper develops automated tools to predict when a student is having difficulty.
In a potential application, such models can aid instructors in detecting struggling students and providing targeted help.
arXiv Detail & Related papers (2024-08-16T04:57:54Z)
- Toward In-Context Teaching: Adapting Examples to Students' Misconceptions [54.82965010592045]
We introduce a suite of models and evaluation methods we call AdapT.
AToM is a new probabilistic model for adaptive teaching that jointly infers students' past beliefs and optimizes for the correctness of future beliefs.
Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
arXiv Detail & Related papers (2024-05-07T17:05:27Z)
- EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval [83.79667141681418]
Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR).
We propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model.
We show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th size asymmetric students that can retain 95-97% of the teacher performance.
arXiv Detail & Related papers (2023-01-27T22:04:37Z)
- Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
arXiv Detail & Related papers (2022-11-23T19:27:48Z)
- DGEKT: A Dual Graph Ensemble Learning Method for Knowledge Tracing [20.71423236895509]
We present a novel Dual Graph Ensemble learning method for Knowledge Tracing (DGEKT).
DGEKT establishes a dual graph structure of students' learning interactions to capture the heterogeneous exercise-concept associations.
DGEKT further applies online knowledge distillation, which provides predictions on all exercises as extra supervision for better modeling ability.
arXiv Detail & Related papers (2022-11-23T11:37:35Z)
- UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes [91.24112204588353]
We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks.
In contrast to previous models, UViM has the same functional form for all tasks.
We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks.
arXiv Detail & Related papers (2022-05-20T17:47:59Z)
- Assessing the Knowledge State of Online Students -- New Data, New Approaches, Improved Accuracy [28.719009375724028]
Student performance (SP) modeling is a critical step for building adaptive online teaching systems.
This study is the first to use four very large datasets made available recently from four distinct intelligent tutoring systems.
arXiv Detail & Related papers (2021-09-04T00:08:59Z)
- Graph-based Exercise- and Knowledge-Aware Learning Network for Student Performance Prediction [8.21303828329009]
We propose a Graph-based Exercise- and Knowledge-Aware Learning Network for accurate student score prediction.
We learn students' mastery of exercises and knowledge concepts respectively to model the two-fold effects of exercises and knowledge concepts.
arXiv Detail & Related papers (2021-06-01T06:53:17Z)
- A Survey on Neural Recommendation: From Collaborative Filtering to Content and Context Enriched Recommendation [70.69134448863483]
Research in recommendation has shifted to inventing new recommender models based on neural networks.
In recent years, we have witnessed significant progress in developing neural recommender models.
arXiv Detail & Related papers (2021-04-27T08:03:52Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Learning to Reweight with Deep Interactions [104.68509759134878]
We propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model.
Experiments on image classification with clean/noisy labels and neural machine translation empirically demonstrate that our algorithm makes significant improvement over previous methods.
arXiv Detail & Related papers (2020-07-09T09:06:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.