Reinforcing Cybersecurity Hands-on Training With Adaptive Learning
- URL: http://arxiv.org/abs/2201.01574v1
- Date: Wed, 5 Jan 2022 12:35:40 GMT
- Title: Reinforcing Cybersecurity Hands-on Training With Adaptive Learning
- Authors: Pavel Seda, Jan Vykopal, Valdemar Švábenský, Pavel Čeleda
- Abstract summary: This paper is one of the first works investigating adaptive learning in security training.
We analyze the performance of 95 students in 12 training sessions to understand the limitations of the current training practice.
We propose a novel tutor model for adaptive training, which considers students' proficiency before and during an ongoing training session.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper shows how learning experience influences students' capability
to learn and their motivation for learning. Although each student is different,
standard instruction methods do not adapt to individuals. Adaptive learning
reverses this practice and attempts to improve the student experience. While
adaptive learning is well-established in programming, it is rarely used in
cybersecurity education. This paper is one of the first works investigating
adaptive learning in security training. First, we analyze the performance of 95
students in 12 training sessions to understand the limitations of the current
training practice. Less than half of the students completed the training
without displaying a solution, and all students completed all phases in only
two of the sessions. Then, we simulate how students would have proceeded in
one of the past training sessions if it had offered more paths of varying
difficulty. Based on
this simulation, we propose a novel tutor model for adaptive training, which
considers students' proficiency before and during an ongoing training session.
The proficiency is assessed using a pre-training questionnaire and various
in-training metrics. Finally, we conduct a study with 24 students and new
training using the proposed tutor model and adaptive training format. The
results show that the adaptive training does not overwhelm students the way
the original static training did. Adaptive training enables students to enter
several
alternative training phases with lower difficulty than the original training.
The proposed format is not restricted to a particular training. Therefore, it
can be applied to practicing any security topic or even in related fields, such
as networking or operating systems. Our study indicates that adaptive learning
is a promising approach for improving the student experience in security
education. We also highlight implications for educational practice.
Related papers
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Smart Environment for Adaptive Learning of Cybersecurity Skills [0.5735035463793008]
We designed a novel smart environment for adaptive training of cybersecurity skills.
The environment collects a variety of student data to assign a suitable learning path through the training.
We evaluated the learning environment in two different adaptive training sessions attended by 114 students of varying proficiency.
arXiv Detail & Related papers (2023-07-11T14:20:29Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
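As a rough illustration of the curriculum idea behind the EfficientTrain entry above, the sketch below grows the input resolution over training so that early epochs see cheaper, "easier" inputs. The schedule, sizes, and linear ramp are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch of a resolution-based curriculum, in the spirit of
# generalized curriculum learning: early epochs see "easier" (smaller)
# inputs, later epochs the full resolution. The schedule and sizes are
# illustrative assumptions, not EfficientTrain's exact recipe.
import torch
import torch.nn.functional as F

def curriculum_resolution(epoch: int, total_epochs: int,
                          min_size: int = 160, max_size: int = 224) -> int:
    """Linearly grow input resolution over training."""
    frac = epoch / max(total_epochs - 1, 1)
    size = int(min_size + frac * (max_size - min_size))
    return (size // 16) * 16  # keep sizes divisible by a ViT patch size

def preprocess_batch(images: torch.Tensor, epoch: int,
                     total_epochs: int) -> torch.Tensor:
    """Downsample a batch according to the curriculum schedule."""
    size = curriculum_resolution(epoch, total_epochs)
    if size != images.shape[-1]:
        images = F.interpolate(images, size=(size, size),
                               mode="bilinear", align_corners=False)
    return images
```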
- Effective Vision Transformer Training: A Data-Centric Perspective [24.02488085447691]
Vision Transformers (ViTs) have shown promising performance compared with Convolutional Neural Networks (CNNs).
In this paper, we define several metrics, including Dynamic Data Proportion (DDP) and Knowledge Assimilation Rate (KAR)
We propose a novel data-centric ViT training framework to dynamically measure the "difficulty" of training samples and generate "effective" samples for models at different training stages.
arXiv Detail & Related papers (2022-09-29T17:59:46Z)
- Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier [23.886422706697882]
We propose a novel training procedure named Friendly Training.
We show that Friendly Training yields improvements with respect to informed data sub-selection and random selection.
Results suggest that adapting the input data is a feasible way to stabilize learning and improve the generalization skills of the network.
arXiv Detail & Related papers (2021-06-21T10:50:34Z)
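A hedged sketch of the core idea named in the Friendly Training entry above: perturb the inputs in the direction that lowers the loss (the mirror image of an adversarial attack), with a simplification budget that decays to zero so the network eventually trains on the original data. Step sizes and the decay schedule are illustrative assumptions.

```python
# Sketch of the friendly-training idea: before each update, nudge the
# inputs in the direction that *lowers* the loss (the opposite of an
# adversarial attack), so early training sees simplified data. The step
# size, budget, and decay schedule are illustrative assumptions.
import torch

def simplify_inputs(model, x, y, loss_fn,
                    step: float = 0.01, budget: float = 0.1):
    """One gradient step on the inputs to make them easier to fit."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    with torch.no_grad():
        x_easy = x - step * x.grad.sign()          # descend, not ascend
        x_easy = x + (x_easy - x).clamp(-budget, budget)  # stay in budget
    return x_easy.detach()

def friendly_budget(epoch: int, total_epochs: int,
                    max_budget: float = 0.1) -> float:
    """Shrink the allowed simplification to zero as training progresses."""
    return max_budget * max(0.0, 1.0 - epoch / (0.5 * total_epochs))
```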
- Adversarial Training is Not Ready for Robot Learning [55.493354071227174]
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations.
We show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects.
Our results suggest that adversarial training is not yet ready for robot learning.
arXiv Detail & Related papers (2021-03-15T07:51:31Z)
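For reference, the entry above concerns standard adversarial training with norm-bounded perturbations. The following is a generic PGD-style sketch of that technique, not the paper's robot-learning setup; the hyperparameters are commonly used values assumed here for illustration.

```python
# A standard PGD-style adversarial training step for norm-bounded (here
# L-infinity) perturbations; a generic illustration of the technique, not
# the cited paper's exact setup.
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """Find a worst-case perturbation within an L-inf ball of radius eps."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)  # project back into the ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y, loss_fn):
    """Train on the perturbed batch instead of the clean one."""
    delta = pgd_attack(model, x, y, loss_fn)
    optimizer.zero_grad()
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```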
- Interaction-limited Inverse Reinforcement Learning [50.201765937436654]
We present two different training strategies: Curriculum Inverse Reinforcement Learning (CIRL) covering the teacher's perspective, and Self-Paced Inverse Reinforcement Learning (SPIRL) focusing on the learner's perspective.
Using experiments in simulation and with a real robot learning a task from a human demonstrator, we show that our training strategies enable faster training than a random teacher for CIRL and than a batch learner for SPIRL.
arXiv Detail & Related papers (2020-07-01T12:31:52Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
- Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks [95.51368472949308]
Adaptation can be useful in cases when training data is scarce, or when one wishes to encode priors in the network.
In this paper, we propose a straightforward alternative: side-tuning.
arXiv Detail & Related papers (2019-12-31T18:52:32Z)
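Side-tuning as summarized above has a particularly compact form: a frozen base network plus a small trainable side network whose outputs are alpha-blended. The sketch below is a minimal rendering under assumed layer sizes; the learnable sigmoid-blended alpha is one plausible choice, not necessarily the paper's exact formulation.

```python
# Minimal sketch of side-tuning: keep the pre-trained base frozen, train a
# small side network, and alpha-blend their outputs. The layer sizes and
# the learnable-alpha choice are illustrative assumptions.
import torch
import torch.nn as nn

class SideTuned(nn.Module):
    def __init__(self, base: nn.Module, side: nn.Module):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base stays frozen
        self.side = side             # only the side net is trained
        self.alpha = nn.Parameter(torch.tensor(0.0))  # blending weight

    def forward(self, x):
        a = torch.sigmoid(self.alpha)  # keep the blend in (0, 1)
        return a * self.base(x) + (1 - a) * self.side(x)

# Usage: adapt a frozen pre-trained encoder with a tiny trainable network.
base = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
side = nn.Sequential(nn.Linear(16, 4))
model = SideTuned(base, side)
out = model(torch.randn(2, 16))  # shape: (2, 4)
```

Since only the side network and the blending weight receive gradients, the adaptation cost stays small even when the base network is large.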