Smart Environment for Adaptive Learning of Cybersecurity Skills
- URL: http://arxiv.org/abs/2307.05281v1
- Date: Tue, 11 Jul 2023 14:20:29 GMT
- Title: Smart Environment for Adaptive Learning of Cybersecurity Skills
- Authors: Jan Vykopal, Pavel Seda, Valdemar Švábenský, Pavel Čeleda
- Abstract summary: We designed a novel smart environment for adaptive training of cybersecurity skills.
The environment collects a variety of student data to assign a suitable learning path through the training.
We evaluated the learning environment in two different adaptive training sessions attended by 114 students of varying proficiency.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hands-on computing education requires a realistic learning environment that
enables students to gain and deepen their skills. Available learning
environments, including virtual and physical labs, provide students with
real-world computer systems but rarely adapt the learning environment to
individual students of various proficiency and background. We designed a
novel smart environment for adaptive training of cybersecurity skills. The
environment collects a variety of student data to assign a suitable learning
path through the training. To enable such adaptiveness, we proposed, developed,
and deployed a new tutor model and a training format. We evaluated the learning
environment in two different adaptive training sessions attended by 114 students
of varying proficiency. The results show students were assigned tasks with a more
appropriate difficulty, which enabled them to successfully complete the
training. Students reported that they enjoyed the training, felt the training
difficulty was appropriately designed, and would attend more training sessions
like these. Instructors can use the environment for teaching any topic
involving real-world computer networks and systems because it is not tailored
to a particular training topic. We freely released the software along with exemplary
training so that other instructors can adopt the innovations in their teaching
practice.
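The abstract describes a tutor model that collects student data (such as self-assessed proficiency and in-training performance) to assign a learning path of suitable difficulty. A minimal sketch of that idea is given below; all class names, weights, and thresholds are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch of an adaptive tutor model: blend a student's
# self-assessment with observed performance, then pick a task variant
# of matching difficulty. Not the paper's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Student:
    questionnaire_score: float  # 0.0-1.0, pre-training self-assessment
    phase_successes: list = field(default_factory=list)  # True/False per completed phase

def estimate_proficiency(student: Student) -> float:
    """Blend prior self-assessment with observed in-training performance."""
    if not student.phase_successes:
        return student.questionnaire_score
    observed = sum(student.phase_successes) / len(student.phase_successes)
    # Weight observed behavior more heavily as evidence accumulates.
    w = min(1.0, 0.25 * len(student.phase_successes))
    return (1 - w) * student.questionnaire_score + w * observed

def assign_task(student: Student, variants: list) -> str:
    """Pick the task variant whose difficulty tier matches the estimate.

    `variants` is ordered from easiest to hardest.
    """
    p = estimate_proficiency(student)
    index = min(int(p * len(variants)), len(variants) - 1)
    return variants[index]

variants = ["guided walkthrough", "standard task", "minimal-hints task"]
novice = Student(questionnaire_score=0.2)
print(assign_task(novice, variants))  # → guided walkthrough
```

The key design choice sketched here is that early phases lean on the questionnaire while later phases trust observed performance, which matches the abstract's claim that proficiency is considered both before and during a training session.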
Related papers
- Eurekaverse: Environment Curriculum Generation via Large Language Models [45.087121551202735]
We introduce Eurekaverse, an unsupervised environment design algorithm that uses large language models to sample progressively more challenging environments for skill training.
We validate Eurekaverse's effectiveness in the domain of quadrupedal parkour learning, in which a quadruped robot must traverse a variety of obstacle courses.
arXiv Detail & Related papers (2024-11-04T03:54:00Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z)
- Fully Online Meta-Learning Without Task Boundaries [80.09124768759564]
We study how meta-learning can be applied to tackle online problems of this nature.
We propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground truth knowledge about the task boundaries.
Our experiments show that FOML was able to learn new tasks faster than the state-of-the-art online learning methods.
arXiv Detail & Related papers (2022-02-01T07:51:24Z)
- Reinforcing Cybersecurity Hands-on Training With Adaptive Learning [0.5735035463793008]
This paper is one of the first works investigating adaptive learning in security training.
We analyze the performance of 95 students in 12 training sessions to understand the limitations of the current training practice.
We propose a novel tutor model for adaptive training, which considers students' proficiency before and during an ongoing training session.
arXiv Detail & Related papers (2022-01-05T12:35:40Z)
- The Other Side of Black Screen: Rethinking Interaction in Synchronous Remote Learning for Collaborative Programming [0.0]
Collaborative learning environments are crucial for learning experiential hands-on skills such as critical thinking and problem solving.
In this case study, we present observations of in-person and online versions of 2 programming courses offered before and during the COVID-19 pandemic.
We find that the current online video-conferencing platforms cannot foster collaborative learning among peers.
arXiv Detail & Related papers (2021-11-11T01:52:12Z)
- Scalable Learning Environments for Teaching Cybersecurity Hands-on [0.4893345190925178]
This paper describes a technical innovation for scalable teaching of cybersecurity hands-on classes using interactive learning environments.
We present our research effort and practical experience in designing and using learning environments that scale up hands-on cybersecurity classes.
arXiv Detail & Related papers (2021-10-19T14:18:54Z)
- Interaction-limited Inverse Reinforcement Learning [50.201765937436654]
We present two different training strategies: Curriculum Inverse Reinforcement Learning (CIRL) covering the teacher's perspective, and Self-Paced Inverse Reinforcement Learning (SPIRL) focusing on the learner's perspective.
Using experiments in simulation and with a real robot learning a task from a human demonstrator, we show that our training strategies enable faster training than a random teacher for CIRL and than a batch learner for SPIRL.
arXiv Detail & Related papers (2020-07-01T12:31:52Z)
- Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.