Toolset for Collecting Shell Commands and Its Application in Hands-on
Cybersecurity Training
- URL: http://arxiv.org/abs/2112.11118v1
- Date: Tue, 21 Dec 2021 11:45:13 GMT
- Authors: Valdemar Švábenský, Jan Vykopal, Daniel Tovarňák, Pavel Čeleda
- Abstract summary: We share the design and implementation of an open-source toolset for logging commands that students execute on Linux machines.
Compared to basic solutions, such as shell history files, the toolset's added value is threefold.
Data are instantly forwarded to central storage in a unified, semi-structured format.
- Score: 0.5735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When learning cybersecurity, operating systems, or networking, students
perform practical tasks using a broad range of command-line tools. Collecting
and analyzing data about the command usage can reveal valuable insights into
how students progress and where they make mistakes. However, few learning
environments support recording and inspecting command-line inputs, and setting
up an efficient infrastructure for this purpose is challenging. To aid
engineering and computing educators, we share the design and implementation of
an open-source toolset for logging commands that students execute on Linux
machines. Compared to basic solutions, such as shell history files, the
toolset's added value is threefold. 1) Its configuration is automated so that
it can be easily used in classes on different topics. 2) It collects metadata
about the command execution, such as a timestamp, hostname, and IP address. 3)
Data are instantly forwarded to central storage in a unified, semi-structured
format. This enables automated processing, both in real-time and post hoc, to
enhance the instructors' understanding of student actions. The toolset works
independently of the teaching content, the training network's topology, or the
number of students working in parallel. We demonstrated the toolset's value in
two learning environments at four training sessions. Over two semesters, 50
students played educational cybersecurity games using a Linux command-line
interface. Each training session lasted approximately two hours, during which
we recorded 4439 shell commands. The semi-automated data analysis revealed
solution patterns, used tools, and misconceptions of students. Our insights
from creating the toolset and applying it in teaching practice are relevant for
instructors, researchers, and developers of learning environments. We provide
the software and data resulting from this work so that others can use them.
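The abstract describes the toolset's core mechanism: capturing each executed command together with metadata (timestamp, hostname, IP address) and forwarding it as a semi-structured record to central storage. A minimal sketch of that idea in Bash is shown below. All names, the log path, and the `PROMPT_COMMAND` hook are illustrative assumptions, not the paper's actual implementation; a real deployment would ship the JSON lines to central storage with a log forwarder rather than append to a local file.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of command logging with metadata, as described in
# the abstract. Each record is one JSON line: timestamp, hostname, IP,
# and the command string. The log path below is illustrative.

LOG_FILE="${LOG_FILE:-/tmp/shell_command_log.jsonl}"

# Append one semi-structured record for the given command string.
log_command() {
  local cmd="$1" timestamp host ip
  [ -z "$cmd" ] && return
  timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")     # UTC timestamp
  host=$(hostname)                                # machine name
  ip=$(hostname -I 2>/dev/null | awk '{print $1}')  # first local IP, if any
  printf '{"timestamp":"%s","hostname":"%s","ip":"%s","cmd":"%s"}\n' \
    "$timestamp" "$host" "${ip:-unknown}" "${cmd//\"/\\\"}" >> "$LOG_FILE"
}

# In an interactive shell, hook the logger to run after every command,
# pulling the last history entry via the `fc` builtin.
PROMPT_COMMAND='log_command "$(fc -ln -1 2>/dev/null | sed "s/^[[:space:]]*//")"'"${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```

Because each record is a self-contained JSON line, central storage can ingest the stream for both real-time monitoring and post-hoc analysis, which is the property the abstract highlights.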
Related papers
- Detecting Unsuccessful Students in Cybersecurity Exercises in Two Different Learning Environments [0.37729165787434493]
This paper develops automated tools to predict when a student is having difficulty.
In a potential application, such models can aid instructors in detecting struggling students and providing targeted help.
arXiv Detail & Related papers (2024-08-16T04:57:54Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is divided into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Student Assessment in Cybersecurity Training Automated by Pattern Mining and Clustering [0.5249805590164902]
This paper explores a dataset from 18 cybersecurity training sessions using data mining and machine learning techniques.
We employed pattern mining and clustering to analyze 8834 commands collected from 113 trainees.
Our results show that data mining methods are suitable for analyzing cybersecurity training data.
arXiv Detail & Related papers (2023-07-13T18:52:58Z)
- ConvLab-3: A Flexible Dialogue System Toolkit Based on a Unified Data Format [88.33443450434521]
Task-oriented dialogue (TOD) systems function as digital assistants, guiding users through various tasks such as booking flights or finding restaurants.
Existing toolkits for building TOD systems often fall short in delivering comprehensive arrays of data, models, and experimental environments.
We introduce ConvLab-3: a multifaceted dialogue system toolkit crafted to bridge this gap.
arXiv Detail & Related papers (2022-11-30T16:37:42Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the procedures and pipelines needed to deploy machine learning capabilities in real, production-grade systems.
In this paper, we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- rl_reach: Reproducible Reinforcement Learning Experiments for Robotic Reaching Tasks [0.0]
We present rl_reach, a self-contained, open-source and easy-to-use software package.
It is designed to run reproducible reinforcement learning experiments for customisable robotic reaching tasks.
arXiv Detail & Related papers (2021-02-09T16:14:10Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that we can reuse prior data to learn new skills simply through dynamic programming.
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- How Useful is Self-Supervised Pretraining for Visual Tasks? [133.1984299177874]
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.