Turning CVEs into Educational Labs: Insights and Challenges
- URL: http://arxiv.org/abs/2509.10488v1
- Date: Sat, 30 Aug 2025 15:47:54 GMT
- Title: Turning CVEs into Educational Labs: Insights and Challenges
- Authors: Trueye Tafese
- Abstract summary: This research focuses on transforming CVEs into hands-on educational labs for cybersecurity training. The study demonstrates the practical application of CVEs by developing containerized lab environments with Docker to simulate real-world vulnerabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This research focuses on transforming CVEs into hands-on educational labs for cybersecurity training. The study demonstrates the practical application of CVEs by developing containerized lab environments with Docker to simulate real-world vulnerabilities such as SQL injection, arbitrary code execution, and improper SSL certificate validation. These labs include structured tutorials, pre- and post-surveys to evaluate learning outcomes, and remediation steps. Key challenges included interpreting limited CVE data, resolving technical complexities in lab design, and ensuring accessibility for diverse learners. Despite these difficulties, the findings highlight the educational benefits of vulnerability analysis, bridging theoretical concepts with hands-on experience. The results indicate that students improved their comprehension of cybersecurity principles, threat mitigation techniques, and secure coding practices. This approach provides a scalable and reproducible model for integrating CVEs into cybersecurity education, fostering a deeper understanding of real-world security challenges in a controlled and safe environment.
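The paper's Docker lab contents are not reproduced here, but the SQL injection scenario it mentions can be sketched in a minimal, self-contained form. The snippet below is an illustrative assumption, not the paper's actual lab code: it contrasts a vulnerable string-concatenated query against a parameterized remediation, the kind of before/after contrast such a lab would teach. All names (`setup_db`, `login_vulnerable`, `login_safe`, the sample credentials) are hypothetical.

```python
import sqlite3

def setup_db():
    """Create an in-memory user table, standing in for a lab's target database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return conn

def login_vulnerable(conn, username, password):
    # UNSAFE: user input is concatenated directly into the SQL string,
    # so a crafted password can rewrite the query's logic.
    query = (f"SELECT * FROM users WHERE username = '{username}' "
             f"AND password = '{password}'")
    return conn.execute(query).fetchone() is not None

def login_safe(conn, username, password):
    # Remediation: a parameterized query keeps input as data, never as SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

if __name__ == "__main__":
    conn = setup_db()
    payload = "' OR '1'='1"  # classic tautology injection
    print(login_vulnerable(conn, "alice", payload))  # True: check bypassed
    print(login_safe(conn, "alice", payload))        # False: payload is literal text
```

A containerized lab would wrap a vulnerable service like this in a Docker image so students can exploit and then patch it without touching a production system.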
Related papers
- Capability-Oriented Training Induced Alignment Risk [101.37328448441208]
We investigate whether language models, when trained with reinforcement learning, will spontaneously learn to exploit flaws to maximize their reward. Our experiments show that models consistently learn to exploit these vulnerabilities, discovering opportunistic strategies that significantly increase their reward at the expense of task correctness or safety. Our findings suggest that future AI safety work must extend beyond content moderation to rigorously auditing and securing the training environments and reward mechanisms themselves.
arXiv Detail & Related papers (2026-02-12T16:13:14Z) - Enabling Cyber Security Education through Digital Twins and Generative AI [1.2619493260255112]
Digital Twins (DTs) are gaining prominence in cybersecurity for their ability to replicate complex IT infrastructures. This study investigates how integrating DTs with penetration testing tools and Large Language Models (LLMs) can enhance cybersecurity education.
arXiv Detail & Related papers (2025-07-23T13:55:35Z) - Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report [50.268821168513654]
We present Foundation-Sec-8B, a cybersecurity-focused large language model (LLM) built on the Llama 3.1 architecture. We evaluate it across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini in certain cybersecurity-specific tasks. By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
arXiv Detail & Related papers (2025-04-28T08:41:12Z) - AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z) - A Case Study in Gamification for a Cybersecurity Education Program: A Game for Cryptography [0.0]
Gamification offers an innovative approach to provide practical hands-on experiences. This paper presents a real-world case study of a gamified cryptography teaching tool.
arXiv Detail & Related papers (2025-02-10T17:36:46Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - LabSafety Bench: Benchmarking LLMs on Safety Issues in Scientific Labs [78.99703366417661]
Large language models (LLMs) increasingly assist in tasks ranging from procedural guidance to autonomous experiment orchestration. Such overreliance is particularly dangerous in high-stakes laboratory settings, where failures in hazard identification or risk assessment can result in severe accidents. We propose the Laboratory Safety Benchmark (LabSafety Bench) to evaluate models on their ability to identify potential hazards, assess risks, and predict the consequences of unsafe actions in lab environments.
arXiv Detail & Related papers (2024-10-18T05:21:05Z) - Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity [0.0]
This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application in cybersecurity and evaluating its resilience against poisoning attacks. Our testbed, built on Raspberry Pi and Nvidia Jetson hardware running the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration.
arXiv Detail & Related papers (2024-09-15T17:04:25Z) - Teaching DevOps Security Education with Hands-on Labware: Automated Detection of Security Weakness in Python [4.280051038571455]
We introduce hands-on learning modules that enable learners to become familiar with identifying known security weaknesses.
To cultivate an engaging and motivating learning environment, our hands-on approach includes pre-lab, hands-on, and post-lab sections.
arXiv Detail & Related papers (2023-08-14T16:09:05Z) - Ensemble learning techniques for intrusion detection system in the context of cybersecurity [0.0]
The Intrusion Detection System concept was applied using the Data Mining and Machine Learning Orange tool to obtain better results.
The main objective of the study was to investigate the Ensemble Learning technique using the Stacking method, supported by the Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) algorithms.
arXiv Detail & Related papers (2022-12-21T10:50:54Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)