CyberBOT: Towards Reliable Cybersecurity Education via Ontology-Grounded Retrieval Augmented Generation
- URL: http://arxiv.org/abs/2504.00389v1
- Date: Tue, 01 Apr 2025 03:19:22 GMT
- Title: CyberBOT: Towards Reliable Cybersecurity Education via Ontology-Grounded Retrieval Augmented Generation
- Authors: Chengshuai Zhao, Riccardo De Maria, Tharindu Kumarage, Kumar Satvik Chaudhary, Garima Agrawal, Yiwen Li, Jongchan Park, Yuli Deng, Ying-Chih Chen, Huan Liu
- Abstract summary: In cybersecurity education, accuracy and safety are paramount, and systems must go beyond surface-level relevance to provide information that is both trustworthy and domain-appropriate. We introduce CyberBOT, a question-answering chatbot that incorporates contextual information from course-specific materials and validates responses using a domain-specific cybersecurity ontology. CyberBOT has been deployed in a large graduate-level course at Arizona State University, where more than one hundred students actively engage with the system through a dedicated web-based platform.
- Score: 13.352385179504482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in large language models (LLMs) have enabled the development of intelligent educational tools that support inquiry-based learning across technical domains. In cybersecurity education, where accuracy and safety are paramount, systems must go beyond surface-level relevance to provide information that is both trustworthy and domain-appropriate. To address this challenge, we introduce CyberBOT, a question-answering chatbot that leverages a retrieval-augmented generation (RAG) pipeline to incorporate contextual information from course-specific materials and validate responses using a domain-specific cybersecurity ontology. The ontology serves as a structured reasoning layer that constrains and verifies LLM-generated answers, reducing the risk of misleading or unsafe guidance. CyberBOT has been deployed in a large graduate-level course at Arizona State University (ASU), where more than one hundred students actively engage with the system through a dedicated web-based platform. Computational evaluations in lab environments highlight the potential capacity of CyberBOT, and a forthcoming field study will evaluate its pedagogical impact. By integrating structured domain reasoning with modern generative capabilities, CyberBOT illustrates a promising direction for developing reliable and curriculum-aligned AI applications in specialized educational contexts.
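The abstract describes a pipeline in which retrieved course material grounds the LLM's answer and a cybersecurity ontology then constrains and verifies it before the answer reaches the student. Below is a minimal sketch of that retrieve-generate-validate flow; all component names and the toy ontology fragment are illustrative assumptions, not CyberBOT's actual implementation.

```python
# Hypothetical sketch of an ontology-grounded RAG answer flow.
# Component names (retrieve_course_docs, CYBER_ONTOLOGY, llm_generate,
# ontology_validate) are illustrative, not CyberBOT's actual API.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    validated: bool
    notes: str = ""

# Toy ontology fragment: concept -> permitted mitigation concepts.
CYBER_ONTOLOGY = {
    "sql_injection": {"input_validation", "parameterized_queries"},
    "phishing": {"email_filtering", "user_training"},
}

def retrieve_course_docs(question: str, k: int = 4) -> list[str]:
    """Stand-in for a dense retriever over course-specific materials."""
    return []  # e.g., top-k passages from a vector index

def llm_generate(question: str, context: list[str]) -> str:
    """Stand-in for the LLM call with retrieved context in the prompt."""
    return "Use parameterized queries to mitigate SQL injection."

def ontology_validate(answer: str, topic: str) -> bool:
    """Accept the answer only if it mentions concepts the ontology links to the topic."""
    allowed = CYBER_ONTOLOGY.get(topic, set())
    return any(concept.replace("_", " ") in answer.lower() for concept in allowed)

def answer_question(question: str, topic: str) -> Answer:
    context = retrieve_course_docs(question)
    draft = llm_generate(question, context)
    if ontology_validate(draft, topic):
        return Answer(draft, validated=True)
    # Fall back rather than return unverified guidance.
    return Answer("I cannot verify this answer against the course ontology.",
                  validated=False, notes=draft)
```

The key design point suggested by the paper is the separation of concerns: retrieval supplies curriculum-aligned context, generation produces a candidate answer, and the ontology acts as an independent gate that can reject or flag unverified guidance.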
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - A Case Study in Gamification for a Cybersecurity Education Program: A Game for Cryptography [0.0]
Gamification offers an innovative approach to provide practical hands-on experiences. This paper presents a real-world case study of a gamified cryptography teaching tool.
arXiv Detail & Related papers (2025-02-10T17:36:46Z) - Generative AI Misuse Potential in Cyber Security Education: A Case Study of a UK Degree Program [1.9217872171227137]
This paper investigates the susceptibility of a Master's-level cyber security degree program at a UK Russell Group university to LLM misuse. We identify a high exposure to misuse, particularly in independent project- and report-based assessments. To address these challenges, we discuss the adoption of LLM-resistant assessments, detection tools, and the importance of fostering an ethical learning environment.
arXiv Detail & Related papers (2025-01-22T13:44:44Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Ontology-Aware RAG for Improved Question-Answering in Cybersecurity Education [13.838970688067725]
AI-driven question-answering (QA) systems can actively manage uncertainty in cybersecurity problem-solving. Large language models (LLMs) have gained prominence in AI-driven QA systems, offering advanced language understanding and user engagement. We propose CyberRAG, an ontology-aware retrieval-augmented generation (RAG) approach for developing a reliable and safe QA system in cybersecurity education.
arXiv Detail & Related papers (2024-12-10T21:52:35Z) - Networking Systems for Video Anomaly Detection: A Tutorial and Survey [55.28514053969056]
Video Anomaly Detection (VAD) is a fundamental research task within the Artificial Intelligence (AI) community.
With the advancements in deep learning and edge computing, VAD has made significant progress.
This article offers an exhaustive tutorial for novices in Networking Systems for Video Anomaly Detection (NSVAD).
arXiv Detail & Related papers (2024-05-16T02:00:44Z) - Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z) - A Survey on Brain-Inspired Deep Learning via Predictive Coding [85.93245078403875]
Predictive coding (PC) has shown promising performance in machine intelligence tasks. PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - A Survey on Explainable Artificial Intelligence for Cybersecurity [14.648580959079787]
Explainable Artificial Intelligence (XAI) aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions.
In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats.
arXiv Detail & Related papers (2023-03-07T22:54:18Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Pervasive AI for IoT Applications: Resource-efficient Distributed Artificial Intelligence [45.076180487387575]
Artificial intelligence (AI) has witnessed a substantial breakthrough in a variety of Internet of Things (IoT) applications and services.
This is driven by the easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams.
The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems.
arXiv Detail & Related papers (2021-05-04T23:42:06Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)