Pandora: A Cyber Range Environment for the Safe Testing and Deployment
of Autonomous Cyber Attack Tools
- URL: http://arxiv.org/abs/2009.11484v1
- Date: Thu, 24 Sep 2020 04:38:47 GMT
- Title: Pandora: A Cyber Range Environment for the Safe Testing and Deployment
of Autonomous Cyber Attack Tools
- Authors: Hetong Jiang, Taejun Choi, Ryan K. L. Ko
- Abstract summary: Pandora is a safe testing environment which allows security researchers and cyber range users to perform experiments on automated cyber attack tools.
Unlike existing testbeds and cyber ranges which have direct compatibility with enterprise computer systems, our test system is intentionally designed to be incompatible with real-world enterprise computing systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cybersecurity tools are increasingly automated with artificial intelligent
(AI) capabilities to match the exponential scale of attacks, compensate for the
relatively slower rate of training new cybersecurity talent, and improve
the accuracy and performance of both tools and users. However, the safe and
appropriate usage of autonomous cyber attack tools - especially at the
development stages for these tools - is still largely an unaddressed gap. Our
survey of current literature and tools showed that most existing cyber
range designs use manual tools and have not considered augmenting
automated tools or the potential security issues caused by such tools. In other
words, there is still room for a novel cyber range design which allows security
researchers to safely deploy autonomous tools and perform automated tool
testing if needed. In this paper, we introduce Pandora, a safe testing
environment which allows security researchers and cyber range users to perform
experiments on automated cyber attack tools that may have strong potential for
use and, at the same time, strong potential for risk. Unlike existing
testbeds and cyber ranges which have direct compatibility with enterprise
computer systems and the potential for risk propagation across the enterprise
network, our test system is intentionally designed to be incompatible with
real-world enterprise computing systems to reduce the risk of attack
propagation into actual infrastructure. Our design also provides a tool to
convert in-development automated cyber attack tools into executable test
binaries for validation and usage in realistic enterprise system environments if
required. Our experiments tested automated attack tools on our proposed system
to validate the usability of the environment, and demonstrated its safety
through compatibility testing with simple malicious code.
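To make the incompatibility-by-design idea concrete, the sketch below shows how a converted test binary could refuse to execute outside an isolated range. This is a minimal illustration only, not the Pandora implementation: the PANDORA_RANGE_ID environment variable and the /etc/pandora-range marker file are hypothetical names assumed for the example.

```python
# Illustrative sketch only, NOT the Pandora implementation.
# It assumes two hypothetical range-only markers (the PANDORA_RANGE_ID
# environment variable and the /etc/pandora-range file) that would exist
# solely on cyber-range hosts, so a tool wrapped this way refuses to run
# on ordinary enterprise machines.
import os
import sys
from pathlib import Path

RANGE_ENV_VAR = "PANDORA_RANGE_ID"               # hypothetical marker
RANGE_MARKER_FILE = Path("/etc/pandora-range")   # hypothetical marker


def running_inside_range() -> bool:
    """Return True only when both range-only markers are present."""
    return RANGE_ENV_VAR in os.environ and RANGE_MARKER_FILE.exists()


def guarded_main(tool_entry_point) -> int:
    """Execute the wrapped test tool only inside the isolated range."""
    if not running_inside_range():
        print("Refusing to run: this test binary only executes inside the "
              "cyber range.", file=sys.stderr)
        return 1
    return tool_entry_point()


if __name__ == "__main__":
    # A benign placeholder standing in for an in-development automated tool.
    sys.exit(guarded_main(lambda: print("tool logic would run here") or 0))
```

The paper's actual safeguard is architectural (an environment deliberately incompatible with enterprise systems); the runtime check above is only an analogy for how accidental propagation into real infrastructure can be blocked.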
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model [0.0]
Cybersecurity software tool evaluation is difficult due to the inherently adversarial nature of the field.
This paper proposes the use of a 'perfect' network, representing computing systems, a network, and the attack pathways through it, as a methodology for testing cybersecurity decision-making tools.
arXiv Detail & Related papers (2024-09-13T20:21:28Z)
- The Impact of SBOM Generators on Vulnerability Assessment in Python: A Comparison and a Novel Approach [56.4040698609393]
Software Bill of Materials (SBOM) has been promoted as a tool to increase transparency and verifiability in software composition.
Current SBOM generation tools often suffer from inaccuracies in identifying components and dependencies.
We propose PIP-sbom, a novel pip-inspired solution that addresses their shortcomings.
arXiv Detail & Related papers (2024-09-10T10:12:37Z)
- BreachSeek: A Multi-Agent Automated Penetration Tester [0.0]
BreachSeek is an AI-driven multi-agent software platform that identifies and exploits vulnerabilities without human intervention.
In preliminary evaluations, BreachSeek successfully exploited vulnerabilities in exploitable machines within local networks.
Future developments aim to expand its capabilities, positioning it as an indispensable tool for cybersecurity professionals.
arXiv Detail & Related papers (2024-08-31T19:15:38Z)
- WebAssembly and Security: a review [0.8962460460173961]
Aiming to fill this gap, we propose a comprehensive review of research works dealing with security in WebAssembly.
We analyze 121 papers by identifying seven different security categories.
arXiv Detail & Related papers (2024-07-17T03:37:28Z)
- Software Repositories and Machine Learning Research in Cyber Security [0.0]
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning for the detection of these early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z)
- Realistic simulation of users for IT systems in cyber ranges [63.20765930558542]
We instrument each machine by means of an external agent to generate user activity.
This agent combines both deterministic and deep learning based methods to adapt to different environments.
We also propose conditional text generation models to facilitate the creation of conversations and documents.
arXiv Detail & Related papers (2021-11-23T10:53:29Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Autosploit: A Fully Automated Framework for Evaluating the Exploitability of Security Vulnerabilities [47.748732208602355]
Autosploit is an automated framework for evaluating the exploitability of vulnerabilities.
It automatically tests the exploits on different configurations of the environment.
It is able to identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.
arXiv Detail & Related papers (2020-06-30T18:49:18Z)