Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model
- URL: http://arxiv.org/abs/2409.09175v1
- Date: Fri, 13 Sep 2024 20:21:28 GMT
- Title: Cybersecurity Software Tool Evaluation Using a 'Perfect' Network Model
- Authors: Jeremy Straub
- Abstract summary: Cybersecurity software tool evaluation is difficult due to the inherently adversarial nature of the field.
This paper proposes the use of a 'perfect' network, representing computing systems, a network and the attack pathways through it as a methodology to use for testing cybersecurity decision-making tools.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cybersecurity software tool evaluation is difficult due to the inherently adversarial nature of the field. A penetration testing (or offensive) tool must be tested against a viable defensive adversary and a defensive tool must, similarly, be tested against a viable offensive adversary. Characterizing the tool's performance inherently depends on the quality of the adversary, which can vary from test to test. This paper proposes the use of a 'perfect' network, representing computing systems, a network and the attack pathways through it as a methodology to use for testing cybersecurity decision-making tools. This facilitates testing by providing a known and consistent standard for comparison. It also allows testing to include researcher-selected levels of error, noise and uncertainty to evaluate cybersecurity tools under these experimental conditions.
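To make the proposal concrete, the sketch below shows one way such a 'perfect' network model could be realized in code. It is a minimal illustration under our own assumptions, not the paper's implementation: the class and method names (PerfectNetwork, observed_view) and the error/noise model are hypothetical. The ground-truth graph of systems and attack pathways is fully known, and the tool under test is only given a view degraded by researcher-selected error and noise levels.

```python
# Hypothetical sketch of a 'perfect' network model for tool evaluation.
# Not the paper's implementation; names and noise model are illustrative.
import random

class PerfectNetwork:
    """Ground-truth model: every system and attack pathway is known."""

    def __init__(self):
        self.systems = set()
        self.pathways = {}  # (source, target) -> exploit difficulty in [0, 1]

    def add_system(self, name):
        self.systems.add(name)

    def add_pathway(self, source, target, difficulty):
        self.pathways[(source, target)] = difficulty

    def observed_view(self, error_rate=0.0, noise=0.0, seed=0):
        """The imperfect view given to the tool under test: some pathways
        are dropped (error) and difficulties are perturbed (noise)."""
        rng = random.Random(seed)
        view = {}
        for edge, difficulty in self.pathways.items():
            if rng.random() < error_rate:
                continue  # this pathway is hidden from the tool
            jitter = rng.uniform(-noise, noise)
            view[edge] = min(1.0, max(0.0, difficulty + jitter))
        return view

# Usage: give a tool the degraded view, then score its decisions
# against the fully known ground truth.
net = PerfectNetwork()
for host in ("web", "db", "workstation"):
    net.add_system(host)
net.add_pathway("web", "db", 0.3)
net.add_pathway("web", "workstation", 0.7)
print(net.observed_view(error_rate=0.1, noise=0.05))
```

Scoring a tool's recommendations on the degraded view against the ground-truth pathways would provide the known and consistent standard of comparison the abstract describes.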
Related papers
- The Impact of SBOM Generators on Vulnerability Assessment in Python: A Comparison and a Novel Approach
Software Bill of Materials (SBOM) has been promoted as a tool to increase transparency and verifiability in software composition.
Current SBOM generation tools often suffer from inaccuracies in identifying components and dependencies.
We propose PIP-sbom, a novel pip-inspired solution that addresses their shortcomings.
arXiv Detail & Related papers (2024-09-10T10:12:37Z)
- CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher
We develop CIPHER (Cybersecurity Intelligent Penetration-testing Helper for Ethical Researchers), a large language model specifically trained to assist in penetration testing tasks.
We trained CIPHER using over 300 high-quality write-ups of vulnerable machines, hacking techniques, and documentation of open-source penetration testing tools.
We introduce the Findings, Action, Reasoning, and Results (FARR) Flow augmentation, a novel method to augment penetration testing write-ups to establish a fully automated pentesting simulation benchmark.
arXiv Detail & Related papers (2024-08-21T14:24:04Z)
- An Extensive Comparison of Static Application Security Testing Tools
Static Application Security Testing Tools (SASTTs) identify software vulnerabilities to support the security and reliability of software applications.
Several studies have suggested that alternative solutions may be more effective than SASTTs because of SASTTs' tendency to generate false alarms.
Our SASTTs evaluation is based on a controlled, though synthetic, Java codebase.
arXiv Detail & Related papers (2024-03-14T09:37:54Z)
- Constrained Adversarial Learning and its applicability to Automated Software Testing: a systematic review
This systematic review is focused on the current state-of-the-art of constrained data generation methods applied for adversarial learning and software testing.
It aims to guide researchers and developers to enhance testing tools with adversarial learning methods and improve the resilience and robustness of their digital systems.
arXiv Detail & Related papers (2023-03-14T00:27:33Z)
- Increasing Confidence in Adversarial Robustness Evaluations
We propose a test to identify weak attacks and thus weak defense evaluations.
Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample.
For eleven out of thirteen previously-published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it.
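As a toy illustration of this kind of active test (our construction, not the paper's), consider a linear classifier and a test point placed provably within epsilon of its decision boundary: an adversarial example is guaranteed to exist, so an attack that fails to find one is demonstrably weak.

```python
# Toy active test: the adversarial example is guaranteed by construction,
# so a sound attack must find it. Illustrative only.
import numpy as np

# Known linear model: decision boundary is the hyperplane w.x + b = 0.
w, b = np.array([1.0, 0.0]), 0.0
model = lambda x: float(np.dot(w, x) + b)  # sign(model(x)) = predicted class

eps = 0.1
x = np.array([0.05, 0.3])  # distance to boundary = 0.05 < eps
assert abs(model(x)) / np.linalg.norm(w) < eps  # adversarial example guaranteed

def weak_attack(x, eps):
    # Deliberately weak: tries a single random direction.
    d = np.random.default_rng(0).normal(size=x.shape)
    cand = x + eps * d / np.linalg.norm(d)
    return cand if np.sign(model(cand)) != np.sign(model(x)) else None

def strong_attack(x, eps):
    # Moves directly toward the known boundary.
    cand = x - eps * (w / np.linalg.norm(w)) * np.sign(model(x))
    return cand if np.sign(model(cand)) != np.sign(model(x)) else None

print("weak attack passes the test:", weak_attack(x, eps) is not None)      # False
print("strong attack passes the test:", strong_attack(x, eps) is not None)  # True
```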
arXiv Detail & Related papers (2022-06-28T13:28:13Z)
- Can Adversarial Training Be Manipulated By Non-Robust Features?
Adversarial training, originally designed to resist test-time adversarial examples, has been shown to be promising in mitigating training-time availability attacks.
We identify a novel threat model named stability attacks, which aims to hinder robust availability by slightly perturbing the training data.
Under this threat, we find that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting.
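For context, the "defense budget" above is the radius of the perturbation ball used during adversarial training. The sketch below is a toy numpy version of that standard recipe (projected-gradient inner loop on a linear hinge-loss model); it only illustrates what the budget epsilon constrains and is not the paper's experimental setup.

```python
# Toy adversarial training with an L_inf defense budget eps. Illustrative.
import numpy as np

def pgd_perturb(w, x, y, eps, steps=10, lr=0.05):
    """Worst-case L_inf perturbation of x within the defense budget eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = -y * w                      # ascent direction for hinge loss
        delta = delta + lr * np.sign(grad)
        delta = np.clip(delta, -eps, eps)  # project back onto the eps-ball
    return delta

def adversarial_train(X, Y, eps, epochs=50, lr=0.1):
    """Hinge-loss training on worst-case perturbed points (toy recipe)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = x + pgd_perturb(w, x, y, eps)
            if y * (np.dot(w, x_adv) + b) < 1:  # point violates the margin
                w, b = w + lr * y * x_adv, b + lr * y
    return w, b

X = np.array([[1.5, 0.2], [1.2, -0.1], [-1.3, 0.3], [-1.1, -0.2]])
Y = np.array([1, 1, -1, -1])
w, b = adversarial_train(X, Y, eps=0.25)
print("trained weights:", w, "bias:", b)
```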
arXiv Detail & Related papers (2022-01-31T16:25:25Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis
This paper presents a lightweight monitoring architecture based on coverage paradigms to protect the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
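A minimal sketch of one coverage-style monitor (our simplification, not necessarily the paper's architecture): record per-neuron activation ranges on trusted data, then flag inputs whose hidden activations leave the covered region.

```python
# Hypothetical coverage monitor: flags activations outside observed ranges.
import numpy as np

class CoverageMonitor:
    def __init__(self):
        self.low = None
        self.high = None

    def fit(self, activations):
        """activations: (n_samples, n_neurons) from trusted/training inputs."""
        self.low = activations.min(axis=0)
        self.high = activations.max(axis=0)

    def is_suspicious(self, activation, tolerance=0.0):
        """Flag an input whose hidden activations leave the covered region."""
        below = activation < self.low - tolerance
        above = activation > self.high + tolerance
        return bool(below.any() or above.any())

# Usage with a stand-in for a network's hidden-layer outputs:
rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(1000, 16))
monitor = CoverageMonitor()
monitor.fit(trusted)
print(monitor.is_suspicious(rng.normal(0.0, 1.0, size=16)))  # likely False
print(monitor.is_suspicious(np.full(16, 10.0)))              # True: out of range
```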
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications.
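A minimal sketch of the interval-analysis idea, assuming a small ReLU network (our toy setup, not the paper's models): propagate [low, high] bounds through each layer and check that the output bound satisfies the safety property for every input in the region.

```python
# Toy interval bound propagation through a 2-layer ReLU network.
import numpy as np

def interval_linear(low, high, W, b):
    """Exact interval propagation through x -> W @ x + b."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    new_low = W_pos @ low + W_neg @ high + b
    new_high = W_pos @ high + W_neg @ low + b
    return new_low, new_high

def interval_relu(low, high):
    return np.maximum(low, 0.0), np.maximum(high, 0.0)

# Tiny network with fixed, illustrative weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([-1.0])

low, high = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input region
low, high = interval_relu(*interval_linear(low, high, W1, b1))
low, high = interval_linear(low, high, W2, b2)

# Safety property: output stays below 0 for every input in the region.
print("verified safe:", bool(high[0] < 0.0))
```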
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Pandora: A Cyber Range Environment for the Safe Testing and Deployment of Autonomous Cyber Attack Tools
Pandora is a safe testing environment which allows security researchers and cyber range users to perform experiments on automated cyber attack tools.
Unlike existing testbeds and cyber ranges, which are directly compatible with enterprise computer systems, our test system is intentionally designed to be incompatible with real-world enterprise computing systems.
arXiv Detail & Related papers (2020-09-24T04:38:47Z)
- Search-based Test-Case Generation by Monitoring Responsibility Safety Rules
We propose a method for screening and classifying simulation-based driving test data to be used for training and testing controllers.
Our framework is distributed with the publicly available S-TALIRO and Sim-ATAV tools.
arXiv Detail & Related papers (2020-04-25T10:10:11Z)
- Testing Robustness Against Unforeseen Adversaries
Adversarial robustness research primarily focuses on L_p perturbations.
In real-world applications, developers are unlikely to have access to the full range of attacks or corruptions their system will face.
We introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries.
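As a rough sketch of the underlying evaluation pattern (the corruption set and function names here are our illustrative assumptions, not ImageNet-UA's actual suite): apply attack or corruption types held out during development and report the worst-case accuracy across them.

```python
# Illustrative worst-case evaluation over held-out (unforeseen) corruptions.
import numpy as np

def fog(x, severity):       # simple non-L_p corruptions on [0, 1] images
    return np.clip(x * (1 - severity) + severity, 0, 1)

def contrast(x, severity):
    return np.clip((x - 0.5) * (1 - severity) + 0.5, 0, 1)

def unforeseen_robustness(predict, images, labels, severity=0.3):
    accuracies = []
    for corrupt in (fog, contrast):
        preds = predict(corrupt(images, severity))
        accuracies.append(float(np.mean(preds == labels)))
    return min(accuracies)  # worst case over the unforeseen set

# Usage with a stand-in classifier that thresholds mean brightness:
rng = np.random.default_rng(0)
images = rng.uniform(0, 1, size=(100, 8, 8))
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
predict = lambda xs: (xs.mean(axis=(1, 2)) > 0.5).astype(int)
print("worst-case accuracy:", unforeseen_robustness(predict, images, labels))
```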
arXiv Detail & Related papers (2019-08-21T17:36:48Z)