Development of a Multi-purpose Fuzzer to Perform Assessment as Input to a Cybersecurity Risk Assessment and Analysis System
- URL: http://arxiv.org/abs/2306.04284v1
- Date: Wed, 7 Jun 2023 09:38:31 GMT
- Title: Development of a Multi-purpose Fuzzer to Perform Assessment as Input to a Cybersecurity Risk Assessment and Analysis System
- Authors: Jack Hance, Jeremy Straub
- Abstract summary: This paper describes and assesses the performance of the proposed fuzzer technology.
It also details how the fuzzer operates as part of the broader cybersecurity risk assessment and analysis system.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fuzzing is utilized for testing software and systems for cybersecurity risk
via the automated adaptation of inputs. It facilitates the identification of
software bugs and misconfigurations that may create vulnerabilities, cause
abnormal operations or result in system failure. While many fuzzers have been
purpose-developed for testing specific systems, this paper proposes a
generalized fuzzer that provides a specific capability for testing software and
cyber-physical systems which utilize configuration files. While this fuzzer
facilitates the detection of system and software defects and vulnerabilities,
it also facilitates the determination of the impact of settings on device
operations. This latter capability facilitates the modeling of the devices in a
cybersecurity risk assessment and analysis system. This paper describes and
assesses the performance of the proposed fuzzer technology. It also details how
the fuzzer operates as part of the broader cybersecurity risk assessment and
analysis system.
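The mutate-settings-and-observe loop that the abstract describes can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the names `mutate_value` and `fuzz_config`, the mutation set, and the convention of passing the config path as the target's last argument are all hypothetical.

```python
import os
import random
import subprocess
import tempfile

def mutate_value(value: str) -> str:
    """Apply one randomly chosen mutation to a configuration value."""
    mutations = [
        lambda v: "",                                  # empty the value
        lambda v: v * 2,                               # duplicate it
        lambda v: str(random.randint(-2**31, 2**31)),  # random integer
        lambda v: v + "\x00",                          # embedded NUL byte
        lambda v: "A" * 4096,                          # oversized string
    ]
    return random.choice(mutations)(value)

def fuzz_config(base_config: dict, target_cmd: list, iterations: int = 100):
    """Mutate one setting per iteration, run the target against the
    mutated config file, and record settings whose mutation causes an
    abnormal exit or a hang -- the per-setting impact data a risk
    model can consume."""
    findings = []
    for _ in range(iterations):
        key = random.choice(list(base_config))
        mutated = dict(base_config)
        mutated[key] = mutate_value(str(base_config[key]))
        # Write the mutated configuration to a temporary key=value file.
        with tempfile.NamedTemporaryFile("w", suffix=".conf",
                                         delete=False) as f:
            for k, v in mutated.items():
                f.write(f"{k}={v}\n")
            path = f.name
        try:
            result = subprocess.run(target_cmd + [path],
                                    capture_output=True, timeout=5)
            if result.returncode != 0:  # crash or abnormal operation
                findings.append((key, mutated[key], result.returncode))
        except subprocess.TimeoutExpired:
            findings.append((key, mutated[key], "timeout"))
        finally:
            os.unlink(path)
    return findings
```

The per-setting findings list is what distinguishes this sketch from a pure bug-hunting fuzzer: it associates each abnormal outcome with the specific setting that triggered it, which is the kind of impact information a downstream risk assessment model can ingest.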
Related papers
- Divide and Conquer based Symbolic Vulnerability Detection
This paper presents a vulnerability detection approach based on symbolic execution and control flow graph analysis.
Our approach employs a divide-and-conquer algorithm to eliminate irrelevant program information.
arXiv Detail & Related papers (2024-09-20T13:09:07Z)
- Technical Upgrades to and Enhancements of a System Vulnerability Analysis Tool Based on the Blackboard Architecture
Generalization logic building on the Blackboard Architecture's rule-fact paradigm was implemented in this system.
The paper concludes with a discussion of avenues of future work, including the implementation of multithreading.
arXiv Detail & Related papers (2024-09-17T05:06:42Z)
- The Impact of SBOM Generators on Vulnerability Assessment in Python: A Comparison and a Novel Approach
Software Bill of Materials (SBOM) has been promoted as a tool to increase transparency and verifiability in software composition.
Current SBOM generation tools often suffer from inaccuracies in identifying components and dependencies.
We propose PIP-sbom, a novel pip-inspired solution that addresses their shortcomings.
arXiv Detail & Related papers (2024-09-10T10:12:37Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Risk-Driven Design of Perception Systems
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z)
- Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
How to securely develop the machine learning-based modern software systems (MLBSS) remains a big challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are largely neglected and hard to identify.
We consider that security for machine learning-based software systems may arise from inherent system defects or external adversarial attacks.
arXiv Detail & Related papers (2022-01-12T23:20:25Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- A Framework for Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems
We develop an extension to the MulVAL attack graph generation and analysis framework to incorporate cyberattacks on ML production systems.
Using the proposed extension, security practitioners can apply attack graph analysis methods in environments that include ML components.
arXiv Detail & Related papers (2021-07-05T05:58:11Z)
- Dos and Don'ts of Machine Learning in Computer Security
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.