Towards automation of threat modeling based on a semantic model of
attack patterns and weaknesses
- URL: http://arxiv.org/abs/2112.04231v1
- Date: Wed, 8 Dec 2021 11:13:47 GMT
- Title: Towards automation of threat modeling based on a semantic model of
attack patterns and weaknesses
- Authors: Andrei Brazhuk
- Abstract summary: This work considers the challenges of building and using a formal knowledge base (model).
The proposed model can be used to learn relations between attack techniques, attack patterns, weaknesses, and vulnerabilities in order to build various threat landscapes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work considers the challenges of building and using a formal
knowledge base (model) that unites the ATT&CK, CAPEC, CWE, and CVE security
enumerations. The proposed model can be used to learn relations between attack
techniques, attack patterns, weaknesses, and vulnerabilities in order to build
various threat landscapes, in particular for threat modeling. The model is
created as an ontology with freely available datasets in the OWL and RDF
formats. The use of ontologies is an alternative to structural and graph-based
approaches for integrating the security enumerations. In this work we consider
an approach to threat modeling with the data components of ATT&CK, based on the
knowledge base and an ontology-driven threat modeling framework. We also
evaluate how the ontological approach to threat modeling can be applied and
which challenges it faces.
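The abstract describes the knowledge base only at a high level. As a rough illustration, the sketch below shows how such an OWL/RDF knowledge base could be queried with Python and rdflib to walk from CAPEC attack patterns to related CWE weaknesses and CVE vulnerabilities; the file name, namespace IRI, class names, and property names are assumptions for illustration only, not the ontology's actual vocabulary.

```python
# Minimal sketch, assuming a hypothetical export of the unified ontology.
# The IRI and the sec:* class/property names are illustrative, not the
# vocabulary actually used by the paper's knowledge base.
from rdflib import Graph

g = Graph()
g.parse("security_enumerations.owl", format="xml")  # hypothetical file name

QUERY = """
PREFIX sec: <http://example.org/security#>
SELECT ?pattern ?weakness ?vulnerability WHERE {
    ?pattern   a sec:AttackPattern ;                    # CAPEC entry
               sec:hasRelatedWeakness ?weakness .       # CAPEC -> CWE link
    ?weakness  a sec:Weakness ;
               sec:hasObservedExample ?vulnerability .  # CWE -> CVE link
}
LIMIT 10
"""

# Each result row is one CAPEC -> CWE -> CVE chain.
for pattern, weakness, vulnerability in g.query(QUERY):
    print(pattern, weakness, vulnerability)
```

A query of this shape is one way the relations between attack patterns, weaknesses, and vulnerabilities mentioned in the abstract could be materialised into a threat landscape for a particular system model.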
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Cyber Knowledge Completion Using Large Language Models [1.4883782513177093]
Integrating the Internet of Things (IoT) into Cyber-Physical Systems (CPSs) has expanded their cyber-attack surface.
Assessing the risks of CPSs is increasingly difficult due to incomplete and outdated cybersecurity knowledge.
Recent advancements in Large Language Models (LLMs) present a unique opportunity to enhance cyber-attack knowledge completion.
arXiv Detail & Related papers (2024-09-24T15:20:39Z)
- Adversarial Robustness of Open-source Text Classification Models and Fine-Tuning Chains [11.379606061113348]
Open-source AI models and fine-tuning chains face new security risks, such as adversarial attacks.
This paper aims to explore the adversarial robustness of open-source AI models and their chains formed by the upstream-downstream relationships via fine-tuning.
arXiv Detail & Related papers (2024-08-06T05:17:17Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- CARACAS: vehiCular ArchitectuRe for detAiled Can Attacks Simulation [37.89720165358964]
This paper showcases CARACAS, a vehicular model, including component control via CAN messages and attack injection capabilities.
CARACAS demonstrates the efficacy of this methodology on a Battery Electric Vehicle (BEV) model, focusing on attacks targeting torque control in two distinct scenarios.
arXiv Detail & Related papers (2024-06-11T10:16:55Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- On the Opportunities and Risks of Foundation Models [256.61956234436553]
We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration.
arXiv Detail & Related papers (2021-08-16T17:50:08Z)
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization [40.37373934201329]
We investigate and develop model extraction attacks against GNN models.
We first formalise the threat modelling in the context of GNN model extraction.
We then present detailed methods which utilise the accessible knowledge in each threat to implement the attacks.
arXiv Detail & Related papers (2020-10-24T03:09:37Z)
- Adversarial Attack and Defense of Structured Prediction Models [58.49290114755019]
In this paper, we investigate attacks and defenses for structured prediction tasks in NLP.
The structured output of structured prediction models is sensitive to small perturbations in the input.
We propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-04T15:54:03Z)
- Systematic Attack Surface Reduction For Deployed Sentiment Analysis Models [0.0]
This work proposes a structured approach to baselining a model, identifying attack vectors, and securing the machine learning models after deployment.
The BAD architecture is evaluated to quantify the adversarial life cycle for a black box Sentiment Analysis system.
The goal is to demonstrate a viable methodology for securing a machine learning model in a production setting.
arXiv Detail & Related papers (2020-06-19T13:41:38Z)