Robust Attack Graph Generation
- URL: http://arxiv.org/abs/2206.07776v1
- Date: Wed, 15 Jun 2022 19:26:39 GMT
- Title: Robust Attack Graph Generation
- Authors: Dennis Mouwen, Sicco Verwer, Azqa Nadeem
- Abstract summary: We present a method to learn automaton models that are more robust to input modifications.
It iteratively aligns sequences to a learned model, modifies the sequences to their aligned versions, and re-learns the model.
- Score: 11.419463747286716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method to learn automaton models that are more robust to input
modifications. It iteratively aligns sequences to a learned model, modifies the
sequences to their aligned versions, and re-learns the model. Automaton
learning algorithms are typically very good at modeling the frequent behavior
of a software system. Our solution can be used to also learn the behavior
present in infrequent sequences, as these will be aligned to the frequent ones
represented by the model. We apply our method to the SAGE tool for modeling
attacker behavior from intrusion alerts. In experiments, we demonstrate that
our algorithm learns models that can handle noise such as symbols added to or
removed from sequences. Furthermore, it learns more concise models that fit
the training data better.
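To make the align-and-relearn loop concrete, here is a minimal sketch in which the "model" is simply the set of frequent sequences and alignment snaps each sequence to its nearest frequent one under edit distance. The actual method learns automaton models (via SAGE), so `robust_learn` and the `min_count` threshold are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the iterate-align-relearn loop (illustrative, not
# the authors' SAGE implementation): the "model" is the set of frequent
# sequences; alignment maps each sequence to its nearest frequent one.
from collections import Counter

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def learn_model(sequences, min_count=2):
    # "Learn" a model: keep only sequences seen frequently enough.
    counts = Counter(tuple(s) for s in sequences)
    return [list(s) for s, c in counts.items() if c >= min_count]

def robust_learn(sequences, iterations=3):
    for _ in range(iterations):
        model = learn_model(sequences)
        if not model:
            break
        # Align: replace each sequence by the closest frequent one, so
        # infrequent (noisy) variants merge into the frequent behavior.
        sequences = [min(model, key=lambda m: edit_distance(s, m))
                     for s in sequences]
    return learn_model(sequences)

traces = [list("abcd")] * 3 + [list("abXd"), list("acd")]  # noisy variants
print(robust_learn(traces))  # -> [['a', 'b', 'c', 'd']]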
Related papers
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving a 2-3x speedup in machine translation with minimal sacrifice in quality.
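A hedged sketch of the sampling procedure described above: training details are omitted and `conditional_probs` is a uniform placeholder for a learned masked model, but the loop shows how masking plus conditional resampling forms a Markov chain over sequences.

```python
import random

MASK = "<mask>"
VOCAB = ["a", "b", "c"]  # toy vocabulary, purely for illustration

def conditional_probs(tokens, pos):
    # Placeholder for a trained GMLM's conditional p(x_pos | x_rest);
    # uniform here just to keep the sketch runnable.
    return {v: 1.0 / len(VOCAB) for v in VOCAB}

def gibbs_sample(length=8, steps=50, seed=0):
    rng = random.Random(seed)
    tokens = [MASK] * length          # start from a fully masked state
    for _ in range(steps):
        pos = rng.randrange(length)   # choose a position to resample
        tokens[pos] = MASK            # mask it out...
        probs = conditional_probs(tokens, pos)
        # ...and redraw it from the model's conditional: one step of a
        # Markov chain whose stationary distribution follows the model.
        tokens[pos] = rng.choices(list(probs),
                                  weights=list(probs.values()))[0]
    return tokens

print(gibbs_sample())
```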
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
- LLMs can learn self-restraint through iterative self-reflection [57.26854891567574]
Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their level of knowledge and uncertainty associated with specific topics.
This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach.
We devise a utility function that can encourage the model to produce responses only when it is confident in them.
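One way to read the utility idea, sketched under assumed reward values (not the paper's exact function): answering is worth it only when the model's estimated probability of being correct clears a break-even threshold.

```python
# Illustrative self-restraint utility; the reward/penalty values are
# assumptions, not the paper's exact function.
def expected_utility_answer(p_correct, r_correct=1.0, r_wrong=-2.0):
    # Expected payoff of answering given estimated correctness p_correct.
    return p_correct * r_correct + (1.0 - p_correct) * r_wrong

def decide(answer, p_correct, r_abstain=0.0):
    # Answer only when answering beats abstaining in expectation.
    if expected_utility_answer(p_correct) > r_abstain:
        return answer
    return "I don't know."

# With r_correct=1 and r_wrong=-2, answering pays off only when
# p_correct > 2/3, so shakier topics trigger abstention.
print(decide("Paris", 0.9))  # confident -> answers
print(decide("Lyon", 0.4))   # uncertain -> abstains
```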
arXiv Detail & Related papers (2024-05-15T13:35:43Z)
- CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking [12.458135956476639]
Transformer-based code models have impressive performance in many software engineering tasks.
However, their effectiveness degrades when symbols are missing or not informative.
We propose a new method to pre-train general code models when symbols are lacking.
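A rough sketch of what attention regularization could look like here: bias self-attention toward token pairs related by program analysis instead of by (missing) symbol names. The additive-bias scheme and the `dependency_mask` input are assumptions for illustration, not CodeArt's exact formulation.

```python
# Hypothetical attention-bias scheme: add a bonus to attention logits
# for token pairs linked by program analysis (e.g., def-use edges).
import numpy as np

def biased_attention(scores, dependency_mask, bias=2.0):
    # scores: (seq, seq) raw attention logits;
    # dependency_mask: 0/1 matrix of analysis-derived token relations.
    logits = scores + bias * dependency_mask
    logits -= logits.max(axis=-1, keepdims=True)  # stable softmax
    weights = np.exp(logits)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 4))
deps = np.eye(4)  # toy "dependency" structure
print(biased_attention(scores, deps).round(2))
```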
arXiv Detail & Related papers (2024-02-19T05:13:22Z)
- SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking [60.109453252858806]
A maximum-likelihood (MLE) objective does not match the downstream use case of autoregressively generating high-quality sequences.
We formulate sequence generation as an imitation learning (IL) problem.
This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset.
Our resulting method, SequenceMatch, can be implemented without adversarial training or architectural changes.
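SequenceMatch's action space includes a backspace that lets the model undo tokens. A toy sketch of decoding with such an action follows; the policy here is a random stand-in, not a trained model.

```python
# Toy decoding loop with a backspace action; `toy_policy` is a random
# stand-in for a learned SequenceMatch policy.
import random

BACKSPACE = "<bksp>"

def toy_policy(prefix, rng):
    # A trained policy would score actions given the prefix; here we
    # backtrack 20% of the time whenever there is something to undo.
    if prefix and rng.random() < 0.2:
        return BACKSPACE
    return rng.choice(["the", "cat", "sat"])

def generate(max_steps=20, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(max_steps):
        action = toy_policy(out, rng)
        if action == BACKSPACE:
            out.pop()       # undo the last token instead of committing
        else:               # to an out-of-distribution continuation
            out.append(action)
    return out

print(" ".join(generate()))
```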
arXiv Detail & Related papers (2023-06-08T17:59:58Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
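A minimal sketch of residual-error learning under simplifying assumptions: scalar state, an affine least-squares residual model, and a direct correction rather than the paper's unscented Kalman filter.

```python
# Toy residual learning: fit (state, action) -> (real_next - sim_next)
# from sparse data, then add the prediction back at run time.
import numpy as np

def sim_step(x, u):
    return x + 0.10 * u               # simulator dynamics

def real_step(x, u):
    return x + 0.12 * u + 0.01        # "real" system with extra gain/bias

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))      # sparse data: columns (state, action)
residuals = np.array([real_step(x, u) - sim_step(x, u) for x, u in X])
A = np.hstack([X, np.ones((30, 1))])  # affine features [x, u, 1]
coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)

def corrected_step(x, u):
    # Simulator prediction plus learned residual closes the reality gap.
    return sim_step(x, u) + np.array([x, u, 1.0]) @ coef

print(corrected_step(0.5, 1.0), real_step(0.5, 1.0))  # nearly identical
```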
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Online Dynamics Learning for Predictive Control with an Application to Aerial Robots [3.673994921516517]
Even though prediction models can be learned and applied to model-based controllers, these models are often learned offline.
In this offline setting, training data is first collected and a prediction model is learned through an elaborate training procedure.
We propose an online dynamics learning framework that continually improves the accuracy of the dynamic model during deployment.
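As a sketch of what continual model improvement during deployment can look like, here is recursive least squares (RLS) updating a linear dynamics model after every observed transition; the estimator choice is an assumption, not necessarily the paper's.

```python
# RLS sketch: refine model parameters online from each transition.
import numpy as np

class OnlineDynamics:
    def __init__(self, dim):
        self.theta = np.zeros(dim)       # model parameters
        self.P = np.eye(dim) * 100.0     # parameter covariance

    def predict(self, features):
        return features @ self.theta

    def update(self, features, observed):
        # Standard recursive-least-squares update on prediction error.
        err = observed - self.predict(features)
        Pf = self.P @ features
        gain = Pf / (1.0 + features @ Pf)
        self.theta += gain * err
        self.P -= np.outer(gain, Pf)

rng = np.random.default_rng(0)
model = OnlineDynamics(dim=2)
for _ in range(100):                     # deployment loop
    x, u = rng.standard_normal(2)
    next_x = 0.9 * x + 0.3 * u           # unknown true dynamics
    model.update(np.array([x, u]), next_x)
print(model.theta)                       # -> approximately [0.9, 0.3]
```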
arXiv Detail & Related papers (2022-07-19T15:51:25Z)
- Debugging using Orthogonal Gradient Descent [7.766921168069532]
Given a trained model that is partially faulty, can we correct its behaviour without having to train the model from scratch?
In other words, can we "debug" neural networks similarly to how we address bugs in our mathematical models and standard computer code?
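The orthogonal-gradient idea in a simplified sketch: project the faulty example's gradient onto the subspace orthogonal to gradients of examples the model already gets right, so the fix leaves them (to first order) untouched. This is exact for a single retained gradient; a full implementation would orthogonalize the retained set first.

```python
# Simplified orthogonal-gradient update; exact when retained gradients
# are mutually orthogonal (Gram-Schmidt them first otherwise).
import numpy as np

def orthogonal_update(fix_grad, correct_grads, lr=0.1):
    g = fix_grad.astype(float)
    for c in correct_grads:
        c = c / (np.linalg.norm(c) + 1e-12)
        g = g - (g @ c) * c   # remove the component along c
    return -lr * g            # descend only in the "safe" subspace

fix = np.array([1.0, 1.0])            # gradient of the faulty example
keep = [np.array([1.0, 0.0])]         # gradient of a correct example
print(orthogonal_update(fix, keep))   # -> moves only along the y-axis
```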
arXiv Detail & Related papers (2022-06-17T00:03:54Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
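A hedged sketch of the distillation step: sample a trained model on a grid, fit line segments, and emit a plain if/elif function. Uniform knots are an assumption here; the paper's curve-fitting algorithm places breakpoints adaptively.

```python
# Distill a 1-D model into readable code via piecewise-linear fitting;
# knot placement is simplified to a uniform grid.
import numpy as np

def distill(model_fn, lo, hi, n_knots=4):
    knots = np.linspace(lo, hi, n_knots)
    ys = [model_fn(k) for k in knots]
    lines = ["def scorer(x):"]
    for i in range(n_knots - 1):
        slope = (ys[i + 1] - ys[i]) / (knots[i + 1] - knots[i])
        guard = "if" if i == 0 else "elif"
        lines.append(f"    {guard} x <= {knots[i + 1]:.3g}:")
        lines.append(f"        return {ys[i]:.3g} + "
                     f"{slope:.3g} * (x - {knots[i]:.3g})")
    lines.append(f"    return {ys[-1]:.3g}")
    return "\n".join(lines)

print(distill(lambda x: x * x, 0.0, 3.0))  # readable approximation of x^2
```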
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD: DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create behavioral models from observations.
The algorithm has been applied to several data sets, including two from real systems, and has shown promising results.
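A sketch of the timed-automaton half of such an approach (the deep-learning feature extraction from raw signals is omitted): learn per-transition timing bounds from normal traces, then flag unknown transitions or out-of-bounds time gaps.

```python
# Learn allowed event transitions and timing bounds from normal traces;
# flag observations that break either. A simplified illustration.
def learn_timed_automaton(traces):
    # traces: lists of (event, timestamp); record min/max gap per edge.
    bounds = {}
    for trace in traces:
        for (e1, t1), (e2, t2) in zip(trace, trace[1:]):
            gap = t2 - t1
            lo, hi = bounds.get((e1, e2), (gap, gap))
            bounds[(e1, e2)] = (min(lo, gap), max(hi, gap))
    return bounds

def detect(trace, bounds, slack=0.1):
    anomalies = []
    for (e1, t1), (e2, t2) in zip(trace, trace[1:]):
        edge, gap = (e1, e2), t2 - t1
        if edge not in bounds:
            anomalies.append(("unknown transition", edge))
        else:
            lo, hi = bounds[edge]
            if not (lo - slack <= gap <= hi + slack):
                anomalies.append(("timing violation", edge, gap))
    return anomalies

normal = [[("fill", 0), ("heat", 2.0), ("drain", 5.0)],
          [("fill", 0), ("heat", 2.2), ("drain", 5.1)]]
bounds = learn_timed_automaton(normal)
print(detect([("fill", 0), ("heat", 9), ("drain", 10)], bounds))
```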
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.