Radial Spike and Slab Bayesian Neural Networks for Sparse Data in
Ransomware Attacks
- URL: http://arxiv.org/abs/2205.14759v1
- Date: Sun, 29 May 2022 20:18:14 GMT
- Title: Radial Spike and Slab Bayesian Neural Networks for Sparse Data in
Ransomware Attacks
- Authors: Jurijs Nazarovs, Jack W. Stokes, Melissa Turcotte, Justin Carroll,
Itai Grady
- Abstract summary: We propose a new type of Bayesian Neural Network that includes a novel form of approximate posterior distribution.
We demonstrate the performance of our model on a real dataset of ransomware attacks and show improvement over a large number of baselines.
In addition, we propose to represent low-level events as MITRE ATT&CK tactics, techniques, and procedures (TTPs), which allows the model to better generalize to unseen ransomware attacks.
- Score: 7.599718568619666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ransomware attacks are increasing at an alarming rate, leading to large
financial losses, unrecoverable encrypted data, data leakage, and privacy
concerns. The prompt detection of ransomware attacks is required to minimize
further damage, particularly during the encryption stage. However, the
frequency and structure of the observed ransomware attack data make this task
difficult to accomplish in practice. The data corresponding to ransomware
attacks represents temporal, high-dimensional sparse signals, with limited
records and very imbalanced classes. While traditional deep learning models
have been able to achieve state-of-the-art results in a wide variety of
domains, Bayesian Neural Networks, which are a class of probabilistic models,
are better suited to these characteristics of ransomware data. These models combine
ideas from Bayesian statistics with the rich expressive power of neural
networks. In this paper, we propose the Radial Spike and Slab Bayesian Neural
Network, a new type of Bayesian Neural Network with a novel form of
approximate posterior distribution. The model scales well to large
architectures and recovers the sparse structure of target functions. We provide
a theoretical justification for using this type of distribution, as well as a
computationally efficient method to perform variational inference. We
demonstrate the performance of our model on a real dataset of ransomware
attacks and show improvement over a large number of baselines, including
state-of-the-art models such as Neural ODEs (ordinary differential equations).
In addition, we propose to represent low-level events as MITRE ATT&CK tactics,
techniques, and procedures (TTPs), which allows the model to better generalize
to unseen ransomware attacks.
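As a concrete illustration of the proposed posterior, the sketch below shows how a single linear layer with a spike-and-slab variational posterior and a radial slab could be written in PyTorch. This is a minimal sketch based only on the abstract, not the authors' implementation: the class name, hyper-parameters, and the relaxed-Bernoulli treatment of the spike are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of a linear layer whose weights follow a
# spike-and-slab variational posterior with a radial slab. All names and constants
# below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RadialSpikeSlabLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Slab parameters: mean and log-scale of the radial posterior.
        self.mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        # Spike parameters: per-weight logit of the inclusion probability.
        self.spike_logit = nn.Parameter(torch.zeros(out_features, in_features))

    def sample_weights(self):
        # Radial reparameterisation: the noise direction is a normalised Gaussian
        # sample and the radius is an independent scalar Gaussian sample, which
        # avoids the "soap bubble" pathology of mean-field Gaussians in high dimensions.
        eps = torch.randn_like(self.mu)
        direction = eps / eps.norm()
        radius = torch.randn(1, device=eps.device)
        slab = self.mu + self.log_sigma.exp() * direction * radius
        # Spike: a relaxed Bernoulli gate keeps many weights near zero while
        # remaining differentiable (a Gumbel/concrete relaxation, assumed here).
        gate = torch.distributions.RelaxedBernoulli(
            temperature=torch.tensor(0.1, device=eps.device),
            logits=self.spike_logit,
        ).rsample()
        return gate * slab

    def forward(self, x):
        return F.linear(x, self.sample_weights())
```

In a full variational-inference setup such a layer would be trained by maximizing an evidence lower bound, i.e. the expected log-likelihood of the data minus a KL term between this approximate posterior and a spike-and-slab prior; the computationally efficient inference scheme claimed in the abstract is the authors' own and is not reproduced here.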
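The abstract's second proposal, encoding low-level events as MITRE ATT&CK TTPs, can be illustrated with a small hypothetical example: raw host events are mapped to technique IDs and collapsed into a sparse multi-hot feature vector per observation window. The event names and the event-to-technique mapping below are invented for illustration; real mappings would come from analyst-curated detection rules rather than this dictionary.

```python
# Hypothetical illustration of turning low-level events into an ATT&CK TTP
# multi-hot feature vector. The mapping below is a made-up example, not the
# paper's actual feature set.
EVENT_TO_TECHNIQUE = {
    "vssadmin_delete_shadows": "T1490",  # Inhibit System Recovery
    "mass_file_rename": "T1486",         # Data Encrypted for Impact
    "lsass_memory_read": "T1003",        # OS Credential Dumping
}

TECHNIQUES = sorted(set(EVENT_TO_TECHNIQUE.values()))
INDEX = {t: i for i, t in enumerate(TECHNIQUES)}


def events_to_ttp_vector(events):
    """Collapse a window of low-level events into a multi-hot TTP vector."""
    vec = [0] * len(TECHNIQUES)
    for event in events:
        technique = EVENT_TO_TECHNIQUE.get(event)
        if technique is not None:
            vec[INDEX[technique]] = 1
    return vec


print(events_to_ttp_vector(["mass_file_rename", "vssadmin_delete_shadows"]))
# -> [0, 1, 1]  (T1003 absent; T1486 and T1490 present)
```

Because unseen ransomware families tend to reuse known techniques even when their low-level indicators differ, features at this level of abstraction are what the abstract credits for better generalization to unseen attacks.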
Related papers
- Ransomware Detection and Classification Using Random Forest: A Case Study with the UGRansome2024 Dataset [0.0]
We introduce UGRansome2024, an optimised dataset for ransomware detection in network traffic.
This dataset is derived from the UGRansome data using an intuitionistic feature engineering approach.
The study presents an analysis of ransomware detection using the UGRansome2024 dataset and the Random Forest algorithm.
arXiv Detail & Related papers (2024-04-19T12:50:03Z)
- Advancing DDoS Attack Detection: A Synergistic Approach Using Deep Residual Neural Networks and Synthetic Oversampling [2.988269372716689]
We introduce an enhanced approach for DDoS attack detection by leveraging the capabilities of Deep Residual Neural Networks (ResNets).
We balance the representation of benign and malicious data points, enabling the model to better discern intricate patterns indicative of an attack.
Experimental results on a real-world dataset demonstrate that our approach achieves an accuracy of 99.98%, significantly outperforming traditional methods.
arXiv Detail & Related papers (2024-01-06T03:03:52Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Transpose Attack: Stealing Datasets with Bidirectional Training [4.166238443183223]
We show that adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
We propose a novel approach for detecting infected models.
arXiv Detail & Related papers (2023-11-13T15:14:50Z)
- Zero Day Threat Detection Using Graph and Flow Based Security Telemetry [3.3029515721630855]
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack and exploit information technology (IT) networks or infrastructure.
In this paper, we introduce a deep learning based approach to Zero Day Threat detection that can generalize, scale, and effectively identify threats in near real-time.
arXiv Detail & Related papers (2022-05-04T19:30:48Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection [26.593268413299228]
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data.
DeepSight is a novel model filtering approach for mitigating backdoor attacks.
We show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
arXiv Detail & Related papers (2022-01-03T17:10:07Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
MAP causes natural images to be misclassified with high probability after being updated through only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)