MalBERT: Using Transformers for Cybersecurity and Malicious Software Detection
- URL: http://arxiv.org/abs/2103.03806v1
- Date: Fri, 5 Mar 2021 17:09:46 GMT
- Title: MalBERT: Using Transformers for Cybersecurity and Malicious Software Detection
- Authors: Abir Rahali and Moulay A. Akhloufi
- Abstract summary: Transformers, a category of attention-based deep learning techniques, have recently shown impressive results in solving different tasks.
We propose a model based on BERT (Bidirectional Encoder Representations from Transformers) which performs a static analysis on the source code of Android applications.
The obtained results are promising and show the high performance obtained by Transformer-based models for malicious software detection.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years we have witnessed an increase in cyber threats and malicious software attacks on different platforms with important consequences to persons and businesses. It has become critical to find automated machine learning techniques to proactively defend against malware. Transformers, a category of attention-based deep learning techniques, have recently shown impressive results in solving different tasks mainly related to the field of Natural Language Processing (NLP). In this paper, we propose the use of a Transformer architecture to automatically detect malicious software. We propose a model based on BERT (Bidirectional Encoder Representations from Transformers) which performs a static analysis on the source code of Android applications using preprocessed features to characterize existing malware and classify it into different representative malware categories. The obtained results are promising and show the high performance obtained by Transformer-based models for malicious software detection.
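To make the described setup concrete, here is a minimal sketch of fine-tuning a BERT classifier on preprocessed textual features extracted from Android apps, using the HuggingFace transformers library. The label set, feature strings, and hyperparameters below are invented for illustration; the paper defines MalBERT's actual preprocessing and malware categories.

```python
# Hypothetical fine-tuning loop: BERT over preprocessed Android app features
# (e.g., manifest permissions rendered as text), classified into malware
# categories. All names and values are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForSequenceClassification, BertTokenizerFast

CATEGORIES = ["benign", "adware", "ransomware", "scareware", "smsware"]  # assumed

class ApkTextDataset(Dataset):
    """Wraps preprocessed app features (e.g., manifest strings, permissions)."""
    def __init__(self, texts, labels, tokenizer, max_len=512):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return {**{k: v[i] for k, v in self.enc.items()}, "labels": self.labels[i]}

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(CATEGORIES))

# Toy stand-ins for real preprocessed application source features.
texts = ["android.permission.SEND_SMS android.permission.READ_CONTACTS service sms"]
labels = [4]  # "smsware" in the assumed label set
loader = DataLoader(ApkTextDataset(texts, labels, tokenizer), batch_size=8)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:
    loss = model(**batch).loss  # cross-entropy over the category labels
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice the input text would come from the decompiled application and its manifest, truncated or chunked to BERT's 512-token limit.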
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network-based framework, forcing MASKDROID to recover the whole input graph (a toy sketch of the masking idea follows this entry).
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
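A minimal sketch of the masking-and-reconstruction mechanism the MASKDROID summary describes, in plain PyTorch on a toy graph. The real architecture, graph construction, and loss are defined in the paper; every layer and dimension here is an assumption.

```python
# Mask some node features, run one round of mean-neighbor message passing,
# and train the network to reconstruct the hidden features from graph context.
import torch
import torch.nn as nn

num_nodes, feat_dim, hidden = 6, 16, 32
x = torch.randn(num_nodes, feat_dim)            # stand-in node features

adj = torch.eye(num_nodes)
idx = torch.arange(num_nodes - 1)
adj[idx, idx + 1] = 1.0
adj[idx + 1, idx] = 1.0                         # a simple chain graph

mask = torch.tensor([True, False, True, False, False, True])  # nodes to hide
x_masked = x.clone()
x_masked[mask] = 0.0                            # zero out masked node features

encoder = nn.Linear(feat_dim, hidden)
decoder = nn.Linear(hidden, feat_dim)

deg = adj.sum(dim=1, keepdim=True)
h = torch.relu(encoder(adj @ x_masked / deg))   # mean-aggregate neighbors
x_recon = decoder(h)

# Reconstruction loss only on masked nodes pushes the encoder to infer node
# semantics from context, which is what the summary credits for robustness.
loss = nn.functional.mse_loss(x_recon[mask], x[mask])
loss.backward()
```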
- Zero Day Ransomware Detection with Pulse: Function Classification with Transformer Models and Assembly Language [1.870031206586792]
Peekaboo, a Dynamic Binary Instrumentation tool, defeats evasive malware to capture its genuine behavior.
We propose Pulse, a novel framework for zero-day ransomware detection with Transformer models and assembly language (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-08-15T00:22:32Z)
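The entry above implies treating each disassembled function as a token sequence for a Transformer classifier. Here is a minimal sketch under that assumption; the vocabulary, model size, and binary head are invented, and Peekaboo's actual instrumentation output is not reproduced.

```python
# Toy assembly-function classifier: opcode/operand tokens -> small Transformer
# encoder -> ransomware/benign logits. Purely illustrative dimensions.
import torch
import torch.nn as nn

VOCAB = {"<pad>": 0, "mov": 1, "xor": 2, "call": 3, "loop": 4, "eax": 5, "ebx": 6}

class AsmFunctionClassifier(nn.Module):
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)      # ransomware vs. benign

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))
        return self.head(h.mean(dim=1))        # mean-pool over instructions

# One toy function: "xor eax eax; call; loop" plus padding.
tokens = torch.tensor([[2, 5, 5, 3, 4, 0, 0, 0]])
logits = AsmFunctionClassifier(len(VOCAB))(tokens)
```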
- Body Transformer: Leveraging Robot Embodiment for Policy Learning [51.531793239586165]
Body Transformer (BoT) is an architecture that leverages the robot embodiment by providing an inductive bias that guides the learning process.
We represent the robot body as a graph of sensors and actuators, and rely on masked attention to pool information throughout the architecture (a toy masked-attention sketch follows this entry).
The resulting architecture outperforms the vanilla transformer, as well as the classical multilayer perceptron, in terms of task completion, scaling properties, and computational efficiency.
arXiv Detail & Related papers (2024-08-12T17:31:28Z)
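A toy rendition of the masked-attention idea above: derive the attention mask from a body-graph adjacency so each sensor/actuator node attends only to itself and its neighbors. The four-node chain below is invented for illustration.

```python
# Embodiment-masked attention: the adjacency of a toy kinematic chain
# (0-1, 1-2, 2-3) restricts which body nodes can attend to each other.
import torch
import torch.nn as nn

adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.bool)

d_model, nhead = 32, 4
attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
x = torch.randn(1, 4, d_model)       # one feature vector per body node

# PyTorch's attn_mask uses True to BLOCK a position, so invert the adjacency.
out, _ = attn(x, x, x, attn_mask=~adj)
```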
- A Lean Transformer Model for Dynamic Malware Analysis and Detection [0.0]
Malware is a fast-growing threat to the modern computing world, and existing lines of defense are not efficient enough to address this issue.
Previous works have shown some success leveraging Neural Networks and API calls sequences extracted from execution reports.
In this paper, we design an emulation-only model, based on the Transformer architecture, to detect malicious files.
arXiv Detail & Related papers (2024-08-05T08:46:46Z)
- Detecting Android Malware: From Neural Embeddings to Hands-On Validation with BERTroid [0.38233569758620056]
We present BERTroid, an innovative malware detection model built on the BERT architecture.
BERTroid has emerged as a promising solution for combating Android malware, demonstrating resilience against the rapid evolution of malware on Android systems.
arXiv Detail & Related papers (2024-05-06T16:35:56Z)
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
The Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data (a minimal self-attention sketch follows this entry).
Transformer models excel at handling long dependencies between input sequence elements and enable parallel processing.
Our survey encompasses the identification of the top five application domains for transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
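Since self-attention is the mechanism this survey centers on, a minimal single-head scaled dot-product version in plain PyTorch may help. The projection matrices are random stand-ins, with no multi-head splitting or masking.

```python
# Scaled dot-product self-attention: every position attends to every other,
# which yields long-range dependencies and fully parallel sequence processing.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
    return torch.softmax(scores, dim=-1) @ v

seq_len, d_model = 5, 8
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)   # shape: (5, 8)
```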
- Learning Transformer Programs [78.9509560355733]
We introduce a procedure for training Transformers that are mechanistically interpretable by design.
Instead of compiling human-written programs into Transformers, we design a modified Transformer that can be trained using gradient-based optimization.
The Transformer Programs can automatically find reasonable solutions, performing on par with standard Transformers of comparable size.
arXiv Detail & Related papers (2023-06-01T20:27:01Z)
- Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN [77.0623472106488]
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the necessary high precision (a toy sketch of the idea follows this entry).
We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes.
arXiv Detail & Related papers (2023-05-03T05:57:29Z)
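A toy version of the confidence-mask idea above: sample several mask hypotheses from latent codes and keep only the pixels on which all samples agree. The decoder below is a random stand-in, not Latent-MaskRCNN.

```python
# Sample K mask hypotheses from latent codes; the high-confidence mask keeps
# only pixels where every hypothesis agrees, trading recall for precision.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 16 * 16))

K = 10
z = torch.randn(K, 8)                           # K latent codes
probs = torch.sigmoid(decoder(z)).view(K, 16, 16)
masks = probs > 0.5                             # K binary mask hypotheses

confidence_mask = masks.all(dim=0)              # unanimous pixels only
print("fraction kept:", confidence_mask.float().mean().item())
```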
- An Ensemble of Pre-trained Transformer Models For Imbalanced Multiclass Malware Classification [0.0]
API call sequences made by malware are widely used as features by machine and deep learning models for malware classification.
Traditional machine and deep learning models remain incapable of capturing sequence relationships between API calls.
Our experiments demonstrate that a Transformer with a single encoder block surpasses the widely used LSTM baseline.
Pre-trained Transformer models such as BERT and CANINE performed best on highly imbalanced malware families, as measured by F1-score and AUC (an illustrative class-weighting sketch follows this entry).
arXiv Detail & Related papers (2021-12-25T13:40:07Z)
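To illustrate the imbalance-handling theme of this entry, here is a small sketch of inverse-frequency class weighting plus macro-F1/AUC evaluation. The class counts and stand-in logits are invented; the paper's actual ensemble, models, and datasets differ.

```python
# Class-weighted training and imbalance-aware evaluation: weight each malware
# family inversely to its frequency, and report macro F1 and one-vs-rest AUC,
# which, unlike accuracy, are not dominated by the majority family.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score, roc_auc_score

counts = torch.tensor([900.0, 50.0, 30.0, 20.0])   # invented family counts
weights = counts.sum() / (len(counts) * counts)    # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(200, 4)                       # stand-in model outputs
labels = torch.randint(0, 4, (200,))
loss = criterion(logits, labels)                   # would drive training

probs = torch.softmax(logits, dim=1).numpy()
print("macro F1:", f1_score(labels, probs.argmax(axis=1), average="macro"))
print("ovr AUC :", roc_auc_score(labels, probs, multi_class="ovr"))
```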
- Towards an Automated Pipeline for Detecting and Classifying Malware through Machine Learning [0.0]
We propose a malware taxonomic classification pipeline able to classify Windows Portable Executable files (PEs).
Given an input PE sample, it is first classified as either malicious or benign.
If malicious, the pipeline further analyzes it in order to establish its threat type, family, and behavior(s) (a two-stage sketch follows this entry).
arXiv Detail & Related papers (2021-06-10T10:07:50Z)
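A minimal sketch of the two-stage flow above: a binary malicious/benign gate followed by a family classifier for samples flagged malicious. It assumes scikit-learn and invented stand-in features; the paper's actual PE feature extraction and models are not reproduced.

```python
# Two-stage malware triage: stage 1 decides malicious vs. benign; only samples
# flagged malicious reach stage 2 for family classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))         # stand-in PE feature vectors
y_bin = rng.integers(0, 2, 300)        # 0 = benign, 1 = malicious
y_family = rng.integers(0, 3, 300)     # family labels (used when malicious)

gate = RandomForestClassifier(random_state=0).fit(X, y_bin)
family_clf = RandomForestClassifier(random_state=0).fit(
    X[y_bin == 1], y_family[y_bin == 1])

def classify(sample):
    if gate.predict(sample.reshape(1, -1))[0] == 0:
        return "benign"
    family = family_clf.predict(sample.reshape(1, -1))[0]
    return f"malicious / family {family}"

print(classify(X[0]))
```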
- Transformers in Vision: A Survey [101.07348618962111]
Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences.
Transformers require minimal inductive biases for their design and are naturally suited as set-functions.
This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline.
arXiv Detail & Related papers (2021-01-04T18:57:24Z)