SeqNet: An Efficient Neural Network for Automatic Malware Detection
- URL: http://arxiv.org/abs/2205.03850v1
- Date: Sun, 8 May 2022 12:31:35 GMT
- Title: SeqNet: An Efficient Neural Network for Automatic Malware Detection
- Authors: Jiawei Xu and Wenxuan Fu and Haoyu Bu and Zhi Wang and Lingyun Ying
- Abstract summary: We propose a lightweight malware detection model called SeqNet that can be trained quickly, with low memory requirements, on raw binaries.
By avoiding contextual confusion and reducing semantic loss, SeqNet maintains detection accuracy while reducing the parameter count to only 136K.
- Score: 5.365259648024797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Malware continues to evolve rapidly, and more than 450,000 new samples are
captured every day, which makes manual malware analysis impractical. However,
existing deep learning detection models either need manual feature engineering,
which makes selecting a feature space laborious, or require high computational
overhead for long training processes, which makes retraining to mitigate model
aging difficult. Therefore, a crucial requirement for a detector is to realize
automatic and efficient detection. In this paper, we propose a lightweight
malware detection model called SeqNet that can be trained quickly, with low
memory requirements, directly on raw binaries. By avoiding contextual confusion
and reducing semantic loss, SeqNet maintains detection accuracy while reducing
the parameter count to only 136K. We demonstrate the effectiveness of our
methods and the low training cost of SeqNet in our experiments. Besides, we
make our datasets and code public to stimulate further academic research.
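To make the setup concrete, here is a minimal sketch of a lightweight classifier over raw bytes in the spirit of SeqNet. The layer names, sizes, and the depthwise-separable design are illustrative assumptions, not the paper's exact architecture; only the goal of a small parameter budget on raw binaries is taken from the abstract.

```python
# Minimal sketch of a lightweight raw-byte malware classifier in PyTorch.
# Layer names and sizes are illustrative assumptions, not SeqNet's exact design.
import torch
import torch.nn as nn

class TinyByteClassifier(nn.Module):
    def __init__(self, embed_dim=16, channels=64, num_blocks=4):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)           # one vector per byte value
        self.stem = nn.Conv1d(embed_dim, channels, kernel_size=8, stride=8)
        blocks = []
        for _ in range(num_blocks):
            blocks += [
                # depthwise + pointwise convolution keeps the parameter count small
                nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=channels),
                nn.Conv1d(channels, channels, kernel_size=1),
                nn.ReLU(),
                nn.MaxPool1d(4),
            ]
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Linear(channels, 1)                  # benign vs. malicious logit

    def forward(self, byte_ids):                            # byte_ids: (batch, length) byte values
        x = self.embed(byte_ids.long()).transpose(1, 2)     # -> (batch, embed_dim, length)
        x = self.blocks(self.stem(x))
        x = x.mean(dim=-1)                                  # global average pooling over positions
        return self.head(x).squeeze(-1)

model = TinyByteClassifier()
print(sum(p.numel() for p in model.parameters()))           # stays in the tens of thousands
logits = model(torch.randint(0, 256, (2, 4096)))
```

With these illustrative sizes the model stays around 30K parameters, well under the 136K the paper reports, while still reading a binary end to end with no hand-crafted features.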
Related papers
- A Fresh Take on Stale Embeddings: Improving Dense Retriever Training with Corrector Networks [81.2624272756733]
In dense retrieval, deep encoders provide embeddings for both inputs and targets.
We train a small parametric corrector network that adjusts stale cached target embeddings.
Our approach matches state-of-the-art results even when no target embedding updates are made during training.
arXiv Detail & Related papers (2024-09-03T13:29:13Z)
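A hedged sketch of the corrector idea described above: a small network maps stale cached target embeddings toward what the current encoder would produce. The residual form, dimensions, and MSE objective are assumptions for illustration, not the paper's exact training recipe.

```python
# Hedged sketch of correcting stale cached embeddings with a small MLP.
# Shapes and the residual design are illustrative assumptions.
import torch
import torch.nn as nn

dim = 128
corrector = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def corrected(stale_cached):                        # (num_targets, dim) from an old encoder
    return stale_cached + corrector(stale_cached)   # residual correction

# Training signal: match fresh embeddings on the few targets we can afford to re-encode.
stale = torch.randn(32, dim)
fresh = torch.randn(32, dim)                        # stand-in for current-encoder outputs
loss = nn.functional.mse_loss(corrected(stale), fresh)
loss.backward()
```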
- Explainable AI for Comparative Analysis of Intrusion Detection Models [20.683181384051395]
This research applies various machine learning models to the tasks of binary and multi-class intrusion detection on network traffic.
We trained all models to 90% accuracy on the UNSW-NB15 dataset.
We also find that Random Forest provides the best performance in terms of accuracy, time efficiency, and robustness.
arXiv Detail & Related papers (2024-06-14T03:11:01Z)
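The Random Forest setup above is easy to reproduce in spirit with scikit-learn; the snippet below uses synthetic data as a stand-in for the UNSW-NB15 flow features, so the hyperparameters and numbers are illustrative only.

```python
# Sketch of a Random Forest baseline on tabular flow features.
# Synthetic data stands in for the real UNSW-NB15 dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=40, n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```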
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often constrained to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
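One way to read "harder splits" is to hold out the samples a simple reference model finds hardest, instead of splitting at random. The sketch below illustrates that idea only; the paper's actual benchmark-construction procedure is more involved, and the margin-based criterion here is an assumption.

```python
# Illustrative only: build a harder test set from the samples a simple
# reference model is least confident about, rather than splitting at random.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5, method="predict_proba")
margin = np.abs(proba[:, 1] - 0.5)              # low margin = hard sample
hard = np.argsort(margin)[: len(y) // 5]        # hardest 20% become the test set
test_mask = np.zeros(len(y), dtype=bool)
test_mask[hard] = True
X_train, y_train = X[~test_mask], y[~test_mask]
X_test, y_test = X[test_mask], y[test_mask]
```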
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
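The core interval bound propagation step that QA-IBP builds on is standard and easy to show: bounds are carried through each layer as (center, radius) pairs. The quantization-aware part of the method is not captured in this sketch.

```python
# Worked sketch of interval bound propagation (IBP) through one linear layer
# and a ReLU. For x in [center - radius, center + radius], W @ x + b lies in
# [Wc + b - |W|r, Wc + b + |W|r].
import torch

def ibp_linear(center, radius, weight, bias):
    new_center = center @ weight.t() + bias
    new_radius = radius @ weight.abs().t()
    return new_center, new_radius

def ibp_relu(center, radius):
    lower = torch.relu(center - radius)
    upper = torch.relu(center + radius)
    return (lower + upper) / 2, (upper - lower) / 2

W, b = torch.randn(3, 4), torch.randn(3)
c, r = torch.zeros(1, 4), torch.full((1, 4), 0.1)   # L-inf ball of radius 0.1
c, r = ibp_relu(*ibp_linear(c, r, W, b))
print(c - r, c + r)                                 # certified output bounds
```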
- Model2Detector: Widening the Information Bottleneck for Out-of-Distribution Detection using a Handful of Gradient Steps [12.263417500077383]
Out-of-distribution detection is an important capability that has long eluded vanilla neural networks.
Recent advances in inference-time out-of-distribution detection help mitigate some of these problems.
We show how our method consistently outperforms the state-of-the-art in detection accuracy on popular image datasets.
arXiv Detail & Related papers (2022-02-22T23:03:40Z)
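For context, the snippet below shows the maximum-softmax-probability baseline, one of the standard inference-time OOD scores this line of work improves on. It is not Model2Detector's own method, and the threshold is an illustrative assumption.

```python
# Maximum softmax probability (MSP): a classic inference-time OOD baseline.
import torch
import torch.nn.functional as F

def msp_score(logits):
    # Higher max softmax probability = more likely in-distribution.
    return F.softmax(logits, dim=-1).max(dim=-1).values

logits = torch.randn(8, 10)                 # stand-in classifier outputs
is_ood = msp_score(logits) < 0.5            # threshold would be tuned on validation data
```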
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
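The underlying subspace trick can be sketched as follows: keep two endpoint weight sets and evaluate any point on the line between them, so a single training run yields a spectrum of models. How LCS ties the interpolation coefficient to sparsity or bit width is omitted; this illustrates the general idea, not the paper's algorithm.

```python
# Sketch of a linear subspace of weights: interpolate between two endpoints
# to pick an accuracy-efficiency trade-off at inference time.
import copy
import torch
import torch.nn as nn

net_a = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
net_b = copy.deepcopy(net_a)
for p in net_b.parameters():                # second endpoint starts perturbed
    p.data += 0.01 * torch.randn_like(p)

def interpolate(alpha):
    model = copy.deepcopy(net_a)
    for p, pa, pb in zip(model.parameters(), net_a.parameters(), net_b.parameters()):
        p.data = (1 - alpha) * pa.data + alpha * pb.data
    return model

model_mid = interpolate(0.5)                # any alpha in [0, 1] gives a usable model
print(model_mid(torch.randn(1, 10)))
```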
- A comparative study of neural network techniques for automatic software vulnerability detection [9.443081849443184]
The most commonly used method for detecting software vulnerabilities is static analysis.
Some researchers have proposed using neural networks, which can extract features automatically, to make detection more intelligent.
We have conducted extensive experiments to test the performance of the two most typical neural networks.
arXiv Detail & Related papers (2021-04-29T01:47:30Z)
- Improving Botnet Detection with Recurrent Neural Network and Transfer Learning [5.602292536933117]
Botnet detection is a critical step in stopping the spread of botnets and preventing malicious activities.
Recent approaches employing machine learning (ML) have shown improved performance over earlier ones.
We propose a novel botnet detection method built upon a Recurrent Variational Autoencoder (RVAE).
arXiv Detail & Related papers (2021-04-26T14:05:01Z)
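A simplified sketch of detection via sequence reconstruction error, assuming per-flow statistics as input features: the paper trains a full Recurrent Variational Autoencoder, while this deterministic GRU autoencoder only illustrates the anomaly-scoring idea.

```python
# Simplified sequence autoencoder: high reconstruction error flags anomalous
# (potentially botnet) traffic. The paper's model is a full variational RVAE.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, n_features=16, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, features) flow statistics
        _, h = self.encoder(x)
        z = h[-1].unsqueeze(1).expand(-1, x.size(1), -1)   # repeat latent per time step
        dec, _ = self.decoder(z)
        return self.out(dec)

model = SeqAutoencoder()
flows = torch.randn(4, 20, 16)
score = ((model(flows) - flows) ** 2).mean(dim=(1, 2))     # high error = likely anomalous
```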
- Edge-Detect: Edge-centric Network Intrusion Detection using Deep Neural Network [0.0]
Edge nodes are crucial for defending Internet-of-Things endpoints against a multitude of cyber attacks.
We develop a novel light, fast, and accurate 'Edge-Detect' model, which detects Denial-of-Service attacks on edge nodes using DLM techniques.
arXiv Detail & Related papers (2021-02-03T04:24:34Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering approach whose computational complexity does not scale with the number of labels and which relies on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
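A generic trigger reverse-engineering step, in the spirit of the earlier work this paper improves on: optimize a small patch and mask so that stamped inputs are pushed toward a suspected target label. The paper's contribution, making the analysis independent of the number of labels, is not captured here; the model, shapes, and sparsity weight are illustrative assumptions.

```python
# Generic trigger reverse-engineering: recover a candidate backdoor patch by
# optimizing a mask + pattern that flips predictions to a suspected label.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
images = torch.rand(16, 1, 28, 28)
target_label = torch.full((16,), 3)                           # suspected Trojan target

patch = torch.zeros(1, 28, 28, requires_grad=True)
mask_logit = torch.zeros(1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([patch, mask_logit], lr=0.1)

for _ in range(100):
    mask = torch.sigmoid(mask_logit)
    stamped = (1 - mask) * images + mask * patch
    # push stamped inputs toward the target label while keeping the mask small
    loss = F.cross_entropy(model(stamped), target_label) + 0.01 * mask.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
# A suspiciously small recovered mask for some label suggests a Trojan trigger.
```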
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.