Mask Off: Analytic-based Malware Detection By Transfer Learning and Model Personalization
- URL: http://arxiv.org/abs/2211.10843v1
- Date: Sun, 20 Nov 2022 01:52:15 GMT
- Title: Mask Off: Analytic-based Malware Detection By Transfer Learning and Model Personalization
- Authors: Amirmohammad Pasdar and Young Choon Lee and Seok-Hee Hong
- Abstract summary: This paper presents ADAM, an analytic-based deep neural network for Android malware detection that trains feature-specific DNNs and adapts to new applications via transfer learning and model personalization.
- Score: 7.717537870226507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vulnerability of smartphones to cyberattacks is a severe
concern for users, stemming from the integrity of installed applications
(apps). Although apps are meant to provide legitimate and diversified
on-the-go services, harmful and dangerous ones have also found feasible ways
to penetrate smartphones for malicious behaviors. Thorough application
analysis is key to revealing malicious intent and providing deeper insight
into application behavior for security risk assessment. Such in-depth
analysis motivates employing deep neural networks (DNNs) on sets of features
and patterns extracted from applications to detect potentially dangerous
applications independently. This paper presents ADAM, an analytic-based deep
neural network for Android malware detection that employs a fine-grained set
of features to train feature-specific DNNs, which reach consensus on
application labels when the ground truth is unknown. In addition, ADAM
leverages transfer learning to adapt to new applications across smartphones,
recycling its pre-trained model(s) and making them more adaptable through
model personalization and federated learning. This adaptability is further
assisted by federated learning guards, which protect ADAM against poisoning
attacks through model analysis. ADAM is trained on a diverse dataset of more
than 153,000 applications with over 41,000 extracted features. Its
feature-specific DNNs achieve, on average, more than 98% accuracy and perform
strongly against data manipulation attacks.
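To make the consensus and personalization ideas concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code; the feature-group split, layer sizes, and majority-vote rule are assumptions inferred from the abstract):

```python
# Illustrative sketch only: feature-specific DNNs that vote on a label,
# plus a transfer-learning "personalization" step. All sizes are assumed.
import torch
import torch.nn as nn

class FeatureDNN(nn.Module):
    """One classifier per feature group (e.g., permissions, API calls)."""
    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # benign vs. malicious logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def consensus_labels(models, feature_batches):
    """Majority vote across feature-specific DNNs when ground truth is unknown."""
    with torch.no_grad():
        votes = torch.stack([m(x).argmax(dim=-1)
                             for m, x in zip(models, feature_batches)])
    return (votes.float().mean(dim=0) > 0.5).long()  # 1 = flagged as malware

def personalize(model: FeatureDNN, lr: float = 1e-4):
    """Transfer learning: freeze all but the last layer, fine-tune on-device."""
    for p in model.net[:-1].parameters():
        p.requires_grad = False
    return torch.optim.Adam(model.net[-1].parameters(), lr=lr)
```

In this reading, each feature group trains its own small DNN, unlabeled apps receive the majority label, and personalization fine-tunes only the output layer on a device's local apps; a federated-learning guard would additionally inspect client updates before aggregation.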
Related papers
- BDefects4NN: A Backdoor Defect Database for Controlled Localization Studies in Neural Networks [65.666913051617]
We introduce BDefects4NN, the first backdoor defect database for localization studies.
BDefects4NN provides labeled backdoor-defected DNNs at the neuron granularity and enables controlled localization studies of defect root causes.
We conduct experiments on evaluating six fault localization criteria and two defect repair techniques, which show limited effectiveness for backdoor defects.
arXiv Detail & Related papers (2024-12-01T09:52:48Z)
- Visually Analyze SHAP Plots to Diagnose Misclassifications in ML-based Intrusion Detection [0.3199881502576702]
An intrusion detection system (IDS) can mitigate threats by providing alerts.
To detect these threats, various machine learning (ML) and deep learning (DL) models have been proposed.
In this paper, we propose an explainable artificial intelligence (XAI) based visual analysis approach using overlapping SHAP plots.
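As a rough illustration of the approach (not the paper's code), the following sketch uses the public shap and scikit-learn APIs on synthetic data to plot SHAP summaries separately for correctly and incorrectly classified samples, which the paper effectively overlays to diagnose misclassifications:

```python
# Hedged sketch: compare SHAP attributions for hits vs. misses of an
# IDS-style classifier. Dataset and model choice are assumptions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
miss = pred != y_te  # misclassified test samples

explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]  # class-1 attributions

# Plot hits and misses separately; overlaying such plots highlights
# features whose attributions diverge on misclassifications.
shap.summary_plot(sv[~miss], X_te[~miss])  # correctly classified
shap.summary_plot(sv[miss], X_te[miss])    # misclassified
```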
arXiv Detail & Related papers (2024-11-04T23:08:34Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
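A conceptual sketch of masked graph reconstruction in plain PyTorch follows (the layer sizes, masking rate, and loss here are assumptions; MASKDROID's actual GNN architecture and objective differ):

```python
# Masked-graph-learning sketch: zero out random node features and train
# a small GNN to recover them. All dimensions below are assumed.
import torch
import torch.nn as nn

def gcn_layer(A_hat, X, W):
    """One graph-convolution step: aggregate neighbors, then transform."""
    return torch.relu(W(A_hat @ X))

def masked_reconstruction_loss(A_hat, X, enc1, enc2, dec, mask_rate=0.3):
    """Mask random node features and train the GNN to recover them."""
    mask = torch.rand(X.size(0)) < mask_rate
    X_in = X.clone()
    X_in[mask] = 0.0                      # zero out masked nodes' features
    h = gcn_layer(A_hat, X_in, enc1)
    h = gcn_layer(A_hat, h, enc2)
    X_rec = dec(h)                        # predict the original features
    return ((X_rec[mask] - X[mask]) ** 2).mean()

# Example wiring with random data (A_hat: normalized adjacency + self-loops).
n, d, hdim = 50, 16, 32
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
A = A + torch.eye(n)
deg = A.sum(dim=1)
A_hat = A / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)  # D^-1/2 A D^-1/2
X = torch.randn(n, d)
enc1, enc2, dec = nn.Linear(d, hdim), nn.Linear(hdim, hdim), nn.Linear(hdim, d)
masked_reconstruction_loss(A_hat, X, enc1, enc2, dec).backward()
```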
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Advancing Security in AI Systems: A Novel Approach to Detecting Backdoors in Deep Neural Networks [3.489779105594534]
Backdoors in deep neural networks (DNNs) and cloud services for data processing can be exploited by malicious actors.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
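One plausible reading of the weight-analysis step, sketched with plain NumPy (the paper's exact decomposition algorithm and downstream classifier are not reproduced here):

```python
# Summarize a layer's weight tensor by the singular-value spectra of its
# mode-n unfoldings (an HOSVD-style view); a detector would then compare
# these fingerprints across clean and backdoored models.
import numpy as np

def mode_unfold(T: np.ndarray, mode: int) -> np.ndarray:
    """Unfold tensor T along `mode` into a matrix (mode-n matricization)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def spectral_fingerprint(weights: np.ndarray, k: int = 5) -> np.ndarray:
    """Leading normalized singular values per mode, concatenated."""
    feats = []
    for mode in range(weights.ndim):
        s = np.linalg.svd(mode_unfold(weights, mode), compute_uv=False)
        feats.extend(s[:k] / s.sum())
    return np.array(feats)

# e.g., a conv layer with shape (out_channels, in_channels, kH, kW)
W = np.random.randn(64, 32, 3, 3)
print(spectral_fingerprint(W).shape)  # one feature vector per layer
```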
arXiv Detail & Related papers (2024-03-13T03:10:11Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can compromise the safety of a given DRL system.
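The paper derives this metric via formal verification; as a rough empirical stand-in, the Adversarial Rate can be estimated by sampling bounded perturbations and counting action flips (the policy interface, eps, and sample count below are assumptions):

```python
# Empirical estimate of an "adversarial rate": the fraction of eps-bounded
# perturbations that change the policy's chosen action for a given state.
import torch

def empirical_adversarial_rate(policy, state, eps=0.05, n_samples=1000):
    """policy: maps a batch of states to per-action scores (hypothetical)."""
    with torch.no_grad():
        base_action = policy(state).argmax(dim=-1)           # shape (1,)
        noise = (torch.rand(n_samples, *state.shape[1:]) * 2 - 1) * eps
        actions = policy(state + noise).argmax(dim=-1)       # shape (n,)
    return (actions != base_action).float().mean().item()
```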
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- HuntGPT: Integrating Machine Learning-Based Anomaly Detection and Explainable AI with Large Language Models (LLMs) [0.09208007322096533]
We present HuntGPT, a specialized intrusion detection dashboard applying a Random Forest classifier.
The paper delves into the system's architecture, components, and technical accuracy, assessed through Certified Information Security Manager (CISM) Practice Exams.
The results demonstrate that conversational agents, supported by LLM and integrated with XAI, provide robust, explainable, and actionable AI solutions in intrusion detection.
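A speculative sketch of the pipeline's shape (the function names and prompt format are invented, and global feature importances stand in for per-sample SHAP values):

```python
# A Random Forest flags an event, a simple XAI step picks the most
# influential features, and the explanation is packed into an LLM prompt.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def explain_alert(clf: RandomForestClassifier, x: np.ndarray, names) -> str:
    proba = clf.predict_proba(x.reshape(1, -1))[0, 1]
    top = np.argsort(clf.feature_importances_)[::-1][:3]
    feats = ", ".join(f"{names[i]}={x[i]:.2f}" for i in top)
    return (f"Alert: anomaly score {proba:.2f}. "
            f"Most influential features: {feats}. "
            f"Explain the likely threat and suggest a response.")
```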
arXiv Detail & Related papers (2023-09-27T20:58:13Z)
- Efficient Query-Based Attack against ML-Based Android Malware Detection under Zero Knowledge Setting [39.79359457491294]
We introduce AdvDroidZero, an efficient query-based attack framework against ML-based AMD methods that operates under the zero knowledge setting.
Our evaluation shows that AdvDroidZero is effective against various mainstream ML-based AMD methods, including state-of-the-art methods and real-world antivirus solutions.
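In the zero-knowledge setting, the attacker can only query the detector's output. A generic sketch of such a query-based loop follows (AdvDroidZero's actual perturbation selection is more refined, and a real attack must restrict itself to semantics-preserving app modifications):

```python
# Greedy query-based evasion: flip binary feature bits that lower the
# detector's malware score, using only its output (no gradients/internals).
import numpy as np

def query_attack(detector, x: np.ndarray, candidate_bits, budget: int = 500):
    """detector: callable returning a malware score in [0, 1] (hypothetical)."""
    x = x.copy()
    score = detector(x)
    rng = np.random.default_rng(0)
    for _ in range(budget):
        i = rng.choice(candidate_bits)   # e.g., addable permissions/API calls
        trial = x.copy()
        trial[i] = 1 - trial[i]
        new_score = detector(trial)
        if new_score < score:            # keep perturbations that help
            x, score = trial, new_score
        if score < 0.5:                  # detector no longer flags the app
            break
    return x, score
```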
arXiv Detail & Related papers (2023-09-05T00:14:12Z)
- Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks [6.44397009982949]
We introduce a novel method for backdoor detection that extracts features from a pre-trained DNN's weights.
In comparison to other detection techniques, this has a number of benefits, such as not requiring any training data.
Our method outperforms the competing algorithms in terms of efficiency and is more accurate, helping to ensure the safe application of deep learning and AI.
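A minimal sketch of the premise, assuming NMF as the factorization (the paper's algorithm may differ): factorize each layer's weights and use reconstruction-error profiles as features that require no training data.

```python
# Low-rank NMF reconstruction errors over increasing rank; the resulting
# profile serves as one feature vector per layer for a backdoor detector
# trained on a labeled zoo of clean and backdoored models.
import numpy as np
from sklearn.decomposition import NMF

def factorization_profile(W: np.ndarray, ranks=(1, 2, 4, 8)) -> np.ndarray:
    A = np.abs(W)  # NMF requires non-negative input; ranks assume A is large
    errs = []
    for r in ranks:
        nmf = NMF(n_components=r, init="random", random_state=0, max_iter=500)
        nmf.fit(A)
        errs.append(nmf.reconstruction_err_ / np.linalg.norm(A))
    return np.array(errs)
```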
arXiv Detail & Related papers (2022-12-15T20:20:18Z)
- CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves 100% true-positive and true-negative rates across various scenarios.
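A highly simplified sketch of majority-cluster filtering (CrowdGuard's actual scheme is stacked and runs inside trusted execution environments; plain k-means here is an assumption):

```python
# Cluster client feedback vectors into two groups and keep the larger one,
# discarding feedback from the (presumed rogue) minority cluster.
import numpy as np
from sklearn.cluster import KMeans

def filter_feedback(feedback: np.ndarray) -> np.ndarray:
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feedback)
    majority = np.argmax(np.bincount(labels))
    return feedback[labels == majority]
```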
arXiv Detail & Related papers (2022-10-14T11:27:49Z)
- Adversarial Machine Learning Threat Analysis in Open Radio Access Networks [37.23982660941893]
The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
arXiv Detail & Related papers (2022-01-16T17:01:38Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
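One coverage paradigm, sketched under assumptions (the paper's monitors are more elaborate): record per-neuron activation ranges on trusted data and flag inputs whose activations fall outside them.

```python
# Range-coverage monitor: calibrate activation bounds on trusted inputs,
# then mark any input that drives a neuron out of its observed range.
import torch
import torch.nn as nn

class RangeMonitor:
    def __init__(self, model: nn.Module, layer: nn.Module):
        self.acts, self.lo, self.hi = None, None, None
        self.model = model
        layer.register_forward_hook(lambda m, i, o: setattr(self, "acts", o))

    def calibrate(self, trusted_inputs: torch.Tensor):
        with torch.no_grad():
            self.model(trusted_inputs)
        self.lo = self.acts.min(dim=0).values
        self.hi = self.acts.max(dim=0).values

    def is_suspicious(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            self.model(x)
        out_of_range = (self.acts < self.lo) | (self.acts > self.hi)
        return out_of_range.any(dim=1)  # True = possible adversarial/OOD input
```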
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection [17.136757440204722]
We introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models.
The injected backdoor can be triggered with a success rate of 93.5%, while introducing less than 2 ms of latency overhead and no more than a 1.4% accuracy decrease.
We found 54 apps that were vulnerable to our attack, including popular and security-critical ones.
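Conceptually, a neural payload is a small trigger detector plus an output override grafted onto the victim model. A PyTorch sketch of the concept (the paper injects the payload into compiled on-device models via reverse engineering, not via source code like this):

```python
# Wrapper that routes triggered inputs to the attacker's target class
# while leaving ordinary inputs untouched.
import torch
import torch.nn as nn

class PayloadWrapper(nn.Module):
    def __init__(self, victim: nn.Module, trigger_net: nn.Module, target: int):
        super().__init__()
        self.victim, self.trigger_net, self.target = victim, trigger_net, target

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.victim(x)
        # trigger_net outputs a per-sample trigger score in [0, 1] (assumed)
        triggered = self.trigger_net(x).squeeze(-1) > 0.5
        forced = torch.full_like(logits, -10.0)
        forced[:, self.target] = 10.0        # force the attacker's class
        return torch.where(triggered.unsqueeze(-1), forced, logits)
```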
arXiv Detail & Related papers (2021-01-18T06:29:30Z)
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can contain 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders of magnitude faster than existing approaches (seconds versus ...).
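A back-of-the-envelope sketch of a noise-response fingerprint (the paper's feature generation is more principled; the noise scales and sample count below are assumptions):

```python
# Measure how much the output distribution drifts as input noise grows;
# the resulting curve acts as a fast, data-light robustness fingerprint.
import torch

def noise_response(model, x: torch.Tensor, sigmas=(0.01, 0.05, 0.1), n=64):
    """x: a single input of shape (1, ...); model: any classifier."""
    with torch.no_grad():
        base = model(x).softmax(dim=-1)
        curve = []
        for s in sigmas:
            noisy = x + s * torch.randn(n, *x.shape[1:])
            drift = (model(noisy).softmax(dim=-1) - base).abs().mean()
            curve.append(drift.item())
    return curve
```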
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.