Efficient Query-Based Attack against ML-Based Android Malware Detection
under Zero Knowledge Setting
- URL: http://arxiv.org/abs/2309.01866v2
- Date: Wed, 6 Sep 2023 11:50:03 GMT
- Title: Efficient Query-Based Attack against ML-Based Android Malware Detection
under Zero Knowledge Setting
- Authors: Ping He, Yifan Xia, Xuhong Zhang, Shouling Ji
- Abstract summary: We introduce AdvDroidZero, an efficient query-based attack framework against ML-based AMD methods that operates under the zero knowledge setting.
Our evaluation shows that AdvDroidZero is effective against various mainstream ML-based AMD methods, including state-of-the-art methods and real-world antivirus solutions.
- Score: 39.79359457491294
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The widespread adoption of the Android operating system has made malicious
Android applications an appealing target for attackers. Machine learning-based
(ML-based) Android malware detection (AMD) methods are crucial in addressing
this problem; however, their vulnerability to adversarial examples raises
concerns. Current attacks against ML-based AMD methods demonstrate remarkable
performance but rely on strong assumptions that may not be realistic in
real-world scenarios, e.g., the knowledge requirements about feature space,
model parameters, and training dataset. To address this limitation, we
introduce AdvDroidZero, an efficient query-based attack framework against
ML-based AMD methods that operates under the zero knowledge setting. Our
extensive evaluation shows that AdvDroidZero is effective against various
mainstream ML-based AMD methods, including state-of-the-art methods and
real-world antivirus solutions.
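To make the attack family concrete, below is a minimal sketch of a generic query-based evasion loop under the zero knowledge setting, where the attacker observes only the detector's output score. The `detector` stand-in and the greedy bit-flip search are illustrative assumptions, not AdvDroidZero's actual perturbation-selection algorithm.

```python
# Minimal sketch of a generic query-based evasion attack in the zero
# knowledge setting: the attacker can only submit a sample and observe the
# detector's score. Illustrates the attack family AdvDroidZero belongs to;
# it is NOT the paper's algorithm, and `detector` is a hypothetical stand-in.
import math
import random

random.seed(0)

N_FEATURES = 50
WEIGHTS = [random.uniform(-1, 1) for _ in range(N_FEATURES)]

def detector(x):
    """Stand-in black-box detector: returns a maliciousness score in [0, 1]."""
    s = sum(w * v for w, v in zip(WEIGHTS, x))
    return 1 / (1 + math.exp(-s))

def query_based_evasion(x, query_budget=100, threshold=0.5):
    """Greedy random search: flip one binary feature at a time (e.g., add a
    benign-looking permission or API), keep the flip only if the detector's
    score drops, and stop once the sample is classified benign."""
    x = list(x)
    best = detector(x)                      # 1 query
    for _ in range(query_budget - 1):
        if best < threshold:
            break                           # evasion succeeded
        i = random.randrange(N_FEATURES)
        x[i] ^= 1                           # candidate perturbation
        score = detector(x)                 # 1 query
        if score < best:
            best = score                    # keep helpful perturbations
        else:
            x[i] ^= 1                       # revert harmful ones
    return x, best

malware = [random.randint(0, 1) for _ in range(N_FEATURES)]
adv, score = query_based_evasion(malware)
print(f"score before: {detector(malware):.3f}, after: {score:.3f}")
```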
Related papers
- MM-PoisonRAG: Disrupting Multimodal RAG with Local and Global Poisoning Attacks [109.53357276796655]
Multimodal large language models (MLLMs) are increasingly equipped with Retrieval-Augmented Generation (RAG), which grounds their responses in query-relevant external knowledge.
This reliance poses a critical yet underexplored safety risk: knowledge poisoning attacks.
We propose MM-PoisonRAG, a novel knowledge poisoning attack framework with two attack strategies.
arXiv Detail & Related papers (2025-02-25T04:23:59Z) - LAMD: Context-driven Android Malware Detection and Classification with LLMs [8.582859303611881]
Large Language Models (LLMs) offer a promising alternative with their zero-shot inference and reasoning capabilities.
We propose LAMD, a practical context-driven framework to enable LLM-based Android malware detection.
arXiv Detail & Related papers (2025-02-18T17:01:37Z) - Defending against Adversarial Malware Attacks on ML-based Android Malware Detection Systems [38.581428446989015]
Adversarial Android malware attacks compromise the detection integrity of machine learning-based systems.
We propose ADD, a practical adversarial Android malware defense framework.
ADD is effective against state-of-the-art problem space adversarial Android malware attacks.
arXiv Detail & Related papers (2025-01-23T15:59:01Z) - Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities arise, especially prompt injection attacks.
Recent attack methods exploit LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z) - Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on the instruction to detect prompt injection attacks.
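A toy illustration of the distraction effect described above: if the attention mass that tracked heads place on the original instruction span drops sharply, the input likely carries an injected instruction. The mocked attention rows and the 0.3 threshold are assumptions for illustration; the real method reads attention maps from an actual LLM.

```python
# Toy illustration of the "distraction effect": important heads stop
# attending to the original instruction span and drift toward content
# inside the data. Real use would read per-head attention from an LLM;
# here mocked attention rows stand in, and the threshold is arbitrary.

def instruction_focus(attn_row, instruction_span):
    """Fraction of one head's last-token attention mass that lands on the
    original instruction tokens."""
    lo, hi = instruction_span
    return sum(attn_row[lo:hi]) / (sum(attn_row) or 1.0)

def looks_injected(attn_rows, instruction_span, threshold=0.3):
    """Average the focus score over tracked heads; a low score suggests
    attention was 'distracted' away from the instruction."""
    scores = [instruction_focus(r, instruction_span) for r in attn_rows]
    return sum(scores) / len(scores) < threshold

# Tokens 0-4 are the system instruction; 5-11 are retrieved/user data.
clean = [[0.15, 0.15, 0.1, 0.1, 0.1, 0.05, 0.05, 0.1, 0.05, 0.05, 0.05, 0.05]]
injected = [[0.02, 0.02, 0.02, 0.02, 0.02, 0.1, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1]]
print(looks_injected(clean, (0, 5)))     # False
print(looks_injected(injected, (0, 5)))  # True
```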
arXiv Detail & Related papers (2024-11-01T04:05:59Z) - Improving Adversarial Robustness in Android Malware Detection by Reducing the Impact of Spurious Correlations [3.7937308360299116]
Machine learning (ML) has demonstrated significant advancements in Android malware detection (AMD).
However, the resilience of ML against realistic evasion attacks remains a major obstacle for AMD.
In this study, we propose a domain adaptation technique to improve the generalizability of AMD by aligning the distributions of malware samples and adversarial examples (AEs).
arXiv Detail & Related papers (2024-08-27T17:01:12Z) - Unraveling the Key of Machine Learning Solutions for Android Malware
Detection [33.63795751798441]
This paper presents a comprehensive investigation into machine learning-based Android malware detection.
We first survey the literature, categorizing contributions into a taxonomy based on the Android feature engineering and ML modeling pipeline.
Then, we design a general-purpose framework for ML-based Android malware detection, re-implement 12 representative approaches from different research communities, and evaluate them along three primary dimensions, i.e., effectiveness, robustness, and efficiency.
arXiv Detail & Related papers (2024-02-05T12:31:19Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
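A minimal sketch of the window ablation idea, assuming a toy byte-level classifier in place of MalConv: each ablated copy keeps one contiguous window of the executable, the base classifier votes per window, and adversarial bytes can only sway the windows they overlap, which bounds their effect on the vote.

```python
# Toy sketch of de-randomized smoothing via window ablation, the mechanism
# DRSM adapts to malware executables. The base classifier is a trivial
# stand-in, not MalConv, and the byte marker is purely illustrative.
from collections import Counter

def ablate_windows(data: bytes, window: int):
    """Yield copies of the input keeping one window, zeroing the rest."""
    for start in range(0, max(1, len(data) - window + 1)):
        masked = bytearray(len(data))
        masked[start:start + window] = data[start:start + window]
        yield bytes(masked)

def base_classifier(data: bytes) -> str:
    """Stand-in per-window classifier (real DRSM uses a MalConv-style CNN)."""
    return "malicious" if b"\xEB\xFE" in data else "benign"

def drsm_predict(data: bytes, window: int = 8) -> str:
    """Majority vote over all window-ablated copies of the input."""
    votes = Counter(base_classifier(v) for v in ablate_windows(data, window))
    return votes.most_common(1)[0][0]

sample = b"\x00" * 16 + b"\xEB\xFE" + b"\x00" * 16
print(drsm_predict(sample))  # most windows never see the marker -> "benign"
```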
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Mask Off: Analytic-based Malware Detection By Transfer Learning and
Model Personalization [7.717537870226507]
This paper presents ADAM, an analytic-based deep neural network for Android malware detection that leverages transfer learning and model personalization.
arXiv Detail & Related papers (2022-11-20T01:52:15Z) - Machine Learning Security against Data Poisoning: Are We There Yet? [23.809841593870757]
This article reviews data poisoning attacks that compromise the training data used to learn machine learning models.
We discuss how to mitigate these attacks using basic security principles, or by deploying ML-oriented defensive mechanisms.
arXiv Detail & Related papers (2022-04-12T17:52:09Z) - EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box
Android Malware Detection [2.2811510666857546]
EvadeDroid is a problem-space adversarial attack designed to effectively evade black-box Android malware detectors in real-world scenarios.
We show that EvadeDroid achieves evasion rates of 80%-95% against DREBIN, Sec-SVM, ADE-MA, MaMaDroid, and Opcode-SVM with only 1-9 queries.
arXiv Detail & Related papers (2021-10-07T09:39:40Z)
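In the same spirit as the zero-knowledge sketch above, here is a rough sketch of a problem-space query loop: rather than flipping abstract features, the attacker applies concrete program transformations (here mere labels standing in for gadgets harvested from benign apps) and keeps each one only if the black-box score drops. The transformation names and detector are hypothetical, not EvadeDroid's implementation.

```python
# Rough sketch of a problem-space query loop: apply real program
# transformations one at a time and keep each only if the black-box
# detector's score decreases. All names below are illustrative.
import random

random.seed(1)

TRANSFORMATIONS = ["add_benign_activity", "insert_api_call_chain",
                   "add_resource_files", "rename_components"]

def detector_score(applied):
    """Stand-in black-box detector: benign-looking gadgets lower the score."""
    return max(0.0, 0.9 - 0.15 * len(applied) + random.uniform(-0.05, 0.05))

def evade(query_budget=9, threshold=0.5):
    """Greedy search over transformations within a small query budget."""
    applied, queries = [], 0
    score = detector_score(applied); queries += 1
    for t in TRANSFORMATIONS:
        if queries >= query_budget or score < threshold:
            break
        candidate = applied + [t]
        new_score = detector_score(candidate); queries += 1
        if new_score < score:               # keep only helpful gadgets
            applied, score = candidate, new_score
    return score < threshold, queries, applied

evaded, queries, applied = evade()
print(f"evaded={evaded} after {queries} queries via {applied}")
```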
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.