MalLoc: Toward Fine-grained Android Malicious Payload Localization via LLMs
- URL: http://arxiv.org/abs/2508.17856v1
- Date: Mon, 25 Aug 2025 10:05:44 GMT
- Title: MalLoc: Toward Fine-grained Android Malicious Payload Localization via LLMs
- Authors: Tiezhu Sun, Marco Alecci, Aleksandr Pilgun, Yewei Song, Xunzhu Tang, Jordan Samhi, Tegawendé F. Bissyandé, Jacques Klein
- Abstract summary: MalLoc is a novel approach to localizing malicious payloads at a fine-grained level within Android malware. This work advances beyond traditional detection and classification by enabling deeper insights into behavior-level malicious logic.
- Score: 44.97660453235412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid evolution of Android malware poses significant challenges to the maintenance and security of mobile applications (apps). Traditional detection techniques often struggle to keep pace with emerging malware variants that employ advanced tactics such as code obfuscation and dynamic behavior triggering. One major limitation of these approaches is their inability to localize malicious payloads at a fine-grained level, hindering precise understanding of malicious behavior. This gap in understanding makes the design of effective and targeted mitigation strategies difficult, leaving mobile apps vulnerable to continuously evolving threats. To address this gap, we propose MalLoc, a novel approach that leverages the code understanding capabilities of large language models (LLMs) to localize malicious payloads at a fine-grained level within Android malware. Our experimental results demonstrate the feasibility and effectiveness of using LLMs for this task, highlighting the potential of MalLoc to enhance precision and interpretability in malware analysis. This work advances beyond traditional detection and classification by enabling deeper insights into behavior-level malicious logic and opens new directions for research, including dynamic modeling of localized threats and targeted countermeasure development.
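The abstract gives no implementation details, but the overall shape of an LLM-driven localization pipeline can be sketched. In the minimal sketch below, all names (`build_prompt`, `stub_llm_localize`, the API list) are invented for illustration, and a simple keyword heuristic stands in for a real LLM call; this is an assumption about how such a pipeline might look, not MalLoc's actual method.

```python
# Hypothetical sketch: ask a model to flag suspicious lines in a
# decompiled Android method. The LLM call is stubbed with a keyword
# heuristic so the example is self-contained; a real pipeline would
# send `prompt` to an actual model and parse its answer.

SUSPICIOUS_APIS = {"sendTextMessage", "getDeviceId", "DexClassLoader"}

def build_prompt(method_source: str) -> str:
    return (
        "You are an Android malware analyst. For the method below, "
        "list the line numbers that implement malicious behavior.\n\n"
        + method_source
    )

def stub_llm_localize(method_source: str) -> list[int]:
    """Stand-in for an LLM call: flag lines touching sensitive APIs."""
    flagged = []
    for lineno, line in enumerate(method_source.splitlines(), start=1):
        if any(api in line for api in SUSPICIOUS_APIS):
            flagged.append(lineno)
    return flagged

method = """TelephonyManager tm = getSystemService(TELEPHONY_SERVICE);
String imei = tm.getDeviceId();
SmsManager.getDefault().sendTextMessage(dest, null, imei, null, null);"""

print(stub_llm_localize(method))  # flags the IMEI read and the SMS send
```

The fine-grained output (line numbers rather than a binary verdict) is what distinguishes localization from classification.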
Related papers
- LLM-Driven Feature-Level Adversarial Attacks on Android Malware Detectors [3.4703956485057152]
LAMLAD is a novel adversarial attack framework against Android malware detectors. It exploits the generative and reasoning capabilities of large language models. LAMLAD achieves an attack success rate (ASR) of up to 97%, requiring on average only three attempts per adversarial sample.
arXiv Detail & Related papers (2025-12-24T19:56:06Z)
- LLMs Caught in the Crossfire: Malware Requests and Jailbreak Challenges [70.85114705489222]
We propose MalwareBench, a benchmark dataset containing 3,520 jailbreaking prompts for malicious code generation. MalwareBench is based on 320 manually crafted malicious code-generation requirements, covering 11 jailbreak methods and 29 code functionality categories. Experiments show that mainstream LLMs exhibit limited ability to reject malicious code-generation requirements, and that combining multiple jailbreak methods further reduces the models' security capabilities.
arXiv Detail & Related papers (2025-06-09T12:02:39Z)
- Explainable Android Malware Detection and Malicious Code Localization Using Graph Attention [1.2277343096128712]
XAIDroid is a novel approach to automatically locating malicious code snippets within malware. By representing code as API call graphs, XAIDroid captures semantic context and enhances resilience against obfuscation. Evaluation on synthetic and real-world malware datasets demonstrates the efficacy of our approach, achieving high recall and F1-scores for malicious code localization.
arXiv Detail & Related papers (2025-03-10T09:33:37Z)
- LAMD: Context-driven Android Malware Detection and Classification with LLMs [8.582859303611881]
Large Language Models (LLMs) offer a promising alternative with their zero-shot inference and reasoning capabilities. We propose LAMD, a practical context-driven framework to enable LLM-based Android malware detection.
arXiv Detail & Related papers (2025-02-18T17:01:37Z)
- LLM-Virus: Evolutionary Jailbreak Attack on Large Language Models [59.29840790102413]
Existing jailbreak attacks are primarily based on opaque optimization techniques and gradient search methods. We propose LLM-Virus, a jailbreak attack method based on an evolutionary algorithm, termed evolutionary jailbreak. Our results show that LLM-Virus achieves competitive or even superior performance compared to existing attack methods.
arXiv Detail & Related papers (2024-12-28T07:48:57Z)
- Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks. We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction. We propose Attention Tracker, a training-free detection method that tracks attention patterns on the instruction to detect prompt injection attacks.
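The distraction-effect idea can be illustrated with a toy computation over an attention tensor. The matrix shape, the token ranges, and the 0.5 threshold below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

# Toy illustration of the "distraction effect": compare attention mass
# on the original instruction tokens vs. the injected tokens.

def injection_score(attn: np.ndarray, instr: slice, injected: slice) -> float:
    """Fraction of (instruction + injected) attention mass that lands on
    injected tokens, summed over heads and query positions.
    attn shape: (heads, query_len, key_len)."""
    instr_mass = attn[:, :, instr].sum()
    injected_mass = attn[:, :, injected].sum()
    return float(injected_mass / (instr_mass + injected_mass))

# Uniform attention over 10 key tokens: 4 instruction keys, 6 injected keys,
# so the score reduces to the key-count ratio 6/10 in this synthetic case.
attn = np.full((4, 8, 10), 0.1)
score = injection_score(attn, instr=slice(0, 4), injected=slice(4, 10))
print(f"score={score:.2f}",
      "-> possible injection" if score > 0.5 else "-> looks clean")
```

A real detector would read these weights out of the model's attention layers rather than construct them synthetically.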
arXiv Detail & Related papers (2024-11-01T04:05:59Z)
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z)
- MalModel: Hiding Malicious Payload in Mobile Deep Learning Models with Black-box Backdoor Attack [24.569156952823068]
We propose a method to generate or transform mobile malware by hiding the malicious payloads inside the parameters of deep learning models.
We can run malware in DL mobile applications covertly with little impact on the model performance.
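The abstract only names the idea; a common way to realize this kind of parameter steganography is to hide payload bytes in the low-order mantissa bits of float32 weights. The sketch below shows that general technique under stated assumptions, not the paper's exact encoding:

```python
import numpy as np

# Illustrative parameter steganography: replace each weight's least
# significant byte with one payload byte, barely perturbing the value
# (the low mantissa bits contribute at most ~2^-16 of the magnitude).

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    assert weights.dtype == np.float32 and weights.size >= len(payload)
    bits = weights.copy().view(np.uint32)
    data = np.frombuffer(payload, dtype=np.uint8)
    bits[: len(data)] = (bits[: len(data)] & 0xFFFFFF00) | data
    return bits.view(np.float32)

def extract(weights: np.ndarray, length: int) -> bytes:
    return (weights.view(np.uint32)[:length] & 0xFF).astype(np.uint8).tobytes()

w = np.random.default_rng(1).normal(size=16).astype(np.float32)
stego = embed(w, b"payload")
print(extract(stego, len(b"payload")))  # recovers the hidden bytes
print(np.max(np.abs(stego - w)))        # perturbation is tiny
```

Because only mantissa bits change, no weight can flip to NaN or infinity, which is why model accuracy is barely affected.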
arXiv Detail & Related papers (2024-01-05T06:35:24Z)
- MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks [18.016148305499865]
MalPurifier is a novel adversarial purification framework specifically engineered for Android malware detection. Experiments on two large-scale datasets demonstrate that MalPurifier significantly outperforms state-of-the-art defenses. As a lightweight, model-agnostic, and plug-and-play module, MalPurifier offers a practical and effective solution to bolster the security of ML-based Android malware detectors.
arXiv Detail & Related papers (2023-12-11T14:48:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.