Arms Race in Adversarial Malware Detection: A Survey
- URL: http://arxiv.org/abs/2005.11671v3
- Date: Tue, 31 Aug 2021 14:45:18 GMT
- Title: Arms Race in Adversarial Malware Detection: A Survey
- Authors: Deqiang Li, Qianmu Li, Yanfang Ye and Shouhuai Xu
- Abstract summary: Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques.
ML is vulnerable to attacks known as adversarial examples.
Knowing the defender's feature set is critical to the success of transfer attacks.
The effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack.
- Score: 33.8941961394801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malicious software (malware) is a major cyber threat that has to be tackled
with Machine Learning (ML) techniques because millions of new malware examples
are injected into cyberspace on a daily basis. However, ML is vulnerable to
attacks known as adversarial examples. In this paper, we survey and systematize
the field of Adversarial Malware Detection (AMD) through the lens of a unified
conceptual framework of assumptions, attacks, defenses, and security
properties. This not only leads us to map attacks and defenses to partial order
structures, but also allows us to clearly describe the attack-defense arms race
in the AMD context. We draw a number of insights, including: knowing the
defender's feature set is critical to the success of transfer attacks; the
effectiveness of practical evasion attacks largely depends on the attacker's
freedom in conducting manipulations in the problem space; knowing the
attacker's manipulation set is critical to the defender's success; the
effectiveness of adversarial training depends on the defender's capability in
identifying the most powerful attack. We also discuss a number of future
research directions.
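As a rough illustration of the transfer-attack insight above, the sketch below trains a surrogate detector on the attacker's guess of the defender's feature set, crafts evasive feature vectors against the surrogate, and measures how well they transfer to the target detector. The data, models, and manipulation are synthetic placeholders (not from the survey); real attacks must also map feature-space changes back to working programs.

```python
# Hypothetical transfer-attack experiment in feature space (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 50                                  # samples, binary features (e.g., API calls)
X = rng.integers(0, 2, size=(n, d)).astype(float)
y = (X[:, :10].sum(axis=1) > 5).astype(int)      # synthetic "malicious" labeling rule

target = LogisticRegression(max_iter=1000).fit(X, y)        # defender's detector

# The attacker only guesses a subset of the defender's feature set.
known = np.arange(30)
surrogate = LogisticRegression(max_iter=1000).fit(X[:, known], y)

# Craft evasive variants by zeroing the features the surrogate weighs as most malicious.
malware = X[y == 1]
adv = malware.copy()
top = np.argsort(surrogate.coef_[0])[::-1][:5]
adv[:, known[top]] = 0.0

print("evasion rate vs surrogate:", 1 - surrogate.predict(adv[:, known]).mean())
print("evasion rate vs target:   ", 1 - target.predict(adv).mean())
```

Shrinking `known` so that it misses the features the defender actually relies on should widen the gap between the two evasion rates, mirroring the survey's observation about feature knowledge.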
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
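A minimal sketch of the masking idea (not the MASKDROID architecture): part of the input graph is hidden, and a small GNN encoder is trained to reconstruct the hidden nodes alongside the malware/benign objective. The dimensions, single message-passing step, and loss weighting below are assumptions for illustration.

```python
# Toy masked-graph training step (illustrative dimensions and architecture).
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, feat_dim, hid = 20, 16, 32
X = torch.randn(num_nodes, feat_dim)                 # node features (e.g., API-call nodes)
A = (torch.rand(num_nodes, num_nodes) < 0.2).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(1.0)                                # symmetric adjacency with self-loops
A_hat = A / A.sum(1, keepdim=True)                   # row-normalized propagation matrix

encoder = nn.Linear(feat_dim, hid)
decoder = nn.Linear(hid, feat_dim)                   # recovers masked node features
classifier = nn.Linear(hid, 2)                       # graph-level malware/benign head
params = [*encoder.parameters(), *decoder.parameters(), *classifier.parameters()]
opt = torch.optim.Adam(params, lr=1e-2)
label = torch.tensor([1])                            # pretend this graph is malicious

for step in range(100):
    mask = torch.rand(num_nodes) < 0.3               # hide ~30% of the nodes
    X_in = X.clone()
    X_in[mask] = 0.0
    H = torch.relu(A_hat @ encoder(X_in))            # one message-passing step
    recon = ((decoder(H)[mask] - X[mask]) ** 2).mean() if mask.any() else X.new_zeros(())
    logits = classifier(H.mean(0, keepdim=True))     # mean-pool to a graph embedding
    loss = nn.functional.cross_entropy(logits, label) + recon
    opt.zero_grad(); loss.backward(); opt.step()
```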
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Exploring Vulnerabilities and Protections in Large Language Models: A Survey [1.6179784294541053]
This survey examines the security challenges of Large Language Models (LLMs)
It focuses on two main areas: Prompt Hacking and Adversarial Attacks.
By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems.
arXiv Detail & Related papers (2024-06-01T00:11:09Z)
- Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that text optimizers can produce jailbreaking prompts that bypass existing defenses.
We look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
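A minimal sketch of the perplexity-based detection baseline: adversarially optimized suffixes tend to be high-perplexity gibberish, so prompts whose perplexity under a language model exceeds a threshold are flagged. The choice of GPT-2 and the threshold value are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative perplexity filter; model choice and threshold are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss           # mean token-level cross-entropy
    return float(torch.exp(loss))

THRESHOLD = 1000.0                                   # illustrative cutoff

def is_suspicious(prompt: str) -> bool:
    return perplexity(prompt) > THRESHOLD

print(is_suspicious("Please summarize this article about network security."))
print(is_suspicious("zx qvt ]] describing oppositeley now(( sure here is"))
```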
arXiv Detail & Related papers (2023-09-01T17:59:44Z)
- The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
$A5$ is a framework to craft a defensive perturbation to guarantee that any attack towards the input in hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
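A rough sketch of the defensive-augmentation idea: a robustifier network proposes a bounded perturbation that reinforces the frozen classifier's own prediction, without using ground-truth labels. The architectures, budget, and loss below are assumptions for illustration; the actual $A5$ framework additionally relies on certified training to obtain its guarantees.

```python
# Toy robustifier training loop against a frozen classifier (no ground-truth labels).
import torch
import torch.nn as nn

torch.manual_seed(0)
classifier = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
for p in classifier.parameters():
    p.requires_grad_(False)                          # the classifier stays fixed

robustifier = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 20), nn.Tanh())
opt = torch.optim.Adam(robustifier.parameters(), lr=1e-3)
EPS = 0.1                                            # defensive perturbation budget

for step in range(200):
    x = torch.randn(32, 20)                          # stand-in for a batch of inputs
    with torch.no_grad():
        pseudo = classifier(x).argmax(1)             # classifier's own prediction
    delta = EPS * robustifier(x)                     # bounded defensive perturbation
    logits = classifier(x + delta)
    loss = nn.functional.cross_entropy(logits, pseudo)   # reinforce that prediction
    opt.zero_grad(); loss.backward(); opt.step()
```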
arXiv Detail & Related papers (2023-05-23T16:07:58Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
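A simplified sketch of window ablation with majority voting: the byte sequence is split into fixed windows, a base classifier votes on each window in isolation, and a contiguous adversarial payload of bounded length can only flip a bounded number of votes, which is what yields a certificate. The toy base classifier and the certificate check are illustrative, not DRSM's exact construction or bound.

```python
# Toy window-ablation vote; base classifier and certificate check are simplified.
import math
from collections import Counter

WINDOW = 512                                          # bytes per ablated view (illustrative)

def toy_base_classifier(chunk: bytes) -> int:
    # Stand-in for a MalConv-style model scoring one window in isolation.
    return 1 if b"\xde\xad\xbe\xef" in chunk else 0

def smoothed_predict(data: bytes, payload_len: int):
    votes = Counter(toy_base_classifier(data[i:i + WINDOW])
                    for i in range(0, len(data), WINDOW))
    (top, top_votes), *rest = votes.most_common()
    runner_up = rest[0][1] if rest else 0
    # A contiguous payload of payload_len bytes overlaps at most this many windows.
    max_affected = math.ceil((payload_len + WINDOW - 1) / WINDOW)
    certified = top_votes - runner_up > 2 * max_affected
    return top, certified

sample = (bytes(256) + b"\xde\xad\xbe\xef" * 64) * 16  # toy "executable" bytes
print(smoothed_predict(sample, payload_len=128))       # -> (1, True) for this toy input
```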
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection [0.0]
MalProtect is a stateful defense against query attacks in the malware detection domain.
Our results show that it reduces the evasion rate of adversarial query attacks by 80+% in Android and Windows malware detection.
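A minimal sketch of a stateful query defense: the defender remembers recent query feature vectors and flags a client whose new query is highly similar to many previous ones, a signature of iterative perturbation search. MalProtect combines several such threat indicators with a prediction model; the single cosine-similarity heuristic and thresholds here are assumptions for illustration.

```python
# Toy stateful detector; a real deployment would track history per client.
import numpy as np
from collections import deque

class StatefulDetector:
    def __init__(self, sim_threshold=0.95, max_similar=5, history=1000):
        self.history = deque(maxlen=history)          # recent query feature vectors
        self.sim_threshold = sim_threshold
        self.max_similar = max_similar

    def query(self, features: np.ndarray) -> str:
        flagged = False
        if self.history:
            past = np.stack(self.history)
            sims = past @ features / (
                np.linalg.norm(past, axis=1) * np.linalg.norm(features) + 1e-12)
            flagged = int((sims > self.sim_threshold).sum()) >= self.max_similar
        self.history.append(features.copy())
        if flagged:
            return "blocked"                          # or answer with a hardened response
        return "malicious" if features.sum() > 25 else "benign"   # stand-in classifier

rng = np.random.default_rng(0)
detector = StatefulDetector()
base = rng.random(50)
for i in range(10):                                   # an attacker probing with tiny tweaks
    print(i, detector.query(base + 0.01 * rng.random(50)))
```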
arXiv Detail & Related papers (2023-02-21T15:40:19Z)
- Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection [0.0]
Moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years.
We study for the first time the effectiveness of several recent MTDs for adversarial ML attacks applied to the malware detection domain.
We show that transferability and query attack strategies can achieve high levels of evasion against these defenses.
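A minimal sketch of one moving-target-defense flavor: a pool of independently trained, diverse models is kept and each query is answered by a randomly chosen member, so an adversarial example tuned against the last observed response may fail on the next. The pool construction and member models below are illustrative assumptions; the paper evaluates several concrete MTD schemes.

```python
# Toy model pool answering queries at random; pool members are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((2000, 30))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)          # synthetic labels

pool = [
    LogisticRegression(max_iter=1000).fit(X, y),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y),
]

def mtd_predict(sample: np.ndarray) -> int:
    model = pool[rng.integers(len(pool))]             # a random member answers this query
    return int(model.predict(sample.reshape(1, -1))[0])

print([mtd_predict(X[0]) for _ in range(5)])
```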
arXiv Detail & Related papers (2023-02-01T16:03:34Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate defenses using Android malware detectors against 26 different attacks upon two practical datasets.
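A feature-space sketch of mixture-of-attacks adversarial training: each malware sample is perturbed by every attack in a set, the variant that most degrades the current model is kept, and the model is trained on these worst-case variants. The binary-feature manipulations and the single network below (in place of the paper's ensemble of deep neural networks) are simplifications; the paper's attacks operate on Android apps and must preserve malicious functionality.

```python
# Feature-space toy of mixture-of-attacks adversarial training (single network, not an ensemble).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 100
model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def add_features(x, k):
    # Addition-only manipulation: switch k randomly chosen zero features to 1,
    # mimicking changes that keep the original functionality intact.
    x = x.clone()
    for row in x:
        zeros = (row == 0).nonzero().flatten()
        row[zeros[torch.randperm(len(zeros))[:k]]] = 1.0
    return x

attacks = [lambda x: add_features(x, 5), lambda x: add_features(x, 20)]

for step in range(100):
    x = (torch.rand(32, d) < 0.3).float()             # synthetic binary feature vectors
    y = (x[:, :10].sum(1) > 3).long()                 # synthetic malware labels
    idx = (y == 1).nonzero().flatten()
    x_adv = x.clone()
    if len(idx) > 0:
        # Mixture of attacks: keep, per malware sample, the variant with the highest loss.
        candidates = torch.stack([atk(x[idx]) for atk in attacks])          # (A, m, d)
        with torch.no_grad():
            losses = torch.stack([
                nn.functional.cross_entropy(model(c), y[idx], reduction="none")
                for c in candidates])                                       # (A, m)
        best = losses.argmax(0)
        x_adv[idx] = candidates[best, torch.arange(len(idx))]
    loss = loss_fn(model(x_adv), y)                   # adversarial training step
    opt.zero_grad(); loss.backward(); opt.step()
```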
arXiv Detail & Related papers (2020-06-30T05:56:33Z)
- A Framework for Enhancing Deep Neural Networks Against Adversarial Malware [31.026476033245764]
We propose a defense framework to enhance the robustness of deep neural networks against adversarial malware evasion attacks.
The framework wins the AICS'2019 challenge by achieving 76.02% accuracy, in a setting where the attacker (i.e., the challenge organizer) does not know the framework or defense, and we (the defender) do not know the attacks.
arXiv Detail & Related papers (2020-04-15T07:00:47Z)