Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing
Website Classifiers
- URL: http://arxiv.org/abs/2004.06954v1
- Date: Wed, 15 Apr 2020 09:04:16 GMT
- Title: Advanced Evasion Attacks and Mitigations on Practical ML-Based Phishing
Website Classifiers
- Authors: Yusi Lei, Sen Chen, Lingling Fan, Fu Song, and Yang Liu
- Abstract summary: We show that evasion attacks can be launched on ML-based anti-phishing classifiers even in the grey- and black-box scenarios.
We propose three mutation-based attacks, differing in their knowledge of the target classifier, that address a key technical challenge.
We demonstrate the effectiveness and efficiency of our evasion attacks on the state-of-the-art Google phishing page filter, achieving a 100% attack success rate in less than one second per website.
- Score: 12.760638960844249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) based approaches have been the mainstream
solution for anti-phishing detection. When deployed on the client side,
ML-based classifiers are vulnerable to evasion attacks. However, such
potential threats have received relatively little attention because existing
attacks destroy the functionality or appearance of webpages and are conducted
in the white-box scenario, making them less practical. Consequently, it
becomes imperative to understand whether it is possible to launch evasion
attacks with limited knowledge of the classifier while preserving the
functionality and appearance of webpages.
In this work, we show that even in the grey- and black-box scenarios, evasion
attacks are not only effective against practical ML-based classifiers but can
also be launched efficiently without destroying the functionality or
appearance of webpages. For this purpose, we propose three mutation-based
attacks, differing in their knowledge of the target classifier, that address a
key technical challenge: automatically crafting an adversarial sample from a
known phishing website in a way that misleads classifiers. To launch attacks
in the white- and grey-box scenarios, we also propose a sample-based collision
attack to gain knowledge of the target classifier. We demonstrate the
effectiveness and efficiency of our evasion attacks on the state-of-the-art
Google phishing page filter, achieving a 100% attack success rate in less than
one second per website. Moreover, the transferability attack on BitDefender's
industrial phishing page classifier, TrafficLight, achieved an attack success
rate of up to 81.25%. We further propose Pelican, a similarity-based method to
mitigate such evasion attacks, and demonstrate that it can effectively detect
them. Our findings contribute to the design of more robust phishing website
classifiers in practice.
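The grey-box attack flow described above can be illustrated with a minimal sketch. This is not the paper's implementation: the mutation operators, the surrogate scorer, and all parameter values below are illustrative assumptions. The sketch only shows the general idea of repeatedly applying functionality- and appearance-preserving mutations to a phishing page until a local surrogate of the target classifier (for example, one recovered via a sample-based collision attack) no longer flags it.

    import random

    # Illustrative functionality/appearance-preserving HTML mutations
    # (adding an invisible element, adding an inert attribute).
    # A real attack would use a much richer operator set.
    MUTATION_OPERATORS = [
        lambda html: html.replace(
            "</body>", "<div style='display:none'>help center</div></body>", 1),
        lambda html: html.replace("<form", "<form data-v='1'", 1),
    ]

    def evade(phishing_html, surrogate_score, threshold=0.5, max_iters=100):
        """Greedy mutation loop: keep a mutation only if it lowers the
        surrogate's estimated phishing probability; stop once the page
        falls below the decision threshold. surrogate_score is assumed
        to be a callable returning a probability in [0, 1]."""
        current = phishing_html
        best = surrogate_score(current)
        for _ in range(max_iters):
            if best < threshold:
                return current            # evasive variant found
            candidate = random.choice(MUTATION_OPERATORS)(current)
            score = surrogate_score(candidate)
            if score < best:              # accept only score-reducing mutations
                current, best = candidate, score
        return None                       # no evasive variant within the budget

On the defense side, Pelican's similarity-based idea roughly corresponds to checking whether a suspicious page is a mutated variant of a known phishing page; the paper's concrete similarity metric is not reproduced here.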
Related papers
- Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees [16.16333915007336]
Malicious URLs provide adversarial opportunities across various industries, including transportation, healthcare, energy, and banking.
Backdoor attacks involve manipulating a small percentage of training data labels, such as Label Flipping (LF), which changes benign labels to malicious ones and vice versa (a minimal sketch of this flipping step appears after this list).
We propose an innovative alarm system that detects the presence of poisoned labels and a defense mechanism designed to uncover the original class labels.
arXiv Detail & Related papers (2024-03-05T14:21:57Z)
- Does Few-shot Learning Suffer from Backdoor Attacks? [63.9864247424967]
We show that few-shot learning can still be vulnerable to backdoor attacks.
Our method demonstrates a high Attack Success Rate (ASR) in FSL tasks with different few-shot learning paradigms.
This study reveals that few-shot learning still suffers from backdoor attacks, and its security should be given attention.
arXiv Detail & Related papers (2023-12-31T06:43:36Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals a threat in this practical scenario: backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Multi-SpacePhish: Extending the Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning [22.304132275659924]
This paper formalizes the "evasion-space" in which an adversarial perturbation can be introduced to fool an ML-PWD.
We then propose a realistic threat model describing evasion attacks against ML-PWD that are cheap to stage, and hence intrinsically more attractive for real phishers.
arXiv Detail & Related papers (2022-10-24T23:45:09Z)
- Towards Lightweight Black-Box Attacks against Deep Neural Networks [70.9865892636123]
We argue that black-box attacks can pose practical threats even when only a few test samples are available.
As only a few samples are required, we refer to these attacks as lightweight black-box attacks.
We propose Error TransFormer (ETF) for lightweight attacks to mitigate the approximation error.
arXiv Detail & Related papers (2022-09-29T14:43:03Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Feature Importance Guided Attack: A Model Agnostic Adversarial Attack [0.0]
We present the 'Feature Importance Guided Attack' (FIGA), which generates adversarial evasion samples (a sketch of the general idea appears after this list).
We demonstrate FIGA against eight phishing detection models.
We are able to cause a reduction in the F1-score of a phishing detection model from 0.96 to 0.41 on average.
arXiv Detail & Related papers (2021-06-28T15:46:22Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Adversarial Feature Selection against Evasion Attacks [17.98312950660093]
We propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks.
We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples.
arXiv Detail & Related papers (2020-05-25T15:05:51Z)
- Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection [38.813947369401525]
We present an effective and efficient algorithm to generate adversarial examples that attack anchor-free object detection models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector but can also be transferred to attack other object detectors.
arXiv Detail & Related papers (2020-02-10T04:49:03Z)
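For the Label Flipping entry above, a minimal sketch of the poisoning step it describes, i.e., inverting the class of a small fraction of training labels in a malicious-URL dataset; the flip rate, the 0/1 label encoding, and the function name are illustrative assumptions, not the paper's setup.

    import random

    def flip_labels(labels, flip_rate=0.05, seed=0):
        """Label-flipping poisoning sketch: invert a small, randomly chosen
        fraction of labels (0 = benign URL, 1 = malicious URL; both the
        rate and the encoding are assumptions)."""
        rng = random.Random(seed)
        poisoned = list(labels)
        n_flip = int(len(poisoned) * flip_rate)
        for i in rng.sample(range(len(poisoned)), n_flip):
            poisoned[i] = 1 - poisoned[i]   # benign <-> malicious
        return poisoned

The entry's defense then amounts to detecting that such flipped labels are present and recovering the original classes, for example by checking label consistency against an ensemble's predictions.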
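For the Feature Importance Guided Attack (FIGA) entry, a sketch of the general idea under stated assumptions: the most important features of a phishing sample are nudged toward typical benign values. The importance source, step size, top_k, and feature representation are assumptions, not FIGA's exact procedure.

    import numpy as np

    def importance_guided_perturb(x, importances, benign_mean, top_k=5, step=0.2):
        """Move the top_k most important features of a phishing feature
        vector x toward the mean benign feature vector. 'importances'
        could come from any model-agnostic source, e.g., a tree
        ensemble's feature importances (an assumption)."""
        x_adv = np.asarray(x, dtype=float).copy()
        top = np.argsort(importances)[::-1][:top_k]   # indices of most important features
        x_adv[top] += step * (np.asarray(benign_mean, dtype=float)[top] - x_adv[top])
        return x_adv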